Monday, September 21, 2009

Replication technology with Exchange 2010

This topic covers log file copying and seeding between active and passive databases in Exchange Server 2010. Does Exchange Server 2010 change or improve the way log file copying and seeding worked with Local Continuous Replication (LCR), Cluster Continuous Replication (CCR) and Standby Continuous Replication (SCR) in Exchange Server 2007?

Although the asynchronous replication technology used in Exchange 2007 works quite well, that doesn't mean it can't be improved, right? The Exchange Product Group has made several interesting changes and improvements to the asynchronous replication technology in Exchange 2010.

In Exchange 2007, the Microsoft Exchange Replication Service copies log files to the passive database copy (LCR), passive cluster node (CCR) or SCR target over Server Message Block (SMB), which means you need to open port 445 in any firewalls between the CCR cluster nodes (typically when deploying multisite CCR clusters) and/or SCR sources and targets. Those of you who work for or with a large enterprise organization know that convincing network administrators to open port 445/TCP between two datacenters is far from a trivial exercise. With the Exchange 2010 DAG feature, the asynchronous replication technology no longer relies on SMB. Exchange 2010 uses TCP/IP for log file copying and seeding and, even better, it provides the option of specifying which port you want to use for log file replication. By default, a DAG uses port 64327, but you can specify another port if required. For this, use the following command:

Set-DatabaseAvailabilityGroup -identity -ReplicationPort
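As a concrete sketch, assuming a DAG named DAG1 and port 60000 (both hypothetical placeholder values), the command might look like this:

```powershell
# Hypothetical example: DAG1 and 60000 are placeholder values.
# Move DAG replication from the default port 64327 to port 60000.
Set-DatabaseAvailabilityGroup -Identity DAG1 -ReplicationPort 60000
```

Remember that whatever port you choose must also be allowed through any firewalls between the DAG members.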

In addition, the Exchange 2010 DAG feature supports the use of encryption, whereas log files in Exchange 2007 are copied over an unencrypted channel unless IPsec has been configured. More specifically, DAG leverages the encryption capabilities of Windows Server 2008; that is, DAG uses Kerberos authentication between each Mailbox server member of the respective DAG. Network encryption is a property of the DAG itself, not of a DAG network. The settings for a DAG's network encryption property are:
  • Disabled (network encryption not in use)
  • Enabled (network encryption enabled for seeding and replication on all networks in the DAG)
  • InterSubnetOnly (the default setting, meaning network encryption is used only for communication between DAG networks on different subnets)
  • SeedOnly (network encryption in use for seeding on all networks in the DAG)
You can enable network encryption using the Set-DatabaseAvailabilityGroup cmdlet. For instance, to enable encryption for log copying and seeding, you would execute the command:

Set-DatabaseAvailabilityGroup -identity -NetworkEncryption Enabled

Finally, with Exchange 2010 DAGs you can enable compression for seeding and replication over one or more networks in a DAG. Like encryption, this is a property of the DAG itself, not of a DAG network. The default setting is InterSubnetOnly, and the same values are available as for the network encryption property. To enable network compression for log file copying and seeding on all networks in a DAG, use the command: Set-DatabaseAvailabilityGroup -Identity -NetworkCompression Enabled. To check the port, encryption and compression settings for a DAG, use the Get-DatabaseAvailabilityGroup -Status command.
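Putting these settings together, here is a quick sketch (DAG1 is a hypothetical DAG name):

```powershell
# Hypothetical example: DAG1 is a placeholder DAG name.
# Enable compression for seeding and replication on all DAG networks.
Set-DatabaseAvailabilityGroup -Identity DAG1 -NetworkCompression Enabled

# Review the current replication port, encryption and compression settings.
Get-DatabaseAvailabilityGroup -Identity DAG1 -Status |
    Format-List Name, ReplicationPort, NetworkEncryption, NetworkCompression
```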

Question on Multi-subnet clusters and using static routes with Exchange 2007 CCR on Windows Server 2008

Q. We are deploying a multisite Exchange 2007 SP1 cluster using Cluster Continuous Replication (CCR). The two cluster nodes will be located in separate datacenters. Exchange runs on Windows Server 2008 SP2 and we plan to have the public and private interfaces located in different subnets in each datacenter. As you know, this means we must use routing between the cluster nodes.
We have no problem configuring the public interface according to the instructions in "". But when we configure the default gateway on the private interface, we receive the warning message shown in the figure below:

Based on this warning message, we suspect things will not work properly if we specify multiple default gateways on each node in our multisite CCR cluster. This leads us to our question: How should we configure the private network interface in this type of scenario?

A. Your suspicion is correct: specifying multiple default gateways on a multisite CCR cluster will cause major issues. The proper configuration uses persistent, static routes for each private interface.

To get started, make sure the public interface is listed first on the connection order list under Advanced Settings in the Network Connections control panel. Next, make sure you have specified a default gateway on the public network interface for each cluster node.

Finally, configure routes on the private interfaces so that all traffic that doesn't match the route created will use the default gateway of the public interface.

The -p parameter specifies that the created routes are persistent and won't be cleared after a reboot. This configuration ensures proper networking for each interface in the cluster nodes. It's recommended to configure the private network as a mixed network so that the Enable-ContinuousReplicationHostName cmdlet can be used to direct replication activity over the redundant network.

With the enhancements in Windows 2008 to allow for multi-subnet clustering it is becoming more common to see this utilized with Exchange 2007 SP1 installations.

When implementing a clustered solution, it is a requirement that there be a minimum of two interfaces on each node, and that each node can maintain communications across those interfaces. There are two ways to implement this requirement with multi-subnet clusters:
  • The “public” interface of each node resides in different subnets with the “private” interfaces residing in a stretched subnet.
  • The “public” interface of each node resides in different subnets with the “private” interfaces also residing in different subnets.
In a configuration where both network interfaces are in different subnets, this will generally require routing between those subnets. A common misconfiguration that I see in this design is the use of default gateways on both of these network interfaces.

When a user attempts to configure two network interfaces, each with a default gateway, the operating system raises the error shown in the screenshot above.

The text in this message is important: it highlights that this configuration will not produce the desired results.

The most likely cluster configuration where Exchange is used, with this type of clustering, is cluster continuous replication (CCR). When multiple default gateways are defined, users may see inconsistent results in the performance and ability to replicate logs between the nodes. The replication issues between nodes are exacerbated when continuous replication hostnames are used over the secondary networks with the default gateway assigned. These issues are secondary to any issues the cluster service may have maintaining communications between the nodes and any communications issues clients may have connecting to the nodes.

If the default gateways are removed from the "private" adapters, reliable routed communications can only occur over the "public" interface. So, if two default gateways cannot be used, how should we ensure proper communications over both the "public" interface and the "private" interface when both reside in different routed subnets?

The first part of this solution is to ensure that the binding order of the network interfaces is set correctly in the operating system. To confirm the binding order:
  • Open the network connections control panel.
  • Choose the advanced menu (if menu is disabled, enable it by selecting Organize –> Layout –> Menu Bar).
  • Select advanced settings from the advanced menu.
  • On the adapters and bindings tab, ensure that the “public” interface is first in the list, with all secondary interfaces following after.
The second part of the solution is to maintain the default gateway on the “public” interface.

The third part of the solution is to enable persistent static routes on the "private" interfaces. In terms of the routes, we simply need to configure routes to the other "private" networks using gateway addresses that have the ability to route between those "private" networks. All other traffic not matching this route should be handled by the default gateway of the "public" adapter.

Let’s take a look at an example.

I desire to have a two node Exchange 2007 SP1 CCR cluster on Windows 2008 with each node residing in a different subnet.

Node A:

  • IP Address
  • Subnet Mask
  • Default Gateway
  • IP Address
  • Subnet Mask
  • Gateway on network
Node B:

  • IP Address
  • Subnet Mask
  • Default Gateway
  • IP Address
  • Subnet Mask
  • Gateway on network
(Note that the gateway on the private network is not the default gateway setting; it is the gateway on the private interface's network that can route packets to the private network of the other node.)

In this case I would want to establish the necessary persistent static routes on each node. In order to accomplish this, I can use the route add command. The structure of the route command:

NodeA: route add mask -p

NodeB: route add mask -p

The -p switch ensures that the routes are persistent and survive a reboot. Failure to use -p will result in the routes being removed after a reboot.

You can verify that the routes are correct by running route print and reviewing the persistent route information.
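To make the route syntax concrete, here is a sketch assuming the private interfaces live in 10.0.1.0/24 (Node A) and 10.0.2.0/24 (Node B), with local private gateways 10.0.1.254 and 10.0.2.254; all of these addresses are hypothetical and must be adjusted to your environment:

```bat
:: Hypothetical addresses; substitute your own private subnets and gateways.
:: On Node A: reach Node B's private subnet via the local private gateway.
route add 10.0.2.0 mask 255.255.255.0 10.0.1.254 -p

:: On Node B: reach Node A's private subnet via the local private gateway.
route add 10.0.1.0 mask 255.255.255.0 10.0.2.254 -p

:: Verify on either node; the persistent routes section should list the route.
route print
```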

By utilizing only a default gateway on the “public” adapter, and static routes on the “private” adapters, you can ensure safe routed paths for client communications, cluster communications, and replication service log shipping.

Exchange 2010 HA - Database Availability Group (DAG)

Database Availability Group (DAG)

The Database Availability Group (DAG) is the new high availability solution in Exchange 2010. None of the clustering technologies from Exchange 2007 carries forward to 2010.

A DAG is a collection of up to 16 mailbox servers, with a maximum of 16 copies of each database. It makes the database independent of any server and gives us failover at the database level rather than at the hardware or storage group level as in 2007. For example, if one database gets corrupted or the disk holding its files fails, you can quickly mount a copy of the same database on any of the servers that is part of the DAG.

Once the first node becomes part of a DAG, the failover clustering quorum setting is set to "Node Majority". Once the second node is added, the quorum setting automatically changes to "Node and File Share Majority". The file share witness is only created once the second server is added to the DAG, because the quorum setting is only changed at that time.
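One quick way to observe this behavior is to query the DAG with the -Status switch; in this sketch, DAG1 is a hypothetical DAG name:

```powershell
# Hypothetical example: DAG1 is a placeholder DAG name.
# With -Status, the cmdlet reports live witness and quorum information.
Get-DatabaseAvailabilityGroup -Identity DAG1 -Status |
    Format-List Name, Servers, WitnessServer, WitnessDirectory, WitnessShareInUse
```

WitnessShareInUse should only report the witness as in use once the DAG has an even number of members.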

Log shipping no longer uses SMB; instead it uses the ESE streaming API for seeding, which is considerably more efficient, and raw TCP sockets for replication. In Exchange 2007, there was one SMB session for all databases on a server. In Exchange 2010, there's one TCP socket per database, so scalability and parallelization are greatly improved.

This provides HA for systems that are built on top of DAS; in fact, it's optimized for DAS. You can use dedicated storage per node; replication means that you can use JBODs without even using RAID.

DAGs can span AD sites, subnets, and so on (although all servers in the DAG must be in the same AD domain). You can control and throttle DAG replication at the network level or using the DAG controls for log lag.

The setup experience is completely different from SCC. To enable a DAG, you create a DAG and then add database replicas to it. You don't have to manually create any of the failover mechanisms, install any Windows prerequisites, or do any of the other work required with single-copy clusters (SCC).

Public folders: no changes, except that you can no longer use continuous replication for public folders. You can put a PF database on a server that's in a DAG, but you can't put the PF database itself into the DAG. Because Exchange 2007 limited you to having a single PF database per CCR-protected storage group, this isn't actually a loss.

DAG Information

When an administrator creates a DAG, it is initially empty, and an object is created in Active Directory that represents the DAG. The directory object is used to store relevant information about the DAG, such as server membership information. When an administrator adds the first server to a DAG, a failover cluster is automatically created for the DAG. DAGs use a subset of Windows Failover Clustering technologies, namely, the cluster heartbeat, cluster networks, and the cluster database (for storing data that changes or can change quickly such as database mount status, replication status, and last mounted location).

- A File Share Witness (FSW) is configured for the cluster, but it must be a server outside the DAG.

- Although a Windows Failover Cluster is created, no cluster resources are created and all DAG administration is managed from Exchange. Failover management is also managed entirely within Exchange.

- Replication of database copies, and failover of those copies, can only occur between servers that are members of the same DAG.

Mailbox Servers

- In Exchange 2007, a database server hosted either only active or only passive copies of a database. In Exchange 2010, a server within a DAG can hold both active and passive copies of databases, so the mailbox server needs to service both of these types of databases.

- Executes Store services on active mailbox database copies.

- Executes Replication services on passive mailbox database copies.

- Active definition of health – Is Information Store capable of providing email service against it?

- Passive definition of health – Is Replication Service able to copy logs and play them into the passive copy?

- Each server can host up to 100 database copies.

Because DAGs rely on Windows Failover Clustering, they can only be created on Exchange 2010 Enterprise Edition Mailbox servers that are running Windows Server 2008 Enterprise or Windows Server 2008 Datacenter. In addition, each Mailbox server in the DAG must have at least two network interface cards in order to be supported.

Mailbox Database

A failover or switchover occurs at the database level. Since a failover now involves only a database rather than an entire server, failover time has been reduced from around two minutes to 30 seconds, which considerably improves the client experience.

Database names in Exchange 2010 must be unique within the Exchange organization, as databases are now organization-wide objects rather than being tied to a server. Within a DAG, a database may have a copy on any member server.

When a mailbox database has been configured with one or more database copies, the full path for all database copies must be identical on all Mailbox servers that host a copy.
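Before adding a copy, you can sanity-check the paths on the source; in this sketch, DB1 is a hypothetical database name:

```powershell
# Hypothetical example: DB1 is a placeholder database name.
# The EDB path and log folder path must match on every server hosting a copy.
Get-MailboxDatabase -Identity DB1 | Format-List Name, EdbFilePath, LogFolderPath
```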

Mailbox Database Copy

Only one copy of a database can be active within a DAG at any given time.


Continuous replication has the following basic steps: database seeding, log copying, log inspection, and log replay.

Exchange Server 2007 utilized SMB and notifications to get logs. Exchange Server 2010 utilizes TCP sockets and notifications to the source about which logs are required on the target.

Exchange 2010 supports optional encryption and compression of the logs. These features are set at the Database Availability Group level.

After the log files have been inspected, they are placed within the log directory so that they can be replayed in the database copy. Before the Replication service replays the log files, it performs a series of validation tests.

Once these validation checks have completed, the Replication service replays the log files into the passive database copy.
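You can watch this copy/inspect/replay pipeline in action with Get-MailboxDatabaseCopyStatus; in this sketch, DB1 and EXMBX2 are hypothetical database and server names:

```powershell
# Hypothetical example: DB1\EXMBX2 is a placeholder database\server pair.
# CopyQueueLength shows logs waiting to be copied; ReplayQueueLength shows
# logs copied and inspected but not yet replayed into the passive copy.
Get-MailboxDatabaseCopyStatus -Identity DB1\EXMBX2 |
    Format-List Name, Status, CopyQueueLength, ReplayQueueLength, LastInspectedLogTime
```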

How to provision a Domain Controller as File Share Witness for an Exchange 2010 DAG

One of the most attractive features (in my opinion) of Exchange 2010 is the ability to provide high availability for all four server roles using just two actual machines. There are definitely some caveats, like the fact that you need a hardware load balancer to distribute inbound requests to the CAS and HT roles, but I think we'll see a lot more adoption of high availability using co-located CAS/HT/MBX servers in the small and midsize business space with this model.

The 2010 DAG feature is similar to 2007's CCR in that it requires a file share witness. Since your two Exchange servers are part of the DAG, neither can actually be the witness. In this case you'll need a third server to act as the file share witness, which would normally be another Exchange server, but again, we only have two here. In my lab the only other server I had was a domain controller, so I decided to use that as my FSW instead of standing up another server. When I ran through the DAG wizard, I received the following errors.

Warning: Specified witness server DC.DELHI.ORG is not an Exchange server, or part of the Exchange Servers security group.

Warning: Insufficient permissions to access file shares on witness server ‘DC.DELHI.ORG’. Until this problem is corrected, the database availability group may be more vulnerable to failures. You can use the Set-DatabaseAvailabilityGroup cmdlet to try the operation again. Error: Access is denied

The DAG is still created, but it doesn't really have FSW capability at this point; I recommend deleting it. The first warning is also a little misleading, because the problem actually lies in the Exchange Trusted Subsystem group permissions, not the Exchange Servers security group. You can follow these steps to get your DC to act as the FSW:

Add your domain controller’s computer account to Exchange Trusted Subsystem group in AD.

Add the Exchange Trusted Subsystem group to the Builtin\Administrators group of the domain.

Obviously the second change isn’t ideal and if you’re going to use the DAG features I’d really recommend putting your FSW folder on something other than a DC, but it’s necessary in this case.
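Assuming the Active Directory module for PowerShell is available (Windows Server 2008 R2), the two changes above might be sketched like this; DC is a hypothetical domain controller name:

```powershell
# Hypothetical sketch; requires the ActiveDirectory PowerShell module.
Import-Module ActiveDirectory

# 1. Add the DC's computer account (note the trailing $) to Exchange Trusted Subsystem.
Add-ADGroupMember -Identity "Exchange Trusted Subsystem" -Members "DC$"

# 2. Add Exchange Trusted Subsystem to the domain's built-in Administrators group.
Add-ADGroupMember -Identity "Administrators" -Members "Exchange Trusted Subsystem"
```

You can also make both changes by hand in Active Directory Users and Computers.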

At this point go ahead and try to create your DAG again. This time it should succeed.

I also found that if I created the folder on the DC ahead of time and then ran the DAG wizard, it would fail because the folder and share permissions were not correct. The best approach is not to create the FSW folder or share ahead of time and just let the cmdlet take care of the hard work.

Database Availability Group (DAG) in Exchange 2010

The new Database Availability Group (DAG) concept is an exciting Exchange 2010 technology that brings low-cost high availability without a costly hardware SAN infrastructure.

Microsoft Exchange Server 2010 clients will connect to Client Access Servers, which proxy the requests to the mailbox servers. No more LCR, SCR, or CCR: DAG (think of it as a "super CCR") uses low-cost DAS storage to spread copies of databases across multiple servers, loosely analogous to RAID 5 striping. Client Access Servers (deployed in load-balanced server farms) will provide primary HTTP access and a new "distributed RPC endpoint" that lets Office 2010, Office 2007 and Office 2003 clients see what looks like a standard Exchange mailbox server without needing to upgrade the clients.

Since clients connect to the CAS servers, which proxy requests to the mailbox servers, failover from one mailbox server to another in the DAG happens in less than 30 seconds for a failover or move command.

Some other notable highlights in Exchange 2010 database and HA architecture:

- Replication between databases changes from an RPC method to a TCP socket method, which will increase performance on heavily loaded servers.
- Replication can be local or remote (cross-subnet). However, you will need CAS servers at the DR site if you lose the primary datacenter.
- You can have up to 16 mailbox servers in a DAG.
- There is no integration with Microsoft Online at the DAG level. Microsoft Online cannot be used as a DR site for an on-premises hosted mailbox. Either it's on-premises or hosted, not a mixture of the two.
- You still need Windows Server 2008 Enterprise, as the failover clustering feature is required.
- The concept of storage groups is deprecated.
- Jet is still the storage engine for Exchange 2010 databases.
- Exchange I/O has been reduced by 50% from 2007 to 2010 (on top of a 70% I/O reduction from Exchange 2003 to 2007).
- Single Instance Storage is going away, as is the per-database message table. A new table is created for each mailbox, making mailboxes with 10,000+ messages feasible thanks to more sequential reads.
- Server-based archives allow archiving with anywhere access, helping with e-discovery, OWA searches, and compliance management.

Public folders are not covered by the new DAG changes; the only way to replicate public folders in Exchange 2010 is the same public folder replication method we have used for years. SCR replication of the public folder database for DR scenarios, possible in Exchange 2007, is deprecated in Exchange 2010. Also, clients will continue to connect directly to public folders on mailbox servers in the DAG. Public folders will not take part in the new Client Access Server model introduced with Exchange 2010 mailbox databases. Public folders are a legacy platform, and significant changes won't be introduced.

What has been removed?

No more EVS/CMS
Database is no longer associated to a Server but is an Org Level resource
There is no longer a requirement to choose clustered or non-clustered at installation; an Exchange 2010 server can move in and out of a DAG as needed
The limitation of only hosting the mailbox role on a clustered Exchange server
Storage Groups have been removed from Exchange

Is anything the same?

1. Windows Enterprise Edition is still required, since a DAG still uses pieces of Windows Failover Clustering

What’s New?

1. Other roles can be installed on the mailbox server when it is a member of a DAG
2. A database name must be unique in the Exchange Org

A DAG is a collection of Exchange 2010 Mailbox servers (maximum 16) that monitor and protect the mailbox databases defined within the DAG. Gone are LCR and SCC; SCR and CCR survive in evolved form, both now subsumed by the DAG.

One major change when it comes to clustering is that you can have any combination of Exchange 2010 roles on your server. So if you want to create an Exchange 2010 cluster based on two servers, you can do so. Even the UM role is fine with it. The Edge Transport role is the only exception to this rule, but it lives its own life in the DMZ anyway.

Another cool feature of DAG is that you don't have to create the Windows Failover Cluster (WFC) beforehand. This also means that you don't risk ending up with a hard-coded cluster node; instead you're free to remove a server from the DAG and end up with a regular Mailbox server role.

When you add the first Exchange server to the DAG, it senses whether there is an underlying cluster. If it doesn't find one, it automatically installs and configures WFC for you.

One interesting thing to notice is that whenever the number of Mailbox servers within the DAG is even (2, 4, 6, ...), the File Share Witness (FSW) is part of the cluster quorum, but as soon as you have an odd number of servers it drops out. It isn't deleted from the hard drive; it is just no longer part of the quorum. Note that the FSW does not appear after the very first server is added to the DAG: when a DAG is formed, it initially uses the Node Majority quorum mode. When the second Mailbox server is added, the quorum automatically changes to the Node and File Share Majority model, and the DAG begins using the witness server for maintaining quorum. If the witness directory does not exist, Exchange automatically creates it and provisions it with full control permissions for local administrators and the cluster network object (CNO) computer account for the DAG.

For a CLI style, follow these steps:

1. Install your forthcoming nodes as you would any other Exchange 2010 Server

2. Then set your cluster NICs on their own subnet and all other things we normally configure on cluster NICs: Recommended private "Heartbeat" configuration on a cluster server -

3. Create the DAG
Example: New-DatabaseAvailabilityGroup -Name DAG1 -FileShareWitnessShare \\EXHUB1\DAG1FSW -FileShareWitnessDirectory C:\DAG1FSW

4. Create the networks for the DAG (minimum of two):

one public for client connections
Example: New-DatabaseAvailabilityGroupNetwork -DatabaseAvailabilityGroup DAG1 -Name DAGPUB -Description "Public Client traffic network" -Subnets -ReplicationEnabled:$False

and one for dedicated replication traffic.
Example: New-DatabaseAvailabilityGroupNetwork -DatabaseAvailabilityGroup DAG1 -Name DAGCLU -Description "Replication network" -Subnets -ReplicationEnabled:$True

5. Then add your nodes:
Add-DatabaseAvailabilityGroupServer -Identity DAG1 -MailboxServer EXMBX1 -DatabaseAvailabilityGroupIpAddresses

Remember that you have to run this on the nodes directly if WFC isn't already on the machine, since it isn't possible to install WFC remotely. But if you have already installed WFC and you have the cluster admin tools installed on the machine you're sitting at, then you can perform this from that machine.

Also remember to add -DatabaseAvailabilityGroupIpAddresses if you don't have DHCP or if you want to statically assign an IP address to the DAG.
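As a hedged sketch of the static case, where DAG1 and 10.0.1.50 are hypothetical placeholder values:

```powershell
# Hypothetical example: DAG1 and 10.0.1.50 are placeholder values.
# Assign a static IP address to the DAG instead of relying on DHCP.
Set-DatabaseAvailabilityGroup -Identity DAG1 -DatabaseAvailabilityGroupIpAddresses 10.0.1.50
```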

6. Add the database that DAG should protect:
Add-MailboxDatabaseCopy -Identity MBXDB1 -MailboxServer EXMBX3 -ReplayLagTime 00:10:00 -TruncationLagTime 00:15:00 -ActivationPreference 2
Now you have your very own DAG running!

Basic test of functionality: try a switchover (basically what you do when performing maintenance on the server holding the currently active database copy): Move-ActiveMailboxDatabase "Executive Database" -ActivateOnServer EXMBX3 -MountDialOverride:None

Sunday, September 13, 2009

China is now buying US distressed real estate (better than & not riskier like US Treasuries) - standing taller & a threat

Speaking of China's current account surplus, the dragon nation now seems all set to take advantage of the cheap real estate available in the US currently. As per a Moneynews report, China's US$ 300 bn sovereign-wealth fund (SWF) is preparing to aggressively scoop up distressed real estate assets in the US, a smart move considering that with the impending hyper inflation in the US, real estate surely looks a better investment than US Treasuries.

Interestingly, the vice president of the Bank of China has recently been quite vocal in his criticism of Wall Street for its complacency in the wake of the financial crisis. He is known to have said recently, "You go to Wall Street, the people feel the crisis never happened. It's not only overconfidence, it's over-myopic. This is too much." Looks like Wall Street's arrogance isn't going to take a lot of time to come back!

And now, another American bastion has fallen. As per Moneynews, such has been the damage done to corporate earnings in the US following the crisis that, for the first time, profits posted by the top 500 Chinese companies have gone way ahead of their US counterparts. Net profits for the Chinese companies stood at US$ 171 bn in 2008, significantly higher than those posted by the US companies, which came in at US$ 99 bn. A big achievement for China indeed: while it remains the fastest growing economy in the world, its GDP is still less than a third of the US's.

And once again, it is the banking industry that seemed to have made all the difference. While many of the US banks remained mired in losses, half of the top 10 profit-makers on the Chinese list turned out to be financial companies. Clearly, the US' banking industry has left its economy nowhere to hide.

Lehman Brothers' obituary - Is it time to change the idiom from 'Too Big To Fail' to 'Too Big May Fail'?

The month of September marks an anniversary that markets would love to forget. The bankruptcy of Lehman Brothers that snowballed into a global financial catastrophe had several lessons to teach.

Almost a year after this crisis, the US and most large economies in Europe have managed to recover from recession. Global and particularly emerging markets have again limped back to optimism. But does this mean that we have left behind the risks that surfaced a year ago? While the massive correction in real estate prices has certainly tempered greed, there is a collective opinion amongst bankers that bailing out large entities at the cost of taxpayer money is not in the best interest of the nation. The G-20 finance ministers have a clear resolve to ensure that entities that put the interest of the economy at risk for their vested interests will be brought to task.

Saturday, September 5, 2009

If you have more than one public folder database in your organization, do NOT put it on a CCR cluster

Microsoft has long stated that locating a public folder database on a CCR cluster in an organization where there is more than one public folder database is not supported. As with many things that Microsoft says are unsupported but that actually work, I took this to be more of a guideline than a hard and fast rule.

In one of my Exchange environments, we have a lot of regional offices with between 1,000 and 3,000 users. In a 1,000-user location, we did not want to invest in additional hardware for a dedicated public folder server, so we put the public folder database on the CCR cluster. During testing, when we moved the clustered mailbox server (CMS) from one node to the other, the public folder database moved just fine.

You guessed it, the active node failed a few weeks ago and the public folder database did not remount. We could not get it to mount at all. Period. End of story. Kaput. Dead database. I had a PSS engineer sitting right next to me and he could not rescue it either. Exchange Server 2007 SP1 is apparently hard coded not to allow the database to be recovered, even if you accept a lossy failover.

So, the moral of the story. If you have more than one public folder database in your organization, do NOT put it on a CCR cluster.

In previous versions of Exchange Server, Exchange Virtual Servers (EVSes) are not very different from standalone servers. Besides mailboxes, they can host protocol virtual servers (SMTP, IMAP4, POP3, HTTP/OWA), Public Folders, etc.

Exchange Server 2007's clustering model is simplified further to provide high availability for mailboxes. There is no protocol support - SMTP is the domain of Hub Transport servers, IMAP4, POP3 and HTTP (OWA, Outlook Anywhere or RPC over HTTP, Exchange ActiveSync) are the responsibility of Client Access Server role. Unlike standalone/non-clustered Exchange Server 2007 servers, Clustered Mailbox Servers (CMS - the Exchange 2007 term for EVS) do not co-exist with any other server role.

Clustered Mailbox Servers can host Public Folders, but there are some caveats. The Public Folder Store hosted by the CMS should be the only Public Folder Store in the Organization. If you have Public Folder Stores on other Exchange servers in the Organization, the Public Folder Store on a Clustered Mailbox Server will fail to mount in the case of an unscheduled failover, until the original server and all transaction logs for the Storage Group hosting the Public Folder Store are available.

This is documented in the Exchange Server 2007 documentation.

Public Folders have their own high-availability mechanism built-in, and it's been around for a long time. It's Public Folder replication. Clustered Mailbox Servers (using Cluster Continuous Replication) are not good candidates for replication.

Cluster Continuous Replication and Public Folder Databases

CCR and public folder replication are two very different forms of replication built into Exchange. Due to interoperability limitations between continuous replication and public folder replication, if more than one Mailbox server in the Exchange organization has a public folder database, public folder replication is enabled and public folder databases should not be hosted in CCR environments.

The following are the recommended configurations for using public folder databases and CCR in your Exchange organization:

  • If you have a single Mailbox server in your Exchange organization and that Mailbox server is a clustered mailbox server in a CCR environment, the Mailbox server can host a public folder database. In this configuration, there is a single public folder database in the Exchange organization. Thus, public folder replication is disabled. In this scenario, public folder database redundancy is achieved using CCR; CCR maintains two copies of your public folder database.
  • If you have multiple Mailbox servers you can host a public folder database in a CCR environment provided that there is only one public folder database in the entire Exchange organization. In this scenario, public folder database redundancy is also achieved by using CCR. In this configuration, there is a single public folder database in the Exchange organization. Thus, public folder replication is disabled.
  • If you are migrating public folder data into a CCR environment, you can use public folder replication to move the contents of a public folder database from a stand-alone Mailbox server or a clustered mailbox server in an SCC to a clustered mailbox server in a CCR environment. After you create the public folder database in a CCR environment, the additional public folder databases should only be present until your public folder data has fully replicated to the CCR environment. When replication has completed successfully, all public folder databases outside of the CCR environment should be removed, and you should not host any other public folder databases in the Exchange organization.
  • If you are migrating public folder data out of a CCR environment, you can use public folder replication to move the contents of a public folder database from a clustered mailbox server in a CCR environment to a stand-alone Mailbox server or a clustered mailbox server in an SCC. After you create the additional public folder database outside of the CCR environment, the public folder database in the CCR environment should only be present until your public folder data has fully replicated to the additional public folder databases. When replication has completed successfully, all public folder databases inside of all CCR environments should be removed and all subsequent public folder databases should not be hosted in storage groups that are enabled for continuous replication.
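The migration-into-CCR scenario above can be sketched in the Exchange Management Shell roughly as follows. The server and database names are placeholders; MoveAllReplicas.ps1 ships in the Exchange Scripts folder, reachable in the shell via the $exscripts variable:

```powershell
# Replace the old server's replicas with the CCR clustered mailbox
# server on every public folder (OLDPF01 and CMS01 are placeholders):
& "$exscripts\MoveAllReplicas.ps1" -Server OLDPF01 -NewServer CMS01

# Verify replication has completed - the old server should no longer
# appear in any folder's replica list:
Get-PublicFolder -Server CMS01 -Recurse | Format-List Name, Replicas

# Once replication is confirmed, remove the public folder database
# outside the CCR environment (the identity path is a placeholder):
Remove-PublicFolderDatabase -Identity "OLDPF01\Second Storage Group\Public Folder Database"
```

The reverse migration (out of a CCR environment) follows the same pattern with the server names swapped.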

During any period where more than one public folder database exists in the Exchange organization and one or more public folder databases are hosted in a CCR environment (such as the migration scenarios described previously), consider the differences in behavior between scheduled (lossless) and unscheduled (lossy) outages:

  • If a scheduled (lossless) outage completes successfully, the public folder database will come online and public folder replication should continue as expected.
  • If an unscheduled outage occurs, the public folder database will not come online until the original server is available and all logs for the storage group hosting the public folder database are available. If any data is lost as a result of the outage, CCR will not allow the public folder database to come online when public folder replication is enabled. In this event, the original node must be brought online to ensure no data loss, or the public folder database must be re-created on the clustered mailbox server in the CCR environment and its content must be recovered using public folder replication from public folder databases that are outside the CCR environment.
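For the scheduled (lossless) case, a handoff is performed with Move-ClusteredMailboxServer, ideally after confirming the passive copy is healthy. A sketch with placeholder server and node names:

```powershell
# Check the health of the passive storage group copies first (CMS01
# is a placeholder for the clustered mailbox server name):
Get-StorageGroupCopyStatus -Server CMS01 |
    Format-Table Name, SummaryCopyStatus, CopyQueueLength, ReplayQueueLength

# Perform the scheduled (lossless) move to the passive node (NODE2):
Move-ClusteredMailboxServer -Identity CMS01 -TargetMachine NODE2 `
    -MoveComment "Scheduled maintenance"
```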
So, the moral of the story: if you have more than one public folder database in your organization, do NOT put any of them on a CCR cluster.