Replication

Data Protection includes integrated, WAN-optimized replication. BrickStor supports block- and file-level replication. Only changed data is transmitted, which shortens replication windows and reduces bandwidth usage. BrickStor replication supports bandwidth throttling, as well as pause and resume, and can resume from bookmarks when interrupted by network outages or disruptions. BrickStor supports block-level replication to other BrickStor devices as well as file replication to any NAS or qualified S3 object storage. RackTop’s data replication and backup capabilities enable customers to take advantage of a hybrid cloud strategy and use the cloud provider of their choice.

Replication Best Practices

  1. When setting up replication, especially for larger data sets that are actively being written, schedule snapshots to run more frequently than you would during normal operation. Each snapshot becomes a replication job, and because more frequent snapshots are smaller, a job is less likely to fail due to network errors or latency. Any replication retransmits are also more likely to succeed.

  2. In cases where an encrypted data set is being replicated, keys should be exported from the local BrickStor and imported on the remote BrickStor so that the data can be recovered there.

  3. Use the advanced configuration parameters to optimize your replication:

    • Priorities can be set to determine which data sets replicate first.

    • Bandwidth throttling can be configured to control how much bandwidth is used and at what times of day, so that you can take advantage of low-traffic periods and avoid high-traffic periods.

    • Optimize snapshot retention periods on both ends:

    • On the local system, make sure that snapshots are not aging out before they are replicated.

    • On the remote system, you may want longer retention periods, but longer retention also consumes storage, so weigh this balance.

  4. Replication peers should be on an appropriate data network that is reliably available and does not interfere with other network traffic.

  5. Set up bsradm notify for snapshot reporting so that you can confirm your replications are successful.
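The bandwidth-throttling advice in best practice 3 amounts to mapping times of day to rate caps. The sketch below illustrates that idea only; the schedule, window values, and function names are hypothetical and are not BrickStor configuration syntax.

```python
from datetime import time

# Hypothetical throttle schedule: (start, end, cap in Mbit/s).
# Cap hard during peak business hours, loosely in the evening,
# and replicate at full speed overnight.
THROTTLE_WINDOWS = [
    (time(8, 0), time(18, 0), 100),           # business hours: 100 Mbit/s
    (time(18, 0), time(23, 59, 59), 500),     # evening: 500 Mbit/s
]
FULL_SPEED = None  # no cap

def bandwidth_cap(now: time):
    """Return the rate cap (Mbit/s) in effect at a given time, or None for unthrottled."""
    for start, end, cap in THROTTLE_WINDOWS:
        if start <= now < end:
            return cap
    return FULL_SPEED

print(bandwidth_cap(time(9, 30)))   # 100  (peak hours)
print(bandwidth_cap(time(2, 0)))    # None (overnight, full speed)
```

The point of the overnight gap in the schedule is exactly the best practice above: leave low-traffic periods uncapped so replication can catch up there.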

Understanding Peers

BrickStor supports block replication between two or more pools within the same system or across systems. To set up replication between two systems, you must establish a peer relationship with the target system from the origin system. Once the peer relationship is created, you can set up replication between pools on a per-dataset basis.

Configuring a Peer Relationship

To configure a peer relationship, complete the following steps:

  1. In the Connections pane, select the appliance level.

  2. In the details pane, click the Data Protection tab.

  3. Click on the Add Peer Button at the bottom left of the details pane.

    If peers already exist on your system, you can click the Add Peer icon next to the Replication Peers label.

    [Figure: Add Peer]

  4. In the Add Peer dialog box, enter an IP address or hostname for the desired peer.

    Replication 2.0 now supports replicating to an HA cluster through the resource group. This allows replication to continue operating even after a failover. You only need to peer once with the resource group; the BrickStor OS will coordinate sharing keys between the cluster nodes. If you are replicating to an HA cluster, be sure to use the destination resource group’s address (VNIC) in this step.
  5. Enter the username and password for the desired peer.

  6. Click Add Peer.

    The added peer appears under the Replication Peers label. The new peer remains greyed out until you have added a target to it. To replicate in the reverse direction, repeat this process on the other host.

Understanding Peer Status

The following table describes peer status messages that you may encounter.

Table 1. Peer Status

  This icon…                                    Means the peer is…

  [Healthy, no backlog icon]                    Healthy with no backlog

  [Configured, no replication target icon]      Configured without replication targets enabled for the peer

  [Problem icon]                                Unreachable and has a problem, such as the target pool not being imported (it will show up as [unk]) or the target pool being out of space

Data Protection Replication

Data will be replicated to the target pool under the Replication Container. In the GUI, the source hostname and IP are visible along with the original dataset name. However, this information is stored in file system metadata on the replication target, so it will not match the exact path name an admin sees when browsing the file system on the pool.

Data Replication Hierarchy on File System

  • Pool name

    • global

      • replication

        • Serial number of source BrickStor

          • GUID of source dataset
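The hierarchy above resolves to a predictable on-disk path on the target pool. This minimal Python sketch assembles it; the pool name, serial number, and GUID values in the example are illustrative placeholders, not real identifiers.

```python
from pathlib import PurePosixPath

def replication_path(pool: str, source_serial: str, dataset_guid: str) -> PurePosixPath:
    """Assemble the on-disk location of a replicated dataset:
    <pool>/global/replication/<source serial>/<source dataset GUID>."""
    return PurePosixPath(pool) / "global" / "replication" / source_serial / dataset_guid

# Illustrative values only -- not a real serial number or GUID.
print(replication_path("tank", "BSR-00123", "5f2c9a7e"))
# tank/global/replication/BSR-00123/5f2c9a7e
```

This is why a path browsed on the target pool shows a serial number and GUID rather than the source hostname and dataset name displayed in the GUI.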

Data Protection Policy Configurations

[Figure: Data Protection Policy Configurations]

Data Replication Priorities

Each replicated dataset has a priority assigned to it. The priority determines the order that replicated datasets are sent. The possible priorities are:

  1. Critical

  2. High

  3. Medium

  4. Low

  5. None

Critical priority datasets are always sent before datasets of any other priority. Datasets with a priority of None are always sent after any datasets of any other priority have been sent. For High, Medium, and Low priority datasets, the order chosen depends on a combination of factors such as:

  • The amount of data to transfer

  • The success of past replication attempts of this dataset

The replication priority is combined with these factors to determine a 'fair' replication order that allows all datasets to make progress replicating (when possible). A consequence of this is that a High priority dataset cannot indefinitely preempt replication of a Medium or Low priority dataset. Likewise, a Medium priority dataset cannot indefinitely preempt replication of a Low priority dataset.
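The ordering rules above can be sketched as a simple scheduler: Critical always first, None always last, and the middle tiers ranked by a fairness score that factors in backlog size and past failures. The weights and scoring formula below are illustrative assumptions, not BrickStor's actual algorithm.

```python
from dataclasses import dataclass

# Illustrative weights for the middle tiers -- not BrickStor's actual values.
PRIORITY_WEIGHT = {"High": 3, "Medium": 2, "Low": 1}

@dataclass
class Dataset:
    name: str
    priority: str        # "Critical", "High", "Medium", "Low", or "None"
    bytes_pending: int   # amount of data left to transfer
    failed_attempts: int = 0

def replication_order(datasets):
    """Return datasets in a plausible send order: Critical always first,
    None always last, and the middle tiers ranked by a fairness score
    so that no tier can starve another indefinitely."""
    critical = [d for d in datasets if d.priority == "Critical"]
    none_tier = [d for d in datasets if d.priority == "None"]
    middle = [d for d in datasets if d.priority in PRIORITY_WEIGHT]

    def score(d: Dataset) -> float:
        # Large backlogs and repeated failures lower the effective score,
        # so a huge High transfer cannot indefinitely preempt a small Low one.
        return PRIORITY_WEIGHT[d.priority] / (1 + d.bytes_pending / 2**30 + d.failed_attempts)

    middle.sort(key=score, reverse=True)
    return critical + middle + none_tier

order = replication_order([
    Dataset("archive", "None", 10),
    Dataset("db", "Critical", 2**40),
    Dataset("vm", "High", 2**40, failed_attempts=3),
    Dataset("home", "Low", 2**20),
])
print([d.name for d in order])  # ['db', 'home', 'vm', 'archive']
```

Note how the small Low-priority dataset outranks the large, repeatedly failed High-priority one in the middle tier, matching the non-starvation guarantee described above, while Critical and None remain fixed at the ends.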

Configure the Data Protection Policy for a Storage Profile

Managing Replication Details

You can manage replication details for a peer from the Replication Details page, including:

  • Set replication window settings for bandwidth throttling and peak business hours

  • View and configure replication targets

  • Enable/Disable targets

  • Set inheritance (whether to inherit replication parameters from the parent)

  • View timing and transfer status

  • Export a replication report

  • Show the history of replication jobs by clicking the Open History button

Accessing the Replication Details page

Clicking on a Peer’s IP address will navigate you to the replication details page.

[Figure: Replication Details page]

Replication Transfer History

You can view the details of transfers. This list can be filtered and exported. Details include:

  • Time

  • Duration

  • Source / Destination

  • Size

  • Speed

  • Success Status

[Figure: Peer Replication Transfers]

Auto Snapshot Data Protection

Within the selected data set, click on the Auto Snapshot Data Protection tab. You can set a custom profile protection policy under the Auto Snapshot Creation section and filter as needed.

[Figure: Auto Snapshot Creation]

You can also choose whether to have the same or alternate retention under Auto Replicated Snapshots. To the right is the Auto Snapshot Compliance area, which includes the number of snapshots retained and desired, as well as the latest snapshot and next snapshot time for all rolling, interval, weekly, monthly, and yearly snapshots.

[Figure: Auto Snapshot Compliance]

The snapshot stats display shows the count, users, latest and oldest snapshots, max expiration, holds, and user holds.

[Figure: Auto Snapshot Stats]

Further to the right, the Snapshot Indexing area displays indexing information and allows the user to toggle snapshot indexing on or off. It also offers the option to regenerate the index, which prompts the user with a warning about the time this consumes.

[Figure: Snapshot Indexing]

Further to the right, under Reports, click Auto Snapshot Creation: Policies. Here you can set the minimum and maximum policy by selecting them with the toggle button. Once selected, you can filter and add the needed specifications. An alert appears if too many or too few snapshots are selected.

[Figure: Auto Snapshot Creation Policies]