Upgrading

The following topics explain how to upgrade BrickStor SP for single-node and cluster configurations.

Upgrading to 23.6 will leave all TDM enabled datasets in an error state and inaccessible until a manual conversion is performed. See the Upgrading TDM Datasets section to find out whether this affects you and how to perform the conversion.

Upgrading a Single Node BrickStor using the latest BrickStor SP Manager

The following steps demonstrate the upgrade process for a single, standalone BrickStor SP configuration.

1. Begin the upgrade

  • Connect the BrickStor SP Manager to your appliance.

  • Choose Upgrade OS / Manage Versions to perform the upgrade.

2. Download the new OS version

  • Choose the version to download by clicking the Download link.

  • When prompted to trust the downloaded OS image, click "Yes".

3. Activate the OS version

  • Once downloaded, click the "play" icon to activate at next boot.

4. Commit the OS Upgrade Change

  • Commit the change in the Changes pane.

5. Reboot the System

The BrickStor SP appliance will now reboot into the new version of the OS. After it does so, navigate to its IP address or hostname in a web browser and log in. You will be asked to review and accept the Terms & Conditions before proceeding. Once you have done that, you will be able to download the new version of the BrickStor SP Manager.

Post-Upgrade Tasks

Once you are connected to your BrickStor SP system using the new version of the BrickStor SP Manager, be sure to do the following:

  • Reconfigure any SMTP email settings.

  • Review and configure any desired report settings.

  • Review the rest of this documentation for new features that you may wish to configure or activate.

BrickStor SP Cluster Upgrade

BrickStor SP Upgrade Prerequisites

When upgrading BrickStor SP, it is always recommended to export and back up any encryption keys first. Follow these steps if you are using encrypted datasets or encrypted storage pools.

  • Navigate to the Encryption Tab.

  • Click the Resync Encryption Keys with Peers Button.

  • Click the Export All Encryption Keys button.

  • A prompt will ask for a password. Once it is entered, click the Export All Keys button.

This password must be recorded in a password manager or printed. If lost, the exported key data will be unusable.
  • A Windows File Explorer window will open with a default file name.

  • Click the Save button.

  • A prompt confirming successful key export will appear, showing the Encrypted Key File as well as the Key Report File.

BrickStor SP OS Upgrade Procedure

The following steps outline the process for updating the BrickStor SP Manager and OS. For the purposes of this example, the cluster consists of Node A, Node B, and a Witness; Node A is the Passive Node and Node B is the Active Node. If the system is running two Active Nodes, treat Node A as the Active Node carrying the lower serving load.

If the upgrade process is blocked at any point, contact the support team.
Upgrade Node A
  1. To begin the upgrade process of the BrickStor SP Manager, first navigate to the System tab of Node A.

    • In the System tab, click the Upgrade OS / Manage Upgrade Versions button. This will take you to the OS Upgrade screen (shown below).

New OS Available

  • In the OS Upgrade Screen, navigate to the new version (in this case, 23.4).

  • Click the Download icon to the right of the release version of the desired upgrade (shown below).

Download New Release

  • A prompt with a progress bar will appear, showing the download progress of the release version.

  • Once download is complete, click Activate.

  • Navigate to the System tab.

    • A message stating that a Different OS will run on next boot will appear (shown below).

Different OS

  • Click Reboot.

  • A pane will appear on the right side of the screen showing the pending changes, that is, the changes that will be applied to the system on reboot.

  • Click the checkbox to acknowledge the warning.

    • Click Commit (1) Change(s).

  • A prompt will ask if you want to migrate resources and disable the node.

OS Reboot Process

  • Click Yes.

    • Once the node has rebooted, ensure that it is enabled.

Node A must be manually re-enabled before upgrading Node B by clicking the play button next to Node A on the HA tab in the BrickStor SP Manager.
  • Verify this by navigating to Node B.

    • Click the HA tab.

    • Ensure that HA is enabled.

    • Exit the running instance of the BrickStor SP Manager Client.

Upgrade Node B
  1. Repeat the steps above on Node B to upgrade the second node.

  2. Navigate to the BrickStor SP web interface.

    • Enter the IP address of BrickStor SP Node A into a web browser's address bar.

    • Log in to the website with the admin Username and Password of Node A.

    • Download and install the standalone BrickStor SP Manager client.

    • From the Witness system, download the High Availability Witness Binaries (this will be used in the Witness Upgrade Procedure and Confd Upgrade Procedure).

Download New Binaries

  3. Launch the standalone BrickStor SP Manager client (downloaded in the previous step).

    • The BrickStor SP Manager will automatically load the credentials of the system.

    • Select Node A and verify that the cluster is running (the homepage will indicate that the HA system requires an upgrade).

Witness Installation Procedure (Windows)

The following steps will outline the process to install the Witness service:

  1. Log in as administrator.

  2. Navigate to Windows Services and locate RackTop High Availability Service.

    • Right-click on RackTop High Availability Service and click Stop to stop the service from running.

  3. Navigate to the location of the downloaded .zip file in the Windows File Explorer.

  4. Extract the .zip file using default system processes.

  5. Once located, right-click on hiavd.exe and click Copy.

  6. Navigate to the HA installation folder:

    • This will be in either c:\racktop or C:\Program Files\Racktop\BrickStor\

  7. In that folder, right-click and click Paste.

  8. Navigate to Windows Services.

    • Refresh the list of services.

    • Locate RackTop High Availability Service.

    • Right-click RackTop High Availability Service.

    • Click Start.

  9. On the BrickStor SP Manager, click the refresh button at the top right of the screen to ensure the Witness has been installed (the HA tab will display green LEDs, and the warning message denoting a version mismatch will disappear within 30 seconds).

Witness Upgrade Procedure (Windows)

The following steps will outline the process to upgrade the Witness:

  1. Log in as administrator.

  2. Navigate to Windows Services and locate RackTop High Availability Service.

    • Right-click on RackTop High Availability Service and click Stop to stop the service from running.

  3. Navigate to the location of the downloaded .zip file in the Windows File Explorer.

  4. Extract the .zip file using default system processes.

  5. Once located, right-click on hiavd.exe and click Copy.

  6. Navigate to the location of the outdated hiavd.exe on the system.

    • This will be in either c:\racktop or C:\Program Files\Racktop\BrickStor\

  7. Locate hiavd.exe and right-click it.

  8. Click Paste.

  9. Confirm the replacement of the file.

  10. Navigate to Windows Services.

    • Refresh the list of services.

    • Locate RackTop High Availability Service.

    • Right-click RackTop High Availability Service.

    • Click Start.

  11. On the BrickStor SP Manager, click the refresh button at the top right of the screen to ensure the Witness has been upgraded (the HA tab will display green LEDs, and the warning message denoting a version mismatch will disappear within 30 seconds).

Confd Installation Procedure (Windows)

The following steps will outline the process to install confd:

  1. Navigate to the location of the downloaded confd.exe file in the Windows File Explorer (the same directory as the hiavd.exe file).

  2. Once located, right-click on confd.exe and click Run As Administrator.

  3. A command prompt window will appear.

    • Enter 1 to install. Press Return.

    • Enter 0 for instance number. Press Return.

    • Enter y to confirm the installation. Press Return.

    • Enter y as response to backup query. Press Return.

    • Enter y to start the confd service after installation. Press Return.

    • Press Return to exit and close the window.

  4. Navigate to the Windows File Explorer and locate the new confadm.exe.

    • Right-click the confadm.exe file.

    • Click Copy.

  5. Navigate to C:\Program Files\RackTop\BrickStor\confd\00.

    • Right-click.

    • Click Paste.

  6. Navigate to Windows Services.

    • Refresh the list of services.

    • Verify the new confd service is running.

  7. Open a command prompt and cd to C:\Program Files\RackTop\BrickStor\confd\00.

  8. Enter confadm member show status to confirm the cluster is healthy by verifying that all three nodes are online and communicating.

Confd Upgrade Procedure (Windows)

The following steps will outline the process to upgrade confd:

  1. Navigate to the location of the downloaded confd.exe file in the Windows File Explorer (the same directory as the hiavd.exe file).

  2. Once located, right-click on confd.exe and click Run As Administrator.

  3. A command prompt window will appear.

    • Enter 1 to install. Press Return.

    • Enter 0 for instance number. Press Return.

    • Enter y to confirm the installation. Press Return.

    • Enter y to confirm the update. Press Return.

    • Enter y as response to backup query. Press Return.

    • Enter y to start the confd service after installation. Press Return.

    • Press Return to exit and close the window.

  4. Navigate to the Windows File Explorer and locate the new confadm.exe.

    • Right-click the confadm.exe file.

    • Click Copy.

  5. Navigate to C:\Program Files\RackTop\BrickStor\confd\00.

    • Right-click the existing confadm.exe.

    • Click Paste.

  6. Navigate to Windows Services.

    • Refresh the list of services.

    • Verify the new confd service is running.

  7. Open a command prompt and cd to C:\Program Files\RackTop\BrickStor\confd\00.

  8. Enter confadm member show status to confirm the cluster is healthy by verifying that all three nodes are online and communicating.

Linux Configuration

The following steps will outline the procedure for configuring the BrickStor SP on a Linux system.

  • With an open terminal, enter the following:

    • $ sudo yum install bzip2 ipmitool -y

  • The following will output:

CentOS Stream 8 - BaseOS                                                                                              6.1 MB/s |  28 MB     00:04
CentOS Stream 8 - Extras                                                                                               47 kB/s |  18 kB     00:00
Package bzip2-1.0.6-26.el8.x86_64 is already installed.
Dependencies resolved.
======================================================================================================================================================
Package                           Architecture                    Version                                   Repository                          Size
======================================================================================================================================================
Installing:
ipmitool                          x86_64                          1.8.18-18.el8                             appstream                          395 k

Transaction Summary
======================================================================================================================================================
Install  1 Package

Total download size: 395 k
Installed size: 1.1 M
Downloading Packages:
ipmitool-1.8.18-18.el8.x86_64.rpm                                                                                     118 kB/s | 395 kB     00:03
------------------------------------------------------------------------------------------------------------------------------------------------------
Total                                                                                                                 112 kB/s | 395 kB     00:03
warning: /var/cache/dnf/appstream-773ef6463612e8e2/packages/ipmitool-1.8.18-18.el8.x86_64.rpm: Header V3 RSA/SHA256 Signature, key ID 8483c65d: NOKEY
CentOS Stream 8 - AppStream                                                                                           1.6 MB/s | 1.6 kB     00:00
Importing GPG key 0x8483C65D:
Userid     : "CentOS (CentOS Official Signing Key) <security@centos.org>"
Fingerprint: 99DB 70FA E1D7 CE22 7FB6 4882 05B5 55B3 8483 C65D
From       : /etc/pki/rpm-gpg/RPM-GPG-KEY-centosofficial
Key imported successfully
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
  Preparing        :                                                                                                                              1/1
  Installing       : ipmitool-1.8.18-18.el8.x86_64                                                                                                1/1
  Running scriptlet: ipmitool-1.8.18-18.el8.x86_64                                                                                                1/1
  Verifying        : ipmitool-1.8.18-18.el8.x86_64                                                                                                1/1

Installed:
  ipmitool-1.8.18-18.el8.x86_64

Complete!
  • Next, enter the following:

$ sudo vi /etc/selinux/config
$ sudo reboot
  • The following will output:

Connection to xx.x.xx.xxx closed by remote host.
Connection to xx.x.xx.xxx closed.
The system will now reboot.
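The vi step above edits /etc/selinux/config; the documentation does not state the target value, but the same edit can be sketched non-interactively with sed. Note that SELINUX=permissive is an assumption here; confirm the correct value for your deployment before using it:

```shell
# Non-interactive sketch of the /etc/selinux/config edit above.
# The target value (permissive) is an ASSUMPTION -- confirm before use.
# On the real system:
#   sudo sed -i 's/^SELINUX=.*/SELINUX=permissive/' /etc/selinux/config
# Demonstrated here on a sample line rather than the live file:
echo 'SELINUX=enforcing' | sed 's/^SELINUX=.*/SELINUX=permissive/'
```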
  • Following the system reboot, download the rpm bundled with your OS.

  • Copy the downloaded .rpm and paste it into the /tmp directory.

  • With an open terminal instance, enter the following:

$ sudo su
# cd /tmp
# rpm -ivh ha-witness-23.5.0RC.50-1.el7.x86_64.rpm
  • The following will output:

Verifying...                          ################################# [100%]
Preparing...                          ################################# [100%]
Updating / installing...
   1:ha-witness-23.5.0RC.50-1.el7     ################################# [100%]
[root@localhost tmp]# reboot
Connection to xx.x.xx.xxx closed by remote host.
Connection to xx.x.xx.xxx closed.
  • Enter the following:

    • systemctl status hiavd confd

hiavd.service - BrickStor High Availability Service
   Loaded: loaded (/usr/lib/systemd/system/hiavd.service; enabled; vendor preset: disabled)
   Active: active (running) since Wed 2023-03-15 13:10:44 EDT; 3min 28s ago
Main PID: 1021 (hiavd)
    Tasks: 8 (limit: 100954)
   Memory: 28.0M
   CGroup: /system.slice/hiavd.service
           └─1021 /usr/racktop/lib/hiavd

Mar 15 13:10:44 localhost.localdomain hiavd[1021]: 2023-03-15T17:10:44Z [Info] Service info: hiavd 23.5.0RC.50. Copyright 2022 RackTop Systems, Inc.
Mar 15 13:10:44 localhost.localdomain hiavd[1021]: 2023-03-15T17:10:44Z [Debug] Changed state to INITIALIZING.
Mar 15 13:10:44 localhost.localdomain hiavd[1021]: 2023-03-15T17:10:44Z [Info] No existing configuration found in /etc/racktop/hiavd/hiavd.conf; using defaults.
Mar 15 13:10:44 localhost.localdomain hiavd[1021]: 2023-03-15T17:10:44Z [Trace] Single instance; creating lock file /var/run/racktop/hiavd.pid.
Mar 15 13:10:44 localhost.localdomain hiavd[1021]: 2023-03-15T17:10:44Z [Trace] Service hiavd locked with /var/run/racktop/hiavd.pid.
Mar 15 13:10:44 localhost.localdomain hiavd[1021]: 2023-03-15T17:10:44Z [Debug] Changed state to STARTING.
Mar 15 13:10:44 localhost.localdomain hiavd[1021]: 2023-03-15T17:10:44Z [Trace] Initializing channel for signals.
Mar 15 13:10:45 localhost.localdomain hiavd[1021]: 2023-03-15T17:10:45Z [Info] Starting HTTPS server :4746 ...
Mar 15 13:10:45 localhost.localdomain hiavd[1021]: 2023-03-15T17:10:45Z [Debug] Changed state to RUNNING.
Mar 15 13:10:45 localhost.localdomain hiavd[1021]: 2023-03-15T17:10:45Z [Trace] Tracked go routine "license check" started.

confd.service - RackTop Configuration Database
   Loaded: loaded (/usr/lib/systemd/system/confd.service; enabled; vendor preset: disabled)
   Active: active (running) since Wed 2023-03-15 13:10:44 EDT; 3min 28s ago
Main PID: 1027 (confd)
    Tasks: 8 (limit: 100954)
   Memory: 45.3M
   CGroup: /system.slice/confd.service
           └─1027 /usr/racktop/lib/confd

Mar 15 13:10:46 localhost.localdomain confd[1027]: 2023-03-15T17:10:46Z [Debug] Endpoint _rpc.time available.
Mar 15 13:10:46 localhost.localdomain confd[1027]: 2023-03-15T17:10:46Z [Trace] Tracked go routine "health node status" started.
Mar 15 13:10:46 localhost.localdomain confd[1027]: 2023-03-15T17:10:46Z [Trace] Tracked go routine "health etcd engine" started.
Mar 15 13:10:46 localhost.localdomain confd[1027]: 2023-03-15T17:10:46Z [Trace] Tracked go routine "health cluster alarm" started.
Mar 15 13:10:46 localhost.localdomain confd[1027]: 2023-03-15T17:10:46Z [Trace] Tracked go routine "health cluster config" started.
Mar 15 13:10:46 localhost.localdomain confd[1027]: 2023-03-15T17:10:46Z [Trace] Tracked go routine "db space check" started.
Mar 15 13:10:46 localhost.localdomain confd[1027]: 2023-03-15T17:10:46Z [Debug] Changed state to RUNNING.
Mar 15 13:10:46 localhost.localdomain confd[1027]: 2023-03-15T17:10:46Z [Trace] Tracked go routine "license check" started.
Mar 15 13:11:16 localhost.localdomain confd[1027]: 2023-03-15T17:11:16Z [Info] Node e4aa56b6 (4f2be2ea60df7e9b) state is online; https://127.0.0.1:2379.
Mar 15 13:11:16 localhost.localdomain confd[1027]: 2023-03-15T17:11:16Z [Warn] Health Sensor: Cluster not configured
This output will verify that services are online.
  • Enter the following:

    • firewall-cmd --list-all

  • The following will output:

public (active)
  target: default
  icmp-block-inversion: no
  interfaces: ens192
  sources:
  services: cockpit confd dhcpv6-client hiavd ssh
  ports:
  protocols:
  forward: no
  masquerade: no
  forward-ports:
  source-ports:
  icmp-blocks:
  rich rules:
  • Enter the following, ensuring success is returned for each:

    • firewall-cmd --permanent --zone=public --add-port=4746/tcp

    • firewall-cmd --permanent --zone=public --add-port=2380/tcp

    • firewall-cmd --reload

  • Now, enter the following command:

    • firewall-cmd --list-all

  • The following will output:

public (active)
  target: default
  icmp-block-inversion: no
  interfaces: ens192
  sources:
  services: cockpit confd dhcpv6-client hiavd ssh
  ports: 4746/tcp 2380/tcp
  protocols:
  forward: no
  masquerade: no
  forward-ports:
  source-ports:
  icmp-blocks:
  rich rules:
  • Next, enter the following command:

    • # systemctl daemon-reload

  • Move to the sbin folder by entering the following:

    • # cd /usr/racktop/sbin

  • Enter the following to show confadm members:

    • # ./confadm member show all

  • The following will output:

NODEID    CLIENTURLS              PEERURLS              LEADER  ID
e4aa56b6  https://IPOFBRICKSTORSP  https://x.x.x.x:xxxx  *       4f2be2ea60df7e9b
  • Next, enter the following to restart confd and show the members again:

    • # systemctl restart confd

    • # ./confadm member show all

  • The following will output:

NODEID    CLIENTURLS                                                                  PEERURLS                                             LEADER  ID
e4aa56b6  https://xx.x.xxx:xxxx,https://xx.x.xxx:xxxx,https://xx.x.xxx.x:xxxx  https://xx.x.xxx.x:xxxx,https://xx.x.xxx.x:xxxx.  *       4f2be2ea60df7e9b
  • Enter the following:

    • ./confadm join

  • The following will output:

Enter the bsrapid url of the leader: x.x.xx.xx
Enter username for host: bsradmin
Enter password for host:
Join cluster node qa00003j BrickStorOS 23.5.0RC.46 (2 members)? (y/n): y
  • Enter y, then press Enter.

Backup existing database? (y/n): y
  • Enter y, then press Enter.

Database saved to /var/racktop/confd/snapshots/e4aa56b6-1678900877.snap (28 KB).
Joining: this may take up to 90 seconds...
NODEID    CLIENTURLS                                                                  PEERURLS                                             LEADER  ID
e4aa56b6  https://10.2.22.132:2379,https://127.0.0.1:2379,https://192.168.122.1:2379  https://10.2.22.132:2380,https://192.168.122.1:2380          123cf7e4c6b2c177
qa00003j  https://10.1.29.51:2379,https://127.0.0.1:2379                              https://10.1.29.51:2380                              *       ab48d3cbb3ed9638
qa00003i  https://10.1.29.52:2379,https://127.0.0.1:2379                              https://10.1.29.52:2380                                      fde57ba2cb24039c
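The member table above can also be checked programmatically. This sketch parses a captured copy of the table (the sample below mirrors the output shown above, with columns abbreviated) and confirms that three members are listed with exactly one leader:

```shell
# Verify a 'confadm member show all' table lists 3 members and one leader (*).
# On the appliance, capture real output with: members=$(./confadm member show all)
members='NODEID    LEADER  ID
e4aa56b6          123cf7e4c6b2c177
qa00003j  *       ab48d3cbb3ed9638
qa00003i          fde57ba2cb24039c'
count=$(printf '%s\n' "$members" | tail -n +2 | wc -l | tr -d ' ')   # skip header row
leaders=$(printf '%s\n' "$members" | grep -c '\*')                   # count leader marks
echo "members=$count leaders=$leaders"
```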
  • Enter the following:

    • systemctl status confd

  • The following will output:

● confd.service - RackTop Configuration Database
   Loaded: loaded (/usr/lib/systemd/system/confd.service; enabled; vendor preset: disabled)
   Active: active (running) since Wed 2023-03-15 13:18:53 EDT; 2min 48s ago
Main PID: 2697 (confd)
    Tasks: 8 (limit: 100954)
   Memory: 192.0M
   CGroup: /system.slice/confd.service
           └─2697 /usr/racktop/lib/confd



Mar 15 13:21:22 localhost.localdomain confd[2697]: 2023-03-15T17:21:22Z [Info] Starting cluster state manager for e4aa56b6.
Mar 15 13:21:22 localhost.localdomain confd[2697]: 2023-03-15T17:21:22Z [Debug] Adding e4aa56b6 (123cf7e4c6b2c177) to node status manager.
Mar 15 13:21:22 localhost.localdomain confd[2697]: 2023-03-15T17:21:22Z [Debug] Adding qa00003j (ab48d3cbb3ed9638) to node status manager.
Mar 15 13:21:22 localhost.localdomain confd[2697]: 2023-03-15T17:21:22Z [Debug] Adding qa00003i (fde57ba2cb24039c) to node status manager.
Mar 15 13:21:22 localhost.localdomain confd[2697]: 2023-03-15T17:21:22Z [Info] Creating new local service account certificate.
Mar 15 13:21:22 localhost.localdomain confd[2697]: 2023-03-15T17:21:22Z [Error] Service account creation failed; local services will not be able to use confd: open /ssl/confd-local->
Mar 15 13:21:25 localhost.localdomain confd[2697]: 2023-03-15T17:21:25Z [Info] Node qa00003j (ab48d3cbb3ed9638) state is online; https://10.1.29.51:2379.
Mar 15 13:21:25 localhost.localdomain confd[2697]: 2023-03-15T17:21:25Z [Info] Node qa00003i (fde57ba2cb24039c) state is online; https://10.1.29.52:2379.
  • The confd service has now been updated. Next, hiavd will be upgraded.

  • To begin the hiavd upgrade process, enter the following:

    • systemctl status hiavd

  • The following will output:

● hiavd.service - BrickStor High Availability Service
   Loaded: loaded (/usr/lib/systemd/system/hiavd.service; enabled; vendor preset: disabled)
   Active: active (running) since Wed 2023-03-15 13:10:44 EDT; 16min ago
Main PID: 1021 (hiavd)
    Tasks: 8 (limit: 100954)
   Memory: 28.1M
   CGroup: /system.slice/hiavd.service
           └─1021 /usr/racktop/lib/hiavd



Mar 15 13:10:44 localhost.localdomain hiavd[1021]: 2023-03-15T17:10:44Z [Info] Service info: hiavd 23.5.0RC.50. Copyright 2022 RackTop Systems, Inc.
Mar 15 13:10:44 localhost.localdomain hiavd[1021]: 2023-03-15T17:10:44Z [Debug] Changed state to INITIALIZING.
Mar 15 13:10:44 localhost.localdomain hiavd[1021]: 2023-03-15T17:10:44Z [Info] No existing configuration found in /etc/racktop/hiavd/hiavd.conf; using defaults.
Mar 15 13:10:44 localhost.localdomain hiavd[1021]: 2023-03-15T17:10:44Z [Trace] Single instance; creating lock file /var/run/racktop/hiavd.pid.
Mar 15 13:10:44 localhost.localdomain hiavd[1021]: 2023-03-15T17:10:44Z [Trace] Service hiavd locked with /var/run/racktop/hiavd.pid.
Mar 15 13:10:44 localhost.localdomain hiavd[1021]: 2023-03-15T17:10:44Z [Debug] Changed state to STARTING.
Mar 15 13:10:44 localhost.localdomain hiavd[1021]: 2023-03-15T17:10:44Z [Trace] Initializing channel for signals.
Mar 15 13:10:45 localhost.localdomain hiavd[1021]: 2023-03-15T17:10:45Z [Info] Starting HTTPS server :4746 ...
Mar 15 13:10:45 localhost.localdomain hiavd[1021]: 2023-03-15T17:10:45Z [Debug] Changed state to RUNNING.
Mar 15 13:10:45 localhost.localdomain hiavd[1021]: 2023-03-15T17:10:45Z [Trace] Tracked go routine "license check" started.

Upgrading TDM Datasets

With release 23.6, TDM now uses a new file metadata format. As a result, TDM enabled datasets in previous release versions must be manually converted after upgrading to Release 23.6 to ensure continued TDM operation.

To find out if your system contains TDM enabled datasets, jump to the List TDM Enabled Datasets section.

The upgrade/conversion is performed with the tdmadm CLI utility, which can upgrade a single dataset or all datasets. When no dataset is provided, it defaults to all.

Throughout the upgrade, tdmadm periodically takes checkpoint snapshots so that a dataset can be reverted to its initial state if any issues arise. These checkpoint snapshots expire after 24 hours, but their lifetime can be extended with the --snap-expiry-days option.

The duration of an upgrade depends solely on the number of files stored within the dataset; the size of the data in bytes has no impact. A tdmadm dry-run can be used to estimate how long the conversion will take.
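As an illustration of the --snap-expiry-days option, the sketch below assembles a command line that extends checkpoint-snapshot expiry to 7 days. The value, log path, dataset, and option placement are all illustrative assumptions; since tdmadm exists only on BrickStor SP, the command is printed rather than executed:

```shell
# Assemble a tdmadm upgrade command with an extended snapshot expiry.
# All values are illustrative; run the printed command on the appliance.
logfile=/tmp/tdm-up-data.log
expiry_days=7
dataset=p00/global/data
echo "tdmadm dataset upgrade --logfile $logfile --snap-expiry-days $expiry_days $dataset"
```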

List TDM Enabled Datasets

To list TDM enabled datasets, type the following command into the Command Line and press Enter.

If the command does not output anything, your system does not have any TDM enabled datasets and nothing needs to be done.
Example 1. Syntax
tdmadm dataset list
System with 2 TDM enabled datasets
$ tdmadm dataset list
p00/global/data4        (1-11788394248038055233-11584324033856530000-0)
p00/global/data (1-11788394248038055233-12001546602885682788-0)

Dry-run

A dry-run simulates the upgrade process without making any changes. This provides users insight as to how long the upgrade may take.

This operation requires TDM services to be stopped. Stop TDM services using the following command: svcadm disable -t tdmd tdmfopsd
Example 2. Syntax
tdmadm dataset upgrade --logfile [progress log file path] --dry-run [dataset path]
Example 3. Specific dataset p00/global/data
tdmadm dataset upgrade --logfile /tmp/tdm-up-data.log --dry-run p00/global/data
Example 4. All datasets
tdmadm dataset upgrade --logfile /tmp/tdm-up-data.log --dry-run
When finished, start TDM services using the following command: svcadm enable tdmd tdmfopsd

Upgrading All TDM Datasets

The following steps will upgrade all TDM enabled datasets after upgrading a system to 23.6. To upgrade datasets selectively, see the Upgrading Specified TDM Dataset section.

Example 5. Syntax
tdmadm dataset upgrade --logfile [progress log file path]
  1. Stop TDM services

    1. svcadm disable -t tdmd tdmfopsd

  2. Begin upgrade for all TDM datasets

    1. tdmadm dataset upgrade --logfile /tmp/tdm-up-data.log

  3. Start TDM services

    1. svcadm enable tdmd tdmfopsd
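The three steps above can be combined into one script. Because svcadm and tdmadm exist only on the BrickStor SP appliance, this sketch prints each command instead of executing it; swap the run helper's echo for direct execution on the appliance:

```shell
#!/bin/sh
# Sketch of the full all-datasets TDM upgrade sequence from the steps above.
run() { echo "+ $*"; }  # replace the echo with "$@" to actually execute
run svcadm disable -t tdmd tdmfopsd                        # 1. stop TDM services
run tdmadm dataset upgrade --logfile /tmp/tdm-up-data.log  # 2. upgrade all datasets
run svcadm enable tdmd tdmfopsd                            # 3. restart TDM services
```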

Upgrading Specified TDM Dataset

The following steps will upgrade a specified TDM dataset after upgrading a system to 23.6.

Example 6. Syntax
tdmadm dataset upgrade --logfile [progress log file path] [dataset path]
  1. Stop TDM services

    1. svcadm disable -t tdmd tdmfopsd

  2. Begin upgrade for a specified dataset

    1. tdmadm dataset upgrade --logfile /tmp/tdm-up-all.log p00/global/data

  3. Start TDM services

    1. svcadm enable tdmd tdmfopsd