This document describes how to create a highly available clustered TrueNAS
system using RSF-1 software.
The base system should consist of two nodes
running the latest release of TrueNAS SCALE, with external storage connected to
both nodes concurrently (commonly referred to as shared
storage) or a pool replicated on a remote node (shared-nothing).
Features
ZFS pools created on the shared storage can be failed over between
cluster nodes - these are referred to as shared pools.
RSF-1 is an Active-Active cluster. This means a pool can be active on,
and failover to, any node in the cluster.
Multiple pools can be clustered with no interdependencies; meaning
you could have two pools on one node and
three on another and then fail over all pools from the first to the
second, or just one from the second to the first etc.
A shared pool can be imported on only one cluster node at a
time. RSF-1 uses disk reservations to enforce this rule and protect data.
Any services configured to use a shared pool (such as NFS/SMB) are
accessible on the node the pool is imported on.
Multiple heartbeats over network and disk for shared storage (no dedicated
heartbeat drive is needed - RSF-1 integrates with the existing ZFS drives
without any reconfiguration)
The TrueNAS System Dataset Pool
TrueNAS saves system configuration information in the System
Dataset Pool - usually the first ZFS pool created on the system.
As a consequence, the pool containing that dataset is not eligible
for clustering: the pool holding the system dataset cannot be
exported, and attempts to do so fail with an 'unmount failed' message.
To resolve this, move the system dataset to
the boot pool (or to a pool not being considered for clustering).
In the GUI navigate to System -> Advanced Settings, click Configure
in the Storage section, select boot-pool from the drop-down list
of pools and finally click SAVE:
For highly available systems we recommend that each cluster node
has a dedicated boot drive, mirrored if possible1.
Note
When the boot pool is the only imported pool, TrueNAS will
always show it as the location of the system dataset. However,
this configuration is not permanent until it is actually saved;
until then TrueNAS may relocate the system dataset, which can
cause issues in the cluster (as outlined above). The rule here is:
even if TrueNAS reports the system dataset as residing on the boot
pool, make sure that setting is saved, thereby making it permanent
(this need only be done once on each cluster node).
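The current location of the system dataset can also be checked from the shell
via the TrueNAS middleware client (a minimal sketch, assuming the
systemdataset.config call and its pool field behave the same across SCALE
releases):
# midclt call systemdataset.config | python3 -m json.tool | grep '"pool"'
    "pool": "boot-pool",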
Accessing cluster services over the network
With a non-clustered storage appliance, services such as NFS and SMB
are accessed using the IP address of the storage appliance itself. For
clustered systems this causes an issue: when the pool, and by
implication any services reliant on that pool, is migrated to
another node, those services become inaccessible via the original
appliance's IP address (as it no longer hosts them).
RSF-1 solves this problem by associating
a Virtual IP address (VIP) with a pool, and by implication any
services using the pool - the VIP is then migrated with the pool should
a failover occur. Clients access storage services using the configured
VIPs rather than the static IP address of the node itself, thereby
ensuring continued access no matter where the service is running.
When configuring a VIP in the cluster, either an IP address or a hostname
can be used. When using a hostname the cluster needs to resolve it to an
IP address. To ensure that this resolution is not dependent on external
naming services, RSF-1 automatically adds VIPs to the TrueNAS Host Name
Database on each node in the cluster.
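Because the VIP entries are held in the local host name database, their
resolution can be checked on each node without relying on external DNS
(the VIP hostname below is illustrative):
# getent hosts pool1-vip
10.0.0.127      pool1-vip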
Install RSF-1
To install RSF-1 on TrueNAS SCALE an installation tool is provided that performs the necessary steps:
Downloads the latest stable package
Creates a dataset in the boot-pool of the machine and mounts it to /opt/HAC - this allows the RSF-1 directory to be re-mounted after an update to TrueNAS
Installs bc (a required dependency)
Installs RSF-1 on to the /opt/HAC dataset
Re-enabling RSF-1 after a TrueNAS upgrade
When a TrueNAS cluster node is upgraded, any existing RSF-1 installation
requires remounting, as TrueNAS does not automatically re-import
custom datasets. Please see the section on Upgrading TrueNAS for
instructions.
On each cluster node perform the following steps:
In the TrueNAS GUI navigate to Network -> Global Configuration and
update the TrueNAS host name database with static entries for the
cluster nodes. This step is essential so host name lookup is not
reliant on any external services that could potentially fail. Each
node should have entries for all cluster nodes in the host name
database using the format:
IPaddress FQDN hostname
Here is an example configuration with two static entries in the
hosts file:
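For example, with two cluster nodes named node-a and node-b (addresses and
domain are illustrative):
10.0.0.1    node-a.example.com    node-a
10.0.0.2    node-b.example.com    node-b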
Start a command shell using the System Settings -> Shell menu item in the TrueNAS GUI.
Ensure this shell is running with root privilege rather than as the admin user: use the id command
to check that the UID is 0; if not, run sudo -i and enter the admin password when prompted.
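For example (the non-root UID shown is illustrative):
$ id -u
1000
$ sudo -i
# id -u
0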
Download the installation tool from the HAC website using wget, then run it:
# ./truenasscale-rsf-1-install
High-Availability RSF-1 TrueNAS installer/upgrade
TrueNAS version 24.10.2
1) Download and install/upgrade RSF-1 on this node
2) Quit without making any changes
Please select option 1 or 2:
Select option 1 from the menu; the tool will now download and install
the correct RSF-1 package for the version of TrueNAS it is running on:
Downloading required packages...
version file to determine which RSF-1 release to download...done
downloading the latest RSF-1 package for Truenas-Scale Version 24.10.2: rsf-1-2.3.3-f49f2f427b5fb36250f9bf0ac2dea0abb134af28-TRUENAS_SCALE-amd64.deb...done
downloading checksum file for rsf-1 TrueNAS package: rsf-1-2.3.3-f49f2f427b5fb36250f9bf0ac2dea0abb134af28-TRUENAS_SCALE-amd64.deb.sha512...done
checksum correct for downloaded package - OK to continue
Installing RSF-1 package (rsf-1-2.3.3-f49f2f427b5fb36250f9bf0ac2dea0abb134af28-TRUENAS_SCALE-amd64.deb)
2025 Mar 12 08:34:44 node-a RSF-1 sysdown: All services stopped, exiting.
Installation/update complete.
The RSF-1 webapp can be accessed at https://<hostname>:8330
These installation steps should be repeated for each cluster node.
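Once installed, the RSF-1 dataset mount and the main cluster service can be
verified from the shell on each node (a minimal check; the dataset is matched
by its /opt/HAC mountpoint as the exact dataset name may vary):
# zfs list -o name,mountpoint | grep /opt/HAC
# systemctl is-enabled rsf-1
enabled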
Configure pools
1. If you haven't already created your cluster storage pool(s), do so now
on one of the cluster nodes using the TrueNAS GUI. Note this must
be done using only drives from the shared storage3.
Shared Nothing
If creating a shared-nothing cluster, a pool will need to be created
on both nodes with the same name in the TrueNAS GUI. Step 2 below
is not required.
2. Check the pool is visible on the second node by running zpool import, then
compare the output against zpool status run on node-1 to confirm the
gptids match on both machines:
node-2# zpool import
pool: pool1
id: 1093288960321296894
state: ONLINE
action: The pool can be imported using its name or numeric identifier.
config:
pool1 ONLINE
raidz3-0 ONLINE
923587e1-78d8-11ee-b18a-3daea9f4e77b ONLINE
921f40f7-78d8-11ee-b18a-3daea9f4e77b ONLINE
925493e5-78d8-11ee-b18a-3daea9f4e77b ONLINE
9227da0b-78d8-11ee-b18a-3daea9f4e77b ONLINE
924e32b1-78d8-11ee-b18a-3daea9f4e77b ONLINE
92288c6d-78d8-11ee-b18a-3daea9f4e77b ONLINE
9217ead1-78d8-11ee-b18a-3daea9f4e77b ONLINE
92272408-78d8-11ee-b18a-3daea9f4e77b ONLINE
91fbc95e-78d8-11ee-b18a-3daea9f4e77b ONLINE
92188c29-78d8-11ee-b18a-3daea9f4e77b ONLINE
923443fa-78d8-11ee-b18a-3daea9f4e77b ONLINE
9253dcdc-78d8-11ee-b18a-3daea9f4e77b ONLINE
9241a769-78d8-11ee-b18a-3daea9f4e77b ONLINE
92339ba1-78d8-11ee-b18a-3daea9f4e77b ONLINE
922935a7-78d8-11ee-b18a-3daea9f4e77b ONLINE
9234e0c5-78d8-11ee-b18a-3daea9f4e77b ONLINE
948cdf41-78d8-11ee-b18a-3daea9f4e77b ONLINE
9490720c-78d8-11ee-b18a-3daea9f4e77b ONLINE
94961bc0-78d8-11ee-b18a-3daea9f4e77b ONLINE
948fc902-78d8-11ee-b18a-3daea9f4e77b ONLINE
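For comparison, an abbreviated zpool status on the first node should list the
same gptids (output trimmed; only the first few devices are shown):
node-1# zpool status pool1
  pool: pool1
 state: ONLINE
config:

        NAME                                      STATE     READ WRITE CKSUM
        pool1                                     ONLINE       0     0     0
          raidz3-0                                ONLINE       0     0     0
            923587e1-78d8-11ee-b18a-3daea9f4e77b  ONLINE       0     0     0
            921f40f7-78d8-11ee-b18a-3daea9f4e77b  ONLINE       0     0     0
            ...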
3. Finally, navigate to the RSF-1 secure web interface running on port
8330 on the node where the shared pool is imported5 to complete
the cluster configuration.
Create cluster
To begin configuration, click the Create/Destroy option on the
side menu (or the shortcut on the panel shown when first logging in).
The Cluster Create page scans for clusterable nodes (those running
RSF-1 that are not yet part of a cluster)
and presents them for selection:
Now enter the cluster name and description, and then
select the type of cluster being created (either shared-storage or
shared-nothing).
If setting up a shared-nothing cluster, an additional option to add a
node manually is shown at the bottom of the page. This is because
RSF-1 detects nodes on the local network, but for shared-nothing
clusters the partner node could be on a separate network or at a
separate location, and therefore may not be detected automatically1.
Trial Licenses
If any of the selected nodes have not been licensed,
a panel is shown to obtain 45-day trial licenses:
Next, the RSF-1 End User License Agreement (EULA) will
be displayed. Click accept to proceed:
API Keys
As of version 1.11, RSF-1 requires API Keys to interact with TrueNAS
to import/export pools. These can be created in the TrueNAS GUI.
As each key is created, add it to the RSF-1 API keys field for the relevant node. Once both
keys have been copied, click the TEST API KEYS button to verify the keys:
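If a key needs to be verified by hand, it can also be exercised directly
against the TrueNAS REST API (a hedged example using the standard v2.0
system/info endpoint; replace <api-key> with the key created above):
# curl -sk -H "Authorization: Bearer <api-key>" https://node-a/api/v2.0/system/info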
Finally click the Create Cluster button to initialize the cluster:
When the cluster has been created, you can enable support for disk
multipathing in RSF-1 if the disks have already been configured for multipathing:
This setting can be modified after cluster set-up if needed.
It can be found in Settings -> TrueNAS.
Enabling Multipath Support
If the nodes have been configured to use disk multipathing you
must enable multipath support, otherwise disk reservations
will not function correctly. Do not enable this option if disks are
configured for single-path access only.
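Whether multipathed disks are actually present can be checked on each node
with the standard multipath tooling before enabling the option (assuming
multipath-tools is installed; empty output means no multipath devices are
configured):
# multipath -ll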
Clustering a Pool
Highlight the desired pool
to be clustered (choose only pools marked CLUSTERABLE), then select Actions
followed by Cluster this pool:
Fill out the description and select the preferred node for the
service:
What is a preferred node
When a service is started, RSF-1 will initially attempt to run it on its
preferred node. Should that node be unavailable (node down,
service in manual mode, etc.) the service will be started
on the next available node.
With a shared-nothing pool the GUIDs for each pool will be shown:
To add a virtual hostname to the service click Add in the Virtual
Hostname panel. Enter the IP address, and optionally a hostname, in the
popup. For nodes with multiple network interfaces, use the drop-down
lists to select which interface the virtual hostname should be assigned
to. Click the Next button to continue:
Finally, click the Create button:
The pool will now show as CLUSTERED:
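Once the service is running, the VIP should appear as an additional address on
the selected interface of the node currently hosting the pool (the interface
name below is illustrative):
# ip addr show eno1 | grep inet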
Setting up shares on clustered pools
TrueNAS uses a local configuration to save details of shares
created for a pool (NFS, SMB, etc.). When a pool in a cluster
fails over from one node to another, that share information is not migrated
with the pool. For this reason, when setting up a new share on a clustered
pool, it is necessary to duplicate the share
configuration on each node in the cluster.
For example, in a cluster with two nodes, Node-A and Node-B,
with clustered pool nas-shares, to share
/mnt/nas-shares/user-data via NFS the following steps are required:
Start the service configured with the nas-shares pool on Node-A.
Add the NFS share:
Fail over the service to Node-B.
Again add the NFS share using the same parameters as were used
on Node-A.
Note - this configuration step need only be done once on the cluster
for each share (but must be repeated for each additional share).
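With the share configured on both nodes, a client can confirm the export is
reachable through the service VIP regardless of which node currently hosts the
pool (the VIP and export policy shown are illustrative):
client$ showmount -e 10.0.0.127
Export list for 10.0.0.127:
/mnt/nas-shares/user-data *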
Setting up iSCSI share on clustered pools
As with NFS and SMB shares, TrueNAS uses a local configuration to save
details of iSCSI shares created for a pool, and thus any new iSCSI share
created requires its configuration to be duplicated on
each node in the cluster.
For example, in a cluster with two nodes, Node-A and Node-B,
to create an iSCSI share:
On Node-A, start the service you want to create iSCSI shares on.
Navigate to Datasets and create a Zvol; in this example
we have created a 1GB zvol called iscsishare using default options.
Navigate to Shares -> Block (iSCSI) Shares Targets and create an iSCSI
share using the wizard:
In step 2 (Portal), make sure the IP address entered
is 0.0.0.0 (this is so it can be accessed using the cluster
VIP that moves with the service in the event of a failover).
If desired, provide initiators/authorized networks
in the next step (in this example
it's been left blank so any client/network
can connect to the target). Click SAVE:
Now move the service over to Node-B, and re-create
the iSCSI share using the same parameters as used on Node-A.
Your iSCSI share should now be discoverable via the VIP (in this
example the service VIP is 10.0.0.127)6:
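For example, from a Linux initiator, discovery through the VIP can be checked
with open-iscsi (the target name follows the default TrueNAS IQN noted in the
footnotes; the exact suffix and portal group tag may differ):
client# iscsiadm -m discovery -t sendtargets -p 10.0.0.127
10.0.0.127:3260,1 iqn.2005-10.org.freenas.ctl:iscsishare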
Synchronising iSCSI NAA identifiers across cluster nodes
Available in RSF-1 version 2.1+
Some iSCSI clients use identifiers to uniquely identify devices. They
are based on specific storage standards in one of the following formats:
eui.xxx
naa.xxx
t10.xxx
Whenever an iSCSI Extent is created, TrueNAS automatically generates and assigns a unique serial number and NAA.
In a clustered environment both these fields must be the same on each node for a specific extent,
so that on pool failover clients are presented with the same identifiers for uninterrupted operation.
However, as each iSCSI extent is created in isolation on each cluster node, initially the serial number
and NAA will be different.
In the following example the extent iscsishare has been created on both cluster nodes and, as can be seen,
each node has a different serial number and NAA:
To synchronise these fields, navigate to Settings -> TrueNAS in the RSF-1 webapp
and click the SYNCHRONISE button to replicate the identifiers to all cluster nodes:
Note
It is necessary to synchronise identifiers every time new extents are created.
Synchronisation can be run on any node, as often as required.
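From an iSCSI client, the identifiers presented for a mapped device can be
compared before and after a failover; once the extents have been synchronised
they should be identical (the device path below is illustrative):
client# lsblk -o NAME,SERIAL,WWN /dev/sdb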
Upgrading TrueNAS
Whenever a TrueNAS installation is upgraded to a new revision, it is necessary to
re-enable the existing RSF-1 installation to complete the TrueNAS
upgrade; failure to do so means the RSF-1 cluster processes
will not be started and the host will be unable to re-join the
cluster.
Note
The existing RSF-1 configuration and licenses are
preserved after a TrueNAS upgrade.
The installation tool, downloaded as part of the original install,
is used to remount the cluster dataset. Either use
the existing tool
or re-download by following steps 2-5 of the installation process
documented above.
Run the installation tool:
# ./truenasscale-rsf-1-install
High-Availability RSF-1 TrueNAS installer/upgrade
TrueNAS version 24.10.2
1) Download and install/upgrade RSF-1 on this node
2) Remount an existing RSF-1 installation after a TrueNAS upgrade
3) Quit without making any changes
Please select option 1, 2 or 3:
Select option 2 to perform the restore:
Mounting /opt/HAC
Restoring RSF-1 systemd service units
Re-enabling RSF-1 systemd services:
Enabling [rsf-1]
Enabling [rsf-rest]
Enabling [rsf-gui]
Enabling [rpcstmfha]
Enabling [rsf-zfsrepl]
Enabling [hac-tls]
Enabling [rsf-failfast]
Installation/update complete.
The RSF-1 webapp can be accessed at https://node-a:8330
The RSF-1 post-upgrade step must be repeated for each TrueNAS node upgraded.
1. Startup/running performance can also be improved by using SSD or NVMe disks as the boot drive(s).
2. Because if this service is unavailable when the cluster tries to resolve the hostname, service startup cannot continue in a normal fashion.
3. If any drives in a clustered pool are local to a node, i.e. do not reside in the shared storage, this will result in a failure to import on any other cluster node as that local drive will be inaccessible. For this reason it is mandatory that all cluster drives reside in shared storage.
4. Use the GUI >_ Shell menu item to access the CLI.
5. If the pool is imported on, say, truenas-node2 then the URL is https://truenas-node2:8330.
6. TrueNAS uses the default global IQN of iqn.2005-10.org.freenas.ctl suffixed with the name of the share.