COMSTAR Target
Configuring COMSTAR as an iSCSI target host for Solaris, OmniOS, OpenIndiana, Illumos etc.
This document describes how to configure OmniOS as a highly available iSCSI target host in an RSF-1 clustered environment.
In this example a previously created ZFS pool named pool1 is used as backing store for the iSCSI targets. This pool is in turn clustered on two nodes, live01 and live02. In the cluster, iSCSI targets are exposed to clients (initiators) via a Virtual IP address or VIP (also referred to as a floating IP address). This virtual IP address is bound to the backing store and moves, or floats, with the storage as it fails over between cluster nodes. The combination of backing store, application (iSCSI in this case) and virtual IP address is referred to as an RSF-1 clustered service.
OmniOS uses the COMSTAR framework to provide its iSCSI services. For clustering, the utility /opt/HAC/RSF-1/bin/stmfha is used to configure COMSTAR rather than the system supplied utilities stmfadm and itadm. This is because the actions of stmfha are performed cluster wide, unlike the system utilities, which operate on a single node only. The stmfha command implements a superset of the operations available in both stmfadm and itadm.
Note that in this walkthrough pool specific operations are performed on the host on which the clustered service, and by implication the pool, is running.
-
On both nodes install and enable the iSCSI target package. If the required package is already installed you will receive the message "No updates necessary for this image".
# pkg install network/iscsi/target
# svcadm enable svc:/system/stmf:default
# svcadm enable svc:/network/iscsi/target:default
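To confirm that both SMF services are online on each node, they can be queried with the standard svcs utility (the output shown is illustrative):
# svcs stmf iscsi/target
STATE          STIME    FMRI
online         10:15:32 svc:/system/stmf:default
online         10:15:33 svc:/network/iscsi/target:default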
-
Create a ZFS block device (zvol) to use as the backing storage for the iSCSI target. The -V option is required to create a volume of the given size (without it the zfs create command will attempt to create a ZFS file system, rather than a volume within a ZFS file system). In this example the volume zvol1 is created as part of storage pool pool1 with a size of 1GB:
# zfs create -V 1G pool1/zvol1
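To verify the volume exists and has the expected size, list it with the standard zfs utility (output illustrative; the AVAIL figure depends on your pool):
# zfs list -t volume pool1/zvol1
NAME          USED  AVAIL  REFER  MOUNTPOINT
pool1/zvol1  1.03G  97.9G    12K  -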
-
Create a Target Portal Group (TPG) using the VIP address configured for use with pool1, along with a port on which iSCSI services will listen for incoming requests from clients. In this example the VIP address is 192.168.5.10 and the port 3260 (the default port for iSCSI services as documented in RFC 3720):
# stmfha create-tpg TPG01 192.168.5.10:3260
live01: create-tpg: TPG01 successfully created
live02: create-tpg: TPG01 successfully created
Because this is a clustered iSCSI configuration the target portal group is created on both nodes in the cluster - this symmetric configuration is required for iSCSI failover. Note that the TPG will only be active on any one node at any one time due to the cluster virtual IP address used when the TPG was created.
To check the TPGs configured in the cluster run the following command (note the -v verbose argument to retrieve as much detail as possible):
# stmfha list-tpg -v
live01:
TARGET PORTAL GROUP  PORTAL COUNT
TPG01                1
    portals: 192.168.5.10:3260
live02:
TARGET PORTAL GROUP  PORTAL COUNT
TPG01                1
    portals: 192.168.5.10:3260
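Although the TPG definition exists on both nodes, it is only reachable on the node currently holding the VIP. If in doubt, the node with the address plumbed can be identified with ipadm (output illustrative; the interface and address object names shown are assumptions):
# ipadm show-addr | grep 192.168.5.10
e1000g0/v4vip     static   ok           192.168.5.10/24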
-
Create an iSCSI target. In this example CHAP authentication is used (--auth-method chap), meaning authentication is required to connect to this target. The target has been given the alias zvol1-iscsi to assist in identifying it, and finally it is associated with the TPG created in the previous step (--tpg TPG01 - note when creating a target it is possible to specify multiple TPG memberships using the format --tpg TPG01,TPG03,ACCTPG):
# stmfha create-target --auth-method chap --chap-secret not-so-secret --alias zvol1-iscsi --tpg TPG01
live01: Target iqn.1995-10.com.high-availability:02:b11f6a06-c9bd-cfeb-ea26-885a25d080c4 successfully created
live02: Target iqn.1995-10.com.high-availability:02:b11f6a06-c9bd-cfeb-ea26-885a25d080c4 successfully created
To list the targets:
# stmfha list-target -v
live01:
TARGET NAME                                                                 STATE   SESSIONS
iqn.1995-10.com.high-availability:02:b11f6a06-c9bd-cfeb-ea26-885a25d080c4  online  0
    alias:            zvol1-iscsi
    auth:             chap
    targetchapuser:   -
    targetchapsecret: set
    tpg-tags:         TPG01 = 2
live02:
TARGET NAME                                                                 STATE   SESSIONS
iqn.1995-10.com.high-availability:02:b11f6a06-c9bd-cfeb-ea26-885a25d080c4  online  0
    alias:            zvol1-iscsi
    auth:             chap
    targetchapuser:   -
    targetchapsecret: set
    tpg-tags:         TPG01 = 2
- Create a target group (TG) to which your target will be added:
# stmfha create-tg TG01
live01: Target group created
live02: Target group created
To list the target groups:
# stmfha list-tg -v
live01:
Target Group: TG01
live02:
Target Group: TG01
- Next associate the newly created target group with the target. To do this the target must first be offlined:
# stmfha offline-target iqn.1995-10.com.high-availability:02:b11f6a06-c9bd-cfeb-ea26-885a25d080c4
live01: Target offlined
live02: Target offlined
Now add your target to the target group:
# stmfha add-tg-member --group-name TG01 iqn.1995-10.com.high-availability:02:b11f6a06-c9bd-cfeb-ea26-885a25d080c4
live01: Target group member added
live02: Target group member added
Listing the target group should now show the target added to the target group:
# stmfha list-tg -v
live01:
Target Group: TG01
    Member: iqn.1995-10.com.high-availability:02:b11f6a06-c9bd-cfeb-ea26-885a25d080c4
live02:
Target Group: TG01
    Member: iqn.1995-10.com.high-availability:02:b11f6a06-c9bd-cfeb-ea26-885a25d080c4
Finally bring your target back online:
# stmfha online-target iqn.1995-10.com.high-availability:02:b11f6a06-c9bd-cfeb-ea26-885a25d080c4
live01: Target onlined
live02: Target onlined
- Create a logical unit using the zvol created earlier (/dev/zvol/rdsk/pool1/zvol1 - note the full path of the zvol should be provided to the create-lu subcommand). The reason "rdsk" is used rather than "dsk" is that raw disk devices transfer data to and from the disk directly, whereas block devices transfer data to an in-memory buffer cache first and flush it to the disk some time later; at a minimum this causes performance issues, and it also increases the chance of lost data (data held in memory can be lost on system failure). The two device paths are contrasted in the sketch after this step.
The logical unit must be created on the node the service is running on (i.e. the node on which the pool is imported). In the previous steps the iSCSI component parts (target groups, target portal groups etc.) were created on both nodes, as that part of the iSCSI configuration can, and should, be shared across the cluster. However, because the logical unit references the underlying physical volume, it is only created, and is only visible, on any one node at any one time - the cluster will handle migration of the logical unit as part of failover.
# stmfha create-lu /dev/zvol/rdsk/pool1/zvol1
live01: Logical unit created: 600144F09CF9DD070000614DEADE0001
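The raw/block distinction is visible in the device nodes themselves: the rdsk path resolves to a character (raw) device and the dsk path to a block device. A quick illustrative check (permissions, major/minor numbers and timestamps will differ on your system):
# ls -lL /dev/zvol/rdsk/pool1/zvol1 /dev/zvol/dsk/pool1/zvol1
crw-------   1 root  sys  264,  2 Oct  1 10:20 /dev/zvol/rdsk/pool1/zvol1
brw-------   1 root  sys  264,  2 Oct  1 10:20 /dev/zvol/dsk/pool1/zvol1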
- Finally add a view to the logical unit 600144F09CF9DD070000614DEADE0001 using the target group TG01, again on the node where the service is running. A view is an association of a host group, a target group and a logical unit number with a logical unit. A host group is a group of initiators that are allowed access to the logical unit - when unspecified, as in this example, access is granted to all initiators. The same is true when no target group is specified. If no logical unit number is specified the system automatically assigns one:
# stmfha add-view -t TG01 600144F09CF9DD070000614DEADE0001
live01: 600144F09CF9DD070000614DEADE0001: view entry 0 created for LUN 0
By creating this view all targets declared in target group TG01 have access to the logical unit, and similarly any of the targets in that group that also appear in a target portal group (in this example TPG01) make the logical unit discoverable to external initiators.
Use the list-lu subcommand to check the completed iSCSI view. Note that the view only exists on the node where the service is running; when the cluster fails the service over to another node, part of the startup procedure will recreate the view there.
# stmfha list-lu -v
live01:
LU Name: 600144F09CF9DD070000614DEADE0001
    Operational Status : Online
    Provider Name      : sbd
    Alias              : /dev/zvol/rdsk/pool1/zvol1
    View Entry Count   : 1
    View-entry 0       : Host group 'all'  Target group 'TG01'  LUN '0'
    Data File          : /dev/zvol/rdsk/pool1/zvol1
    Meta File          : not set
    Size               : 1073741824
    Block Size         : 512
    Management URL     : not set
    Vendor ID          : SUN
    Product ID         : COMSTAR
    Serial Num         : not set
    Write Protect      : Disabled
    Writeback Cache    : Disabled
    Access State       : Active
live02:
In the above example a single view has been created, labeled View-entry 0. Because no host group was specified, the wildcard all is displayed and the system has assigned logical unit number 0.
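If access should instead be limited to specific client initiators, a host group can be created and referenced in the view. A minimal sketch, assuming stmfha mirrors the stmfadm host group subcommands (the initiator IQN below is hypothetical):
# stmfha create-hg HG01
# stmfha add-hg-member --group-name HG01 iqn.1993-08.org.example:01:client-host
# stmfha add-view -h HG01 -t TG01 600144F09CF9DD070000614DEADE0001
With such a view in place only the initiators listed in HG01 can access the logical unit through the targets in TG01.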
Inspecting the configuration
Once the target has been created, the stmfha command can be used to inspect the configuration:
-
First of all list the targets in the system:
# stmfha list-target -v
live01:
TARGET NAME                                                                 STATE   SESSIONS
iqn.1995-10.com.high-availability:02:b11f6a06-c9bd-cfeb-ea26-885a25d080c4  online  0
    alias:            zvol1-iscsi
    auth:             chap
    targetchapuser:   -
    targetchapsecret: set
    tpg-tags:         TPG01 = 2
live02:
TARGET NAME                                                                 STATE   SESSIONS
iqn.1995-10.com.high-availability:02:b11f6a06-c9bd-cfeb-ea26-885a25d080c4  online  0
    alias:            zvol1-iscsi
    auth:             chap
    targetchapuser:   -
    targetchapsecret: set
    tpg-tags:         TPG01 = 2
This shows the targets available on both systems, using CHAP authentication, and belonging to the target portal group TPG01.
-
Next list the target portal groups so the targets can be tied to an IP/port address:
# stmfha list-tpg -v
live01:
TARGET PORTAL GROUP  PORTAL COUNT
TPG01                1
    portals: 192.168.5.10:3260
live02:
TARGET PORTAL GROUP  PORTAL COUNT
TPG01                1
    portals: 192.168.5.10:3260
This shows us the target iqn.1995-10.com.high-availability:02:b11f6a06-c9bd-cfeb-ea26-885a25d080c4 can be accessed via IP address 192.168.5.10 on port 3260.
-
At this point we know the target and we know the IP address/port it will be discoverable on. Next check which target groups the target is a member of:
# stmfha list-tg -v
live01:
Target Group: TG01
    Member: iqn.1995-10.com.high-availability:02:b11f6a06-c9bd-cfeb-ea26-885a25d080c4
live02:
Target Group: TG01
    Member: iqn.1995-10.com.high-availability:02:b11f6a06-c9bd-cfeb-ea26-885a25d080c4
This shows us the target is a member of target group TG01.
-
Now, list the logical units to show which views they are members of, along with associated target groups:
# stmfha list-lu -v
live01:
LU Name: 600144F09CF9DD070000614DEADE0001
    Operational Status : Online
    Provider Name      : sbd
    Alias              : /dev/zvol/rdsk/pool1/zvol1
    View Entry Count   : 1
    View-entry 0       : Host group 'all'  Target group 'TG01'  LUN '0'
    Data File          : /dev/zvol/rdsk/pool1/zvol1
    Meta File          : not set
    Size               : 1073741824
    Block Size         : 512
    Management URL     : not set
    Vendor ID          : SUN
    Product ID         : COMSTAR
    Serial Num         : not set
    Write Protect      : Disabled
    Writeback Cache    : Disabled
    Access State       : Active
live02:
In this example the zvol /dev/zvol/rdsk/pool1/zvol1 has one view, which is a member of target group TG01, and that target group has target iqn.1995-10.com.high-availability:02:b11f6a06-c9bd-cfeb-ea26-885a25d080c4 as its member. Finally, as that target is discoverable via the target portal group TPG01 on IP address 192.168.5.10, port 3260, the path the initiator takes to the underlying storage can be established.
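As a final check the path can be exercised end to end from a client. A minimal sketch using the native Solaris/illumos initiator is shown below; the exact CHAP settings depend on how the secret was registered on the target side, and other client operating systems will use different tools (output abbreviated and illustrative):
# iscsiadm modify initiator-node --authentication CHAP
# iscsiadm modify initiator-node --CHAP-secret
Enter secret:
Re-enter secret:
# iscsiadm add discovery-address 192.168.5.10:3260
# iscsiadm modify discovery --sendtargets enable
# iscsiadm list target
Target: iqn.1995-10.com.high-availability:02:b11f6a06-c9bd-cfeb-ea26-885a25d080c4
        Alias: zvol1-iscsi
If discovery succeeds the logical unit appears to the client as an ordinary disk, and because it is addressed via the VIP the client needs no reconfiguration when the service fails over.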
Troubleshooting
My service is going "broken_unsafe" after creating iSCSI logical units.
This can happen when an incorrect device path was used to create the logical units. Ensure the path begins with /dev/zvol/rdsk (the raw device), not /dev/zvol/dsk.
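A quick way to audit the paths of existing logical units is to filter the Data File field from the list-lu output (an illustrative check):
# stmfha list-lu -v | grep 'Data File'
    Data File          : /dev/zvol/rdsk/pool1/zvol1
Any entry showing /dev/zvol/dsk rather than /dev/zvol/rdsk points at the block device and is the likely culprit.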