Proxmox Cluster Configuration | High Availability

Proxmox

Introduction

This document describes three common ways of sharing storage from an RSF-1 cluster to a Proxmox server:

  1. An NFS share that is directly mounted on the Proxmox server.

  2. An SMB share that is directly mounted on the Proxmox server.

  3. An iSCSI target through which Proxmox can dynamically create ZFS zvols and access them via an iSCSI Qualified Name (IQN). This approach is low maintenance, as it allows Proxmox to create LUNs on demand.

Adding an NFS Share to Proxmox

The following steps show how to share the dataset snpool1/nfs via NFS with a Proxmox host.

Proxmox Image 1
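
On the cluster side, the export can also be created from the shell. A minimal sketch using ZFS's native NFS sharing, assuming the share is not already managed through the RSF-1 interface:

    # zfs create snpool1/nfs              # create the dataset to be exported
    # zfs set sharenfs=on snpool1/nfs     # enable the NFS export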

  1. On Proxmox, navigate to Datacenter -> Storage -> Add -> NFS:

    Proxmox Image 2

  2. Fill out the relevant information, using the service VIP as the Server address, and select the NFS share from the Export drop-down list:

    Proxmox Image 3

  3. When finished, click Add. The share is ready to use.

    Proxmox Image 4
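
The same storage can be added from the Proxmox command line with pvesm. A sketch, assuming the service VIP 10.6.19.21 and the export /snpool1/nfs from the examples above (the storage ID nfs-cluster is an example):

    # pvesm add nfs nfs-cluster --server 10.6.19.21 --export /snpool1/nfs --content images,iso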

ZFS over iSCSI

Proxmox has the ability to use an external ZFS-based cluster as a storage backend for its virtual machines. When ZFS over iSCSI is configured correctly, Proxmox can automate the process of creating a ZFS volume and then using that volume as an installation/boot disk for a virtual machine.

The ZFS over iSCSI approach offers many advantages:

  1. Because the iSCSI protocol works at the block level, it can generally provide higher performance than NFS/SMB by manipulating the remote disk directly.
  2. Multiple Proxmox servers can consolidate their storage on a single, independent, clustered storage server that can grow with the environment.
  3. There is no interdependency between Proxmox servers for the underlying storage.
  4. It leverages the benefits of clustered storage, such as redundant backups and hardware acceleration (e.g. NVMe for cache/ZIL).
  5. Native ZFS snapshots and cloning are available via the Proxmox ZFS over iSCSI interface.

Note

Volumes created using ZFS over iSCSI can also be used as additional storage for existing VMs.
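
For example, a new disk can be attached to an existing VM from the Proxmox command line. A sketch, assuming VM ID 100 and the storage ID cluster-zfs (both example values):

    # qm set 100 --scsi1 cluster-zfs:32   # allocate a new 32 GB volume and attach it as scsi1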

To configure ZFS over iSCSI a few steps are required:

  1. Identify and configure the storage pool to be used as the backend storage.
  2. Configure passwordless SSH access from the Proxmox server(s) to the storage cluster.
  3. Create an iSCSI target for use by Proxmox.
  4. Bind the storage to Proxmox.

Configure ZFS cluster service

Configure SSH access

To use ZFS over iSCSI, Proxmox requires passwordless SSH access to the cluster. It uses this channel to create ZFS volumes, snapshots and backups, and to associate iSCSI LUN connections with ZFS volumes created via the configured target.

  1. Create the keys on the Proxmox side, named after the clustered pool's VIP. The IP address used here MUST be the VIP and MUST be the one configured later in Proxmox:

    # mkdir /etc/pve/priv/zfs
    # ssh-keygen -f /etc/pve/priv/zfs/10.6.19.21_id_rsa                      # create key on Proxmox
    # ssh-copy-id -i /etc/pve/priv/zfs/10.6.19.21_id_rsa.pub root@10.6.19.3  # copy public key to each cluster node
    # ssh-copy-id -i /etc/pve/priv/zfs/10.6.19.21_id_rsa.pub root@10.6.19.2

    To test that Proxmox has access to the cluster nodes, SSH in via the VIP:

    # ssh -i /etc/pve/priv/zfs/10.6.19.21_id_rsa root@10.6.19.21

  2. Avoid the host key verification prompt. The first time you connect, SSH asks you to confirm the host's identity; this interaction must be avoided, as it will block automatic actions by Proxmox.

    Host Key Identification Errors

    In the event of a node failure with a running service, the VIP is automatically moved to another node in the cluster. This causes host key identification failures, because the underlying host behind the VIP has changed.

    This problem must be addressed, as it will stop Proxmox connecting to the cluster via the VIP automatically. There are two options:

    1. Disable host key verification entirely on the Proxmox host(s).
    2. Ensure all cluster nodes have the same host keys (the /etc/ssh/ssh_host_* files) - this is the recommended approach.

Copy the host keys from one node to the other; it does not matter which node you copy from, only that the keys are the same on both.
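
A minimal sketch of the copy, assuming the cluster node addresses 10.6.19.2 and 10.6.19.3 from the earlier example, run from the first node (the SSH service name may be ssh or sshd depending on the distribution):

    # scp /etc/ssh/ssh_host_* root@10.6.19.3:/etc/ssh/   # replicate host keys to the second node
    # ssh root@10.6.19.3 systemctl restart ssh           # restart SSH so the new keys take effect

With the keys in place, verify the setup: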

  1. SSH from the Proxmox host to the cluster using the VIP address and accept the host key when prompted.
  2. Log out and SSH in again - you should NOT be asked for a password.
  3. Fail the pool over to the other node and retry the SSH connection to confirm the host keys match.


Configure a cluster iSCSI target
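
Target creation is normally handled through the RSF-1 interface and depends on the target framework in use. As a minimal sketch, assuming a Linux cluster node using the LIO framework via targetcli and an example IQN (portal and ACL configuration omitted):

    # targetcli /iscsi create iqn.2003-01.com.example.cluster:proxmox
    # targetcli saveconfig

Proxmox creates and attaches LUNs on this target itself, over the SSH channel configured above, so only the bare target needs to exist in advance.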

Configure Proxmox

  1. To add the configured target to Proxmox, navigate in the Proxmox GUI to Datacenter -> Storage -> Add -> ZFS over iSCSI and fill in the fields with the relevant information:

    Proxmox Image 1

    When done, click Add.
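
The storage can also be defined from the Proxmox command line. A sketch, assuming the VIP, pool and IQN from the examples above and the LIO provider (substitute your own values; the tpg1 target portal group is an example):

    # pvesm add zfs cluster-zfs --portal 10.6.19.21 --pool snpool1 \
          --target iqn.2003-01.com.example.cluster:proxmox \
          --iscsiprovider LIO --lio_tpg tpg1 --sparse 1 --content images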

Acceptance tests

  1. Create a VM using the new storage and confirm it installs and boots.
  2. Show the zvols on the cluster to confirm the VM's disk was created.
  3. Take a VM snapshot and confirm it appears on the cluster.
  4. Fail the pool over and confirm the VM's storage remains accessible.
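
A sketch of the corresponding command-line checks, assuming VM ID 100 and the pool snpool1 (both example values):

    # qm start 100                        # on Proxmox: start the test VM
    # zfs list -t volume -r snpool1       # on the cluster: the VM's zvol should be listed
    # qm snapshot 100 test1               # on Proxmox: take a snapshot of the VM
    # zfs list -t snapshot -r snpool1     # on the cluster: the snapshot should appear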