Frequently Asked Question

Building an iSCSI-NFS Proxy for connecting iSCSI to Proxmox Nodes

iSCSI-NFS Proxy Setup Guide

To use an iSCSI LUN with more than one Proxmox node, you MUST proxy it through a single host. This is because ext4, BTRFS, XFS and ZFS are not 'cluster aware' filesystems and cannot safely have more than one physical host accessing the same storage volume at the same time. This document assumes that your LUN is empty and unused; if it's not, don't proceed with the formatting section. In this example we're using AlmaLinux 10, but the principle is the same in your chosen distro.

1. Install iSCSI Initiator

[root@localhost ~]# dnf install iscsi-initiator-utils -y

Last metadata expiration check: 1:15:10 ago on Tue Jul  8 10:12:27 2025.
Dependencies resolved.
Running scriptlet: iscsi-initiator-utils-6.2.1.9-22.gita65a472.el10.x86_64
Created symlink '/etc/systemd/system/sysinit.target.wants/iscsi-starter.service' → '/usr/lib/systemd/system/iscsi-starter.service'.
Created symlink '/etc/systemd/system/sockets.target.wants/iscsid.socket' → '/usr/lib/systemd/system/iscsid.socket'.
Created symlink '/etc/systemd/system/sysinit.target.wants/iscsi-onboot.service' → '/usr/lib/systemd/system/iscsi-onboot.service'.

Installed:
  iscsi-initiator-utils-6.2.1.9-22.gita65a472.el10.x86_64
  iscsi-initiator-utils-iscsiuio-6.2.1.9-22.gita65a472.el10.x86_64
  isns-utils-libs-0.103-1.el10.x86_64

Complete!

2. Verify Installation and Discover iSCSI Targets

[root@localhost ~]# iscsiadm --version
iscsiadm version 6.2.1.9

[root@localhost iscsi]# iscsiadm -m discovery -t sendtargets -p 10.1.1.4

10.1.1.4:3260,1 iqn.2000-01.com.synology:ARCHIVE.Target-1.proxmox

In this example we are querying the host 10.1.1.4 and we see one target from a Synology SAN device, which we'll use for the rest of the example. Obviously yours will be different to this.

3. Connect to iSCSI Target

[root@localhost iscsi]# iscsiadm -m node -T iqn.2000-01.com.synology:ARCHIVE.Target-1.proxmox -p 10.1.1.4 --login

Logging in to [iface: default, target: iqn.2000-01.com.synology:ARCHIVE.Target-1.proxmox, portal: 10.1.1.4,3260]
Login to [iface: default, target: iqn.2000-01.com.synology:ARCHIVE.Target-1.proxmox, portal: 10.1.1.4,3260] successful.

If your SAN solution requires CHAP authentication then you will have to do some extra work. The /etc/iscsi/iscsid.conf file needs the following parameters added:

  • node.session.auth.authmethod = CHAP
  • node.session.auth.username = your_username
  • node.session.auth.password = your_password

If you're using mutual CHAP (rare) then you may also need to set node.session.auth.username_in and node.session.auth.password_in. There are other fine-tuning parameters which may be necessary in very rare cases, so check the documentation if needed.
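
If you'd rather not put credentials in the global iscsid.conf, another option (a sketch using our example target; substitute your own credentials) is to set them on the individual node record using the same --op update syntax used later in this guide, then log in again:

[root@localhost iscsi]# iscsiadm -m node -T iqn.2000-01.com.synology:ARCHIVE.Target-1.proxmox -p 10.1.1.4 --op update -n node.session.auth.authmethod -v CHAP
[root@localhost iscsi]# iscsiadm -m node -T iqn.2000-01.com.synology:ARCHIVE.Target-1.proxmox -p 10.1.1.4 --op update -n node.session.auth.username -v your_username
[root@localhost iscsi]# iscsiadm -m node -T iqn.2000-01.com.synology:ARCHIVE.Target-1.proxmox -p 10.1.1.4 --op update -n node.session.auth.password -v your_password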

4. Verify Connection and Check Block Devices

[root@localhost iscsi]# iscsiadm -m session

tcp: [1] 10.1.1.4:3260,1 iqn.2000-01.com.synology:ARCHIVE.Target-1.proxmox (non-flash)

[root@localhost iscsi]# lsblk

NAME               MAJ:MIN RM  SIZE RO TYPE MOUNTPOINTS
sda                  8:0    0   32G  0 disk 
├─sda1               8:1    0    1M  0 part 
├─sda2               8:2    0    1G  0 part /boot
└─sda3               8:3    0   31G  0 part 
  ├─almalinux-root 253:0    0 27.8G  0 lvm  /
  └─almalinux-swap 253:1    0  3.2G  0 lvm  [SWAP]
sdb                  8:16   0    2T  0 disk 
sr0                 11:0    1  1.3G  0 rom

Here we can see our iSCSI LUN connected as sdb (/dev/sdb); yours may well be different.
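
If you're not sure which device is the iSCSI LUN, one quick way to check is to ask lsblk for the transport type, or to ask iscsiadm which disk is attached to the session:

[root@localhost iscsi]# lsblk -S -o NAME,TRAN,SIZE,MODEL
[root@localhost iscsi]# iscsiadm -m session -P 3 | grep "Attached scsi disk"

The iSCSI LUN will show a TRAN value of iscsi in the lsblk output.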

5. Configure iSCSI Service for Automatic Startup

[root@localhost iscsi]# systemctl enable iscsid

Created symlink '/etc/systemd/system/multi-user.target.wants/iscsid.service' → '/usr/lib/systemd/system/iscsid.service'.

[root@localhost iscsi]# systemctl start iscsid

[root@localhost iscsi]# iscsiadm -m node -T iqn.2000-01.com.synology:ARCHIVE.Target-1.proxmox -p 10.1.1.4 --op update -n node.startup -v automatic
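
To confirm the change took effect you can dump the node record and check node.startup; it should now read something like this:

[root@localhost iscsi]# iscsiadm -m node -T iqn.2000-01.com.synology:ARCHIVE.Target-1.proxmox -p 10.1.1.4 | grep node.startup
node.startup = automatic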

REMEMBER: In this example ours is /dev/sdb; yours may be different.

6. Format and Mount iSCSI Device

ONLY IF THE LUN IS EMPTY! In this example we're using ext4, but you could of course format with XFS, ZFS or BTRFS (after installing the applicable support packages). Double-check you have the right block device before doing this; there's no way back if the device already had a filesystem.
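
One way to reassure yourself that the device really is blank is to check it for existing filesystem signatures first; if both of these commands print nothing for the device, no known signature was found:

[root@localhost iscsi]# blkid /dev/sdb
[root@localhost iscsi]# wipefs /dev/sdb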

[root@localhost iscsi]# mkfs.ext4 /dev/sdb
mke2fs 1.47.1 (20-May-2024)
Creating filesystem with 536870912 4k blocks and 134217728 inodes
Filesystem UUID: c0c74140-f96a-4f8e-8189-164f25bacfca
Superblock backups stored on blocks: 
    32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208, 
    4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968, 
    102400000, 214990848, 512000000

Allocating group tables: done
Writing inode tables: done
Creating journal (262144 blocks): done
Writing superblocks and filesystem accounting information: done

[root@localhost iscsi]# mkdir /mnt/iscsi
[root@localhost iscsi]# mount /dev/sdb /mnt/iscsi

7. Verify Mount and Configure Persistent Mounting

[root@localhost iscsi]# df -h
Filesystem                  Size  Used Avail Use% Mounted on
/dev/mapper/almalinux-root   28G  1.4G   27G   5% /
devtmpfs                    4.0M     0  4.0M   0% /dev
tmpfs                       1.8G     0  1.8G   0% /dev/shm
tmpfs                       731M  8.6M  723M   2% /run
tmpfs                       1.0M     0  1.0M   0% /run/credentials/systemd-journald.service
/dev/sda2                   960M  227M  734M  24% /boot
tmpfs                       1.0M     0  1.0M   0% /run/credentials/getty@tty1.service
tmpfs                       366M  4.0K  366M   1% /run/user/0
/dev/sdb                    2.0T   28K  1.9T   1% /mnt/iscsi

Add to /etc/fstab for persistent mounting:

[root@localhost iscsi]# nano /etc/fstab

Add the following line:

/dev/sdb /mnt/iscsi ext4 _netdev 0 0
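
Device names like /dev/sdb can change between boots, so an alternative worth considering is to reference the filesystem by its UUID instead, using the UUID printed by mkfs.ext4 above (yours will differ):

UUID=c0c74140-f96a-4f8e-8189-164f25bacfca /mnt/iscsi ext4 _netdev 0 0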

Note: After editing fstab, reboot the system to test persistent mounting.
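
If you want to sanity-check the fstab entry without rebooting first, one way is to unmount the filesystem and let mount re-read it from fstab:

[root@localhost iscsi]# umount /mnt/iscsi
[root@localhost iscsi]# mount -a
[root@localhost iscsi]# df -h /mnt/iscsi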

8. Install and Configure NFS Server

[root@localhost iscsi]# dnf install nfs-utils

Last metadata expiration check: 1:46:15 ago on Tue Jul  8 10:12:27 2025.
Installed:
  gssproxy-0.9.2-10.el10.x86_64
  libev-4.33-14.el10.x86_64
  libnfsidmap-1:2.8.2-3.el10.x86_64
  libtirpc-1.3.5-1.el10.x86_64
  libverto-libev-0.3.2-10.el10.x86_64
  nfs-utils-1:2.8.2-3.el10.x86_64
  quota-1:4.09-9.el10.x86_64
  quota-nls-1:4.09-9.el10.noarch
  rpcbind-1.2.7-3.el10.x86_64
  sssd-nfs-idmap-2.10.2-3.el10_0.2.x86_64

Complete!

9. Enable and Start NFS Server

[root@localhost iscsi]# systemctl enable nfs-server
Created symlink '/etc/systemd/system/multi-user.target.wants/nfs-server.service' → '/usr/lib/systemd/system/nfs-server.service'.

[root@localhost iscsi]# systemctl start nfs-server

10. Configure NFS Export

[root@localhost iscsi]# chown nobody:nobody /mnt/iscsi
[root@localhost iscsi]# chmod 777 /mnt/iscsi
[root@localhost iscsi]# vi /etc/exports

Add the following line:

/mnt/iscsi 10.1.0.0/22(sync,wdelay,hide,no_subtree_check,sec=sys,rw,insecure,root_squash,no_all_squash)

This line means that /mnt/iscsi is accessible from any host in the 10.1.0.0/22 subnet (this can be a subnet or a single address). The parameters mean the following:

  • sync: All changes to the exported filesystem are committed to disk before the server replies to the client. This ensures data integrity but may reduce performance.
  • wdelay: The server may delay writing to disk if it suspects another related write request is imminent. This can improve performance by grouping writes together, but may slightly increase the risk of data loss in a crash.
  • hide: If a client mounts an exported directory that is a mount point for another filesystem, the contents of the underlying directory are hidden unless that filesystem is also exported.
  • no_subtree_check: Disables subtree checking, which is a security feature that checks if a file is within the exported directory. Disabling it improves performance, especially when exporting directories that are mount points themselves.
  • sec=sys: Specifies the security flavor to use. `sys` means standard UNIX authentication (UID/GID). Other flavors (like `krb5`) may be available on some systems.
  • rw: Grants read and write access to the export for the specified clients. Without this, the default is often read-only (`ro`).
  • insecure: Allows clients to connect from ports above 1024 (not just “secure” ports below 1024). This is sometimes needed for compatibility with certain clients.
  • root_squash: Maps requests from the root user on the client to the anonymous UID/GID (usually `nobody`). This prevents root on the client from having root privileges on the server, improving security.
  • no_all_squash: Disables mapping of all client users to the anonymous UID/GID. Only the root user is squashed (if `root_squash` is set); all other users retain their actual UID/GID.

There are many ways to do this; you need to decide how you're going to authenticate NFS and how to handle UID and GID mappings. This configuration always works, but you can lock it down further, for example by exporting to a single host rather than a whole subnet, as shown below. See the nfs-server documentation for more information.
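
As a sketch, if you wanted to restrict the export to a single Proxmox node rather than the whole subnet, the line would look like this (10.1.0.21 is a hypothetical client address; substitute your own):

/mnt/iscsi 10.1.0.21(sync,wdelay,hide,no_subtree_check,sec=sys,rw,insecure,root_squash,no_all_squash)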

11. Apply NFS Export Configuration

[root@localhost iscsi]# exportfs -a
[root@localhost iscsi]# exportfs -v
/mnt/iscsi 10.1.0.0/22(sync,wdelay,hide,no_subtree_check,sec=sys,rw,insecure,root_squash,no_all_squash)
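
If you want to sanity-check the export from another Linux host on the allowed subnet before configuring Proxmox, a quick test mount looks like this (10.1.0.10 is a hypothetical address for the proxy itself, and the client needs nfs-utils installed; substitute your own details):

[root@client ~]# mkdir -p /mnt/test
[root@client ~]# mount -t nfs 10.1.0.10:/mnt/iscsi /mnt/test
[root@client ~]# df -h /mnt/test
[root@client ~]# umount /mnt/test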

12. Configure Firewall for NFS

This is specific to firewalld; if you're using a different firewall solution then just make sure that these services are permitted.

[root@localhost iscsi]# firewall-cmd --add-service=nfs --permanent
[root@localhost iscsi]# firewall-cmd --add-service=mountd --permanent
[root@localhost iscsi]# firewall-cmd --add-service=rpc-bind --permanent
[root@localhost iscsi]# firewall-cmd --reload
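
One quick way to confirm the services are active in the running configuration:

[root@localhost iscsi]# firewall-cmd --list-services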

Summary

This guide demonstrates how to set up an iSCSI-NFS proxy server that:

  • Connects to a remote iSCSI target (Synology NAS)
  • Mounts the iSCSI device locally
  • Exports the mounted storage via NFS to the local network

The setup allows clients on the 10.1.0.0/22 network to access the remote iSCSI storage through NFS.

Next, on your Proxmox cluster, go to Datacenter / Storage / Add / NFS

Give it a name (ID)

Give it the IP of your iSCSI Proxy

The Export field will then populate with /mnt/iscsi (or whatever you called the export)

Set the content or leave it as Disk Image

Click ADD.
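
If you prefer the command line, a roughly equivalent command run on any cluster node would be (the storage ID iscsi-proxy and the proxy address 10.1.0.10 are hypothetical; substitute your own):

root@proxmox:~# pvesm add nfs iscsi-proxy --server 10.1.0.10 --export /mnt/iscsi --content images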

At this point your iSCSI LUN will be available to all the nodes on your cluster in a safe and functional way using NFS, without the risk of filesystem corruption.

There are still caveats even with NFS, such as never having two nodes accessing the same guest. Yes, Proxmox will try to stop you doing it, but you can work around that and still do it. Having the same virtual machine running on two nodes with the same shared storage WILL BREAK IT very quickly.

One Node -> Guest -> Guest's disks on NFS.


As always, GEN are Proxmox experts and provide legendary Proxmox support, so if you get stuck and want to use our services then please head over and raise a case.
