# Plain Linux Initiators

Simplyblock storage can be attached over the network to Linux hosts which are not running Kubernetes, Proxmox or OpenStack.

While no simplyblock components need to be installed on these hosts, some OS-level configuration steps are required. These steps are normally taken care of by the CSI driver or the Proxmox integration.

On plain Linux initiators, they have to be performed manually on each host that will connect simplyblock logical volumes.

## Install the NVMe Client Package

=== "RHEL / Alma / Rocky"

    ```bash
    sudo dnf install -y nvme-cli
    ```

=== "Debian / Ubuntu"

    ```bash
    sudo apt install -y nvme-cli
    ```
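
After installation, the client tooling can be checked with a quick sanity test, for example by printing the installed version (the exact version string will differ per distribution):

```bash
nvme version
```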

## Load the NVMe over Fabrics Kernel Modules

The following steps apply to both NVMe over TCP and NVMe over RDMA (RoCE).

Simplyblock is built upon the NVMe over Fabrics standard and uses NVMe over TCP (NVMe/TCP) by default.

While the driver is part of Linux kernels 5.x and later, it is not loaded by default. Hence, when using simplyblock, the driver needs to be loaded manually.

Loading the NVMe/TCP driver:

```bash
sudo modprobe nvme-tcp
```

Loading the NVMe/RDMA driver:

```bash
sudo modprobe nvme-rdma
```

When loading the NVMe/TCP or NVMe/RDMA driver, the NVMe over Fabrics driver (nvme-fabrics) is loaded automatically, as both depend on the foundations it provides.
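
This dependency can also be inspected directly via the module metadata, for example with modinfo (the exact list of dependencies may vary between kernel builds):

```bash
# Show which modules nvme_tcp depends on; nvme_fabrics should be listed.
modinfo nvme_tcp | grep '^depends'
```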

It is possible to check for successful loading of both drivers with the following command:

Checking the loaded drivers:

```bash
lsmod | grep 'nvme_'
```

The output should list the drivers nvme_tcp and nvme_fabrics, as seen in the following example:

Example output of the driver listing:

```
[demo@demo ~]# lsmod | grep 'nvme_'
nvme_tcp               57344  0
nvme_keyring           16384  1 nvme_tcp
nvme_fabrics           45056  1 nvme_tcp
nvme_core             237568  3 nvme_tcp,nvme,nvme_fabrics
nvme_auth              28672  1 nvme_core
t10_pi                 20480  2 sd_mod,nvme_core
```

To make the driver loading persistent across reboots, the module has to be configured to load at system startup. This can be achieved by either creating a config file under /etc/modules-load.d/ (RHEL / Alma / Rocky) or adding it to /etc/modules (Debian / Ubuntu).

echo "nvme-tcp" | sudo tee -a /etc/modules-load.d/nvme-tcp.conf
echo "nvme-tcp" | sudo tee -a /etc/modules

After rebooting the system, the driver should be loaded automatically. This can be verified again with the lsmod command shown above.
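
To avoid an immediate reboot, the new configuration can also be applied right away, for example by restarting the systemd module loading service (assuming a systemd-based distribution):

```bash
# Re-reads /etc/modules-load.d/ and loads any listed modules now.
sudo systemctl restart systemd-modules-load.service
```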

## Create a Storage Pool

Before logical volumes can be created and connected, a storage pool is required. If a pool already exists, it can be reused. Otherwise, a storage pool can be created on any control plane node as follows:

Create a storage pool:

```bash
sbctl pool add <POOL_NAME> <CLUSTER_UUID>
```

The last line of a successful storage pool creation contains the id of the new pool.

Example output of creating a storage pool:

```
[demo@demo ~]# sbctl pool add test 4502977c-ae2d-4046-a8c5-ccc7fa78eb9a
2025-03-05 06:36:06,093: INFO: Adding pool
2025-03-05 06:36:06,098: INFO: {"cluster_id": "4502977c-ae2d-4046-a8c5-ccc7fa78eb9a", "event": "OBJ_CREATED", "object_name": "Pool", "message": "Pool created test", "caused_by": "cli"}
2025-03-05 06:36:06,100: INFO: Done
ad35b7bb-7703-4d38-884f-d8e56ffdafc6 # <- Pool Id
```
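
To confirm the pool exists, the pools known to the cluster can be listed on a control plane node. The subcommand below is an assumption and may differ between sbctl versions:

```bash
# Assumed listing subcommand; the new pool should appear in the output.
sbctl pool list
```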

## Create and Connect a Logical Volume

To create a new logical volume, the following command can be run on any control plane node.

```bash
sbctl volume add \
  --max-rw-iops <IOPS> \
  --max-r-mbytes <THROUGHPUT> \
  --max-w-mbytes <THROUGHPUT> \
  --ndcs <DATA CHUNKS IN STRIPE> \
  --npcs <PARITY CHUNKS IN STRIPE> \
  --fabric {tcp,rdma} \
  --lvol-priority-class <1-6> \
  <VOLUME_NAME> \
  <VOLUME_SIZE> \
  <POOL_NAME>
```

!!! info

    The parameters ndcs and npcs define the erasure-coding scheme (e.g., --ndcs=4 --npcs=2). Both settings are optional; if not specified, the cluster default is used. Valid values for ndcs are 1, 2, and 4; for npcs, 0, 1, and 2. Note that the number of cluster nodes must be equal to or larger than (ndcs + npcs); see the sketch below this note for an illustration.

    The parameter --fabric defines the fabric by which the volume is connected to the cluster. It is optional and defaults to tcp. The fabric type rdma can only be chosen for hosts with an RDMA-capable NIC and for clusters that support RDMA. A priority class is optional as well and can only be selected if the cluster defines it. A cluster can define 0-6 priority classes; the default is 0.
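
As a rough sketch of how ndcs and npcs interact (illustrative only, not a simplyblock tool), the minimum node count and the raw-to-usable capacity overhead of a scheme can be worked out as follows:

```bash
# Illustrative calculation for an erasure-coding scheme with ndcs data
# chunks and npcs parity chunks per stripe (example values).
ndcs=2
npcs=1

# The cluster needs at least one node per chunk in a stripe.
echo "minimum number of storage nodes: $((ndcs + npcs))"

# Every ndcs chunks of data are stored as (ndcs + npcs) chunks on disk.
awk -v d="$ndcs" -v p="$npcs" \
    'BEGIN { printf "raw capacity per usable GiB: %.2f GiB\n", (d + p) / d }'
```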

Example of creating a logical volume:

```bash
sbctl volume add --ndcs 2 --npcs 1 --fabric tcp lvol01 1000G test
```

In this example, a logical volume with the name lvol01 and 1TB of thinly provisioned capacity is created in the pool named test. The uuid of the logical volume is returned at the end of the operation.

For additional parameters, see Add a new Logical Volume.

To connect a logical volume on the initiator (or Linux client), execute the following command on any control plane node. This command returns one or more connection commands to be executed on the client.

```bash
sbctl volume connect \
  <VOLUME_ID>
```

Example of retrieving the connection strings of a logical volume:

```bash
sbctl volume connect a898b44d-d7ee-41bb-bc0a-989ad4711780
```

Example output:

```bash
sudo nvme connect --reconnect-delay=2 --ctrl-loss-tmo=3600 --nr-io-queues=32 --keep-alive-tmo=5 --transport=tcp --traddr=10.10.20.2 --trsvcid=9101 --nqn=nqn.2023-02.io.simplyblock:fa66b0a0-477f-46be-8db5-b1e3a32d771a:lvol:a898b44d-d7ee-41bb-bc0a-989ad4711780
sudo nvme connect --reconnect-delay=2 --ctrl-loss-tmo=3600 --nr-io-queues=32 --keep-alive-tmo=5 --transport=tcp --traddr=10.10.20.3 --trsvcid=9101 --nqn=nqn.2023-02.io.simplyblock:fa66b0a0-477f-46be-8db5-b1e3a32d771a:lvol:a898b44d-d7ee-41bb-bc0a-989ad4711780
```

The output can be copy-pasted to the host to which the volumes should be attached.
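
Once connected, the new NVMe device shows up on the host and can be used like any local block device. A minimal sketch, assuming the volume appears as /dev/nvme1n1 (the actual device name depends on the host) and that an ext4 filesystem is wanted:

```bash
# List NVMe devices to find the newly attached simplyblock volume.
sudo nvme list

# Create a filesystem and mount it (device name and mount point are examples).
sudo mkfs.ext4 /dev/nvme1n1
sudo mkdir -p /mnt/lvol01
sudo mount /dev/nvme1n1 /mnt/lvol01
```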