# Plain Linux Initiators
Simplyblock storage can be attached over the network to Linux hosts that are not running Kubernetes or Proxmox.

While no simplyblock components need to be installed on these hosts, some OS-level configuration is required. On Kubernetes and Proxmox, these steps are typically taken care of by the CSI driver or the Proxmox integration. On plain Linux initiators, they have to be performed manually on each host that will connect simplyblock logical volumes.
## Install the NVMe Client Package
=== "RHEL / Alma / Rocky"

    ```bash
    sudo dnf install -y nvme-cli
    ```

=== "Debian / Ubuntu"

    ```bash
    sudo apt install -y nvme-cli
    ```
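After installation, a quick sanity check confirms that the `nvme` binary is on the `PATH`. A minimal sketch (the exact version string printed depends on the installed nvme-cli release):

```shell
# Check that the nvme client from nvme-cli is available before proceeding.
if command -v nvme >/dev/null 2>&1; then
  echo "nvme-cli installed: $(nvme version 2>/dev/null | head -n 1)"
else
  echo "nvme-cli not found; install it with your package manager" >&2
fi
```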
## Load the NVMe over Fabrics Kernel Modules
Simplyblock is built upon the NVMe over Fabrics standard and uses NVMe over TCP (NVMe/TCP) by default.
While the driver is part of the Linux kernel in versions 5.x and later, it is not loaded by default. Hence, when using simplyblock, the driver needs to be loaded.
```bash
sudo modprobe nvme-tcp
```
When loading the NVMe/TCP driver, the NVMe over Fabrics driver is automatically loaded too, as the former depends on the foundations it provides.
It is possible to check for successful loading of both drivers with the following command:

```bash
lsmod | grep 'nvme_'
```

The response should list the drivers as `nvme_tcp` and `nvme_fabrics`, as seen in the following example:
```plain
[demo@demo ~]# lsmod | grep 'nvme_'
nvme_tcp               57344  0
nvme_keyring           16384  1 nvme_tcp
nvme_fabrics           45056  1 nvme_tcp
nvme_core             237568  3 nvme_tcp,nvme,nvme_fabrics
nvme_auth              28672  1 nvme_core
t10_pi                 20480  2 sd_mod,nvme_core
```
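This check can also be scripted, for example as part of host provisioning. The following sketch scans `lsmod`-style output for the two required modules; it parses a captured sample here, while on a real host the `sample=` assignment would be replaced with `sample=$(lsmod)`:

```shell
# Verify that nvme_tcp and nvme_fabrics appear in the module list.
# Sample captured from the example above; on a real host use: sample=$(lsmod)
sample='nvme_tcp               57344  0
nvme_fabrics           45056  1 nvme_tcp'

for mod in nvme_tcp nvme_fabrics; do
  if printf '%s\n' "$sample" | awk '{print $1}' | grep -qx "$mod"; then
    echo "$mod loaded"
  else
    echo "$mod missing" >&2
  fi
done
```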
To make the driver loading persistent and survive system reboots, it has to be configured to be loaded at system startup time. This can be achieved by either adding it to /etc/modules (Debian / Ubuntu) or creating a config file under /etc/modules-load.d/ (Red Hat / Alma / Rocky).
=== "RHEL / Alma / Rocky"

    ```bash
    echo "nvme-tcp" | sudo tee -a /etc/modules-load.d/nvme-tcp.conf
    ```

=== "Debian / Ubuntu"

    ```bash
    echo "nvme-tcp" | sudo tee -a /etc/modules
    ```
After rebooting the system, the driver should be loaded automatically. This can be verified again with the `lsmod` command shown above.
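The distro-specific choice between the two files can be expressed as a small sketch. It assumes `/etc/os-release` is present and only prints the file it would append to; on a real host, the `nvme-tcp` line would be piped into `sudo tee -a` as shown above:

```shell
# Pick the persistence file by distro family, as described above.
conf=/etc/modules-load.d/nvme-tcp.conf   # RHEL / Alma / Rocky default
if [ -r /etc/os-release ]; then
  . /etc/os-release
  case "${ID:-}" in
    debian|ubuntu) conf=/etc/modules ;;  # Debian / Ubuntu use /etc/modules
  esac
fi
echo "append 'nvme-tcp' to $conf"
```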
## Create a Storage Pool
Before logical volumes can be created and connected, a storage pool is required. If a pool already exists, it can be reused. Otherwise, a storage pool can be created on any control plane node as follows:
```bash
sbctl pool add <POOL_NAME> <CLUSTER_UUID>
```
The last line of the output of a successful storage pool creation contains the new pool id.
```plain
[demo@demo ~]# sbctl pool add test 4502977c-ae2d-4046-a8c5-ccc7fa78eb9a
2025-03-05 06:36:06,093: INFO: Adding pool
2025-03-05 06:36:06,098: INFO: {"cluster_id": "4502977c-ae2d-4046-a8c5-ccc7fa78eb9a", "event": "OBJ_CREATED", "object_name": "Pool", "message": "Pool created test", "caused_by": "cli"}
2025-03-05 06:36:06,100: INFO: Done
ad35b7bb-7703-4d38-884f-d8e56ffdafc6 # <- Pool Id
```
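Since the pool id is the last line of the output, it can be captured directly when scripting cluster setup. A minimal sketch, reusing the example output above; on a control plane node the `output=` assignment would instead be `output=$(sbctl pool add <POOL_NAME> <CLUSTER_UUID>)`:

```shell
# Capture the pool id from the last line of the sbctl output.
# Simulated output here; on a real node use: output=$(sbctl pool add ...)
output='2025-03-05 06:36:06,100: INFO: Done
ad35b7bb-7703-4d38-884f-d8e56ffdafc6'

pool_id=$(printf '%s\n' "$output" | tail -n 1)
echo "pool id: $pool_id"
```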
## Create and Connect a Logical Volume
To create a new logical volume, the following command can be run on any control plane node:
```bash
sbctl volume add \
  --max-rw-iops <IOPS> \
  --max-r-mbytes <THROUGHPUT> \
  --max-w-mbytes <THROUGHPUT> \
  <VOLUME_NAME> \
  <VOLUME_SIZE> \
  <POOL_NAME>
```

```bash
sbctl volume add lvol01 1000G test
```
In this example, a logical volume with the name `lvol01` and 1TB of thinly provisioned capacity is created in the pool named `test`. The UUID of the logical volume is returned at the end of the operation.
For additional parameters, see Add a new Logical Volume.
To connect a logical volume on the initiator (the Linux client), execute the following command on any control plane node. This command returns one or more connection commands to be executed on the client.
```bash
sbctl volume connect <VOLUME_ID>
```

```bash
sbctl volume connect a898b44d-d7ee-41bb-bc0a-989ad4711780
```

```plain
sudo nvme connect --reconnect-delay=2 --ctrl-loss-tmo=3600 --nr-io-queues=32 --keep-alive-tmo=5 --transport=tcp --traddr=10.10.20.2 --trsvcid=9101 --nqn=nqn.2023-02.io.simplyblock:fa66b0a0-477f-46be-8db5-b1e3a32d771a:lvol:a898b44d-d7ee-41bb-bc0a-989ad4711780
sudo nvme connect --reconnect-delay=2 --ctrl-loss-tmo=3600 --nr-io-queues=32 --keep-alive-tmo=5 --transport=tcp --traddr=10.10.20.3 --trsvcid=9101 --nqn=nqn.2023-02.io.simplyblock:fa66b0a0-477f-46be-8db5-b1e3a32d771a:lvol:a898b44d-d7ee-41bb-bc0a-989ad4711780
```
The output can be copy-pasted to the host to which the volumes should be attached.
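When many hosts need to be connected, the returned commands can also be processed programmatically. The following sketch pulls the target address and subsystem NQN out of one of the connect commands above; it is string parsing only and does not run `nvme connect`:

```shell
# Extract --traddr and --nqn from a connect command returned by sbctl.
cmd='sudo nvme connect --reconnect-delay=2 --ctrl-loss-tmo=3600 --nr-io-queues=32 --keep-alive-tmo=5 --transport=tcp --traddr=10.10.20.2 --trsvcid=9101 --nqn=nqn.2023-02.io.simplyblock:fa66b0a0-477f-46be-8db5-b1e3a32d771a:lvol:a898b44d-d7ee-41bb-bc0a-989ad4711780'

traddr=$(printf '%s\n' "$cmd" | grep -o -- '--traddr=[^ ]*' | cut -d= -f2)
nqn=$(printf '%s\n' "$cmd" | grep -o -- '--nqn=[^ ]*' | cut -d= -f2)
echo "target: $traddr  nqn: $nqn"
```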