
Deploy Storage Nodes

After installing the simplyblock operator, the next step is to create a storage cluster, deploy storage nodes, create a storage pool, and enable volume provisioning via the CSI driver.

Info

In a Kubernetes deployment, not all Kubernetes workers have to become part of the storage cluster. Simplyblock uses node labels to identify the Kubernetes workers that are designated to host storage.

It is common to add dedicated Kubernetes worker nodes for storage to the same Kubernetes cluster. These can be placed in a separate node pool and can use a different host type. In that case, remember to taint the storage workers accordingly so that other services are not scheduled on them.
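As a sketch of the tainting step above, a dedicated storage worker could carry a label marking it as a storage host and a taint keeping general workloads off it. The label key, taint key, and values below are placeholders, not values mandated by simplyblock; use whatever your operator configuration expects:

```yaml
# Hypothetical example: label and taint keys/values are placeholders.
apiVersion: v1
kind: Node
metadata:
  name: storage-worker-1
  labels:
    role: simplyblock-storage          # placeholder label marking this worker as a storage host
spec:
  taints:
    - key: dedicated                   # placeholder taint; keeps other workloads off this worker
      value: simplyblock-storage
      effect: NoSchedule
```

Pods that must still run on such a worker (including the storage-node pods themselves) need a matching toleration.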

OpenShift Prerequisites

If you are deploying onto an OpenShift cluster, follow the environment-specific instructions in the OpenShift Installation guide.

Networking Configuration

Several ports must be open on the storage node hosts.

Ports whose source and target are on the same network (VLAN) typically require no additional firewall configuration. Between the control plane and storage networks, which usually reside on different VLANs, the ports listed below may need to be opened.

| Service | Direction | Source / Target Network | Port(s) | Protocol(s) |
| --- | --- | --- | --- | --- |
| ICMP | ingress | control | - | ICMP |
| Storage node API | ingress | storage | 5000 | TCP |
| spdk-firewall-proxy | ingress | storage | 5001 | TCP |
| spdk-http-proxy | ingress | storage, control | 8080-8180 | TCP |
| hublvol-nvmf-subsys-port | ingress | storage, control | 9030-9059 | TCP |
| internal-nvmf-subsys-port | ingress | storage, control | 9060-9099 | TCP |
| lvol-nvmf-subsys-port | ingress | storage, control | 9100-9200 | TCP |
| FoundationDB | egress | storage | 4500 | TCP |
| Control plane API | egress | control | 80 | TCP |
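As one way to express the ingress rules above inside Kubernetes, a NetworkPolicy for the storage-node pods might look like the following sketch. The pod selector label is an assumption (the actual label is set by the operator), and host-level firewalls or cloud security groups may be the more appropriate place for these rules in your environment:

```yaml
# Sketch only: the pod label "app: simplyblock-storage-node" is an assumption.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: storage-node-ingress
  namespace: simplyblock
spec:
  podSelector:
    matchLabels:
      app: simplyblock-storage-node
  policyTypes:
    - Ingress
  ingress:
    - ports:
        - protocol: TCP
          port: 5000                  # storage node API
        - protocol: TCP
          port: 5001                  # spdk-firewall-proxy
        - protocol: TCP
          port: 8080
          endPort: 8180               # spdk-http-proxy
        - protocol: TCP
          port: 9030
          endPort: 9099               # hublvol/internal NVMe-oF subsystem ports
        - protocol: TCP
          port: 9100
          endPort: 9200               # lvol NVMe-oF subsystem ports
```

Note that `endPort` requires a CNI plugin that supports port ranges in NetworkPolicy.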

Creating a Storage Cluster

Once the operator is running, create a storage cluster by applying a StorageCluster custom resource:

Example: storage-cluster.yaml
apiVersion: simplyblock.simplyblock.io/v1alpha1
kind: StorageCluster
metadata:
  name: my-cluster
  namespace: simplyblock
spec:
  clusterName: production
  mgmtIfname: eth0
  haType: ha
  stripe:
    dataChunks: 2
    parityChunks: 1
  fabric: tcp
Apply the cluster resource
kubectl apply -f storage-cluster.yaml

Check the cluster status:

Check cluster status
kubectl get simplyblockstoragecluster -n simplyblock

Cluster Options

For NVMe-oF transport security, backup configuration, and other cluster options, see Cluster Deployment Options.

Deploying Storage Nodes

Deploy storage nodes by applying a StorageNode custom resource:

Example: storage-nodes.yaml
apiVersion: simplyblock.simplyblock.io/v1alpha1
kind: StorageNode
metadata:
  name: storage-nodes
  namespace: simplyblock
spec:
  clusterName: production
  clusterImage: "public.ecr.aws/simply-block/simplyblock:26.1.2"
  maxLogicalVolumeCount: 100
  workerNodes:
    - worker-1
    - worker-2
  maxSize: "500G"
  partitions: 1
  coreIsolation: true
Apply the storage node resource
kubectl apply -f storage-nodes.yaml

Storage Node Parameters

| Parameter | Description | Default |
| --- | --- | --- |
| clusterName | Name of the cluster this node belongs to. Required. | |
| clusterImage | Storage-node image. Required when action is not specified. | |
| maxLogicalVolumeCount | Maximum number of logical volumes on this node. Required when action is not specified. | 10 |
| maxSize | Maximum utilized storage capacity (e.g., 500G). Impacts RAM demand. | 150g |
| partitions | Number of partitions per device. | 1 |
| coreIsolation | Enable CPU core isolation. Requires a node restart after deployment. | false |
| corePercentage | Percentage of total cores allocated to simplyblock. | |
| pcieAllowList | List of allowed NVMe PCIe addresses. | |
| pcieDenyList | List of blocked NVMe PCIe addresses. | |
| dataIfname | Data network interface names for storage traffic. | |
| socketsToUse | Number of NUMA sockets to use. | |
| nodesPerSocket | Number of storage nodes per NUMA socket. | |
| workerNodes | Worker node names for deployment. Required and must be non-empty when action is not specified. | |

For a complete list of fields, see Simplyblock Operator.
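To illustrate some of the optional tuning parameters, the following sketch extends the earlier StorageNode example. The PCIe addresses and the interface name are placeholders; substitute the values that match your hardware:

```yaml
# Sketch only: PCIe addresses and "eth1" are placeholder values.
apiVersion: simplyblock.simplyblock.io/v1alpha1
kind: StorageNode
metadata:
  name: storage-nodes-tuned
  namespace: simplyblock
spec:
  clusterName: production
  clusterImage: "public.ecr.aws/simply-block/simplyblock:26.1.2"
  maxLogicalVolumeCount: 100
  workerNodes:
    - worker-1
  maxSize: "500G"
  coreIsolation: true
  corePercentage: 50            # give half of the host's cores to simplyblock
  dataIfname: eth1              # placeholder: dedicated data-plane NIC
  pcieAllowList:
    - "0000:3b:00.0"            # placeholder PCIe addresses of the NVMe
    - "0000:3c:00.0"            # devices simplyblock may claim
  socketsToUse: 1               # restrict simplyblock to a single NUMA socket
  nodesPerSocket: 1
```

Restricting devices via `pcieAllowList` (or excluding them via `pcieDenyList`) is useful on hosts where only some NVMe devices should be handed to simplyblock.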

Creating a Storage Pool

Create a storage pool by applying a Pool custom resource:

Example: storage-pool.yaml
apiVersion: simplyblock.simplyblock.io/v1alpha1
kind: Pool
metadata:
  name: my-pool
  namespace: simplyblock
spec:
  name: production-pool
  clusterName: production
  capacityLimit: "10T"
Apply the storage pool resource
kubectl apply -f storage-pool.yaml
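With the pool in place, volumes can be provisioned through the simplyblock CSI driver. As a sketch only, a StorageClass referencing the pool might look like the following; the provisioner name and parameter key are assumptions, not confirmed by this guide, so consult the CSI driver documentation for the actual values:

```yaml
# Sketch only: provisioner name and parameter key are assumptions.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: simplyblock-storage
provisioner: csi.simplyblock.io        # assumed provisioner name
parameters:
  pool_name: production-pool           # assumed key referencing the pool created above
reclaimPolicy: Delete
volumeBindingMode: Immediate
```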

Verification

Check the status of all simplyblock resources:

Verify deployment status
kubectl get simplyblockstoragecluster -n simplyblock
kubectl get simplyblockstoragenode -n simplyblock
kubectl get simplyblockpool -n simplyblock
kubectl get pods -n simplyblock

Multi-Cluster Storage Node Support

A single Kubernetes cluster can host storage nodes connected to multiple simplyblock clusters. To configure this, create one StorageNode resource per cluster, each with its own clusterName and a disjoint set of workerNodes:

Multi-cluster storage nodes
apiVersion: simplyblock.simplyblock.io/v1alpha1
kind: StorageNode
metadata:
  name: cluster-a-nodes
  namespace: simplyblock
spec:
  clusterName: cluster-a
  workerNodes:
    - worker-a-1
    - worker-a-2
---
apiVersion: simplyblock.simplyblock.io/v1alpha1
kind: StorageNode
metadata:
  name: cluster-b-nodes
  namespace: simplyblock
spec:
  clusterName: cluster-b
  workerNodes:
    - worker-b-1
    - worker-b-2

Warning

The resources consumed by simplyblock are reserved exclusively for it and must be accounted for alongside the resources required by other workloads. For further information, see minimum hardware requirements.

Info

The RAM requirement is split between huge page memory and system memory. Simplyblock manages huge page allocation automatically.

The total amount of RAM required depends on the number of vCPUs used, the number of active logical volumes (Persistent Volume Claims, or PVCs), and the utilized virtual storage on the node.