Runs locally on prospective storage node hosts. Installs storage node dependencies and prepares the host to be used as a storage node. Only required in standalone deployments outside of Kubernetes.
The network interface to be used for communication between the control plane and the storage node.
string
False
-
--isolate-cores
Isolate cores in the kernel arguments for the provided CPU mask
marker
False
False
Prepare a configuration file to be used when adding the storage node
Runs locally on prospective storage node hosts. Reads system information (CPU topology, NVMe devices) and prepares a YAML config to be used when adding the storage node.
Maximum capacity in GB to be utilized on this storage node. This cannot be larger than the total effective cluster capacity. A safe value is 2.0 * 1/n of the effective cluster capacity, where n is the number of storage nodes. For example, if you have three storage nodes, each with 100 TiB of raw capacity, and a cluster with erasure coding scheme 1+1 (two replicas), the effective cluster capacity is 100 TiB * 3 / 2 = 150 TiB. Setting this parameter to 150 TiB / 3 * 2 = 100 TiB would be a safe choice.
string
True
-
--nodes-per-socket
Number of storage nodes to be added per socket.
integer
False
1
--sockets-to-use
System sockets to use when adding storage nodes. Comma-separated list: e.g. 0,1
string
False
0
--pci-allowed
Storage PCI addresses to use for storage devices (both short and full addresses are accepted). Comma-separated list: e.g. 0000:00:01.0,00:02.0
string
False
--pci-blocked
Storage PCI addresses to exclude from storage devices (both short and full addresses are accepted). Comma-separated list: e.g. 0000:00:01.0,00:02.0
string
False
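The capacity sizing rule described above can be verified with a short calculation. A minimal sketch, assuming TiB units throughout (the function name is illustrative, not part of the CLI):

```python
def safe_max_size_tib(raw_per_node_tib, node_count, ec_n, ec_k):
    """Safe per-node maximum size for an n+k erasure coding scheme (sketch)."""
    # Effective cluster capacity: total raw capacity scaled by n / (n + k)
    effective_tib = raw_per_node_tib * node_count * ec_n / (ec_n + ec_k)
    # Safe value: 2.0 * 1/node_count of the effective cluster capacity
    return effective_tib / node_count * 2.0

# Example from the description: three nodes, 100 TiB raw each, scheme 1+1
print(safe_max_size_tib(100, 3, 1, 1))  # 100.0
```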
Upgrades the automatically generated configuration file with changes to the CPU mask or storage devices
Regenerates the core distribution and automatic calculations according to changes in cpu_mask and ssd_pcis only
sbctl storage-node configure-upgrade
Cleans a previous simplyblock deploy (local run)
Runs locally on storage node and control plane hosts. Removes a previous deployment to support a fresh scratch deployment of the cluster software.
1: auto-create small partitions for the journal on NVMe devices. 0: use a separate (the smallest) NVMe device of the node for the journal. The journal needs a maximum of 3 percent of the total available raw disk space.
integer
False
1
--data-nics
Storage network interface name(s). Can be more than one. Comma-separated list: e.g. eth0,eth1
string
False
-
--ha-jm-count
HA JM count
integer
False
3
--namespace
Kubernetes namespace to deploy on
string
False
-
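Since the journal needs at most 3 percent of the total available raw disk space, its maximum size can be estimated up front. A minimal sketch (the function name and example value are illustrative):

```python
def max_journal_size_gib(total_raw_gib):
    # The journal requires at most 3% of the total available raw disk space
    return total_raw_gib * 0.03

# Example: 10 TiB (10240 GiB) of raw NVMe capacity on the node
print(max_journal_size_gib(10240))  # 307.2
```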
Deletes a storage node object from the state database.
Deletes a storage node object from the state database. It must only be used on clusters without any logical volumes. Warning: This is dangerous and could lead to an unstable cluster if used on an active cluster.
sbctl storage-node delete
<NODE_ID>
--force
Argument
Description
Data Type
Required
NODE_ID
Storage node id
string
True
Parameter
Description
Data Type
Required
Default
--force
Forces deletion of the storage node from the state database. Use with caution.
marker
False
-
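An illustrative invocation (the node ID is hypothetical); the sketch composes and prints the command rather than executing it:

```shell
# Hypothetical node ID for illustration only
NODE_ID="8f4e2a1c"

# Compose the delete command; --force skips the safety check, so make
# sure the cluster has no logical volumes before running it.
CMD="sbctl storage-node delete $NODE_ID --force"
echo "$CMD"
```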
Removes a storage node from the cluster
The storage node cannot be used or added anymore. Any data residing on this storage node will be migrated to the remaining storage nodes. The user must ensure that there is sufficient free space in the remaining cluster to allow for a successful node removal.
Danger
If there isn't enough storage available, the cluster may run full and switch to read-only mode.
A storage node is required to be offline to be restarted. All functions and device drivers will be reset as a result of the restart. New physical devices can only be added with a storage node restart. During restart, the node will not accept any I/O.
Allows restarting an existing storage node on a new host or hardware. Devices attached to the storage node have to be attached to the new host. Otherwise, they have to be marked as failed and removed from the cluster, which triggers a pro-active migration of data from those devices onto other storage nodes.
The provided value must be in the form IP:PORT. By default, the port number is 5000.
string
False
-
--force
Force restart
marker
False
-
--ssd-pcie
New NVMe PCIe address to add to the storage node. Can be more than one.
string
False
--force-lvol-recreate
Forces LVol recreation on node restart even if the LVol bdev was not recovered
marker
False
False
Initiates a storage node shutdown
Once the command is issued, the node will stop accepting I/O, but previously received I/O will still be processed. In a high-availability setup, this will not impact operations.
sbctl storage-node shutdown
<NODE_ID>
--force
Argument
Description
Data Type
Required
NODE_ID
Storage node id
string
True
Parameter
Description
Data Type
Required
Default
--force
Force node shutdown
marker
False
-
Suspends a storage node
The node will stop accepting new I/O, but will finish processing any I/O it has already received.
Lists history records (one for every 15 minutes) for XX days and YY hours (up to 10 days in total). Format: XXdYYh
string
False
-
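The XXdYYh history format above can be illustrated with a small parser. A minimal sketch based only on the documented format; the CLI's own validation may differ:

```python
import re

def parse_history(spec):
    """Parse an XXdYYh history spec (e.g. '2d12h') into total hours."""
    m = re.fullmatch(r"(?:(\d+)d)?(?:(\d+)h)?", spec)
    if not m or not (m.group(1) or m.group(2)):
        raise ValueError(f"invalid history spec: {spec!r}")
    days = int(m.group(1) or 0)
    hours = int(m.group(2) or 0)
    total = days * 24 + hours
    # The docs limit history to 10 days in total
    if total > 10 * 24:
        raise ValueError("history is limited to 10 days in total")
    return total

print(parse_history("2d12h"))  # 60
```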
Lists storage devices
Lists storage devices
sbctl storage-node list-devices
<NODE_ID>
--json
Argument
Description
Data Type
Required
NODE_ID
Storage node id
string
True
Parameter
Description
Data Type
Required
Default
--json
Print outputs in json format
marker
False
-
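An illustrative invocation (the node ID is hypothetical); the sketch composes and prints the command rather than executing it:

```shell
# Hypothetical node ID for illustration only
NODE_ID="8f4e2a1c"

# --json switches the output to machine-readable JSON
CMD="sbctl storage-node list-devices $NODE_ID --json"
echo "$CMD"
```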
Gets storage device by its id
Gets storage device by its id
sbctl storage-node get-device
<DEVICE_ID>
Argument
Description
Data Type
Required
DEVICE_ID
Device id
string
True
Resets a storage device
Performs a hardware device reset. If successful, resetting can return the device from the unavailable into the online state.
sbctl storage-node reset-device
<DEVICE_ID>
Argument
Description
Data Type
Required
DEVICE_ID
Device id
string
True
Restarts a storage device
A previously logically or physically removed or unavailable device, which has been re-inserted, may be returned into the online state. If the device is not physically present, accessible, or healthy, it will flip back into the unavailable state.
sbctl storage-node restart-device
<DEVICE_ID>
Argument
Description
Data Type
Required
DEVICE_ID
Device id
string
True
Adds a new storage device
Adds a device, including a previously detected device (currently in "new" state), into the cluster and launches an auto-rebalancing background process in which some cluster capacity is re-distributed to the newly added device.
sbctl storage-node add-device
<DEVICE_ID>
Argument
Description
Data Type
Required
DEVICE_ID
Device id
string
True
Logically removes a storage device
Logically removes a storage device. The device becomes unavailable, irrespective of whether it was physically removed from the server. This function can be used if auto-detection of the removal did not work or if the device must be maintained while remaining inserted in the server.
Sets a storage device to the failed state. This command can be used if an administrator believes that the device must be replaced. Attention: the failed state is final, meaning all data on the device will be automatically recovered to other devices in the cluster.
Lists history records (one for every 15 minutes) for XX days and YY hours (up to 10 days in total). Format: XXdYYh
string
False
-
Checks the health status of a storage node
Verifies that all NVMe-oF connections to and from the storage node, including those to and from other storage devices in the cluster and the metadata journal, are available and healthy, and that all internal objects of the node, such as data placement and erasure coding services, are in a healthy state.