NVMe, NVM and iSER use cases

How to install NVMStack

Installing NVMStack Software-Defined Storage is easy. After you request the NVMStack install package from us, follow the steps below to install it.

1. Change into the directory containing the install package:

cd nvmstack

2. Run the installer:

./install.sh

3. Wait for the installation to finish; it takes less than one minute:

Installing nvmstackd…
**********************setup nvmstack************************
Copying files…
Installing WEB management platform…
Setting up service…
Starting service…
Starting nvmstackd (via systemctl): [ OK ]
***********************finished*****************************
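
Once the installer reports finished, you can optionally confirm the service is running (the nvmstackd service name is taken from the installer log above):

systemctl status nvmstackd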

4. Launch the web management platform:

Open any web browser and navigate to:

http://server_ip/

Default credentials are:

Username: root
Password: nvmstack
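
If you want to confirm the platform is reachable before opening a browser, a quick check from the shell (server_ip is a placeholder for your server's address) is:

curl -I http://server_ip/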

Done.

Using the command-line iSER demo to test iSER target performance

Our package includes a few demo utilities that create a single pooled storage to demonstrate storage features and performance. This way, you can quickly preview the features you are interested in within minutes.

To start an iSER target, run the following command:

./nvmf_test -p iser spool 1 2 /dev/sdc pci://0000:04:00.0 pci://0000:05:00.0
The output is:
found 1 ib devices
device0: mlx4_0, uverbs0, /sys/class/infiniband_verbs/uverbs0, /sys/class/infiniband/mlx4_0
copies: 1/2
create name space: iqn.2018-03.com.nvmstack:volume0…done.
iSCSI over RDMA service is started.
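
To exercise the target from a Linux initiator, one possible flow uses open-iscsi with the iSER transport. The target address 192.168.80.111 is a placeholder for your target's IP; the IQN matches the log above. Discover the target, switch the node record to the iser transport, then log in:

iscsiadm -m discovery -t sendtargets -p 192.168.80.111
iscsiadm -m node -T iqn.2018-03.com.nvmstack:volume0 -o update -n iface.transport_name -v iser
iscsiadm -m node -T iqn.2018-03.com.nvmstack:volume0 -l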


Using the command-line NVMe-oF demo to test offload NVMe-oF target performance

As with the iSER demo above, the package's demo utilities create a single pooled storage so you can preview your favorite features in minutes.

To start an NVMe-oF target, run the following command:

./nvmstack_demo -d nvmf spool 1 3 /dev/sdc pci://0000:04:00.0 pci://0000:05:00.0 pci://0000:05:00.0
The output is:
found 1 ib devices
device0: mlx4_0, uverbs0, /sys/class/infiniband_verbs/uverbs0, /sys/class/infiniband/mlx4_0
copies: 1/2
create name space: nqn.2018-03.com.nvmstack:0…done.
NVMe over Fabric service is started.
You can also use the following command to start a RAID-0 pool:
./nvmstack_demo -d nvmf raid0 3 pci://0000:04:00.0 pci://0000:05:00.0 pci://0000:05:00.0
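
To measure the target's performance from a client once it is connected (see the nvme-cli section below), a standard tool such as fio works well. This is a sketch of a 4K random-read test, assuming the target shows up on the client as /dev/nvme0n1:

fio --name=randread --filename=/dev/nvme0n1 --direct=1 --rw=randread --bs=4k --ioengine=libaio --iodepth=32 --numjobs=4 --runtime=60 --time_based --group_reporting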


Configure an NVMe-oF device with multipath

Requirements:

Linux kernel 4.8 or newer.

Packages: device-mapper-multipath, nvme-cli

Install them with:

#yum install device-mapper-multipath nvme-cli
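
(On Debian-based systems, the corresponding packages are typically multipath-tools and nvme-cli.) A minimal /etc/multipath.conf, assuming the distribution defaults are otherwise acceptable, might look like:

defaults {
    user_friendly_names yes
    find_multipaths     yes
}

Then enable the daemon and inspect the assembled paths:

#systemctl enable --now multipathd
#multipath -ll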


Using Linux nvme-cli to connect to NVMStack's NVMe-oF targets

Requirements

To use the Linux NVMe over Fabrics client, you need Linux kernel 4.8 or above.

Install the nvme-cli package on the client machine:

#yum install nvme-cli

or on Debian:

#apt-get install nvme-cli

Load the kernel module

#modprobe nvme-rdma
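
To have the module loaded automatically at boot on systemd-based distributions, you can add it to modules-load.d:

#echo nvme-rdma > /etc/modules-load.d/nvme-rdma.conf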

Discover NVMe-oF subsystems

#nvme discover -t rdma -a 192.168.80.111 -s 4420

Discovery Log Number of Records 1, Generation counter 1

=====Discovery Log Entry 0======
trtype:  rdma
adrfam:  ipv4
subtype: nvme subsystem
treq:    not specified
portid:  1
trsvcid: 4420
subnqn:  nqn.2016-12.com.nvmstack:test-volume0
traddr:  192.168.80.111
rdma_prtype: not specified
rdma_qptype: connected
rdma_cms:    rdma-cm
rdma_pkey: 0x0000

Connect to NVMe-oF subsystems

#nvme connect -t rdma -n nqn.2016-12.com.nvmstack:test-volume0 -a 192.168.80.111 -s 4420
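
After connecting, recent nvme-cli versions can also show the subsystem and its transport paths, which is a quick sanity check:

#nvme list-subsys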

List the NVMe device info:

#nvme list
Node             SN     Model                 Namespace  Usage                    Format        FW Rev
---------------- ------ --------------------- ---------- ------------------------ ------------- ---------------
/dev/nvme0n1            NVMStack Controller              160.11 GB / 160.11 GB    512 B + 0 B   A348BB834CD3544

Disconnect NVMe-oF subsystems

To disconnect from the target, run the nvme disconnect command:

#nvme disconnect -d /dev/nvme0n1
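
If you have connected to several subsystems, recent nvme-cli versions also provide a single command to tear down all fabrics connections at once:

#nvme disconnect-all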