Using the command-line NVMe-oF demo to test off-load NVMe-oF target performance

Our package includes a few demo utilities that create a single pooled storage device to demonstrate storage features and performance. With them, you can quickly preview the features you are interested in within minutes.

To start the NVMe-oF target, run the following command:

./nvmstack_demo -d nvmf spool 1 3 /dev/sdc pci://0000:04:00.0 pci://0000:05:00.0 pci://0000:05:00.0
The output is:
found 1 ib devices
device0: mlx4_0, uverbs0, /sys/class/infiniband_verbs/uverbs0, /sys/class/infiniband/mlx4_0
copies: 1/2
create name space: nqn.2018-03.com.nvmstack:0…done.
NVMe over Fabric service is started.
You can also use the following command line to start a RAID-0 pool:
./nvmstack_demo -d nvmf raid0 3 pci://0000:04:00.0 pci://0000:05:00.0 pci://0000:05:00.0
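
Before starting the demo, you can confirm that the RDMA-capable NIC is visible from user space. This is a minimal sketch using the standard ibverbs utilities (usually packaged as ibverbs-utils or libibverbs-utils); the device name mlx4_0 matches the output above but will differ on other hardware:

ibv_devices
ibv_devinfo -d mlx4_0

The port state reported by ibv_devinfo should be PORT_ACTIVE before you run the demo.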

Using the built-in NVMe over Fabrics initiator in Linux

Install the nvme-cli package:

CentOS: yum install -y nvme-cli
Ubuntu: apt-get install nvme-cli
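
To confirm the tool is installed, a quick sanity check that needs no target is:

nvme version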

Load the kernel modules:

modprobe nvmet
modprobe nvmet-rdma
modprobe nvme-rdma
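
On the initiator side only nvme-rdma (the RDMA host transport) is strictly required; nvmet and nvmet-rdma are the Linux target-side modules. You can verify that the modules are loaded with:

lsmod | grep nvme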


Discover and connect:

nvme discover -t rdma -a 192.168.8.80 -s 4420
nvme connect -t rdma -n nvme-subsystem-name -a 192.168.8.80 -s 4420
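The subsystem name passed to -n is the subnqn value reported by the discover command; for this demo it is the nqn.2018-03.com.nvmstack name printed when the target started. After connecting, the namespace appears as an ordinary NVMe block device, and the session can be torn down when you are done (shown here with the placeholder name used above):

nvme list
nvme disconnect -n nvme-subsystem-name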
Then you will be able to test performance on the newly created NVMe device.
Note: The virtual NVMe device is created by the Linux built-in kernel-mode NVMf driver, so performance will be limited by the initiator side and will not represent the target server's full throughput.
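
For a quick host-side measurement, a standard fio run against the new block device is enough. The following is a minimal sketch assuming the device enumerated as /dev/nvme0n1 (confirm with nvme list first); it performs random reads only, since write tests would destroy data on the namespace:

fio --name=randread --filename=/dev/nvme0n1 --direct=1 --ioengine=libaio --rw=randread --bs=4k --iodepth=64 --numjobs=4 --runtime=30 --time_based --group_reporting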

Using a user-mode NVMf initiator in Linux

We can use a third-party user-mode perf tool to check the performance of the NVMf target:

perf -q 64 -s 4096 -w randrw -M 100 -t 30 -r 'trtype:RDMA adrfam:IPv4 traddr:192.168.8.80 trsvcid:4420' -l -c 0x2
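
The flags request a 30-second run at queue depth 64 with a 4096-byte I/O size on core mask 0x2, and -M 100 selects a 100% read mix. Assuming -M is the read percentage (100 for all reads, 0 for all writes), a matching pure-write run would be:

perf -q 64 -s 4096 -w randrw -M 0 -t 30 -r 'trtype:RDMA adrfam:IPv4 traddr:192.168.8.80 trsvcid:4420' -l -c 0x2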

Testing environment and results

CPU: Intel Xeon E5-2680

Memory: 64 GB

NVMe: 2 x Intel P3700 NVMe SSDs for data, plus 1 additional NVMe SSD for metadata.

NIC: Mellanox ConnectX-3 40GbE

Performance: around 1,170,000 IOPS for reads and 690,000 IOPS for writes.