NVMStack releases a high-performance, optimized SDS stack for hybrid and all-flash arrays

NVMStack, globally recognized in the high-performance software-defined storage field, has announced a new release of its software stack for flash arrays.

NVMStack ships its own core service: a performance-optimized, lock-less scheduler that, much like an operating system, manages all resources in the server, including CPU, memory, storage, PCI devices, and NUMA topology. It runs in polling mode and makes full use of the hardware to deliver maximum performance for the storage service.

NVMStack also provides a user-mode NVMe driver (kernel bypass) that supports directly attached disks, RAID pools, and SDS pools. An SDS pool can host multiple volumes, each exposed as a highly available, snapshot-enabled iSCSI, iSER, or NVMe-oF target.

Key features:

100% SDS pool: manages all disks as a single pool, with the ability to create arbitrary, dynamic block volumes with unlimited zero-copy snapshots.
Polling-mode server pool: listens on multiple NICs and ports.
Protocol support: iSCSI (TCP), iSER (iSCSI Extensions for RDMA), and NVMe-oF (NVMe over Fabrics).
High availability and remote mirror: all interfaces (iSCSI, iSER, and NVMe-oF) support HA.
Kernel bypass: the I/O path completely bypasses the kernel with zero data copies (except for high-latency disk support).
Non-SDS pool: provides directly attached and RAID (0, 1, 5) mode storage pools.
Legacy device support: SATA/SAS HDDs and SSDs.
Data safety: strong consistency; I/Os complete only after data is safely placed on disk.
Easy management: an easy-to-use, all-in-one, centralized web management platform.

There are almost no restrictions on using NVMStack: it works on all-flash arrays as well as traditional SATA/SAS arrays, and even one or two NVMe devices are enough to gain the benefits of kernel-bypass performance.

NVMStack software-defined storage is now available to end users and OEM partners worldwide.

NVMStack Release Notes

Release Version: NVMStack 2019 V1, Release Date: 12/03/2018

Release Feature:

  • Storage Network: iSCSI, iSER and NVMe over Fabrics
  • Server Pool: Polling Mode
  • Storage Pool: Software Defined Storage Pool and RAID Pool
  • Snapshots: Unlimited, Zero-Copy Snapshots
  • Replication: High Availability and Replication over RDMA or TCP
  • Web Management: Manage multiple servers in one platform

How to install NVMStack

Installing NVMStack Software Defined Storage is very easy. After you request the NVMStack install package from us, follow these steps to install it.

1. Change into the directory of the install package:

cd nvmstack

2. Run the installer:

./install.sh

3. Wait for the installation to finish; it takes less than one minute:

Installing nvmstackd…
**********************setup nvmstack************************
Copying files…
Installing WEB management platform…
Setting up service…
Starting service…
Starting nvmstackd (via systemctl): [ OK ]
***********************finished*****************************
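
Optionally, verify that the service is running (assuming the systemd unit is named nvmstackd, as the installer output above suggests):

systemctl status nvmstackd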

4. Launch the web management platform:

Open any web browser and navigate to the URL:

http://server_ip/

The default credentials are:

Username: root
Password: nvmstack

Done.
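
If the login page does not load, one quick way to check from the shell that the web service is reachable is with curl (replace server_ip with your server's actual address):

curl -I http://server_ip/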

Using the command-line iSER demo to test iSER target performance

The package we provide includes a few demo utilities that create a single pooled storage volume to demonstrate storage features and performance. This way, you can quickly preview the key features in minutes.

For an iSER target, you may run the following command to start it:

./nvmf_test -p iser spool 1 2 /dev/sdc pci://0000:04:00.0 pci://0000:05:00.0
The output is:
found 1 ib devices
device0: mlx4_0, uverbs0, /sys/class/infiniband_verbs/uverbs0, /sys/class/infiniband/mlx4_0
copies: 1/2
create name space: iqn.2018-03.com.nvmstack:volume0…done.
iSCSI over RDMA service is started.
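
To measure performance from a Linux initiator, you can log in to the iSER target with the standard open-iscsi tools and then run fio against the resulting block device. The following is a sketch under a few assumptions: the target listens on the default iSCSI port 3260, 192.168.80.111 stands in for your target's address, and /dev/sdX is the device that appears after login.

# Discover the target portal (replace the address with your target's IP)
iscsiadm -m discovery -t sendtargets -p 192.168.80.111:3260

# Switch the discovered node to the iSER transport, then log in
iscsiadm -m node -T iqn.2018-03.com.nvmstack:volume0 -p 192.168.80.111:3260 \
         -o update -n iface.transport_name -v iser
iscsiadm -m node -T iqn.2018-03.com.nvmstack:volume0 -p 192.168.80.111:3260 --login

# Read-only 4K random-read test (replace /dev/sdX with the newly attached device)
fio --name=iser-randread --filename=/dev/sdX --rw=randread --bs=4k --iodepth=32 --numjobs=4 --ioengine=libaio --direct=1 --runtime=30 --time_based --group_reporting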


Using the command-line NVMe-oF demo to test off-load NVMe-oF target performance

The package we provide includes a few demo utilities that create a single pooled storage volume to demonstrate storage features and performance. This way, you can quickly preview your favorite features in minutes.

For an NVMe-oF target, you may run the following command to start it:

./nvmstack_demo -d nvmf spool 1 3 /dev/sdc pci://0000:04:00.0 pci://0000:05:00.0 pci://0000:05:00.0
The output is:
found 1 ib devices
device0: mlx4_0, uverbs0, /sys/class/infiniband_verbs/uverbs0, /sys/class/infiniband/mlx4_0
copies: 1/2
create name space: nqn.2018-03.com.nvmstack:0…done.
NVMe over Fabric service is started.
You can also use the following command line to start a RAID pool:
./nvmstack_demo -d nvmf raid0 3 pci://0000:04:00.0 pci://0000:05:00.0 pci://0000:05:00.0
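
Both demo commands assume a working RDMA-capable NIC. Before running them, you can confirm that the adapter is visible to the verbs layer with the standard libibverbs utilities (an optional check; the package is typically named libibverbs-utils):

# List RDMA devices (should show mlx4_0 or similar)
ibv_devices

# Show device details and port state (look for PORT_ACTIVE)
ibv_devinfo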


Configure NVMe-oF device with Multipath

Requirements:

Linux kernel 4.8 or newer.

Packages: device-mapper-multipath (multipath), nvme-cli

Install them as follows:

#yum install device-mapper-multipath nvme-cli
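
After the packages are installed, enable and start the multipath daemon, then verify that paths are detected. A minimal sketch for RHEL/CentOS; adjust for your distribution:

# Create a default /etc/multipath.conf and enable multipathing
mpathconf --enable

# Start multipathd now and at boot
systemctl enable --now multipathd

# List multipath devices and their paths
multipath -ll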


Using Linux nvme-cli to connect to the NVMStack’s NVMe-oF targets

Requirements

To use the Linux NVMe over Fabrics client, you need a Linux kernel version 4.8 or above.

Install the nvme-cli package on the client machine:

#yum install nvme-cli

or on Debian:

#apt-get install nvme-cli

Load the Kernel Module

#modprobe nvme-rdma
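
You can confirm that the module is loaded, and optionally make it load automatically at boot (using the standard systemd modules-load mechanism; the file name here is only an example):

# Verify the module is loaded
lsmod | grep nvme_rdma

# Load nvme-rdma automatically at boot
echo nvme-rdma > /etc/modules-load.d/nvme-rdma.conf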

Discover NVMe-oF subsystems

nvme discover -t rdma -a 192.168.80.111 -s 4420
Discovery Log Number of Records 1, Generation counter 1

=====Discovery Log Entry 0======
trtype:  rdma
adrfam:  ipv4
subtype: nvme subsystem
treq:    not specified
portid:  1
trsvcid: 4420
subnqn:  nqn.2016-12.com.nvmstack:test-volume0
traddr:  192.168.80.111
rdma_prtype: not specified
rdma_qptype: connected
rdma_cms:    rdma-cm
rdma_pkey: 0x0000

Connect to NVMe-oF subsystems

nvme connect -t rdma -n nqn.2016-12.com.nvmstack:test-volume0 -a 192.168.80.111 -s 4420

List NVMe device info:

#nvme list
Node SN Model Namespace Usage Format FW Rev
---------------- -------------------- ---------------------------------------- --------- -------------------------- ---------------- --------
/dev/nvme0n1 NVMStack Controller 160.11 GB / 160.11 GB 512 B + 0 B A348BB834CD3544
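
Once the namespace appears (as /dev/nvme0n1 in the listing above), you can run a quick, read-only performance check with fio. This is a sketch; the device name and job parameters are examples, and a write test would destroy data on the volume:

fio --name=nvmf-randread --filename=/dev/nvme0n1 --rw=randread --bs=4k --iodepth=32 --numjobs=4 --ioengine=libaio --direct=1 --runtime=30 --time_based --group_reporting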

Disconnect NVMe-oF subsystems

To disconnect from the target, run the nvme disconnect command:

#nvme disconnect -d /dev/nvme0n1
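
nvme-cli can also disconnect by subsystem NQN instead of device name, which tears down all controllers connected to that subsystem:

#nvme disconnect -n nqn.2016-12.com.nvmstack:test-volume0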