An alternative to NetApp, Vast, Infinidat and other American storage arrays. That is the argument put forward by the almost-European manufacturer NGX to promote its arrays, which machines on the network can use both in NAS mode (NFS or SMB) and in block mode (FC or iSCSI).
NGX is a Turkish company that claims to sell its products mainly in Europe, adapted to all performance needs: data is stored on hard drives after passing through several levels of cache – on SSDs, on Intel Optane modules and in RAM – whose amounts can be allocated according to the applications.
“Banks use our products as a SAN for their databases and as a NAS for their developers. Hosting providers use block mode for virtual machine disk images and file mode for storing videos. Research institutes tend to use file mode for their VMs under OpenStack and block mode for the shared Lustre volumes between the nodes of their compute clusters,” explains Ali Kemal Yurtseven, CEO of NGX.
“We will not claim to be cheaper than the American storage brands. But our production is local and responsive, with no delivery problems for our European customers. We are open to requests for improvement and, in fact, we believe our arrays are now among the easiest to use: they are operational five minutes after being unpacked,” he says.
Met at an IT Press Tour event focusing on European storage players, Ali Kemal Yurtseven explained to MagIT that he is currently looking for resellers in Western Europe, having already established numerous contracts in the Middle East.
Heavy use of cache
Technically, NGX arrays are disaggregated. On the network, sharing nodes (the “controllers”) distribute access and pass the data along to storage nodes, to which they are all connected via a RoCE Ethernet switch carrying an InfiniBand protocol. From the controllers’ point of view, the storage nodes behave like their own internal SSDs, whether those nodes are actually filled with SSDs or with hard drives.
This architecture allows each component to grow as needed: more controllers to maximize parallel access, more SSD shelves to maximize speed, or more hard-drive shelves to maximize capacity. According to NGX, 20 PB of storage can, for example, be deployed behind just two controllers. Each controller delivers its data over the network at 9 Gbps, with millisecond-range latency, over both 100 Gbps Ethernet and 32 Gbps FC links.
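As a rough illustration of how these building blocks combine, the sketch below sizes a hypothetical cluster against the 20 PB figure cited above; the shelf density, drive size and usable-capacity ratio are assumptions made for the example, not NGX specifications.

```python
import math

TARGET_CAPACITY_PB = 20   # figure NGX cites as reachable with two controllers
CONTROLLERS = 2

DRIVES_PER_SHELF = 60     # assumption: a typical high-density disk shelf
DRIVE_SIZE_TB = 20        # assumption: current large nearline hard drives
USABLE_RATIO = 0.8        # assumption: space left after protection overhead

shelf_usable_pb = DRIVES_PER_SHELF * DRIVE_SIZE_TB * USABLE_RATIO / 1000
shelves_needed = math.ceil(TARGET_CAPACITY_PB / shelf_usable_pb)

print(f"Usable capacity per shelf: {shelf_usable_pb:.2f} PB")
print(f"Shelves needed for {TARGET_CAPACITY_PB} PB: {shelves_needed}")
print(f"Controllers fronting the cluster: {CONTROLLERS}")
```

Under these assumptions, capacity is added by stacking disk shelves while the two controllers continue to handle all access paths.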
The controllers make heavy use of their RAM and Optane modules to store metadata that helps locate data blocks faster. In practice, a volume is not assigned to a single disk or SSD in a shelf, but fragmented across several shelves in order to limit latency. Likewise, modifying a block creates a new block in a free location, and the previous block is deleted later, when enough processing time is available. This scheme also serves as RAID; it requires at least three storage nodes.
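The redirect-on-write behaviour described here (write the new copy first, reclaim the old one lazily) can be pictured with a minimal sketch; the class and field names below are purely illustrative and do not reflect NGX’s actual code.

```python
# Minimal redirect-on-write sketch: a modified block is never overwritten in
# place; the new version lands in a free slot and the old slot is reclaimed
# later, when there is spare processing time. Names are illustrative only.

class RedirectOnWriteStore:
    def __init__(self, num_slots: int):
        self.slots = [None] * num_slots     # physical block slots
        self.free = set(range(num_slots))   # free physical slots
        self.block_map = {}                 # logical block id -> slot (metadata kept in fast memory)
        self.to_reclaim = []                # old slots awaiting cleanup

    def write(self, block_id: int, data: bytes) -> None:
        new_slot = self.free.pop()          # pick any free slot
        self.slots[new_slot] = data
        old_slot = self.block_map.get(block_id)
        self.block_map[block_id] = new_slot  # metadata flips to the new copy
        if old_slot is not None:
            self.to_reclaim.append(old_slot)  # old copy freed lazily

    def read(self, block_id: int) -> bytes:
        return self.slots[self.block_map[block_id]]

    def garbage_collect(self) -> None:
        # Runs when the system has idle time, as described in the article.
        while self.to_reclaim:
            slot = self.to_reclaim.pop()
            self.slots[slot] = None
            self.free.add(slot)

store = RedirectOnWriteStore(num_slots=8)
store.write(1, b"v1")
store.write(1, b"v2")        # creates a new copy, the old one is queued
print(store.read(1))         # b"v2"
store.garbage_collect()      # old slot returned to the free pool
```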
Other controller functions include compression, deduplication and thin provisioning, all of which happen in real time. There are also snapshot functions.
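As a generic illustration of what inline deduplication means in practice, the toy sketch below keys physical blocks by a content hash so that identical blocks are stored only once; it is not NGX’s implementation.

```python
# Toy inline deduplication: identical blocks are detected by a content hash
# before being stored, so only one physical copy is kept.

import hashlib

physical_blocks = {}   # content hash -> data (one physical copy per unique block)
logical_map = {}       # (volume, logical block number) -> content hash

def write_block(volume: str, lbn: int, data: bytes) -> None:
    digest = hashlib.sha256(data).hexdigest()
    physical_blocks.setdefault(digest, data)   # stored only if not already present
    logical_map[(volume, lbn)] = digest

write_block("vm-images", 0, b"A" * 4096)
write_block("vm-images", 1, b"A" * 4096)       # duplicate content, no extra copy
print(len(physical_blocks))                    # 1
```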
On the storage node side, nodes made up of hard drives embed a cache on SAS SSDs, while those containing SAS SSDs have a cache on NVMe SSDs. Writes flow from one cache to the next, while the system evaluates which blocks should remain in which type of storage, depending on the application and on the likelihood that those blocks will need to be read again often.
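The tiering logic can be pictured with a minimal sketch: writes land in a small fast tier, and only the most frequently re-read blocks stay there. Capacities, thresholds and names below are illustrative assumptions, not NGX parameters.

```python
# Sketch of frequency-based tiering: blocks live on a slow tier (hard drives
# in this analogy) and the most re-read blocks are promoted to a faster
# cache tier (SSDs). Thresholds and names are illustrative only.

from collections import Counter

CACHE_CAPACITY = 2          # assumption: how many blocks the fast tier holds

slow_tier = {}              # block id -> data (HDD shelf in this analogy)
fast_tier = {}              # block id -> data (SSD cache in this analogy)
access_counts = Counter()

def write(block_id, data):
    fast_tier[block_id] = data          # writes land in the cache first
    flush()

def read(block_id):
    access_counts[block_id] += 1
    if block_id in fast_tier:
        return fast_tier[block_id]
    data = slow_tier[block_id]
    promote_if_hot(block_id, data)
    return data

def promote_if_hot(block_id, data):
    # Keep only the most re-read blocks in the limited fast tier.
    if access_counts[block_id] >= 2:
        fast_tier[block_id] = data
        flush()

def flush():
    # Demote the least-read blocks to the slow tier when the cache is full.
    while len(fast_tier) > CACHE_CAPACITY:
        coldest = min(fast_tier, key=lambda b: access_counts[b])
        slow_tier[coldest] = fast_tier.pop(coldest)

write("blk-a", b"data-a")
write("blk-b", b"data-b")
write("blk-c", b"data-c")   # cache full: the least-read block drops to the slow tier
print(read("blk-a"))        # served from the slow tier until it becomes hot
```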
Coming soon: SDS, NVMe-oF and object mode
Since it was founded in 2015 in Ankara, NGX has sold 200 storage clusters and its annual turnover is now close to 20 million euros.
Above all, NGX explains that it has many projects in the pipeline. It will soon offer its solution in a software-only form – an SDS, in short – that could potentially be used from a cloud. “From the start of this year, 2023, you will find us on AWS, Amazon or Google,” promises Ali Kemal Yurtseven.
Later this year, NGX is also expected to release a version of its offering that supports use as an NVMe-over-Fabrics (NVMe-oF) SAN. This will likely be NVMe/TCP first, for Ethernet deployments, then NVMe/FC for those using a Fibre Channel network. Based on libraries provided by Intel, this new protocol stack may come with CXL drivers, which would allow NGX clusters to claim cache memory located elsewhere on the network.
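For context, attaching an NVMe/TCP volume from a Linux host typically goes through the standard nvme-cli tool, as in the sketch below; the portal address and NQN are placeholders, since NGX’s NVMe-oF support is only announced at this stage.

```python
# Driving nvme-cli from Python to attach an NVMe/TCP namespace on a Linux
# host. The target address and NQN are placeholders, not NGX values.

import subprocess

TARGET_ADDR = "192.0.2.10"                         # placeholder portal address
TARGET_NQN = "nqn.2014-08.org.example:subsystem1"  # placeholder subsystem NQN

# Discover the subsystems exposed by the target over TCP port 4420.
subprocess.run(
    ["nvme", "discover", "-t", "tcp", "-a", TARGET_ADDR, "-s", "4420"],
    check=True,
)

# Connect to one subsystem; its namespaces then appear as /dev/nvmeXnY devices.
subprocess.run(
    ["nvme", "connect", "-t", "tcp", "-n", TARGET_NQN, "-a", TARGET_ADDR, "-s", "4420"],
    check=True,
)
```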
Finally, the solution should soon integrate object-mode operation directly. Until now, NGX array deployments that required S3 access to data have been accompanied by S3 gateways provided by MinIO.
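In practice, that S3 access works with any standard S3 client pointed at the MinIO gateway; the snippet below uses boto3, with placeholder endpoint, credentials and bucket names.

```python
# Example of S3 access through a MinIO gateway fronting the array.
# Endpoint, credentials and bucket name are placeholders.

import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="http://minio-gateway.example.local:9000",  # placeholder gateway address
    aws_access_key_id="ACCESS_KEY",                          # placeholder credentials
    aws_secret_access_key="SECRET_KEY",
)

s3.create_bucket(Bucket="backups")
s3.put_object(Bucket="backups", Key="db/dump-2023-01.sql", Body=b"...")
print([o["Key"] for o in s3.list_objects_v2(Bucket="backups").get("Contents", [])])
```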