Data is being created everywhere, by everyone, all the time: everyday communication like email and text, photos and videos on personal devices, IoT devices like smart fridges, pedometers and cars, and business applications in banking, healthcare and manufacturing. The list is endless. In fact, according to DOMO research, as reported by Forbes, 2.5 quintillion bytes of data are created each day. Not surprisingly, traditional storage architectures are struggling to keep up with this data explosion, leading IT teams to investigate new solutions that let them capitalize on the data boom rather than drown in it.
From a storage standpoint, taking a step back to look at the big picture, the primary high-level challenges are understanding performance, removing data-throughput bottlenecks and planning for future capacity. Architecture choices can lock businesses into legacy solutions, and performance needs vary and change as data sets grow.
NVMe-oF represents the next phase in the evolution of data storage technology, paving the way for rack-scale flash systems that integrate native end-to-end data management. Architectures designed and built around NVMe-oF and SSDs have gained prominence of late because they strike the right balance, particularly for data-intensive applications that demand fast performance. This is especially important for organizations that depend on speed, accuracy and real-time data insights.
Industries such as healthcare, autonomous vehicles, AI/ML and genomics are at the forefront of the transition to high performance NVMe-oF storage solutions that deliver fast data access for high performance computing systems and applications that drive new research and innovations.
A growing trend in the tech industry is autonomous vehicles. Self-driving cars are the next big thing, and various companies are working tirelessly to perfect the technology. To function properly, these vehicles need very fast storage to accelerate the applications and data that ‘drive’ autonomous vehicle development. Core requirements for autonomous vehicle storage include:
- Must be able to accept input data from cameras and sensors at “line rate” (have extremely high throughput and low latency)
- Must have a high capacity in a small form factor
- Must be robust and survive media or hardware failures
- Must be easily removable and reusable
- Must use simple but robust networking
- Must be “green” and have minimal power footprint
What kind of storage meets all these requirements? You guessed it: NVMe-oF!
With traditional storage architectures, detailed genome analysis can take five days or more to complete, which makes sense considering that an initial analysis of one person’s genome produces approximately 300 GB to 1 TB of data, and a single round of secondary analysis on just one person’s genome can require upwards of 500 TB of storage capacity. However, with an NVMe solution implemented, it’s possible to get results in just one day.
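A quick back-of-the-envelope calculation shows why the storage layer dominates here. This sketch uses only the figures from the paragraph above (500 TB of data, a five-day window versus a one-day window) to estimate the sustained bandwidth each window implies:

```python
# Sustained bandwidth needed to move 500 TB of secondary-analysis
# data within a given time window (decimal units, as storage vendors count).
TB = 10**12  # bytes per terabyte

def required_bandwidth_gbps(capacity_tb, days):
    """Sustained GB/s needed to move capacity_tb terabytes in `days` days."""
    seconds = days * 24 * 3600
    return capacity_tb * TB / seconds / 10**9

five_day = required_bandwidth_gbps(500, 5)  # traditional architecture window
one_day = required_bandwidth_gbps(500, 1)   # NVMe-oF window

print(f"5-day window: {five_day:.2f} GB/s sustained")  # ~1.16 GB/s
print(f"1-day window: {one_day:.2f} GB/s sustained")   # ~5.79 GB/s
```

Even the five-day window demands more than a gigabyte per second sustained; compressing it to one day pushes the requirement to nearly 6 GB/s, which is well beyond what spinning-disk arrays sustain and squarely in NVMe territory.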
In a standard study, genome research and life sciences companies need to process, compare and analyze the genomes of between 1,000 and 5,000 people per study. This is a mammoth amount of data to store, but it’s imperative that it’s done. Such studies are progressing toward revolutionary scientific and medical advances, looking to personalize medicine and provide advanced cancer treatments. This is only now becoming possible thanks to the speed that NVMe-oF enables researchers to explore and analyze the human genome.
Artificial Intelligence (AI) is gaining a great deal of traction in a number of industries, from finance to manufacturing and beyond. In finance, AI predicts investment trends. In manufacturing, AI-based image recognition software checks for defects during product assembly. Wherever it’s used, AI requires a high level of computing power, coupled with a high-performance, low-latency architecture that enables parallel processing of data in real time.
NVMe-oF steps up to the plate, providing the speed and processing power that is critical during training and inference. Without it, bottlenecks and latency issues can make these stages take much, much longer, which in turn can tempt teams to take shortcuts, causing software to malfunction or make incorrect decisions down the line.
Another key use case for NVMe-oF storage is combining it with ultra-fast GPU systems such as the NVIDIA DGX-2. Presenting NVMe-oF storage to the DGX-2, via RoCE or InfiniBand, can push the GPUs in the system to the highest possible utilization. This speeds up AI/ML and genomics workloads and makes slower storage, such as NFS, more effective. NVMe-oF storage can be thought of as a “burst buffer” for the DGX-2 or other GPU-based systems.
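As a hedged illustration of what “presenting NVMe-oF storage over RoCE” looks like in practice, the standard Linux nvme-cli tool can discover and connect to a remote subsystem; the IP address, port and subsystem NQN below are hypothetical placeholders, not values from any specific deployment:

```shell
# Discover NVMe-oF subsystems exported by a target over RDMA (RoCE).
# 10.0.0.10 and the NQN below are hypothetical placeholder values.
nvme discover -t rdma -a 10.0.0.10 -s 4420

# Connect to one discovered subsystem; its namespaces then appear
# locally as /dev/nvmeXnY block devices the GPU host can read directly.
nvme connect -t rdma -a 10.0.0.10 -s 4420 \
    -n nqn.2019-01.com.example:flash-array

# Verify the remote namespaces are now visible as local block devices.
nvme list
```

Because the fabric-attached namespaces look like local NVMe devices to the host, data-loading pipelines feed the GPUs without the protocol overhead of a file-sharing layer such as NFS.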
The Future is Now
The rapid increase in data creation, and the applications leveraged to capitalize on it, have put traditional storage architectures under heavy pressure: they lack the scalability and flexibility required to meet future capacity and performance requirements. This is where NVMe-oF comes in, breaking the barriers of existing designs by offering unprecedented density and performance. These breakthroughs deliver what’s needed to manage the data boom, today and into the future.