NVIDIA GPU ACCELERATORS GET A DIRECT PIPE TO BIG DATA

Nvidia has unveiled GPUDirect Storage, a new capability that enables its GPUs to talk directly with NVM-Express storage. The technology uses GPUDirect's RDMA facility to transfer data from flash storage into the GPU's local memory without involving the host CPU and system memory. The move is part of the company's strategy to expand its reach into data science and machine learning applications.

If successful, Nvidia could largely edge out CPUs from yet another fast-growing application area. The data science and machine learning server market is thought to be a $20 billion to $25 billion per year opportunity, about the same size as the combined HPC and deep learning server market. Essentially, Nvidia is looking to double its application footprint in the datacenter.

That expansion strategy began in earnest last October, when the company introduced RAPIDS, a suite of open source tools and libraries for GPU-powered analytics and machine learning. In a nutshell, RAPIDS added support for GPU acceleration in Apache Arrow, Spark, and other elements of the data science toolchain. It was designed to bring GPUs into the more traditional world of big data enterprise applications, which until now has been dominated by CPU-based clusters running the likes of Hadoop and MapReduce.

According to Josh Patterson, Nvidia's new general manager of data science, RAPIDS encompassed all of machine learning, both supervised and unsupervised, as well as data processing. That was met with some skepticism from the traditional enterprise crowd. "I think the data processing part was what caught people off guard," Patterson tells The Next Platform.
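Applications reach this direct storage-to-GPU path through Nvidia's cuFile API. The following is a minimal sketch, not a complete program: it assumes a Linux system with CUDA and the GPUDirect Storage (libcufile) stack installed, and the file path and buffer size are hypothetical placeholders.

```cuda
// Sketch: reading from NVMe flash directly into GPU memory via cuFile.
// Assumes CUDA + libcufile are installed; link with -lcufile -lcudart.
#include <fcntl.h>
#include <unistd.h>
#include <stdio.h>
#include <string.h>
#include <cuda_runtime.h>
#include <cufile.h>

int main(void) {
    const size_t buf_size = 1 << 20;           // 1 MiB read (hypothetical size)
    cuFileDriverOpen();                        // initialize the GDS driver

    // O_DIRECT bypasses the page cache so the DMA avoids host system memory
    int fd = open("/mnt/nvme/data.bin", O_RDONLY | O_DIRECT);  // hypothetical path
    if (fd < 0) { perror("open"); return 1; }

    CUfileDescr_t descr;
    memset(&descr, 0, sizeof(descr));
    descr.handle.fd = fd;
    descr.type = CU_FILE_HANDLE_TYPE_OPAQUE_FD;
    CUfileHandle_t handle;
    cuFileHandleRegister(&handle, &descr);     // register the file with cuFile

    void *dev_buf;
    cudaMalloc(&dev_buf, buf_size);            // destination buffer in GPU memory
    cuFileBufRegister(dev_buf, buf_size, 0);   // pin/register the buffer for DMA

    // Read from file offset 0 into GPU buffer offset 0 -- the data moves
    // from flash to GPU memory without a bounce buffer in host RAM
    ssize_t n = cuFileRead(handle, dev_buf, buf_size, 0, 0);
    printf("read %zd bytes directly into GPU memory\n", n);

    cuFileBufDeregister(dev_buf);
    cudaFree(dev_buf);
    cuFileHandleDeregister(handle);
    close(fd);
    cuFileDriverClose();
    return 0;
}
```

The key difference from an ordinary `read()` plus `cudaMemcpy()` is that the destination of `cuFileRead` is a device pointer, so the transfer is a single DMA from storage to GPU memory rather than two hops through system memory.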
