Converging Workflows Pushing Converged Software onto HPC Platforms

Are we witnessing the convergence of HPC, big data analytics, and AI? Once, these were separate domains, each with its own system architecture and software stack, but the data deluge is driving their convergence. Traditional big-science HPC is looking more like big data analytics and AI, while analytics and AI are taking on the flavor of HPC. The data deluge is real. In 2018, CERN's Large Hadron Collider generated over 50 petabytes of data (a petabyte is 1,000 terabytes, or 10^15 bytes), and CERN expects that volume to grow tenfold by 2025. The average Internet user generates over a gigabyte of data traffic every day; a smart hospital, over 3,000 GB per day; a manufacturing plant, over 1,000,000 GB per day. A single autonomous vehicle is estimated to generate 4,000 GB per day. Every day. Total annual digital data output is predicted to reach or exceed 163 zettabytes (a zettabyte is one sextillion, or 10^21, bytes) by 2025. All of this data needs to be analyzed at near-real-time speed and stored somewhere for easy access by multiple collaborators. Extreme performance, storage, and networking: that sounds a lot like HPC.

What characterized "traditional" HPC was achieving extreme performance on computationally complex problems, typically simulations of real-world systems: explosions, oceanography, global weather hydrodynamics, even cosmological events such as supernovae. This meant very large parallel processing systems with hundreds, even thousands, of dedicated compute nodes and vast multi-layer storage appliances, all connected by high-speed networks.
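As a concrete illustration of that parallel-processing model, here is a minimal sketch of an HPC-style job written against MPI, the message-passing library behind most traditional simulation codes. The work division and the toy computation are assumptions for illustration, not anything described in this article: each rank (process) computes a partial result for its own slice of a domain, and a collective reduction combines the pieces.

/* Minimal MPI sketch: every rank handles one slice of a (hypothetical)
 * 1D domain, then the partial results are combined with a reduction. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's ID */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of processes */

    const long N = 1000000000L;              /* total cells (assumed) */
    long chunk = N / size;
    long start = rank * chunk;
    long end   = (rank == size - 1) ? N : start + chunk;

    double local_sum = 0.0;
    for (long i = start; i < end; i++)
        local_sum += (double)i * 1e-9;       /* stand-in for real physics */

    /* Gather the partial results from every node onto rank 0. */
    double global_sum = 0.0;
    MPI_Reduce(&local_sum, &global_sum, 1, MPI_DOUBLE, MPI_SUM, 0,
               MPI_COMM_WORLD);

    if (rank == 0)
        printf("global result: %f (computed by %d ranks)\n", global_sum, size);

    MPI_Finalize();
    return 0;
}

Compiled with mpicc and launched with, say, mpirun -np 1024 ./sim, the same binary runs unchanged on a laptop or across a thousand cluster nodes; the scheduler and the interconnect, not the code, supply the scale.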
