Cray Inc. releases new supercomputers to power next-gen AI

The tremendous advances in AI in the last few years have largely been driven by two factors: the volume of data available to train algorithms on, and the ability of computers to process that data quickly and efficiently. This trend has recently reached its natural conclusion with the development of processors designed specifically for AI.

For instance, earlier this year supercomputing giant Cray Inc. entered into a deep learning collaboration with Microsoft and the Swiss National Supercomputing Centre. The project aimed to improve the ability of companies to run deep learning algorithms at scale, with the partners leveraging their collective computing expertise to scale the Microsoft Cognitive Toolkit up onto a Cray XC50 supercomputer.

Powering AI

Cray has continued to develop machinery for AI with the launch of two new Cray® CS-Storm™ accelerated cluster supercomputers: the Cray CS-Storm 500GT and the Cray CS-Storm 500NX. The machines are designed specifically for AI processing, allowing customers to run machine learning and deep learning applications.

The computers are equipped with NVIDIA Tesla GPU accelerators and add to Cray's portfolio of accelerated supercomputers, giving customers a choice of machines for heavy-duty, data-intensive work.

“Customer demand for AI-capable infrastructure is growing quickly, and the introduction of our new CS-Storm systems will give our customers a powerful solution for tackling a broad range of deep learning and machine learning workloads at scale with the power of a Cray supercomputer,” the company said. “The exponential growth of data sizes, coupled with the need for faster time-to-solutions in AI, dictates the need for a highly-scalable and tuned infrastructure.”

The system provides up to 187 TOPS (tera operations per second) per node, 2,618 TOPS per rack for machine learning application performance, and up to 658 double precision TFLOPS per rack for HPC application performance.
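As a rough sanity check on these figures (an inference from the article's numbers, not a Cray specification), the quoted per-rack and per-node throughput imply about 14 nodes per rack:

```python
# Rough sanity check on the quoted performance figures.
# The per-rack / per-node ratio implies roughly 14 nodes per rack
# (inferred from the article's numbers, not a Cray spec sheet).
tops_per_node = 187    # tera operations per second, per node
tops_per_rack = 2618   # per rack, machine learning workloads

nodes_per_rack = tops_per_rack / tops_per_node
print(round(nodes_per_rack))  # → 14
```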

“Early adopters of big data analytics and AI have learned a painful lesson as they have struggled to scale their applications and keep pace with data growth and use more sophisticated models,” said Shahin Khan, founding partner at OrionX Research. “You must have the right systems from the beginning to be able to scale, otherwise inefficiencies accumulate and multiply. Expertise in large scale system design and application optimization is critical. That’s an area that Cray has led for decades.”
