IBM beefs up its AI credentials with Power9 systems and new software

[Image: The installation of the Summit supercomputer. Credit: ORNL]

IBM is doubling down on AI: releasing new software to help train machine-learning models and talking up the potential for its new Power9 systems to accelerate intelligent software.

Today IBM unveiled new software that will make it easier to train machine-learning models to make decisions and extract insights from big data.

The Deep Learning Impact software tools will help users develop AI models using popular open-source, deep-learning frameworks, such as TensorFlow and Caffe, and will be added to IBM’s Spectrum Conductor software from December.
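Frameworks such as TensorFlow and Caffe express model training as an iterative, gradient-driven optimization loop. As a framework-agnostic illustration of the kind of work these tools automate at scale, here is a minimal sketch in plain Python that fits a one-weight model by gradient descent; the data and learning rate are hypothetical toy values, not anything specific to IBM's software.

```python
# Toy illustration of the training loop deep-learning frameworks automate:
# fit y = 2x by stochastic gradient descent on squared error.
# All data and hyperparameters here are hypothetical.

def train(samples, lr=0.1, epochs=100):
    w = 0.0  # single trainable weight
    for _ in range(epochs):
        for x, y in samples:
            pred = w * x
            grad = 2 * (pred - y) * x  # derivative of (pred - y)^2 w.r.t. w
            w -= lr * grad
    return w

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]
weight = train(data)
print(round(weight, 3))  # converges toward 2.0
```

Real deep-learning workloads run the same basic loop over millions of parameters and large batches of data, which is why the data movement between CPUs and accelerators discussed below matters so much.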

Alongside the software reveal, IBM has been talking up new systems based around its new Power9 processor — which are on display at this year’s SC17 event.

IBM says these systems are tailored towards AI workloads, due to their ability to rapidly shuttle data between Power9 CPUs and hardware accelerators, such as GPUs and FPGAs, commonly used both for training and for running machine-learning models.

Power9 systems will have high-bandwidth connections between the Power9 processor and accelerators in the rest of the system, according to IBM, which says Power9 will be the first commercial platform with on-chip support for the latest high-speed connectors, including Nvidia’s next-generation NVLink, OpenCAPI 3.0 and PCI-Express 4.0.

“We see that the era of the on-chip microprocessor — with processing integrated on one chip — is dying as well as Moore’s Law lapsing,” says Brad McCredie, VP and IBM Fellow for Cognitive Systems Development.

“Power9 gives us an opportunity to try new architectural designs to push computing beyond today’s limits by maximizing data bandwidth across the system stack.”

“The bedrock of Power9 is an internal ‘information superhighway’ that decouples processing and empowers advanced accelerators to digest and analyze massive data sets.”

The next-generation Nvidia NVLink and OpenCAPI interconnects will provide significantly faster performance for attached GPUs than offered by the PCI-Express 3.0 connectors commonly used in x86 systems today, while PCI-Express 4.0 interconnects will be twice the speed of PCI-Express 3.0.
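The PCI-Express doubling is straightforward to check from the published per-lane transfer rates: PCIe 3.0 runs at 8 GT/s per lane and PCIe 4.0 at 16 GT/s, both with 128b/130b line encoding. A short back-of-the-envelope sketch of the approximate per-direction bandwidth of a x16 slot:

```python
# Approximate per-direction bandwidth (GB/s) of a x16 PCIe link, from the
# per-lane transfer rate and the 128b/130b line encoding used by gen 3/4.
def pcie_bandwidth_gbps(gt_per_s, lanes=16):
    payload_fraction = 128 / 130  # 128b/130b encoding overhead
    return gt_per_s * payload_fraction * lanes / 8  # bits -> bytes

gen3 = pcie_bandwidth_gbps(8.0)   # PCIe 3.0: 8 GT/s per lane
gen4 = pcie_bandwidth_gbps(16.0)  # PCIe 4.0: 16 GT/s per lane
print(round(gen3, 2), round(gen4, 2))  # roughly 15.75 and 31.51 GB/s
```

These are raw link rates per direction; achievable throughput is lower once protocol overheads are accounted for, and NVLink's advantage comes from aggregating multiple links per GPU rather than from a single faster lane.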

The “crown jewels” of the new Power9 systems, according to IBM, are the Summit and Sierra supercomputers being built for the US Department of Energy, which also use Nvidia’s latest Volta-based Tesla GPU accelerators. The Summit supercomputer is expected to boost application performance by five to 10 times over the DOE’s older Titan supercomputer.

IBM’s focus on laying the groundwork for systems that can efficiently spread processing between many different types of chips is partly a result of work it has done with Google, Mellanox, Nvidia and others in the OpenPower Foundation.

Earlier this year, IBM senior VP Bob Picciano discussed how the firm planned to create systems better able to tackle workloads associated with using AI to analyze unstructured data.
