Inspur's Secrets Unveiled Behind Baidu's Driverless Car Technology

As a pioneer in the artificial intelligence field, Baidu chose the Inspur NF5568M4 heterogeneous supercomputing server for road-condition model training in its driverless car program.

BEIJING, July 21, 2016 /PRNewswire/ -- Artificial intelligence has advanced through the years, and voice recognition, intelligent hardware, and driverless cars are all technologies that influence our lives. Behind artificial intelligence is a neural network built through deep learning, which mimics mechanisms of the human brain when interpreting data. To meet the latest deep learning requirements, high-performance CPU + GPU co-processing servers are becoming the essential hardware foundation of artificial intelligence.


The 4U 4-card design of the Inspur NF5568M4 fits the power and heat dissipation envelopes of present data centers, and it scales out to multi-machine, multi-card GPU computing clusters via the open-source Inspur Caffe-MPI, making it a mainstream GPU server in today's internet industry. Currently, Inspur's deep learning solution is deployed at Tencent, Baidu, Alibaba, Qihoo, iFLYTEK and JD, supporting the "super brains" behind various types of intelligent services.

As neural network models grow in complexity, the computing performance required for model training increases dramatically. The cluster-edition Caffe-MPI computing framework launched by Inspur enables parallel computing across GPU servers. It adopts mature, high-performance MPI technology, applying data-parallel optimization to the Caffe framework, and can organize multiple NF5568M4 servers into GPU parallel computing clusters over an InfiniBand network. Actual measurements show that a 16-GPU cluster of four NF5568M4 servers delivers a 13x speedup over a single GPU, with node scaling efficiency above 90%. The cluster also surpasses a single 4-card machine in stability and heat dissipation, achieving high performance in multi-machine, multi-card GPU computing and satisfying clients' heavy parallel computing demands.
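The data-parallel scheme described above (each GPU computes gradients on its own mini-batch shard, then an allreduce averages them so every worker applies the same update) can be sketched in plain Python. This is a single-process numpy simulation standing in for MPI; all function names are illustrative, not Caffe-MPI's actual API:

```python
import numpy as np

def worker_gradient(w, X, y):
    """Least-squares gradient computed on one worker's data shard."""
    return 2.0 * X.T @ (X @ w - y) / len(y)

def allreduce_mean(grads):
    """Stand-in for MPI_Allreduce followed by division by world size."""
    return np.mean(grads, axis=0)

rng = np.random.default_rng(0)
w = np.zeros(3)
# Four equal-size shards simulate four GPUs, each holding its own mini-batch.
shards = [(rng.normal(size=(8, 3)), rng.normal(size=8)) for _ in range(4)]

for _ in range(100):
    grads = [worker_gradient(w, X, y) for X, y in shards]
    w -= 0.05 * allreduce_mean(grads)  # identical update on every worker
```

With equal shard sizes, the averaged gradient is mathematically identical to the gradient over the full batch, which is why the workers stay in lockstep without exchanging raw data.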

Inspur Caffe-MPI also supports NVIDIA's cuDNN library. By using GPU acceleration for deep learning networks, developers are able to assemble more advanced machine learning frameworks, accelerating their deep learning projects and product development work.
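Libraries like cuDNN supply highly tuned GPU kernels for deep learning primitives such as convolution. One standard trick behind such kernels, and behind Caffe's own convolution layers, is im2col: unfolding input patches into columns so the convolution becomes a single large matrix multiply. A minimal CPU-side numpy sketch (illustrative only, not cuDNN's actual API):

```python
import numpy as np

def im2col(x, k):
    """Unfold every k x k patch of a 2-D input into one column."""
    h, w = x.shape
    oh, ow = h - k + 1, w - k + 1
    cols = np.empty((k * k, oh * ow))
    idx = 0
    for i in range(oh):
        for j in range(ow):
            cols[:, idx] = x[i:i + k, j:j + k].ravel()
            idx += 1
    return cols

def conv2d(x, kernel):
    """Valid cross-correlation expressed as one matrix multiply."""
    k = kernel.shape[0]
    oh, ow = x.shape[0] - k + 1, x.shape[1] - k + 1
    return (kernel.ravel() @ im2col(x, k)).reshape(oh, ow)
```

Casting convolution as matrix multiplication lets the library hand the heavy lifting to a highly optimized GEMM routine, which is where GPUs excel.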
