From Nvidia's CES press conference:
The DRIVE PX platform is based on the NVIDIA® Tegra® X1 processor, enabling smarter, more sophisticated advanced driver assistance systems (ADAS) and paving the way for the autonomous car.
Tegra X1 delivers an astonishing 1.3 gigapixels/second of throughput – enough to handle 12 two-megapixel cameras at frame rates of up to 60 fps for some of the cameras. It is equipped with 10 GB of DRAM and combines surround computer vision (CV) technology, extensive deep learning training, and over-the-air updates to transform how cars see, think, and learn.
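As a quick back-of-the-envelope check on those figures (the throughput and camera numbers are NVIDIA's; the arithmetic below is just our illustration):

```python
# Back-of-the-envelope check of the DRIVE PX camera-throughput claim.
CAMERAS = 12
PIXELS_PER_FRAME = 2e6       # two-megapixel cameras
PLATFORM_GPIX_PER_S = 1.3    # claimed Tegra X1 throughput

demand = CAMERAS * PIXELS_PER_FRAME * 60             # all 12 cameras at 60 fps
print(f"12 x 2 MP x 60 fps = {demand / 1e9:.2f} Gpix/s")   # -> 1.44 Gpix/s

# 1.44 Gpix/s slightly exceeds the 1.3 Gpix/s budget, which is presumably why
# the release says "up to 60 fps for some of the cameras" rather than all 12.
uniform_fps = PLATFORM_GPIX_PER_S * 1e9 / (CAMERAS * PIXELS_PER_FRAME)
print(f"Uniform budget across all 12 cameras: {uniform_fps:.0f} fps")  # ~54 fps
```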
DEEP LEARNING COMPUTER VISION
Conventional ADAS technology today can detect some objects, do basic classification, alert the driver, and, in some cases, stop the vehicle. DRIVE PX takes this to the next level with the ability to differentiate an ambulance from a delivery truck, or a parked car from one about to pull into traffic. The system can now inform the driver, not just get their attention with a warning. The car is not just sensing but interpreting what is taking place around it – an essential capability for auto-piloted driving... (more info)
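To make the distinction concrete: conventional detection answers "there is a vehicle ahead," while the deep-learning approach answers "that vehicle is an ambulance." Here is a minimal sketch of that kind of fine-grained classifier in PyTorch; the model, labels, and weights are hypothetical stand-ins, not NVIDIA's DRIVE PX software:

```python
# Sketch of fine-grained vehicle classification. The label set and the
# (untrained) ResNet-18 here are illustrative; a real ADAS stack would use
# a model fine-tuned on labeled driving footage.
import torch
from torchvision import models, transforms

LABELS = ["ambulance", "delivery_truck", "parked_car", "car_pulling_out"]

model = models.resnet18(num_classes=len(LABELS))  # assume fine-tuned weights
model.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

def classify_frame(frame):
    """Classify one camera frame (a PIL image) into a vehicle category."""
    with torch.no_grad():
        logits = model(preprocess(frame).unsqueeze(0))
    return LABELS[int(logits.argmax())]
```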
From Empire Robotics:
The VERSABALL is a squishy balloon membrane full of loose sub-millimeter particles. The soft ball gripper easily conforms around a wide range of target object shapes and sizes. Using a process known as “granular jamming”, air is quickly sucked out of the ball, which vacuum-packs the particles and hardens the gripper around the object to hold and lift it. The object releases when the ball is re-inflated. VERSABALL comes in multiple head shapes and sizes that use the same pneumatic base... (Empire Robotics' site)
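A sketch of what one grip/release cycle might look like in control code, assuming a hypothetical pneumatic driver with vacuum(), inflate(), and vent() methods (Empire Robotics' actual interface is not described in the excerpt):

```python
# Granular-jamming grip/release cycle, following the description above.
# The pneumatics driver is hypothetical; only the sequence of steps is
# taken from the excerpt.
import time

class JammingGripper:
    def __init__(self, pneumatics):
        self.pneumatics = pneumatics  # hypothetical valve/pump driver

    def grip(self, settle_s=0.3):
        # Press the soft, air-filled ball onto the object so the membrane
        # conforms to its shape...
        time.sleep(settle_s)
        # ...then evacuate the ball: the particles jam, vacuum-packing
        # together and hardening the gripper around the object.
        self.pneumatics.vacuum()

    def release(self):
        # Re-inflating un-jams the particles; the ball softens and the
        # object drops free.
        self.pneumatics.inflate()
        self.pneumatics.vent()
```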
From Yezhou Yang, Yi Li, Cornelia Fermuller and Yiannis Aloimonos:
In order to advance action generation and creation in robots beyond simple learned schemas, we need computational tools that allow us to automatically interpret and represent human actions. This paper presents a system that learns manipulation action plans by processing unconstrained videos from the World Wide Web. Its goal is to robustly decompose longer actions seen in video into their sequences of atomic actions, in order to acquire knowledge for robots. The lower level of the system consists of two convolutional neural network (CNN) based recognition modules, one for classifying the hand grasp type and the other for object recognition. The higher level is a parsing module based on a probabilistic manipulation action grammar, which aims at generating visual sentences for robot manipulation.
[Figure: the list of grasping types.]
Experiments conducted on a publicly available unconstrained video dataset show that the system is able to learn manipulation actions by “watching” unconstrained videos with high accuracy... (article at Kurzweilai.net) (original paper)
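A rough sketch of the two-level pipeline the abstract describes: per-frame CNN recognizers feeding a higher-level parser that emits "visual sentences." The classifiers below are stand-in callables, and the merge rule is a toy substitute for the paper's probabilistic manipulation action grammar:

```python
# Illustrative two-level pipeline following the abstract. Not the authors'
# implementation: the CNNs are passed in as plain callables, and consecutive
# identical labels are merged where the paper uses a grammar-based parser.

def recognize_frame(frame, grasp_cnn, object_cnn):
    """Lower level: classify the hand grasp type and the manipulated object."""
    return grasp_cnn(frame), object_cnn(frame)

def parse_video(frames, grasp_cnn, object_cnn):
    """Higher level: collapse per-frame labels into atomic-action sentences."""
    sentences, previous = [], None
    for frame in frames:
        grasp, obj = recognize_frame(frame, grasp_cnn, object_cnn)
        if (grasp, obj) != previous:
            sentences.append(f"hand {grasp}-grasps {obj}")
            previous = (grasp, obj)
    return sentences
```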
2015 CES - INMOTION to Showcase New Self-balancing Smart Vehicles at the 2015 Consumer Electronics Show