Caitlin Ju for The Stanford Daily: Stanford researchers in the Computational Vision and Geometry Lab have designed an autonomously navigating robot prototype that they say understands implicit social conventions and human behavior. Named “Jackrabbot” after the swift but cautious jackrabbit, the visually intelligent and socially amiable robot is able to maneuver through crowds and pedestrian spaces.
A white ball on wheels, the Jackrabbot is built on a Segway base and carries a computing unit and multiple sensors that capture the 3-D structure of its surrounding environment. A 360-degree camera system and GPS further enhance the robot’s navigation and detection capabilities.
To interact smoothly in public settings, the robot has to know how to avoid someone in a natural way, how to yield the right-of-way and how to respect personal space, among other skills. Cont'd...