Caitlin Ju for The Stanford Daily: Stanford researchers in the Computational Vision and Geometry Lab have designed an autonomously navigating robot prototype that they say understands implicit social conventions and human behavior. Named “Jackrabbot” after the swift but cautious jackrabbit, the visually intelligent and socially amiable robot is able to maneuver through crowds and pedestrian spaces.
A white ball on wheels, the Jackrabbot is built on a Segway base and carries a computing unit and multiple sensors that capture the 3-D structure of the surrounding environment. Cameras with 360-degree coverage and GPS further enhance the robot’s navigation and detection capabilities.
To interact smoothly in public settings, the robot has to know how to avoid someone in a natural way, how to yield the right-of-way and how to leave personal space, among other skills.
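The article does not detail the Jackrabbot's navigation model, but personal-space behavior of the kind described is often illustrated with a "social force" style repulsion term (after Helbing and Molnár). The sketch below is purely illustrative, not the researchers' method; the function name and parameters are hypothetical.

```python
import math

def personal_space_force(robot_xy, person_xy, strength=2.0, radius=1.2):
    """Repulsive force pushing the robot away from a nearby pedestrian.

    Illustrative only: magnitude decays exponentially with distance,
    and `radius` (meters) sets the comfortable personal-space scale.
    """
    dx = robot_xy[0] - person_xy[0]
    dy = robot_xy[1] - person_xy[1]
    dist = math.hypot(dx, dy) or 1e-9  # guard against division by zero
    magnitude = strength * math.exp(-dist / radius)
    # Unit vector from the person toward the robot, scaled by magnitude
    return (magnitude * dx / dist, magnitude * dy / dist)

# The repulsion is strong up close and fades with distance, so a planner
# summing such forces naturally leaves pedestrians room:
near = personal_space_force((0.0, 0.0), (0.5, 0.0))
far = personal_space_force((0.0, 0.0), (5.0, 0.0))
```

In a full planner, a term like this would be summed over all nearby pedestrians and combined with an attractive force toward the goal; the real system described in the article instead learns such conventions from observed human behavior.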
The Vert-X 05E Series of dual angle/speed sensors features easy mounting in small and narrow spaces, with a 5 mm body depth and mounting flanges with metal inserts. The sensors take measurements only 6 mm from the edge of the product, suiting close-to-wall measurement applications. Vert-X 05E Series sensors measure angles from 0 to 360° as well as rotational speed and direction, with repeatability to 0.1°.