From comma.ai: Last week, we open sourced an advanced driver assistance system to help accelerate the future of self-driving cars and provide a platform anyone can build on top of. We released both openpilot, driving agent research software, and NEO, a robotics platform capable of running openpilot, under the MIT license. openpilot is an open source adaptive cruise control and lane keeping assist system, both safety features available on modern cars. We would like to build the best ones on the market and help you retrofit them to existing cars. NEO is an open source robotics research platform centered around an Android phone, similar to Android Based Robots. The modern smartphone is an incredible platform packed with sensors and processing power. NEO also includes a cooling solution and a CAN interface board. CAN is a networking protocol used in cars, trucks, power wheelchairs, golf carts, and many other robotics applications. With a forthcoming openpilot release, it will become easier for researchers to add support for their own vehicles. On older cars, some actuators may be harder to control than others, but it should be feasible to control the gas electronically for a gas-only adaptive cruise control. Researchers can also add mechanical actuators for controls that cannot be electronically actuated. Have fun, be safe, and let's usher in the future of self-driving cars together... (Github repo) (Interview)
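To make the CAN mention concrete: on Linux, CAN frames are commonly handled through SocketCAN, whose classic wire format is a fixed 16-byte structure (32-bit ID, length byte, padding, 8 data bytes). The sketch below packs and unpacks that layout with only the standard library; the ID `0x123` and payload are made-up illustrations, and real message IDs and signal encodings are defined per vehicle (in openpilot's case, via its DBC definitions), not here.

```python
import struct

# Linux SocketCAN classic frame layout:
# 32-bit CAN ID, 1-byte data length code, 3 pad bytes, 8 data bytes.
CAN_FRAME_FMT = "<IB3x8s"  # 16 bytes total

def pack_can_frame(can_id: int, data: bytes) -> bytes:
    """Pack a classic CAN frame (payload up to 8 bytes)."""
    if len(data) > 8:
        raise ValueError("classic CAN payload is at most 8 bytes")
    return struct.pack(CAN_FRAME_FMT, can_id, len(data), data.ljust(8, b"\x00"))

def unpack_can_frame(frame: bytes):
    """Return (can_id, payload) from a packed frame."""
    can_id, dlc, data = struct.unpack(CAN_FRAME_FMT, frame)
    return can_id, data[:dlc]

# Illustrative frame; real IDs depend on the vehicle's CAN database.
frame = pack_can_frame(0x123, b"\x01\x02\x03")
assert unpack_can_frame(frame) == (0x123, b"\x01\x02\x03")
```

Reading actuator commands and sensor values then reduces to decoding payload bytes of known IDs into physical signals.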
Glenn McDonald for Seeker: Want to know what drones of the future will look like? So does David Lentink, editor of Interface Focus, a journal that, as its title suggests, looks at the interface of different scientific disciplines. Each issue zeroes in on a particular intersection of physical sciences and life sciences and invites the world's top scholars to publish their latest work. The latest issue of Interface Focus brings together biologists and engineers to discuss a topic that's relatively straightforward and, well, pretty empirically cool: "It's completely focused on how animals fly and how that can help us build flying robots," said Lentink, assistant professor of mechanical engineering at Stanford. Can't argue with that. The new issue features 18 newly published papers on various ways that engineers are borrowing ideas from nature to make the next generation of drones and aerial robots. Several of the papers detail prototype drones that have already been built and tested. Cont'd...
From Alexander Shtuchkin: Code & schematics for a position tracking sensor using HTC Vive's Lighthouse system and a Teensy board. A general-purpose indoor positioning sensor, good for robots, drones, etc.
3D position accuracy: currently ~10 mm; less than 2 mm possible with additional work
Update frequency: 30 Hz
Output formats: text; MAVLink ATT_POS_MOCAP via serial; Ublox GPS emulation (in the works)
HTC Vive station visibility requirements: full top hemisphere from the sensor; both stations must be visible
Positioning volume: same as HTC Vive, up to approx. 4x4x3 meters
Cost: ~$10 + Teensy 3.2 ($20) (+ Lighthouse stations, 2x $135)
Skills to build: low-complexity soldering; embedded C++ recommended for integration into your project
(Github page)
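For context on how Lighthouse positioning works in principle: each base station sweeps laser planes across the room, and timing those sweeps at a photodiode yields two angles per station, i.e. a ray from each station toward the sensor. The 3D position is then recovered by intersecting the two rays. A minimal sketch of that last step, finding the closest point between two rays, is below; the station poses and sensor position are invented for illustration, and none of the names come from the actual repo.

```python
import math

def sub(a, b): return tuple(x - y for x, y in zip(a, b))
def add(a, b): return tuple(x + y for x, y in zip(a, b))
def dot(a, b): return sum(x * y for x, y in zip(a, b))
def scale(a, s): return tuple(x * s for x in a)

def normalize(a):
    n = math.sqrt(dot(a, a))
    return scale(a, 1.0 / n)

def triangulate(p1, d1, p2, d2):
    """Closest point between rays p1 + t1*d1 and p2 + t2*d2.

    Standard least-squares closest-approach solution; with real, noisy
    sweep angles the two rays never intersect exactly, so we return the
    midpoint of the shortest segment between them.
    """
    w0 = sub(p1, p2)
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w0), dot(d2, w0)
    denom = a * c - b * b  # approaches 0 when rays are near-parallel
    t1 = (b * e - c * d) / denom
    t2 = (a * e - b * d) / denom
    q1 = add(p1, scale(d1, t1))
    q2 = add(p2, scale(d2, t2))
    return scale(add(q1, q2), 0.5)

# Illustrative base stations 4 m apart, sensor at (2, 1, 2):
p1, p2 = (0.0, 0.0, 0.0), (4.0, 0.0, 0.0)
d1 = normalize((2.0, 1.0, 2.0))   # ray from station 1 toward sensor
d2 = normalize((-2.0, 1.0, 2.0))  # ray from station 2 toward sensor
print(triangulate(p1, d1, p2, d2))  # recovers approximately (2.0, 1.0, 2.0)
```

The accuracy figures above come down to how precisely the sweep timing (and hence the ray directions) can be measured on the Teensy.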
The insect drone takes on the functions of larger UAVs, but shrinks them into a miniature, hard-to-detect device.
A fully autonomous automobile must decide whether it can safely enter an intersection, and how to maneuver around other vehicles, people, and other moving objects.
The team wanted to use the robot to transform complex 3D designs from concept to physical reality by machining foam and other soft materials.
Because of their enclosed structure with all the cables routed internally, the robots are ideally equipped to handle even the most extreme conditions.
The world's biggest robots ever made were unveiled by Japanese company On-Art Corp., which wants to use them to build a tourist park called Dino-A-Live.
The 4 in 4D represents an additional layer of visual process information, "making the invisible visible": on screen, the user receives a 3D visualization of the robot's movement.
UL and Shell collaborate on a robot that performs inspections in the most challenging environments.
Going Where No Man Has Gone Before: What Does the Future Hold for Automation in the Service Industry?
Most service organisations are still at the stage of small-scale trial RPA deployments, summarised as "if x is true, then click button y". So how do we go on our voyage of discovery and move from where we are now to a more automated enterprise?
Unlike other solutions, AImotive's full stack software uses the power of artificial intelligence to "see" fine detail and predict behavior, making it easier to manage common driving concerns such as poor visibility and adverse conditions.
Deep-Domain Conversational AI describes the AI technology required to build voice and chat assistants that demonstrate deep understanding of any knowledge domain.
While self-driving cars get most of the credit for capturing the public's imagination, autonomous or nearly autonomous tractor-trailers are starting to move goods across the world's highways.
The development of driverless cars brings a wide set of capabilities together. From secure cloud-based mapping to next generation communication systems, there are many new ways to provide navigation for driverless systems.
Dorner's 2200 Series Precision Move Pallet Systems are ideal for assembly automation. With features such as an innovative timing-belt conveyor design and industry-best pallet transfers, they get your product to the exact location, at the exact time, and in the exact position it needs to be. They are now available with new options such as heavy-load corner modules with 150 lb capacity and 180-degree tight-transfer corners for compact loops.