From Nvidia's CES press conference:
The DRIVE PX platform is based on the NVIDIA® Tegra® X1 processor, enabling smarter, more sophisticated advanced driver assistance systems (ADAS) and paving the way for the autonomous car.
Tegra X1 delivers an astonishing 1.3 gigapixels/second of throughput – enough to handle 12 two-megapixel cameras, at frame rates of up to 60 fps for some of them. The platform is equipped with 10 GB of DRAM and combines surround computer vision (CV) technology, extensive deep learning training, and over-the-air updates to transform how cars see, think, and learn.
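(A quick sanity check on that figure: 12 cameras × 2 megapixels × 60 fps would be 1.44 gigapixels/second, a bit more than the quoted 1.3, which is presumably why not every camera gets the full 60 fps. The arithmetic is ours, not Nvidia's.)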
DEEP LEARNING COMPUTER VISION
Conventional ADAS technology today can detect some objects, do basic classification, alert the driver, and in some cases, stop the vehicle. DRIVE PX takes this to the next level with the ability to differentiate an ambulance from a delivery truck or a parked car from one about to pull into traffic. The system can now inform the driver, not just get their attention with a warning. The car is not just sensing, but interpreting what is taking place around it—an essential capability for auto-piloted driving... (more info)
From Empire Robotics:
The VERSABALL is a squishy balloon membrane full of loose sub-millimeter particles. The soft ball gripper easily conforms around a wide range of target object shapes and sizes. Using a process known as “granular jamming”, air is quickly sucked out of the ball, which vacuum-packs the particles and hardens the gripper around the object to hold and lift it. The object releases when the ball is re-inflated. VERSABALL comes in multiple head shapes and sizes that use the same pneumatic base... (Empire Robotics' site)
From Yezhou Yang, Yi Li, Cornelia Fermuller and Yiannis Aloimonos:
In order to advance action generation and creation in robots beyond simple learned schemas, we need computational tools that allow us to automatically interpret and represent human actions. This paper presents a system that learns manipulation action plans by processing unconstrained videos from the World Wide Web. Its goal is to robustly generate the sequence of atomic actions that make up the longer actions seen in video, in order to acquire knowledge for robots. The lower level of the system consists of two convolutional neural network (CNN) based recognition modules, one for classifying the hand grasp type and the other for object recognition. The higher level is a probabilistic manipulation action grammar based parsing module that aims at generating visual sentences for robot manipulation.
[Figure: the list of the grasping types]
Experiments conducted on a publicly available unconstrained video dataset show that the system is able to learn manipulation actions by “watching” unconstrained videos with high accuracy.... (article at Kurzweilai.net) (original paper)
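To make the two levels concrete: the CNNs emit per-frame (grasp type, object) labels, and the parser works on runs of those labels. A toy sketch of the hand-off between the levels, with hypothetical labels and far simpler than the paper's probabilistic grammar:

    #include <iostream>
    #include <string>
    #include <vector>

    // One frame's output from the two CNN recognition modules: a grasp-type
    // label for the hand and a label for the object being manipulated.
    struct FrameLabels {
        std::string graspType;  // e.g. "precision-pinch" (hypothetical label)
        std::string object;     // e.g. "knife" (hypothetical label)
    };

    // An atomic action: a maximal run of frames over which the
    // (grasp, object) pair stays constant.
    struct AtomicAction {
        std::string graspType;
        std::string object;
        int startFrame;
        int endFrame;
    };

    // Collapse per-frame CNN labels into a sequence of atomic actions; a
    // grammar-based parser would then assemble these into a manipulation plan.
    std::vector<AtomicAction> segment(const std::vector<FrameLabels>& frames) {
        std::vector<AtomicAction> actions;
        for (int i = 0; i < static_cast<int>(frames.size()); ++i) {
            if (actions.empty() ||
                actions.back().graspType != frames[i].graspType ||
                actions.back().object != frames[i].object) {
                actions.push_back({frames[i].graspType, frames[i].object, i, i});
            } else {
                actions.back().endFrame = i;
            }
        }
        return actions;
    }

    int main() {
        std::vector<FrameLabels> frames = {
            {"precision-pinch", "knife"}, {"precision-pinch", "knife"},
            {"power-cylindrical", "cucumber"}, {"power-cylindrical", "cucumber"}};
        for (const auto& a : segment(frames))
            std::cout << a.graspType << " " << a.object << " ["
                      << a.startFrame << "-" << a.endFrame << "]\n";
    }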
Winter break homework from John Carmack, via the Gamasutra reprint of his article "In-depth: Functional programming in C++":
A large fraction of the flaws in software development are due to programmers not fully understanding all the possible states their code may execute in. In a multithreaded environment, the lack of understanding and the resulting problems are greatly amplified, almost to the point of panic if you are paying attention. Programming in a functional style makes the state presented to your code explicit, which makes it much easier to reason about, and, in a completely pure system, makes thread race conditions impossible... (full article)
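His point is easy to see in miniature. This little example is our illustration, not code from the article:

    #include <cstdio>
    #include <vector>

    // Stateful version: the result depends on a mutable global, so two
    // identical calls can return different values, and concurrent callers race.
    static float g_scale = 1.0f;
    void setScale(float s) { g_scale = s; }               // anyone may change it...
    float scaleStateful(float x) { return x * g_scale; }  // ...so this isn't predictable

    // Pure version: output depends only on the inputs. Any thread may call it
    // at any time with no synchronization, and it is trivial to test.
    float scalePure(float x, float scale) { return x * scale; }

    // Purity composes: transforming a whole buffer stays pure if we return a
    // new vector instead of mutating the argument.
    std::vector<float> scaleAll(const std::vector<float>& xs, float scale) {
        std::vector<float> out;
        out.reserve(xs.size());
        for (float x : xs) out.push_back(scalePure(x, scale));
        return out;
    }

    int main() {
        auto doubled = scaleAll({1.0f, 2.0f, 3.0f}, 2.0f);
        std::printf("%.1f %.1f %.1f\n", doubled[0], doubled[1], doubled[2]);
    }

In a completely pure call tree, every function behaves like a lookup table over its arguments, which is exactly what makes race conditions impossible.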
Also "Lessons to learn from Oculus development team when using the “Modern C++” approach":
Modern C++ doesn’t necessarily imply the overuse of templates.
Andrei Alexandrescu says of Modern C++ design:
"Modern C++ Design defines and systematically uses generic components - highly flexible design artifacts that are mixable and matchable to obtain rich behaviors with a small, orthogonal body of code."
Modern C++ is closely related to generic programming, which is probably why many developers shy away from the modern C++ approach: they assume the code will be implemented mostly as templates, making it difficult to read and maintain.
In the SDK, templates represent only 20% of all the types defined, and most of those belong to the technical layer... (full article)
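Alexandrescu's "generic components" are easiest to see in a tiny policy-based design, the technique Modern C++ Design is built around. This sketch is ours, not code from the Oculus SDK:

    #include <cstdio>
    #include <stdexcept>

    // Two interchangeable "policies" for reporting errors. Each is a small,
    // orthogonal piece of behavior with no state of its own.
    struct ThrowOnError {
        static void report(const char* msg) { throw std::runtime_error(msg); }
    };
    struct LogOnError {
        static void report(const char* msg) { std::fprintf(stderr, "error: %s\n", msg); }
    };

    // A generic component parameterized by policy: callers mix and match
    // behavior at compile time, with no virtual dispatch and no duplication.
    template <typename ErrorPolicy>
    class Connection {
    public:
        void send(const char* data) {
            if (!data) { ErrorPolicy::report("null payload"); return; }
            // ... transmit data ...
        }
    };

    int main() {
        Connection<LogOnError> tolerant;
        tolerant.send(nullptr);            // logs and continues
        Connection<ThrowOnError> strict;   // same component, stricter behavior
        try { strict.send(nullptr); } catch (const std::exception&) {}
    }

Two policies and one component yield rich combined behaviors from a small, orthogonal body of code, which is exactly the point of the quote above.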
From MODLAB (The modular robotics laboratory at the University of Pennsylvania):
We derive thrust, roll, and pitch authority from a single propeller and single motor through an underactuated mechanism embedded in the rotor itself. This allows new types of conventionally-capable micro air vehicles which only require two motors for practical control. This contrasts with the many servos and linkages of conventional helicopters or the many drive motors found in quadrotors... (cont'd)
From iRobot ($199 US):
Create 2 is a mobile robot platform built from remanufactured Roomba robots and designed for use by educators, developers and high-school and college-age students. Program or build your own projects or start with our sample projects provided online. Create 2 is ready to go, right out of the box, so there is no need to assemble the drive system or worry about low-level code. Other Create 2 features include:
- Serial cable sends commands from a computer or other microcontroller to the robot
- Preprogrammed behaviors can be controlled via Open Interface Commands
- Built-in sensors allow the robot to react to its environment
- Drill template on faceplate shows safe drilling areas. Removing the faceplate exposes the serial port.
- Robot returns to Home Base to dock and recharge. Rechargeable battery charges in three hours.
- Compatible with Roomba 600 Series accessories including batteries, Home Base®, remote control and Virtual Wall®
What are some of the things I can do with iRobot Create 2?
Program movements, sounds and the LED display, as well as read all of the robot's onboard sensors
Add an external computer or microcontroller with additional sensors and actuators to transform Create into exactly the robot you want. Add a camera to build your own camera bot! Use our 3D printable file to create a storage bin and ensure your additional electronics are safely housed within the robot's chassis... (details)
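The Open Interface mentioned above is just a byte protocol over the serial cable. As a rough sketch of driving a Create 2 from a POSIX machine: the opcodes (128 = Start, 131 = Safe mode, 137 = Drive) follow our reading of iRobot's published Create 2 Open Interface spec, while the device path and serial setup are assumptions:

    #include <cstdint>
    #include <fcntl.h>
    #include <termios.h>
    #include <unistd.h>

    // Open the serial port in raw mode at the Create 2's default baud rate.
    // "/dev/ttyUSB0" is an assumption for the supplied serial cable.
    int openPort(const char* path) {
        int fd = open(path, O_RDWR | O_NOCTTY);
        if (fd < 0) return -1;
        termios tio{};
        tcgetattr(fd, &tio);
        cfmakeraw(&tio);
        cfsetispeed(&tio, B115200);
        cfsetospeed(&tio, B115200);
        tcsetattr(fd, TCSANOW, &tio);
        return fd;
    }

    // Drive takes velocity (mm/s) and turn radius (mm) as signed 16-bit
    // big-endian values following opcode 137.
    void drive(int fd, int16_t velocityMmS, int16_t radiusMm) {
        uint8_t cmd[5] = {137,
                          static_cast<uint8_t>(velocityMmS >> 8),
                          static_cast<uint8_t>(velocityMmS & 0xFF),
                          static_cast<uint8_t>(radiusMm >> 8),
                          static_cast<uint8_t>(radiusMm & 0xFF)};
        write(fd, cmd, sizeof cmd);
    }

    int main() {
        int fd = openPort("/dev/ttyUSB0");
        if (fd < 0) return 1;
        uint8_t start[] = {128, 131};       // Start the OI, then enter Safe mode
        write(fd, start, sizeof start);
        drive(fd, 200, static_cast<int16_t>(0x8000));  // 200 mm/s, special radius 0x8000 = straight
        sleep(2);
        drive(fd, 0, static_cast<int16_t>(0x8000));    // stop
        close(fd);
    }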
From Ishikawa Watanabe Laboratory:
We have been developing robotic systems that individually achieve fundamental actions of baseball, such as throwing, tracking of the ball, batting, running, and catching. We achieved these tasks by controlling high-speed robots based on real-time visual feedback from high-speed cameras. Before integrating these abilities into one robot, we here summarize the technical elements of each task... (site)
Ten years ago, WIRED contributing editor Joshua Davis wrote a story about four high school students in Phoenix, Arizona (three of them undocumented immigrants from Mexico) beating MIT in an underwater robot competition. That story, La Vida Robot, has a new chapter: Spare Parts, starring George Lopez and Carlos PenaVega, opens in January, and Davis is publishing a book of the same title updating the kids’ story. To mark the occasion, WIRED is republishing his original story... (full article)
From Biomimetics MIT Cheetah project:
The high speed legged locomotion of the MIT Cheetah requires high accelerations and loadings of the robot’s legs. Because of the highly dynamic environmental interactions that come with running, variable impedance of the legs is desirable; however, existing actuation strategies cannot deliver. Typically, electric motors achieve their required torque output and package size through high gear ratios. High ratios limit options for control strategies. For example, closed loop control is limited to relatively slow speed dynamics. Series elastic actuation adds additional actuators and increases system complexity and inertia. We believed a better option existed. In the end, we developed a novel actuator, optimal in many applications... (project homepage) (full published article)
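(For the curious, the trade-off they describe follows from standard gear-train relations, not equations from the paper: a gearbox of ratio N multiplies motor torque by N but reflects the rotor's inertia onto the leg multiplied by N².

    τ_out = N · τ_motor
    J_reflected = N² · J_rotor

Torque grows linearly with the ratio while apparent inertia grows quadratically, which is what caps closed-loop bandwidth at high ratios and motivates a high-torque, low-ratio motor instead.)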
Welcome to the Black Friday sale – 15% off, plus free items & shipping as you shop! Use code BLACKFRIDAY at checkout. We thought about doing flash sales or complicated codes, but that’s a lot of frustrating hoop-jumping for everyone. So we came up with what we think is an amazing deal that is straightforward, no-stress, and valuable: 15% off anything in stock, plus lots of great free things added automatically depending on how much you order.
We are currently offering a FREE Adafruit Perma-Proto Half-sized Breadboard PCB for orders over $100, a FREE Trinket 5V for orders over $150, FREE UPS Ground shipping (Continental USA) for orders of $200 or more, and a FREE Pro Trinket 5V for orders over $250.
On Cyber Monday (12/1), everything in our Actobotics category is 20% off.
Then on 12/1/2014, we are offering hourly flash sales from 7 a.m. to 7 p.m. Mountain Standard Time, with 30-50% off some of our most popular products.
These items have been hand-selected by our employees and are some of our favorite designs! See below for the complete list, so you can plan ahead to snag these great deals.
- Flash Sales are ONLY valid during their time window. If an item is sitting in your cart and the flash sale for it ends, the price will go back up!
- There is no combining flash sale orders throughout the day.
- Flash sales are a “while supplies last” sort of deal (which means no backorders!) - so get ‘em while the getting is good.... (list of flash sale items)
(Full PDF flyer for Monday's sale) Two random orders from Monday will receive a “golden ticket” worth $500 of Actobotics parts.
(Full list of sale items available Friday through Monday) Free HEXBUG Nano with every purchase.
From the OpenCV Foundation:
The OpenCV Foundation, with support from DARPA and Intel Corporation, is launching a community-wide challenge to update and extend the OpenCV library with state-of-the-art algorithms. An award pool of $50,000 will reward the submitters of the best-performing algorithms in the following 11 CV application areas: (1) image segmentation, (2) image registration, (3) human pose estimation, (4) SLAM, (5) multi-view stereo matching, (6) object recognition, (7) face recognition, (8) gesture recognition, (9) action recognition, (10) text recognition, (11) tracking.
The OpenCV Vision Challenge Committee will judge up to five best entries.
You may submit a new algorithm developed by yourself or your implementation of an existing algorithm even if you are not the author of the algorithm.
You may enter any number of categories.
If your entry wins the contest you will be awarded $1K.
To win an additional $7.5K to $9K, you must contribute the source code as an OpenCV pull request under a BSD license.
You acknowledge that your contributed code may be included, with your copyright, in OpenCV.
You may explicitly enter code for any work you have submitted to CVPR 2015 or its workshops. We will not unveil it until after CVPR.
Submission Period: Now – May 8th 2015
Winners Announcement: June 8th 2015 at CVPR 2015
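For a sense of what entries will be measured against, the library already ships solid baselines in several of these areas. Here's a minimal example in category (11), tracking, using OpenCV's existing pyramidal Lucas-Kanade API; this is our sketch, not a challenge entry, and the default-camera input is an assumption:

    #include <opencv2/opencv.hpp>
    #include <vector>

    // Minimal point tracker: seed corners with goodFeaturesToTrack, then
    // follow them frame-to-frame with pyramidal Lucas-Kanade optical flow.
    int main() {
        cv::VideoCapture cap(0);                 // default camera (assumption)
        if (!cap.isOpened()) return 1;

        cv::Mat frame, gray, prevGray;
        std::vector<cv::Point2f> prevPts, nextPts;

        while (cap.read(frame)) {
            cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
            if (prevPts.size() < 50) {
                // (Re)seed strong corners when we run low on tracked points.
                cv::goodFeaturesToTrack(gray, prevPts, 200, 0.01, 10);
            } else {
                std::vector<uchar> status;
                std::vector<float> err;
                cv::calcOpticalFlowPyrLK(prevGray, gray, prevPts, nextPts,
                                         status, err);
                std::vector<cv::Point2f> kept;
                for (size_t i = 0; i < nextPts.size(); ++i)
                    if (status[i]) {             // keep successfully tracked points
                        kept.push_back(nextPts[i]);
                        cv::circle(frame, nextPts[i], 3, cv::Scalar(0, 255, 0), -1);
                    }
                prevPts = kept;
            }
            gray.copyTo(prevGray);
            cv::imshow("LK tracking", frame);
            if (cv::waitKey(1) == 27) break;     // Esc to quit
        }
    }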
The K5, built by the Californian company Knightscope, is billed rather euphemistically as an “autonomous data machine” that provides a “commanding but friendly physical presence.” Basically, it’s a security guard on wheels. Inside that rather large casing (it’s 5 feet tall!) there are four high-def cameras, one facing in each direction, another camera that can do car license plate recognition, four microphones, gentle alarms, blaring sirens, weather sensors, and WiFi connectivity so that each robot can contact HQ if there’s some kind of security breach or situation. For navigating the environment, there’s GPS and “laser scanning” (LIDAR, I guess). And of course, at the heart of each K5 is a computer running artificial intelligence software that integrates all of that data and tries to make intelligent inferences... (full article) (knightscope)
Because of the Nov. 14th submission deadline for this year's IEEE Conference on Computer Vision and Pattern Recognition (CVPR), several big image-recognition papers are coming out this week:
From Andrej Karpathy and Li Fei-Fei of Stanford:
We present a model that generates free-form natural language descriptions of image regions. Our model leverages datasets of images and their sentence descriptions to learn about the inter-modal correspondences between text and visual data. Our approach is based on a novel combination of Convolutional Neural Networks over image regions, bidirectional Recurrent Neural Networks over sentences, and a structured objective that aligns the two modalities through a multimodal embedding. We then describe a Recurrent Neural Network architecture that uses the inferred alignments to learn to generate novel descriptions of image regions. We demonstrate the effectiveness of our alignment model with ranking experiments on Flickr8K, Flickr30K and COCO datasets, where we substantially improve on the state of the art. We then show that the sentences created by our generative model outperform retrieval baselines on the three aforementioned datasets and a new dataset of region-level annotations... (website with examples) (full paper)
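In its simplest form, the alignment idea scores an image-sentence pair by matching every word to its best region in the shared embedding space and summing the per-word maxima. A toy sketch with stand-in embeddings (the paper derives region features from a CNN and word features from a bidirectional RNN; this is not the authors' code):

    #include <algorithm>
    #include <cstdio>
    #include <numeric>
    #include <vector>

    using Vec = std::vector<float>;

    float dot(const Vec& a, const Vec& b) {
        return std::inner_product(a.begin(), a.end(), b.begin(), 0.0f);
    }

    // Image-sentence alignment score in a shared embedding space: each word
    // vector is matched to its best-scoring image region, and the per-word
    // maxima are summed.
    float alignmentScore(const std::vector<Vec>& regionEmb,
                         const std::vector<Vec>& wordEmb) {
        float score = 0.0f;
        for (const Vec& w : wordEmb) {
            float best = -1e30f;
            for (const Vec& r : regionEmb)
                best = std::max(best, dot(r, w));  // best-matching region
            score += best;
        }
        return score;
    }

    int main() {
        std::vector<Vec> regions = {{1.0f, 0.0f}, {0.0f, 1.0f}};  // stand-in region embeddings
        std::vector<Vec> words   = {{0.9f, 0.1f}, {0.2f, 0.8f}};  // stand-in word embeddings
        std::printf("score = %.2f\n", alignmentScore(regions, words));
    }

A ranking objective then pushes matched image-sentence pairs to score above mismatched ones, which is what trains the two modalities into a common space.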
From Oriol Vinyals, Alexander Toshev, Samy Bengio, and Dumitru Erhan at Google:
Show and Tell: A Neural Image Caption Generator
From Ryan Kiros, Ruslan Salakhutdinov, and Richard S. Zemel at the University of Toronto:
Unifying Visual-Semantic Embeddings with Multimodal Neural Language Models (full paper)
From Junhua Mao, Wei Xu, Yi Yang, Jiang Wang and Alan L. Yuille at Baidu Research/UCLA:
Explain Images with Multimodal Recurrent Neural Networks (full paper)
From Jeff Donahue, Lisa Anne Hendricks, Sergio Guadarrama, Marcus Rohrbach, Subhashini Venugopalan, Kate Saenko, and Trevor Darrell at UT Austin, UMass Lowell and UC Berkeley:
Long-term Recurrent Convolutional Networks for Visual Recognition and Description (full paper)
All these came from this Hacker News discussion.
DJI's new Inspire 1 with 4K camera ($2,899 USD from DJI):
Aircraft:
- Hovering Accuracy (GPS mode): Vertical: 1.6' / 0.5 m; Horizontal: 8.2' / 2.5 m
- Maximum Angular Velocity: Pitch: 300°/s
- Maximum Tilt Angle: 35°
- Maximum Ascent/Descent Speed: Ascent: 16.4 fps / 5 m/s; Descent: 13.1 fps / 4 m/s
- Maximum Speed: 72.2 fps / 22 m/s (Attitude mode, no wind)
- Maximum Flight Altitude: 14,764' / 4,500 m
- Maximum Wind Speed Resistance: 32.8 fps / 10 m/s
- Maximum Flight Time: Up to 18 minutes

Camera:
- Sensor: Sony EXMOR 1/2.3" CMOS
- Lens: 94° field of view; 20 mm focal length (35 mm equivalent); 9 elements in 9 groups with an aspherical lens element; anti-distortion and UV filters
- Video Recording: UHD (4K): 4096 x 2160 (24p, 25p) and 3840 x 2160 (24p, 25p, 30p); 1920 x 1080 (24p, 25p, 30p, 48p, 50p, 60p); 1280 x 720 (24p, 25p, 30p, 48p, 50p, 60p)
- Maximum Bitrate: 60 Mb/s
- File Format: Photo: JPEG, DNG; Video: MP4 in a .MOV wrapper (MPEG-4 AVC/H.264)
- Recording Media: microSD/SDHC/SDXC up to 64 GB, Class 10 or faster
- Photography Modes: Single shot; Burst: 3, 5, or 7 frames per second; AEB: 3 or 5 bracketed frames at 0.7 EV bias
- Operating Temperature: 32 to 104°F / 0 to 40°C

Gimbal:
- Number of Axes: 3
- Maximum Controlled Rotation Speed: Pitch: 120°/s
- Controlled Rotation Range: Pitch: -90° to +30°
- Angular Vibration Range: ±0.03°
- Output Power: Static: 9 W; In Motion: 11 W
- Operational Current: Static: 750 mA; In Motion: 900 mA
Video with the obligatory "Jony Ive"-style introductory speech:
From Project Beyond/Samsung:
Today we offer a sneak preview of Project Beyond, the world’s first true 3D 360° omniview camera. Beyond captures and streams immersive videos in stunning high-resolution 3D, and allows every user to enjoy their viewing experience in the way they see fit. It offers full 3D reconstruction in all directions, using stereo camera pairs combined with a top-view camera to capture independent left- and right-eye stereo pairs.
Project Beyond uses patent-pending stereoscopic interleaved capture and 3D-aware stitching technology to capture the scene just like the human eye, but in a form factor that is extremely compact. The innovative reconstruction system recreates the view geometry in the same way that the human eyes see, producing unparalleled 3D perception.
Project Beyond is not a product, but one of the many exciting projects currently being developed by the Think Tank Team, an advanced research team within Samsung Research America. This is the first operational version of the device, and just a taste of what the final system we are working on will be capable of. Once complete, we hope to deploy Project Beyond around the world to beautiful and noteworthy locations and events, and allow users to experience those locations as if they were really there. The camera system can stream real time events, as well as store the data for future viewing... (website)