By using the right suction and the right lips for the job, you really can "kiss it better" before lifting, rather than after.
From the OpenCV Foundation: The OpenCV Foundation, with support from DARPA and Intel Corporation, is launching a community-wide challenge to update and extend the OpenCV library with state-of-the-art algorithms. An award pool of $50,000 is provided to reward submitters of the best-performing algorithms in the following 11 CV application areas: (1) image segmentation, (2) image registration, (3) human pose estimation, (4) SLAM, (5) multi-view stereo matching, (6) object recognition, (7) face recognition, (8) gesture recognition, (9) action recognition, (10) text recognition, (11) tracking.

Conditions: The OpenCV Vision Challenge Committee will judge up to five best entries. You may submit a new algorithm developed by yourself, or your implementation of an existing algorithm even if you are not the author of the algorithm. You may enter any number of categories. If your entry wins the contest, you will be awarded $1K. To win an additional $7.5K to $9K, you must contribute the source code as an OpenCV pull request under a BSD license. You acknowledge that your contributed code may be included, with your copyright, in OpenCV. You may explicitly enter code for any work you have submitted to CVPR 2015 or its workshops. We will not unveil it until after CVPR.

Timeline: Submission Period: Now – May 8th 2015. Winners Announcement: June 8th 2015 at CVPR 2015. (full details)
Manual assembly procedures that were previously neither ergonomic nor suitable for automation can now be automated cost-effectively.
Because of the Nov. 14th submission deadline for this year's IEEE Conference on Computer Vision and Pattern Recognition (CVPR), several big image-recognition papers are coming out this week:

From Andrej Karpathy and Li Fei-Fei of Stanford: We present a model that generates free-form natural language descriptions of image regions. Our model leverages datasets of images and their sentence descriptions to learn about the inter-modal correspondences between text and visual data. Our approach is based on a novel combination of Convolutional Neural Networks over image regions, bidirectional Recurrent Neural Networks over sentences, and a structured objective that aligns the two modalities through a multimodal embedding. We then describe a Recurrent Neural Network architecture that uses the inferred alignments to learn to generate novel descriptions of image regions. We demonstrate the effectiveness of our alignment model with ranking experiments on Flickr8K, Flickr30K and COCO datasets, where we substantially improve on the state of the art. We then show that the sentences created by our generative model outperform retrieval baselines on the three aforementioned datasets and a new dataset of region-level annotations... ( website with examples ) ( full paper )

From Oriol Vinyals, Alexander Toshev, Samy Bengio, and Dumitru Erhan at Google: Show and Tell: A Neural Image Caption Generator ( announcement post ) ( full paper )

From Ryan Kiros, Ruslan Salakhutdinov, and Richard S. Zemel at the University of Toronto: Unifying Visual-Semantic Embeddings with Multimodal Neural Language Models ( full paper )

From Junhua Mao, Wei Xu, Yi Yang, Jiang Wang, and Alan L. Yuille at Baidu Research/UCLA: Explain Images with Multimodal Recurrent Neural Networks ( full paper )

From Jeff Donahue, Lisa Anne Hendricks, Sergio Guadarrama, Marcus Rohrbach, Subhashini Venugopalan, Kate Saenko, and Trevor Darrell at UT Austin, UMass Lowell and UC Berkeley: Long-term Recurrent Convolutional Networks for Visual Recognition and Description ( full paper )

All these came from this Hacker News discussion.
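A common thread in these papers is aligning images and sentences in a shared embedding space with a ranking objective. The following is a toy sketch of that idea in NumPy, a margin-based hinge loss over matched and mismatched pairs; it is a simplified illustration, not any of these papers' exact formulations.

```python
import numpy as np

def ranking_loss(image_emb, sentence_emb, margin=0.1):
    """Toy margin-based ranking loss over a shared embedding space.

    image_emb, sentence_emb: (n, d) arrays where row i of each forms a
    matching image/sentence pair. Mismatched pairs should score below
    matched pairs by at least `margin`.
    """
    # Cosine similarity matrix: scores[i, j] = sim(image i, sentence j)
    img = image_emb / np.linalg.norm(image_emb, axis=1, keepdims=True)
    sen = sentence_emb / np.linalg.norm(sentence_emb, axis=1, keepdims=True)
    scores = img @ sen.T

    pos = np.diag(scores)  # matched-pair scores
    # Hinge penalty for every mismatched pair, in both directions
    cost_s = np.maximum(0.0, margin + scores - pos[:, None])  # image vs. wrong sentence
    cost_i = np.maximum(0.0, margin + scores - pos[None, :])  # sentence vs. wrong image
    np.fill_diagonal(cost_s, 0.0)
    np.fill_diagonal(cost_i, 0.0)
    return cost_s.sum() + cost_i.sum()

# Perfectly aligned embeddings incur zero loss; shuffled ones do not.
e = np.eye(3)
print(ranking_loss(e, e))                      # 0.0
print(ranking_loss(e, np.roll(e, 1, axis=0)))  # > 0
```

In the actual papers the embeddings come from a CNN (images) and an RNN (sentences) and are trained jointly; here they are fixed arrays purely to show the shape of the objective.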
Long article about Rodney Brooks, co-founder of Rethink and former CTO at iRobot: ...Brooks cofounded the Bedford-based iRobot in 1990, and his motivation, he explains, had something to do with vanity: “My thoughts on my self-image at the time was that I didn’t really want to be remembered for building insects.” Then he pauses for a moment and laughs. “But after that I started building vacuum-cleaning robots. And now there is a research group using Baxter to open stool samples. So now it’s shit-handling robots. I think maybe I should have quit while I was ahead. You know, that’s something no one ever says: ‘I hope my kid grows up to open stool samples...’ ( full article )
This disruptive technology enables Baxter to switch between tasks without retraining by using environmental markers, called Landmarks™, in conjunction with its existing embedded vision system.
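One way to picture the idea is a dispatch table that maps a landmark recognized by the vision system to a previously trained task, so changing the marker changes the job without retraining. The sketch below is purely illustrative; the identifiers and structure are hypothetical, not Rethink Robotics' actual API.

```python
# Hypothetical sketch: map a landmark detected by the vision system to a
# pre-trained task program. All names here are illustrative assumptions.

TASK_BY_LANDMARK = {
    "LM-PACKING": "pack_boxes",
    "LM-KITTING": "kit_parts",
    "LM-TENDING": "tend_machine",
}

def select_task(detected_landmark_id):
    """Return the task associated with a landmark, or None if unknown."""
    return TASK_BY_LANDMARK.get(detected_landmark_id)

print(select_task("LM-KITTING"))  # kit_parts
print(select_task("LM-UNKNOWN"))  # None
```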
In this white paper, NEXCOM will explain how the NEXCOM IoT controller NIFE 100 provides a unique open-architecture solution with the configuration flexibility to surmount communication barriers in building the Factory-of-Things and supporting the necessary data communications for connecting the enterprise domain and the operation domain.
From Grabit Inc.:

Enhanced Flexibility: Grabit technology eliminates the need for part-specific grippers and minimizes gripper changeover, dramatically reducing costs and downtime.

Gentle Handling: Grabit grippers offer scratch- and smudge-free handling with their clean grasping and eliminate the need to remove residue left by vacuum cups. Grabit’s uniform grasping effect eliminates high “point stresses” on large-format glass sheets.

Low Energy & Quiet Operations: Grabit products operate at ultra-low energy levels, providing cost savings and enabling mobile robot applications. They also offer quiet operation, improving factory conditions and supporting the adoption of collaborative robots... ( homepage )
Replacing legacy application-specific integrated circuits (ASICs) with x86 architecture allows the company to deliver products just in time and to hold inventory overhead to about 20 percent.
Implementing a current transducer is typically straightforward. If the output is not as expected, the source of the problem may be mechanical, magnetic, or electrical in nature.
From Japan Times:
iRobot Unveils Its First Multi-Robot Tablet Controller for First Responders, Defense Forces and Industrial Customers
From iRobot: The uPoint MRC system runs an Android-based app that standardizes the control of any robot within the iRobot family of unmanned vehicles. Utilizing the same intuitive touchscreen technology in use today on millions of digital devices, the uPoint MRC system simplifies robot operations including driving, manipulation and inspection, allowing operators to focus more on the mission at hand... ( full press release )
Integrated 2D Imaging Engine from Microscan Helps Improve Production Yield, Quality and Traceability at Each Step of the PCB Manufacturing Process
Case Study: Prodrive Technologies, The Netherlands
The overall process of building a robot begins with identifying a need, then defining the problem that must be overcome to meet that need.
Because the operator can work in the robot's workspace even while the robot is in motion at full speed, much closer collaboration between operator and robot is possible.
Factory Automation - Featured Product
MICROMO launches the new MC3/MCS motion control family. The new high-performance, intelligent controllers are optimized for use with FAULHABER motors, offer electronics for simple operation with state-of-the-art interfaces for multi-axis applications, and provide a motion control system solution with the most compact integration into industrial-grade housing.