Changing Automation with Sensitive Robots

Critics of automation claim robots are out to replace people. The near future will show that people remain firmly in control, able to leverage technology for greater profitability and professional satisfaction.

Why I Automate: MPI Systems

The latest figures show that 179,000 industrial robots were sold worldwide in 2013, according to the International Federation of Robotics. That was a 12 percent jump over 2012. The increase is expected to continue when the final numbers are tallied for 2014.

The Industrial Automation Exchange

A new online tool connects manufacturers with industrial automation expertise and products.

Why I Automate: Manufacturer Finds Promising Gains with Automation

Job loss was on everyone's mind. Results showed that the opposite took place. Higher quality led to more orders and an increase in the number of jobs.

VERSABALL Beer Pong Robot

From Empire Robotics: The VERSABALL is a squishy balloon membrane full of loose sub-millimeter particles. The soft ball gripper easily conforms around a wide range of target object shapes and sizes. Using a process known as "granular jamming", air is quickly sucked out of the ball, which vacuum-packs the particles and hardens the gripper around the object to hold and lift it. The object releases when the ball is re-inflated. VERSABALL comes in multiple head shapes and sizes that use the same pneumatic base... (Empire Robotics' site)
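The grip/release cycle described above can be sketched as a toy state machine. This is illustrative only; the class, method names, and states are assumptions for exposition, not Empire Robotics' software.

```python
# Toy state machine for a granular-jamming gripper's grip/release cycle.
# All names here are illustrative assumptions, not Empire Robotics' API.

class JammingGripper:
    """Models the soft/rigid phase transition of a granular-jamming gripper."""

    def __init__(self):
        self.state = "soft"   # membrane inflated: particles flow freely
        self.holding = None

    def grip(self, target):
        # Press the soft membrane around the target, then evacuate the air:
        # the particles vacuum-pack ("jam") and the gripper hardens in place.
        self.state = "jammed"
        self.holding = target

    def release(self):
        # Re-inflating unjams the particles; the gripper softens and lets go.
        released, self.holding = self.holding, None
        self.state = "soft"
        return released

gripper = JammingGripper()
gripper.grip("ping-pong ball")
print(gripper.state)      # jammed
print(gripper.release())  # ping-pong ball
```

The point of the mechanism is that one binary actuation (vacuum on/off) replaces the per-finger control a conventional gripper would need.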

Four Robotics Related Kickstarters For January

EVB: Replace the brain of your LEGO® EV3 with BeagleBone / X PlusOne: Your Ultimate Hover + Speed Aerial Camera Drone / The ProtoCam+: An easy way to build projects and prototypes around your Raspberry Pi Camera Module with your A+ and B+ Raspberry Pi / Tektyte: LogIT Specialised Circuit Testers

Robot Learning Manipulation Action Plans by "Watching" Unconstrained Videos from the World Wide Web

From Yezhou Yang, Yi Li, Cornelia Fermuller and Yiannis Aloimonos: In order to advance action generation and creation in robots beyond simple learned schemas we need computational tools that allow us to automatically interpret and represent human actions. This paper presents a system that learns manipulation action plans by processing unconstrained videos from the World Wide Web. Its goal is to robustly generate the sequence of atomic actions of seen longer actions in video in order to acquire knowledge for robots. The lower level of the system consists of two convolutional neural network (CNN) based recognition modules, one for classifying the hand grasp type and the other for object recognition. The higher level is a probabilistic manipulation action grammar based parsing module that aims at generating visual sentences for robot manipulation. Experiments conducted on a publicly available unconstrained video dataset show that the system is able to learn manipulation actions by "watching" unconstrained videos with high accuracy... (article at Kurzweilai.net) (original paper)
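The two-level architecture can be sketched minimally: two per-frame recognition modules feed a parser that emits the sequence of atomic (grasp, object) actions. The CNN modules and grammar parser are stubbed out with trivial stand-ins here; all names are hypothetical, not the authors' code.

```python
# Illustrative sketch of a two-level video-to-action-plan pipeline.
# The real system uses CNNs and a probabilistic grammar; these are stubs.

def classify_grasp(frame):
    # Stand-in for the CNN grasp-type classifier.
    return frame["grasp"]

def recognize_object(frame):
    # Stand-in for the CNN object-recognition module.
    return frame["object"]

def parse_actions(frames):
    """Stand-in for the grammar-based parser: emit a new atomic action
    whenever the (grasp type, object) pair changes between frames."""
    actions, last = [], None
    for frame in frames:
        atom = (classify_grasp(frame), recognize_object(frame))
        if atom != last:
            actions.append(atom)
            last = atom
    return actions

video = [
    {"grasp": "power", "object": "knife"},
    {"grasp": "power", "object": "knife"},
    {"grasp": "precision", "object": "tomato"},
]
print(parse_actions(video))  # [('power', 'knife'), ('precision', 'tomato')]
```

The design choice worth noting is the separation of concerns: noisy per-frame perception at the lower level, and symbolic sequencing at the higher level where the grammar can smooth over recognition errors.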

2014 Robotics In A Word: "MORE"

The list of top stories in robotics for 2014 is a story of "MORE": MORE uses of and for robots; MORE serious discussions about robots; MORE robots in unusual places; MORE news in the financial press; MORE funding, acquisitions and IPOs; and MORE choices.

Intuitive Robot Programming for Flexible Aerospace Manufacturing

Robots are proving to be flexible tools for aircraft manufacturing and assembly. Their full potential, however, can be limited by the challenges of programming a robot in a CAD/CAM environment. Software that integrates offline programming, simulation, code generation, and path optimization makes the process seamless and error-free.

John Carmack On Modern C++

Winter break homework from John Carmack. Gamasutra reprint article "In-depth: Functional programming in C++": A large fraction of the flaws in software development are due to programmers not fully understanding all the possible states their code may execute in. In a multithreaded environment, the lack of understanding and the resulting problems are greatly amplified, almost to the point of panic if you are paying attention. Programming in a functional style makes the state presented to your code explicit, which makes it much easier to reason about, and, in a completely pure system, makes thread race conditions impossible... (full article) Also "Lessons to learn from the Oculus development team when using the 'Modern C++' approach": Modern C++ doesn't necessarily imply the overuse of templates. Andrei Alexandrescu says of Modern C++ design: "Modern C++ Design defines and systematically uses generic components - highly flexible design artifacts that are mixable and matchable to obtain rich behaviors with a small, orthogonal body of code." Modern C++ has a close relation with generic programming, which is probably what makes many developers neglect the approach: they assume the code will be mostly implemented as templates, making it difficult to read and maintain. In the Oculus SDK, templates represent only 20% of all the types defined, and most of them are related to the technical layer... (full article)
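Carmack's core point is language-agnostic, so it can be sketched in a few lines of Python: a pure function's output depends only on its arguments, leaving no hidden shared state for another thread to race on.

```python
# Sketch of explicit vs. hidden state, after Carmack's argument.

from functools import reduce

# Impure: a read-modify-write on shared state. Two threads calling this
# concurrently can interleave the read and write and lose updates.
total = 0
def add_impure(x):
    global total
    total += x

# Pure: all state is explicit in the inputs and the return value, so the
# function is trivially safe to call from any thread.
def add_pure(acc, x):
    return acc + x

# Pure functions compose: fold each chunk independently (e.g. one chunk
# per worker thread), then combine the partial results once at the end.
chunks = [[1, 2], [3, 4], [5, 6]]
partials = [reduce(add_pure, chunk, 0) for chunk in chunks]
print(reduce(add_pure, partials, 0))  # 21
```

The same pattern in C++ would use `const` references and return-by-value instead of mutating members, which is the discipline the article advocates.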

Interview with David Sands, ST Robotics International

We call our robots entry level for a very good reason. People can get started so easily with our robots. They are not the best robot arms on the planet but they are affordable and incredibly easy to use.

Kissing It Better: Lifting Ergonomically with Vacuum

By using the right suction and the right lips for the job, you really can "kiss it better" before lifting, rather than after.

OpenCV Vision Challenge

From the OpenCV Foundation: The OpenCV Foundation, with support from DARPA and Intel Corporation, is launching a community-wide challenge to update and extend the OpenCV library with state-of-the-art algorithms. An award pool of $50,000 is provided to reward submitters of the best-performing algorithms in the following 11 CV application areas: (1) image segmentation, (2) image registration, (3) human pose estimation, (4) SLAM, (5) multi-view stereo matching, (6) object recognition, (7) face recognition, (8) gesture recognition, (9) action recognition, (10) text recognition, (11) tracking. Conditions: The OpenCV Vision Challenge Committee will judge up to five best entries. You may submit a new algorithm developed by yourself or your implementation of an existing algorithm, even if you are not the author of the algorithm. You may enter any number of categories. If your entry wins the contest you will be awarded $1K. To win an additional $7.5K to $9K, you must contribute the source code as an OpenCV pull request under a BSD license. You acknowledge that your contributed code may be included, with your copyright, in OpenCV. You may explicitly enter code for any work you have submitted to CVPR 2015 or its workshops; it will not be unveiled until after CVPR. Timeline: Submission period: now through May 8th, 2015. Winners announcement: June 8th, 2015 at CVPR 2015. (full details)

Robotic Assembly

Manual assembly procedures that were previously neither ergonomic nor suitable for automation can now be automated in a cost-effective way.

Deep Visual-Semantic Alignments for Generating Image Descriptions

Because of the Nov. 14th submission deadline for this year's IEEE Conference on Computer Vision and Pattern Recognition (CVPR), several big image-recognition papers are coming out this week. From Andrej Karpathy and Li Fei-Fei of Stanford: We present a model that generates free-form natural language descriptions of image regions. Our model leverages datasets of images and their sentence descriptions to learn about the inter-modal correspondences between text and visual data. Our approach is based on a novel combination of Convolutional Neural Networks over image regions, bidirectional Recurrent Neural Networks over sentences, and a structured objective that aligns the two modalities through a multimodal embedding. We then describe a Recurrent Neural Network architecture that uses the inferred alignments to learn to generate novel descriptions of image regions. We demonstrate the effectiveness of our alignment model with ranking experiments on Flickr8K, Flickr30K and COCO datasets, where we substantially improve on the state of the art. We then show that the sentences created by our generative model outperform retrieval baselines on the three aforementioned datasets and a new dataset of region-level annotations... (website with examples) (full paper) From Oriol Vinyals, Alexander Toshev, Samy Bengio, and Dumitru Erhan at Google: "Show and Tell: A Neural Image Caption Generator" (announcement post) (full paper) From Ryan Kiros, Ruslan Salakhutdinov, and Richard S. Zemel at the University of Toronto: "Unifying Visual-Semantic Embeddings with Multimodal Neural Language Models" (full paper) From Junhua Mao, Wei Xu, Yi Yang, Jiang Wang, and Alan L. Yuille at Baidu Research/UCLA: "Explain Images with Multimodal Recurrent Neural Networks" (full paper) From Jeff Donahue, Lisa Anne Hendricks, Sergio Guadarrama, Marcus Rohrbach, Subhashini Venugopalan, Kate Saenko, and Trevor Darrell at UT Austin, UMass Lowell, and UC Berkeley: "Long-term Recurrent Convolutional Networks for Visual Recognition and Description" (full paper) All of these came from this Hacker News discussion.
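The alignment idea shared by several of these papers can be illustrated with toy vectors standing in for learned embeddings. This is a simplification of the Karpathy and Fei-Fei objective, not their implementation: embed image regions and sentence words in one space, then score an image-sentence pair by letting each word vote for its best-matching region.

```python
# Toy image-sentence alignment score. Real systems learn the region and
# word embeddings with CNNs and RNNs; here they are hand-picked 2-D vectors.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def alignment_score(region_vecs, word_vecs):
    """Sum, over words, of each word's best region match (inner product)."""
    return sum(max(dot(r, w) for r in region_vecs) for w in word_vecs)

regions = [[1.0, 0.0], [0.0, 1.0]]       # toy embeddings of two image regions
matching = [[0.9, 0.1], [0.1, 0.9]]      # sentence whose words align with both
unrelated = [[-1.0, 0.0], [0.0, -1.0]]   # sentence whose words align with neither

assert alignment_score(regions, matching) > alignment_score(regions, unrelated)
```

Training pushes matching pairs to score higher than mismatched ones; the inferred word-to-region correspondences are then reused to condition the caption-generating RNN.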


Factory Automation - Featured Product

KINGSTAR Soft PLC - Replace Your PLC with an EtherCAT-enabled Soft PLC for Real-Time Motion Control and Machine Vision

The top machine builders are switching from proprietary hardware-based PLCs, like Allen-Bradley, TwinCAT, Mitsubishi and KEYENCE, to open standards-based EtherCAT-enabled software PLCs on IPCs. KINGSTAR provides a fully-featured and integrated software PLC based on an open and accessible RTOS. KINGSTAR Soft PLC also includes add-on or third-party components for motion control and machine vision that are managed by a rich user interface for C++ programmers and non-developers alike.