Toward the Dream of a Baseball Android

From Ishikawa Watanabe Laboratory: We have been developing robotic systems that individually achieve the fundamental actions of baseball: throwing, ball tracking, batting, running, and catching. We achieved these tasks by controlling high-speed robots with real-time visual feedback from high-speed cameras. Before integrating these abilities into a single robot, we here summarize the technical elements of each task... ( site )

How 4 Mexican Immigrant Kids and Their Cheap Robot Beat MIT

From Wired: Ten years ago, WIRED contributing editor Joshua Davis wrote a story about four high school students in Phoenix, Arizona—three of them undocumented immigrants from Mexico—beating MIT in an underwater robot competition. That story, La Vida Robot, has a new chapter: Spare Parts, starring George Lopez and Carlos PenaVega, opens in January, and Davis is publishing a book by the same title updating the kids' story. To mark the occasion, WIRED is republishing his original story... ( full article )

Optimal Actuator In MIT's Cheetah Robot

From the Biomimetics MIT Cheetah project: The high-speed legged locomotion of the MIT Cheetah requires high accelerations and loads on the robot's legs. Because of the highly dynamic environmental interactions that come with running, variable impedance of the legs is desirable; however, existing actuation strategies cannot deliver it. Electric motors typically achieve their required torque output and package size through high gear ratios, but high ratios limit options for control strategies: closed-loop control, for example, is limited to relatively slow dynamics. Series elastic actuation adds actuators and increases system complexity and inertia. We believed a better option existed, and in the end we developed a novel actuator that is optimal for many such applications... ( project homepage ) ( full published article )
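The trade-off the Cheetah team describes can be made concrete: an ideal gearbox multiplies motor torque by the ratio N, but the rotor inertia the load "feels" grows with N², which is what makes highly geared joints hard to backdrive and control at high bandwidth. A minimal sketch of that scaling (the motor numbers below are illustrative, not the Cheetah's actual parameters):

```python
# Reflected inertia and output torque vs. gear ratio (ideal, lossless gearbox):
# tau_out = N * tau_motor, but J_reflected = N**2 * J_motor.

def reflected_inertia(j_motor_kgm2: float, gear_ratio: float) -> float:
    """Motor rotor inertia as seen from the output shaft."""
    return gear_ratio ** 2 * j_motor_kgm2

def output_torque(tau_motor_nm: float, gear_ratio: float) -> float:
    """Ideal output torque of a lossless gearbox."""
    return gear_ratio * tau_motor_nm

j_motor = 1e-4   # kg*m^2 (hypothetical rotor inertia)
tau_motor = 0.5  # N*m    (hypothetical continuous motor torque)

for N in (1, 6, 100):
    print(N, output_torque(tau_motor, N), reflected_inertia(j_motor, N))
# Torque grows linearly with N, but reflected inertia grows quadratically:
# a 100:1 reduction buys 100x torque at the cost of 10,000x apparent inertia.
```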

Black Friday & Cyber Monday

From Adafruit: Welcome to the Black Friday sale – 15% off plus free items & shipping as you shop! Use code BLACKFRIDAY at checkout. We thought about doing flash sales or complicated codes, but that's a lot of frustrating hoop-jumping for everyone, so we came up with what we think is an amazing deal that is straightforward, no-stress and valuable – a 15% discount on anything in stock, plus lots of great free things automatically depending on how much you order. We are currently offering a FREE Adafruit Perma-Proto Half-sized Breadboard PCB for orders over $100, a FREE Trinket 5V for orders over $150, FREE UPS Ground (Continental USA) for orders of $200 or more, and a FREE Pro Trinket 5V for orders over $250.

From SparkFun: On Cyber Monday (12/1), everything in our Actobotics category is 20% off. Also on 12/1/2014, we are offering hourly flash sales from 7 a.m. to 7 p.m. Mountain Standard Time, with 30-50% off some of our most popular products. These items have been hand-selected by our employees and are some of our favorite designs! See below for the complete list so you can plan ahead to snag these great deals. Flash sales are ONLY valid during their time window. If an item is sitting in your cart and its flash sale ends, the price will go back up! There is no combining flash sale orders throughout the day. Flash sales are a "while supplies last" sort of deal (which means no backorders!) – so get 'em while the getting is good... ( list of flash sale items )

From ServoCity: ( Full PDF flyer for Monday's sale ) Two random orders from Monday will receive a "golden ticket" worth $500 of Actobotics parts.

From RobotShop: ( Full list of sale items available Friday through Monday ) Free Hexbug Nano with every purchase.

OpenCV Vision Challenge

From the OpenCV Foundation: The OpenCV Foundation, with support from DARPA and Intel Corporation, is launching a community-wide challenge to update and extend the OpenCV library with state-of-the-art algorithms. An award pool of $50,000 is provided to reward submitters of the best-performing algorithms in the following 11 CV application areas: (1) image segmentation, (2) image registration, (3) human pose estimation, (4) SLAM, (5) multi-view stereo matching, (6) object recognition, (7) face recognition, (8) gesture recognition, (9) action recognition, (10) text recognition, (11) tracking.

Conditions: The OpenCV Vision Challenge Committee will judge up to five best entries. You may submit a new algorithm developed by yourself or your implementation of an existing algorithm, even if you are not the author of the algorithm. You may enter any number of categories. If your entry wins the contest you will be awarded $1K. To win an additional $7.5K to $9K, you must contribute the source code as an OpenCV pull request under a BSD license. You acknowledge that your contributed code may be included, with your copyright, in OpenCV. You may explicitly enter code for any work you have submitted to CVPR 2015 or its workshops; we will not unveil it until after CVPR.

Timeline: Submission period: now – May 8th, 2015. Winners announcement: June 8th, 2015 at CVPR 2015. ( full details )

Microsoft Deploys Autonomous Robot Security Guards in Silicon Valley Campus

From ExtremeTech: The K5, built by the Californian company Knightscope, is billed rather euphemistically as an "autonomous data machine" that provides a "commanding but friendly physical presence." Basically, it's a security guard on wheels. Inside that rather large casing (it's 5 feet tall!) there are four high-def cameras facing in each direction, another camera that can do car license plate recognition, four microphones, gentle alarms, blaring sirens, weather sensors, and WiFi connectivity so that each robot can contact HQ if there's some kind of security breach/situation. For navigating the environment, there's GPS and "laser scanning" (LIDAR, I guess). And of course, at the heart of each K5 is a computer running artificial intelligence software that integrates all of that data and tries to make intelligent inferences... ( full article ) ( Knightscope )

Deep Visual-Semantic Alignments for Generating Image Descriptions

Because of the Nov. 14th submission deadline for this year's IEEE Conference on Computer Vision and Pattern Recognition (CVPR), several big image-recognition papers are coming out this week.

From Andrej Karpathy and Li Fei-Fei of Stanford: We present a model that generates free-form natural language descriptions of image regions. Our model leverages datasets of images and their sentence descriptions to learn about the inter-modal correspondences between text and visual data. Our approach is based on a novel combination of Convolutional Neural Networks over image regions, bidirectional Recurrent Neural Networks over sentences, and a structured objective that aligns the two modalities through a multimodal embedding. We then describe a Recurrent Neural Network architecture that uses the inferred alignments to learn to generate novel descriptions of image regions. We demonstrate the effectiveness of our alignment model with ranking experiments on the Flickr8K, Flickr30K and COCO datasets, where we substantially improve on the state of the art. We then show that the sentences created by our generative model outperform retrieval baselines on the three aforementioned datasets and a new dataset of region-level annotations... ( website with examples ) ( full paper )

From Oriol Vinyals, Alexander Toshev, Samy Bengio, and Dumitru Erhan at Google: Show and Tell: A Neural Image Caption Generator ( announcement post ) ( full paper )

From Ryan Kiros, Ruslan Salakhutdinov, and Richard S. Zemel at the University of Toronto: Unifying Visual-Semantic Embeddings with Multimodal Neural Language Models ( full paper )

From Junhua Mao, Wei Xu, Yi Yang, Jiang Wang and Alan L. Yuille at Baidu Research/UCLA: Explain Images with Multimodal Recurrent Neural Networks ( full paper )

From Jeff Donahue, Lisa Anne Hendricks, Sergio Guadarrama, Marcus Rohrbach, Subhashini Venugopalan, Kate Saenko, and Trevor Darrell at UT Austin, UMass Lowell and UC Berkeley: Long-term Recurrent Convolutional Networks for Visual Recognition and Description ( full paper )

All of these came from this Hacker News discussion.
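The alignment idea in the Stanford abstract can be sketched in a few lines: embed image regions and sentence words into a common vector space, then score an image-sentence pair by letting each word "pick" its best-matching region via a dot product. A toy sketch with deterministic stand-in vectors (the real model learns these embeddings end-to-end from a CNN and a bidirectional RNN):

```python
import numpy as np

def alignment_score(regions: np.ndarray, words: np.ndarray) -> float:
    """Image-sentence score: each word attends to its best-matching region.
    regions: (num_regions, d) embeddings; words: (num_words, d) embeddings."""
    sims = words @ regions.T              # (num_words, num_regions) dot products
    return float(sims.max(axis=1).sum())  # best region per word, summed over words

# Toy, deterministic embeddings: 4 image regions in an 8-dim space.
regions = np.eye(4, 8)
matched = regions[:3]      # "words" that line up with regions 0..2
unrelated = -regions[:3]   # anti-correlated "words"

print(alignment_score(regions, matched))    # 3.0 (each word finds its region)
print(alignment_score(regions, unrelated))  # 0.0 (best match is a zero dot product)
```

The structured objective the paper mentions trains the embeddings so that matched pairs outscore mismatched ones by a margin, which is exactly the gap this toy example shows.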

DJI Inspire 1

DJI's new Inspire 1 with 4K camera ( $2899 USD from DJI ):

Aircraft specs:
Hovering Accuracy (GPS Mode): Vertical: 1.6' / 0.5 m; Horizontal: 8.2' / 2.5 m
Maximum Angular Velocity: Pitch: 300°/s; Yaw: 150°/s
Maximum Tilt Angle: 35°
Maximum Ascent/Descent Speed: Ascent: 16.4 fps / 5 m/s; Descent: 13.1 fps / 4 m/s
Maximum Speed: 72.2 fps / 22 m/s (Attitude mode; no wind)
Maximum Flight Altitude: 14,764' / 4,500 m
Maximum Wind Speed Resistance: 32.8 fps / 10 m/s
Maximum Flight Time: Up to 18 minutes

Camera:
Model Name: X3; Designation: FC350
Sensor: Sony EXMOR 1/2.3" CMOS
Resolution: 12.0 MP
Lens: Field of View: 94°; Focal Length (35 mm Equivalent): 20 mm; Aperture: f/2.8; Design: 9 elements in 9 groups, aspherical lens element; Filters: anti-distortion filter, UV filter
Video Recording: UHD (4K): 4096 x 2160: 24p, 25p; 3840 x 2160: 24p, 25p, 30p; FHD (1080p): 1920 x 1080: 24p, 25p, 30p, 48p, 50p, 60p; HD (720p): 1280 x 720: 24p, 25p, 30p, 48p, 50p, 60p
Maximum Bitrate: 60 Mbps
File Format: Photo: JPEG, DNG; Video: MP4 in a .MOV wrapper (MPEG-4 AVC/H.264)
Recording Media: Type: microSD/SDHC/SDXC up to 64 GB; Speed: Class 10 or faster; Format: FAT32/exFAT
Photography Modes: Single shot; Burst: 3, 5, 7 frames per second (AEB: 3/5 frames per second, 0.7 EV bias); Time-lapse
Operating Temperature: 32 to 104°F / 0 to 40°C

Gimbal:
Model: Zenmuse X3
Number of Axes: 3
Control Accuracy: ±0.03°
Maximum Controlled Rotation Speed: Pitch: 120°/s; Pan: 180°/s
Controlled Rotation Range: Pitch: -90° to +30°; Pan: ±330°
Angular Vibration Range: ±0.03°
Output Power: Static: 9 W; In Motion: 11 W
Operational Current: Static: 750 mA; In Motion: 900 mA
Mounting: Detachable

Video with the requisite "Jony Ive-alike" introductory speech:

Project Beyond: 360° 3D Camera

From Project Beyond/Samsung: Today we offer a sneak preview of Project Beyond, the world's first true 3D 360˚ omniview camera. Beyond captures and streams immersive videos in stunning high-resolution 3D, and allows every user to enjoy their viewing experience in the way they see fit. It offers full 3D reconstruction in all directions, using stereo camera pairs combined with a top-view camera to capture independent left and right eye stereo pairs. Project Beyond uses patent-pending stereoscopic interleaved capture and 3D-aware stitching technology to capture the scene just like the human eye, but in an extremely compact form factor. The innovative reconstruction system recreates the view geometry in the same way that the human eyes see, producing unparalleled 3D perception. Project Beyond is not a product, but one of the many exciting projects currently being developed by the Think Tank Team, an advanced research team within Samsung Research America. This is the first operational version of the device, and just a taste of what the final system we are working on will be capable of. Once complete, we hope to deploy Project Beyond around the world to beautiful and noteworthy locations and events, and allow users to experience those locations as if they were really there. The camera system can stream real-time events, as well as store the data for future viewing... ( website )

Atlas Karate Kid

Atlas robot at IHMC standing on a stack of cinder blocks and holding various poses. The robot was built by Boston Dynamics.

FPV Racing League

International FPV Multicopter Racing League:

EQUIPMENT: In the spec class, competitors must use a quadcopter with 2300 Kv motors, a 3S LiPo battery and 5" props. This is to ensure a level playing field amongst competitors with different budgetary constraints.

SCORING: 10 points will be awarded for 1st place, 8 points for 2nd place, 5 points for 3rd place and 1 point for 4th place. These results are recorded on the regional leaderboard, with the champions at the end of each season being invited to a national competition.

OBSTACLES: Throughout the course there will be obstacles such as hoops. Missing an obstacle incurs a time penalty. These obstacles should be made clearly visible with brightly-colored material or flashing lights... ( Official page ) ( Subreddit ) ( Next event Brisbane, Dec 7 )
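The league's scoring rule is simple enough to tally mechanically. A small sketch of a season leaderboard under that rule (the pilot names and race results are made up for illustration):

```python
# Season leaderboard under the league's rule:
# 10 points for 1st place, 8 for 2nd, 5 for 3rd, 1 for 4th.
from collections import Counter

POINTS = {1: 10, 2: 8, 3: 5, 4: 1}  # finishing place -> points awarded

def leaderboard(races: list) -> list:
    """races: one dict per race mapping pilot name -> finishing place.
    Returns (pilot, total_points) pairs, highest total first."""
    totals = Counter()
    for race in races:
        for pilot, place in race.items():
            totals[pilot] += POINTS.get(place, 0)  # places 5+ score nothing
    return totals.most_common()

season = [
    {"Ava": 1, "Ben": 2, "Cy": 3, "Dee": 4},
    {"Ben": 1, "Ava": 2, "Dee": 3, "Cy": 5},
]
print(leaderboard(season))  # [('Ava', 18), ('Ben', 18), ('Dee', 6), ('Cy', 5)]
```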

Boston Magazine Profiles Rodney Brooks of Rethink

Long article about Rodney Brooks, co-founder of Rethink Robotics and former CTO of iRobot: ...Brooks cofounded the Bedford-based iRobot in 1990, and his motivation, he explains, had something to do with vanity: "My thoughts on my self-image at the time was that I didn't really want to be remembered for building insects." Then he pauses for a moment and laughs. "But after that I started building vacuum-cleaning robots. And now there is a research group using Baxter to open stool samples. So now it's shit-handling robots. I think maybe I should have quit while I was ahead. You know, that's something no one ever says: 'I hope my kid grows up to open stool samples... ( full article )

Reverse OCR

From Reverse OCR's tumblr: I am a bot that grabs a random word and draws semi-random lines until the OCRad.js library recognizes it as the word. By Darius Kazemi, creator of  Alternate Universe Prompts ,  Museum Bot , and  Scenes from The Wire ... ( see the latest results )
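The bot's approach is a classic generate-and-test loop: keep producing semi-random drawings until the recognizer happens to read the target word. A minimal sketch of that loop (the `recognize` stand-in below just echoes a random string; the real bot renders line strokes on a canvas and runs OCRad.js on the image):

```python
import random

def recognize(drawing: str) -> str:
    """Stand-in for the OCR step: the real bot runs OCRad.js on rendered
    strokes; here the 'drawing' is simply the string the OCR would report."""
    return drawing

def draw_until_recognized(word: str, seed: int = 0, max_tries: int = 1_000_000):
    """Generate-and-test: produce random 'drawings' until OCR reads `word`."""
    rng = random.Random(seed)
    letters = "abcdefghijklmnopqrstuvwxyz"
    for attempt in range(1, max_tries + 1):
        drawing = "".join(rng.choice(letters) for _ in range(len(word)))
        if recognize(drawing) == word:
            return attempt, drawing
    raise RuntimeError("no drawing recognized within max_tries")

attempt, drawing = draw_until_recognized("hi")
print(attempt, drawing)  # number of random tries before "hi" was recognized
```

Longer words make the loop exponentially slower, which is why the real bot draws guided, semi-random strokes rather than fully random ones.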

3D Reconstruction Firm Paracosm Has Closed $3.3 Million In Seed Funding

From Paracosm: Paracosm, a cloud-based software company, raised $3.3 million in seed-round funding to further its mission to 3D-ify the world. The round, led by Atlas Venture, includes contributions from iRobot, Osage University Partners, BOLDstart Ventures, New World Angels, Deep Fork Capital and a number of angel investors. Paracosm's advanced three-dimensional reconstruction technologies create digital models of physical spaces. When shared with machines, these models serve as blueprints which give robots and applications a greater sense of awareness and understanding of the physical world. Such technologies are valuable for robotics, video game development, special effects, indoor navigation applications, and for the improvement of both virtual and augmented reality experiences... ( full press release )

MegaBots: Live-Action Giant Robot Combat

From the MegaBots Kickstarter: The mad scientists at MegaBots, Inc. have been zealously working on the prototypes and final design of 15-foot-tall, 15,000-pound walking humanoid combat robots with giant, modular pneumatic cannons for arms. A driver-and-gunner team pilots each MegaBot in battle against other MegaBots, vehicles, and a variety of other defenses and obstacles in live-action combat – the likes of which the world has only dreamed of through video games and movies... ...At our minimum funding level ($1.8M), we can build two robots. They'll duke it out in an epic 1-on-1 deathmatch tournament. At higher funding levels, we can build more MegaBots and unlock the gameplay options you know and love: team deathmatches, free-for-alls, king of the hill, capture the flag, home base capture, escort missions, and more! ( Kickstarter )



Featured Product

Bota Systems - The SensONE 6-axis force torque sensor for robots

Our Bota Systems force torque sensors, like the SensONE, are designed for collaborative and industrial robots. They enable human-machine interaction, provide force, vision and inertia data, and offer "plug and work" operation for all platforms. The compact design is dustproof and water-resistant. The ISO 9409-1-50-4-M6 mounting flange makes integrating the SensONE sensor with robots extremely easy. No adapter is needed, only fasteners! The SensONE is a one-of-a-kind product and, at its price, the best solution for force-feedback applications and collaborative robots. The SensONE is available with two communication options and includes software integration with TwinCAT, ROS, LabVIEW and MATLAB®.
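A 6-axis force/torque sensor like this reports three forces and three torques, typically obtained by multiplying the raw gauge readings by a factory calibration matrix. A generic sketch of that conversion step (the 6x6 matrix, channel layout, and scale factors here are purely illustrative, not Bota's actual calibration data or API):

```python
import numpy as np

# Hypothetical factory calibration: wrench = C @ raw_gauge_readings.
# A real sensor ships with its own measured 6x6 matrix; a diagonal
# per-channel scaling is used here only so the example is runnable.
C = np.diag([0.5, 0.5, 0.5, 0.01, 0.01, 0.01])  # counts -> N and N*m

def raw_to_wrench(raw: np.ndarray) -> dict:
    """Convert 6 raw channel readings into forces (N) and torques (N*m)."""
    wrench = C @ raw
    return {"force": wrench[:3], "torque": wrench[3:]}

raw = np.array([20.0, 0.0, -40.0, 100.0, 0.0, 0.0])
w = raw_to_wrench(raw)
print(w["force"])   # [ 10.   0. -20.]
print(w["torque"])  # [1. 0. 0.]
```

In a force-feedback loop, the resulting wrench is usually transformed into the robot's tool frame and fed to an impedance or admittance controller.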