The GRASP Lab at the University of Pennsylvania has developed tiny versions of its quadrotor swarming robots. The swarm can align in complex formations and hold formation while traveling through small openings like windows or doors.
The Raspberry Pi is an ARM-based single-board computer developed to run Linux for just $25 ($35 for the Model B, which adds Ethernet). The board contains an ARM1176JZF-S with floating-point support, running at 700 MHz, and a VideoCore IV GPU. The GPU is capable of Blu-ray-quality playback, using H.264 at 40 Mbit/s, and has a fast 3D core accessed through the supplied OpenGL ES 2.0 and OpenVG libraries. The board has an HDMI port, one USB 2.0 port, one micro USB port for power, an audio jack, RCA video out, and an SD card slot.
They aren't taking orders yet but you can track the status by joining their mailing list here.
OpenVSP (Vehicle Sketch Pad) is a parametric aircraft geometry tool. OpenVSP allows the user to create a 3D model of an aircraft defined by common engineering parameters. This model can be processed into formats suitable for engineering analysis. VSP allows even novices to quickly become proficient in defining three-dimensional, watertight aircraft geometry.
The predecessors to OpenVSP were developed by J.R. Gloudemans and others for NASA since the early 1990s. On January 10, 2012, OpenVSP was released as an open source project under the NASA Open Source Agreement (NOSA) version 1.3.
Windows, Mac or Linux versions are available here.
A set of video tutorials can be viewed here.
DIY Drones is an excellent community site for amateur unmanned vehicle development. Using Arduino as a foundation, they have created the ArduPilot Mega universal autopilot hardware. It combines sophisticated IMU-based autopilot electronics with free Arduino-based autopilot software that can turn any RC vehicle into a fully autonomous UAV. The software is open source and currently has three variations: ArduPlane, for any fixed-wing aircraft; ArduCopter, for any rotary-wing aircraft; and ArduRover, for any ground or water-based vehicle. The site has an active forum as well as a store where you can purchase the ArduPilot Mega controller, other Arduino hardware, or complete drone kits.
STMicroelectronics has unveiled a smart-suit prototype with sewn-in multi-sensor ST iNEMO® motion co-processors that recognize complex movements of the wearer’s body and translate them into a digital model with high precision and speed. The current-generation prototype attaches miniaturized iNEMO multi-sensor nodes to each arm, forearm, thigh, and calf, with two more on the back; additional nodes can be mounted on the hands, shoes, or head. In tests with realistic, complex human body motions, the suit's deviation in spatial accuracy stayed below 0.5 degrees during movement, and the time needed to process and apply the sensor data to the skeleton model was under 15 milliseconds.
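The core idea behind a body-motion reconstruction suit like this is forward kinematics: each sensor node reports an orientation for one body segment, and chaining those orientations along the limb recovers the joint positions of the skeleton model. A minimal 2D sketch of that chaining step, with invented segment lengths and angles (this is not ST's algorithm):

```python
import math

def forward_kinematics(segments):
    """segments: list of (length_m, absolute_angle_rad) pairs, one per
    sensor node, ordered from the shoulder outward. Returns the joint
    positions obtained by chaining the segments in 2D."""
    x, y = 0.0, 0.0          # shoulder at the origin
    joints = [(x, y)]
    for length, angle in segments:
        x += length * math.cos(angle)
        y += length * math.sin(angle)
        joints.append((x, y))
    return joints

# Upper arm then forearm, each with its sensor-reported absolute angle.
# Lengths and angles here are illustrative example values.
arm = [(0.30, math.radians(-30)), (0.25, math.radians(20))]
for jx, jy in forward_kinematics(arm):
    print(f"({jx:.3f}, {jy:.3f})")
```

A real suit works in 3D with quaternion orientations and sensor-fusion filtering, but the chaining structure is the same.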
At CES Texas Instruments was showing off its new open source platform MAVRK, an acronym for Modular and Versatile Reference Kit. The MAVRK platform contains a motherboard, several MAVRK modules, and firmware to communicate between the modules. MAVRK modules are reference designs built around TI silicon that connect to the motherboard with a common footprint. With several modules connected, a user can configure multiple combinations of RF, AD/DA, transceivers, signal conditioning, and driver circuits as a system-level design. In the video, one motherboard hosts the TCA8418 keypad scanner to sense user input. The MSP430F5438A on this motherboard decodes the keypad input for motor spin speed and the joystick direction for spin direction. It then beams this information to a second motherboard connected to robotic treads, with an MSP430F5137 processing the signals and an MSP430F2274 controlling the signals to the DRV8814 motor driver. The DRV8814 drives the two motors that turn the robot treads.
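The last step in that chain, turning a speed command from the keypad and a direction command from the joystick into two tread speeds, is standard differential-drive mixing. A hedged sketch of that mixing logic (this is generic robotics math, not TI's MAVRK firmware):

```python
def mix_tank_drive(speed, turn):
    """Map a speed command (0..1, e.g. decoded from a keypad) and a turn
    command (-1..1, e.g. a joystick x-axis) to left/right tread duty
    cycles. Positive turn steers right by slowing the right tread."""
    left = speed + turn
    right = speed - turn
    scale = max(1.0, abs(left), abs(right))   # keep outputs within [-1, 1]
    return left / scale, right / scale

print(mix_tank_drive(0.5, 0.0))   # straight ahead: (0.5, 0.5)
print(mix_tank_drive(0.5, 0.5))   # pivot right: (1.0, 0.0)
```

On the real hardware these duty cycles would become PWM signals to the DRV8814's two H-bridges.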
Earlier this week Microsoft announced that it would officially be bringing its Kinect hardware to the Windows platform. The hardware is mostly the same, but new firmware allows the depth camera to see objects as close as 50 cm away without losing accuracy or precision. Microsoft also says Kinect for Windows is 20% faster than the last release, and the accuracy of skeletal tracking and joint recognition has been substantially improved.
Microsoft has allowed the beta SDK to be used with the Xbox Kinect and will continue to allow existing projects access to it, but it also states that all future projects will need to purchase the Kinect for Windows hardware in order to access upcoming SDK releases. Kinect for Windows and the SDK will cost $249 ($149 for an academic license).
Hyperspectral imaging, also called imaging spectroscopy, is a method of obtaining the spectral content of each pixel in a 2D image. The spectral data can be used to identify chemical compounds or materials. Until now, hyperspectral imaging devices have been very expensive, starting at around $25,000. Engineers at the Vienna University of Technology and the University of Arizona have shown that they can perform CTIS spectral imaging using an unmodified consumer camera. The device they have developed can be used in a hyperspectral imaging mode that allows the spectral measurement of a whole image with up to 5-nm spectral resolution and 120 x 120-pixel spatial resolution, and can be built for under $1,000.
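The payoff of per-pixel spectra is that a pixel can be classified by comparing its spectrum against a library of known material spectra. A minimal sketch of that matching step, with invented band values (real libraries such as the USGS spectral library hold hundreds of bands per material):

```python
def nearest_material(pixel_spectrum, references):
    """Return the name of the reference material whose spectrum is
    closest to the pixel's, by sum of squared differences across bands."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(references, key=lambda name: dist(pixel_spectrum, references[name]))

# Three spectral bands per entry; values are purely illustrative.
library = {
    "vegetation": [0.05, 0.10, 0.50],
    "water":      [0.03, 0.02, 0.01],
}
print(nearest_material([0.04, 0.09, 0.45], library))  # vegetation
```

Practical pipelines use more robust measures such as the spectral angle, but nearest-spectrum matching conveys the principle.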
Swiss architects Gramazio & Kohler and Raffaello D’Andrea are presenting a fully automated construction project at the FRAC Centre in Orléans, France, that uses flying robots to assemble a six-meter-high tower from 1,500 polystyrene foam bricks. The exhibit runs until February 19, 2012. The same team previously used a robot called "R-O-B" to build a looping wall in New York and the award-winning Structural Oscillations installation at the 2008 Venice Architecture Biennale.
Aldebaran Robotics just released a promo video for their next NAO robot. The new model includes 2 cameras, 4 microphones, a sonar rangefinder, 2 IR emitters and receivers, an inertial board, 9 tactile sensors, and 8 pressure sensors. NAOqi, their proprietary embedded software, provides functionality for tasks such as speech recognition, object recognition, and access to all the sensors. Code development can take place on Windows, Mac OS, or Linux and be called from many languages, including C++, Python, Urbi, and .NET.
Humans are good at recognizing full facial expressions, which present a rich source of affective information. However, psychological studies have shown that affect also manifests itself as micro-expressions: very rapid involuntary facial expressions, lasting from 1/25 to 1/3 of a second, which give a brief glimpse of feelings that people experience but try not to express. Researchers at Oxford University and Oulu University are developing software that can recognize these micro-expressions. Initial experiments indicate that the approach can distinguish deceptive from truthful micro-expressions, but further experiments are needed to confirm this. The full paper is available here.
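What makes micro-expressions hard to catch is precisely their duration bound: any candidate detection longer than about 1/3 s is an ordinary expression, not a micro-expression. A toy sketch of that temporal filtering step, operating on a per-frame facial-motion signal (this is an illustration of the duration constraint only, not the Oxford/Oulu method, which uses temporal interpolation and appearance features):

```python
FPS = 25                     # assumed camera frame rate
MAX_FRAMES = int(FPS / 3)    # ~1/3 s upper bound on a micro-expression

def find_micro_bursts(motion, threshold):
    """Return (start, end) frame intervals where facial motion exceeds
    `threshold` for no longer than the micro-expression window.
    Longer bursts are discarded as ordinary expressions."""
    bursts, start = [], None
    for i, m in enumerate(motion + [0.0]):   # sentinel closes a final run
        if m > threshold and start is None:
            start = i
        elif m <= threshold and start is not None:
            if i - start <= MAX_FRAMES:
                bursts.append((start, i))
            start = None
    return bursts

# 3-frame burst (kept) followed by a 20-frame burst (too long, dropped).
signal = [0.1] * 5 + [0.9] * 3 + [0.1] * 10 + [0.9] * 20 + [0.1] * 2
print(find_micro_bursts(signal, 0.5))  # [(5, 8)]
```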
Professor George Whitesides, Robert Shepherd, and their colleagues from Harvard University have designed a prototype soft, agile robot capable of crawling and squeezing under obstacles that current rigid robots can't handle. The robot's motion is controlled by a series of chambers within its elastomer layer that can be inflated with compressed air, fed through tubes attached to the robot.
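Because the actuators are just inflatable chambers, a gait for such a robot can be expressed as a repeating schedule of which chambers are pressurized at each step. A purely illustrative sketch of such a schedule (chamber names and ordering are assumptions, not Harvard's control code):

```python
from itertools import cycle

# Hypothetical chamber layout for a five-chamber soft crawler.
CHAMBERS = ["front_left", "front_right", "spine", "back_left", "back_right"]

def wave_gait(steps):
    """Return (step, pressurized-chamber) pairs that inflate each chamber
    in turn, approximating a travelling wave along the body."""
    order = cycle(CHAMBERS)
    return [(step, next(order)) for step in range(steps)]

for step, chamber in wave_gait(6):
    print(step, chamber)
```

In hardware, each schedule entry would map to opening a solenoid valve on that chamber's air line for the step duration.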