Saturday, November 6, 2010

NEXI - Robot with facial expressions

ONE OF THE LATEST INVENTIONS FROM THE MIT MEDIA LAB IS A ROBOT THAT CAN SHOW A WIDE ASSORTMENT OF FACIAL EXPRESSIONS WHILE COMMUNICATING WITH PEOPLE, SUCH AS SLANTING ITS EYEBROWS IN ANGER OR RAISING THEM IN SURPRISE.


This latest achievement in the field of robotics is named NEXI because it is framed as a next-generation robot, aimed at a range of applications in personal robotics and human-robot teamwork.

DESIGN

The head and face of NEXI were designed by Xitome Design, an innovative design and development company that specializes in robotics. The expressive head starts with a neck mechanism sporting four degrees of freedom (DoF) at the base, plus pan-tilt-yaw of the head itself. The mechanism is built to time its movements so they mimic human speed. NEXI's face uses gaze, eyebrows, eyelids, and an articulated mandible, which together let it express a wide range of emotions.
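
To make the idea of an expression mapping concrete, here is a minimal sketch of how an emotion label might be translated into eyebrow, eyelid, and jaw targets, with the move paced at a human-like speed. The pose values, speed constant, and function names are illustrative assumptions, not NEXI's actual control code.

```python
# Hypothetical sketch: map an emotion label to actuator targets for an
# expressive head (eyebrows, eyelids, jaw) and pace the motion so it reads
# as human-like. All names, angles, and speeds are assumptions.

from dataclasses import dataclass

@dataclass
class HeadPose:
    eyebrow_deg: float   # positive = raised, negative = slanted down
    eyelid_open: float   # 0.0 closed .. 1.0 fully open
    jaw_open_deg: float  # mandible opening angle

# Assumed expression table: each emotion maps to a target pose.
EXPRESSIONS = {
    "neutral":  HeadPose(eyebrow_deg=0.0,   eyelid_open=0.8, jaw_open_deg=0.0),
    "anger":    HeadPose(eyebrow_deg=-15.0, eyelid_open=0.6, jaw_open_deg=5.0),
    "surprise": HeadPose(eyebrow_deg=20.0,  eyelid_open=1.0, jaw_open_deg=15.0),
}

HUMAN_LIKE_SPEED_DEG_PER_S = 60.0  # assumed pacing so motions look natural

def plan_expression(current: HeadPose, emotion: str):
    """Return the target pose and a move duration scaled to human-like speed."""
    target = EXPRESSIONS[emotion]
    largest_move = max(
        abs(target.eyebrow_deg - current.eyebrow_deg),
        abs(target.jaw_open_deg - current.jaw_open_deg),
    )
    duration_s = largest_move / HUMAN_LIKE_SPEED_DEG_PER_S
    return target, duration_s

if __name__ == "__main__":
    pose = EXPRESSIONS["neutral"]
    target, duration = plan_expression(pose, "surprise")
    print(f"Move to {target} over {duration:.2f} s")
```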

The chassis of NEXI is also advanced. It was developed by the Laboratory for Perceptual Robotics at the University of Massachusetts (UMass) Amherst and is based on the uBot5 mobile manipulator. The mobile base can balance dynamically on two wheels, the arms can lift up to 10 pounds, and the plastic covering of the chassis can detect human touch.
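
As an illustration of what dynamic two-wheel balancing involves, the sketch below shows a simple proportional-derivative (PD) loop that commands wheel torque from the body's tilt. The gains, units, and function names are assumed for the example and do not describe the real uBot5 controller.

```python
# Illustrative PD balance loop for a two-wheeled base, in the spirit of a
# dynamically balancing chassis. Gains and the torque interface are assumptions.

def balance_step(tilt_rad, tilt_rate_rad_s, kp=25.0, kd=3.0):
    """Return a wheel torque command that pushes the body back toward upright."""
    return kp * tilt_rad + kd * tilt_rate_rad_s

# Example: a small forward lean of 0.05 rad while tipping forward at 0.1 rad/s
torque = balance_step(tilt_rad=0.05, tilt_rate_rad_s=0.1)
print(f"Commanded wheel torque: {torque:.2f} N*m (illustrative units)")
```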

CYNTHIA BREAZEAL: HEAD OF THE PROJECT

This project was headed by the Media Lab's Cynthia Breazeal, a well-known robotics expert famous for earlier expressive robots such as Kismet. She is an Associate Professor of Media Arts and Sciences at MIT. She calls her new creation an MDS (mobile, dexterous, social) robot.


Nexi Robot

FEATURES OF NEXI

Apart from its wide range of facial expressions, Nexi has many other features. It has self-balancing wheels, like the Segway transporter, on which it will ultimately ride; in this early stage of development it uses an additional set of supportive wheels to operate as a statically stable platform. It has hands that can manipulate objects, eyes (video cameras), ears (an array of microphones), and a 3-D infrared camera and laser rangefinder, which together support real-time tracking of objects, people, and voices as well as indoor navigation.
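
The sketch below hints at how such sensors could be combined: a bearing to a person from the cameras is blended with a bearing from the microphone array, and the laser rangefinder gates whether the person is close enough to engage. The weights, thresholds, and function names are assumptions for illustration only, not Nexi's documented behavior.

```python
# Hedged sketch of fusing camera, microphone-array, and rangefinder readings
# to track and engage a person. Fusion rule and constants are assumptions.

def fuse_bearings(camera_bearing_deg, mic_bearing_deg, camera_weight=0.7):
    """Weighted blend of visual and audio bearings to the person."""
    return camera_weight * camera_bearing_deg + (1.0 - camera_weight) * mic_bearing_deg

def is_within_interaction_range(laser_range_m, max_range_m=2.5):
    """Use the rangefinder to decide whether the person is close enough to engage."""
    return laser_range_m <= max_range_m

if __name__ == "__main__":
    bearing = fuse_bearings(camera_bearing_deg=12.0, mic_bearing_deg=20.0)
    if is_within_interaction_range(laser_range_m=1.8):
        print(f"Turn head toward {bearing:.1f} degrees and begin interaction")
```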

