29.9.09
iCub the robot helps scientists understand humans
10.9.09
Wrestling with robot snakes
But they are quite a handful to control, as Johann Borenstein explained in his keynote speech at a recent meeting on rescue robotics. He built the snake robot in the video below, called OmniTread, and as you can see, it takes more than one operator to get it to climb through a fairly simple obstacle. I asked him a few questions about the challenge of operating snake robots.
NS: How many people are needed to control OmniTread?
JB: We currently need three operators. Each operator controls two joints of our six-joint OmniTread. Typically all joints need to be controlled at all times. Special cases [where only one controller is needed], such as driving straight along a long stretch of flat terrain, are rare.
NS: What do the operators have to do?
JB: Each joint has two degrees of freedom (i.e. controllable angles). Each operator controls one joint with one joystick on a commercial two-joystick game controller.
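The arithmetic of the interview is easy to sketch in code: six joints, two controllable angles per joint, one two-axis stick per joint, two sticks per game controller, so three operators. The sketch below is purely illustrative, with assumed function names and an assumed ±60° joint range; it is not Borenstein's actual control software.

```python
# Hypothetical sketch of mapping three two-stick game controllers to the
# six two-DOF joints of an OmniTread-style snake robot. The angle range
# and function names are assumptions, not from the article.

def sticks_to_joint_angles(stick_inputs, max_angle_deg=60.0):
    """Convert normalised stick deflections (-1..1) into joint angles.

    stick_inputs: list of 6 (x, y) pairs -- one stick per joint,
    two sticks per operator, three operators in total.
    Returns a list of 6 (pitch, yaw) angle pairs in degrees.
    """
    angles = []
    for x, y in stick_inputs:
        # Clamp each axis, then scale to the joint's angular range.
        x = max(-1.0, min(1.0, x))
        y = max(-1.0, min(1.0, y))
        angles.append((y * max_angle_deg, x * max_angle_deg))
    return angles

# One operator pushes a stick fully forward; all other sticks are idle.
sticks = [(0.0, 1.0)] + [(0.0, 0.0)] * 5
print(sticks_to_joint_angles(sticks)[0])  # (60.0, 0.0)
```

The point of the sketch is the workload, not the maths: with all six sticks needing attention at once, no single operator can manage the whole robot, which is exactly the problem the Joysnake device described below was built to solve.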
NS: Do all snake robots have this problem?
JB: Any snake-type robot that has many controllable joints will have that problem. It requires sophisticated software to address the very difficult control problem. I did develop a hardware-based control device that requires only one operator to control the OmniTread. A brief description of this unique device, called "Joysnake", is downloadable (pdf).
Source: www.newscientist.com
Coca-Cola robots invade Japan
25.8.09
Micro-robots for micro-manufacturing
An EU-funded consortium has built tiny robots capable of handling objects less than 100nm in size, as part of an effort to develop tools for manufacturing nanoscale devices.
The group, called NanoHand, has built two micro-robotic demonstrators that can automatically pick up and install carbon nanotubes. Thousands of times thinner than a human hair, carbon nanotubes are rolled-up sheets of carbon just a few tens of nanometres in diameter, and they could become an essential part of the nanotechnologist’s construction kit.
"The handling and characterisation of these objects has become more and more important in materials science and nanotechnology," said nano-researcher Volkmar Eichhorn of the University of Oldenburg and its associated institute, OFFIS. "They have a huge application potential in various products."
The trouble is that nanotubes are too thin to see with a normal optical microscope. In addition, at this scale the intermolecular forces between objects are stronger than gravity, so once a nanotube has been picked up it will stick to the jaws of the gripper and cannot easily be dropped into position. The NanoHand team has had to develop novel pick-and-place techniques to get around this problem.
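A back-of-the-envelope calculation shows why adhesion dominates at this scale: a nanotube's weight shrinks with its volume, while surface (van der Waals) forces shrink only with its linear size. The numbers below are illustrative order-of-magnitude assumptions, not figures from the NanoHand project.

```python
import math

# Illustrative estimate: weight vs. van der Waals adhesion for a
# carbon nanotube. All parameter values are assumed typical figures.

density = 1300.0     # kg/m^3, typical nanotube density (assumed)
radius = 10e-9       # m, "a few tens of nanometres in diameter"
length = 1e-6        # m, assumed tube length
g = 9.81             # m/s^2

# Weight: cylinder volume times density times g.
weight = density * math.pi * radius**2 * length * g

# Sphere-plane van der Waals estimate F = A*R / (6*D^2), with a
# typical Hamaker constant A and ~0.4 nm contact separation D.
hamaker = 1e-19      # J (assumed order of magnitude)
separation = 0.4e-9  # m
adhesion = hamaker * radius / (6 * separation**2)

print(f"weight   ~ {weight:.1e} N")
print(f"adhesion ~ {adhesion:.1e} N")
print(f"ratio    ~ {adhesion / weight:.0e}")
```

With these assumed values the adhesion force comes out roughly a hundred million times the tube's weight, which is why releasing a gripped nanotube, rather than picking it up, is the hard part.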
The robots, about two centimetres in size, work inside a scanning electron microscope, allowing an observer to follow their activities. Each has a microgripper that can make precise and delicate movements, using an electrothermal principle to open and close its tweezer-like jaws.
"The jaws open to about two micrometres and can pick up objects less than 100 nanometres in size. [It is] really able to grip micro or even nano objects," Eichhorn said. "We have handled objects down to tens of nanometres.
"World-wide, we are the first project that has really realised the automated microgripper-based pick-and-place experiments," he added. "The new thing is the high accuracy and the small scale of the objects – in the range of tens or hundreds of nanometres – and the excellent control and software architecture being built around this whole set-up facilitating a high degree of automation."
Other groups are working on methods of handling nanotubes, especially in the USA, Japan and China, but the NanoHand system of microrobots and microgrippers is proving effective and reliable, claimed Eichhorn. "It’s very promising for nanotechnology applications," he said.
The first product built using NanoHand technology is already on the market: a scanning electron microscope with a carbon nanotube added to its tip to give it much improved resolution.
Now the project's industrial partners, who include STMicroelectronics, are looking at other potential applications, such as using carbon nanotubes for the interconnects within silicon chips. Because of their high electrical conductivity, carbon nanotubes dissipate less heat than copper and allow circuits to be packed more densely.
NanoHand received funding from the ICT strand of the EU’s Sixth Framework Programme for research. The project participants include British, Czech, Danish, German, Italian and Swiss organisations.
Source: http://kn.theiet.org
Is Albert Einstein robot too human? Everything’s relative
Albert Einstein has been re-created as a robot, right down to the unruly hair and luxuriant moustache, but the electronic version is no genius by human standards.
While it would be of no use in proving the physicist’s unified field theory, the robot Einstein is extraordinary in that it can recognise and respond to human emotions.
This head-and-shoulders creation could shape mankind’s interaction with robots and determine just how human-like future robots should be.
Scientists also hope that it will ensure the development of empathetic robots, thus avoiding conflicts between man and machine.
The Einstein robot was designed by David Hanson, president of a Texas robotics company. The face was moulded from a flesh-like material called Frubber, which Mr Hanson engineered right down to its microscopic skin pores. It is manipulated by 31 motors around its mouth and eyes. The head was originally placed on top of a robot body in collaboration with the Kaist Hubo robotics group of Korea, forming a strange hybrid of physicist and angular white humanoid.
Einstein can recognise hundreds of facial expressions including sadness, anger, fear, happiness and confusion, as well as cues suggesting age and gender.
Scientists at the California Institute for Telecommunications and Information Technology at UC San Diego have introduced software to allow it to interact “naturally” with humans. The robot, which cost more than $75,000 (£50,000) to create, might turn its head, raise its eyebrows and smile. Javier Movellan, a research scientist at the university’s Machine Perception Laboratory, said that the effect could be startling.
It is hoped that the robot may help to teach children with autism better communication skills, and to improve intelligent tutoring systems, in which robots or avatars teach students. A key area of research will be the question of how closely robots should resemble humans. It appears that the more human-like the robot, the more uncomfortable it makes people feel.
This dilemma is crucial to robotics, which seeks to allow humans to interact with robots in a natural way. Mr Hanson said: “Some scientists believe strongly that very human-like robots are so inherently creepy that people can never get over it and interact with them normally.”
The goal is to develop a creative, intelligent machine that rivals or exceeds a human level of intelligence — and does so without compromising humanity.
“If things go really well, we’re maybe ten years away from that happening,” Mr Hanson said. “But it’s very important that we develop empathic machines, machines that have compassion, machines that understand what you’re feeling. If these robots do become as intelligent as human beings, we want this infrastructure of compassion and empathy to be in place so the machines are prepared to use their intellectual powers for the good of civilisation. In a way, we’re planting the seeds for the survival of humanity.”
Source: http://technology.timesonline.co.uk
Crowdsourcing the Complexities of Electronic Design Automation
Electronic design automation (EDA) is full of large, intricate problems. Figuring out the best way to arrange transistors on a chip, for example, becomes exponentially more complex as the number of transistors increases. Computer scientists have made great strides in developing algorithms that can solve many of these problems, but a team of researchers at the University of Michigan, in Ann Arbor, believes that the industry could benefit from a different resource: human intuition.
“These kinds of problems are difficult for computers to solve. We started by thinking, ‘How can humans help electronic design automation?’ ” says Valeria Bertacco, an associate professor in computer science and engineering. She and Andrew DeOrio, a doctoral student, have developed an online game that challenges players to take on a type of problem common in design automation. They presented their idea of human-assisted problem solving today at the Design Automation Conference, in San Francisco.
In large, complex EDA problems, there are initially millions of possible paths to a solution. It’s similar to a maze: At the beginning, you have to pick an initial path to explore and see where it leads. The problem for designers is that the number of solution paths increases exponentially with the number of variables. Even the best algorithms can get tripped up if they start down a search path with no possible solution.
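The exponential blow-up described above is easy to make concrete: if each variable in a placement problem offers even two choices, the number of complete assignments doubles with every variable added. The sketch below is illustrative only; it is not the Michigan group's software.

```python
# Illustrative sketch: exponential growth of the solution space in a
# search problem. With b options per choice and n choices, the search
# tree has b**n complete paths in the worst case.

def solution_paths(num_choices, options_per_choice=2):
    """Worst-case number of complete paths through the search tree."""
    return options_per_choice ** num_choices

for n in (10, 20, 40):
    print(f"{n} variables -> {solution_paths(n)} paths")
# 10 variables -> 1024 paths
# 20 variables -> 1048576 paths
# 40 variables -> 1099511627776 paths
```

Doubling the number of variables from 20 to 40 multiplies the path count by about a million, which is why a single bad early branch can doom an automated search, and why the researchers hope human intuition can steer solvers away from dead ends.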
Source: spectrum.ieee.org