Robots can be designed and programmed to gather information beyond what our five senses can tell us. For instance, a robot sensor might “see” in the dark, detect tiny amounts of invisible radiation, or measure movement that is too small or too fast for the human eye to follow.
Specialized software based on color segmentation, thermal image processing, dynamic object classification, and similar techniques also goes beyond processing two-dimensional images. Combining different types of sensor data, such as video images and laser range data, makes it possible to build complete, three-dimensional models of the robot’s environment, which in turn enables vehicle autonomy systems or tools that assist vehicle operators.
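As an illustration of this kind of sensor fusion, the sketch below back-projects laser range measurements into a three-dimensional point cloud colored from an aligned camera image. It is a minimal example assuming a pinhole camera model with hypothetical calibration constants, not any specific vehicle system:

```python
import numpy as np

# Hypothetical pinhole-camera intrinsics (focal lengths and principal point).
FX, FY = 525.0, 525.0
CX, CY = 320.0, 240.0

def fuse_range_and_color(depth, image):
    """Back-project a laser/depth map into 3-D points, colored from the image.

    depth : (H, W) array of range measurements in meters (0 = no laser return)
    image : (H, W, 3) RGB array aligned with the depth map
    Returns an (N, 6) array of [x, y, z, r, g, b] points.
    """
    h, w = depth.shape
    us, vs = np.meshgrid(np.arange(w), np.arange(h))
    valid = depth > 0                      # keep only pixels with a laser return
    z = depth[valid]
    x = (us[valid] - CX) * z / FX          # pinhole back-projection
    y = (vs[valid] - CY) * z / FY
    colors = image[valid].astype(float)
    return np.column_stack([x, y, z, colors])

# Toy usage: a flat wall 2 m away, uniformly gray.
depth = np.full((480, 640), 2.0)
image = np.full((480, 640, 3), 128, dtype=np.uint8)
cloud = fuse_range_and_color(depth, image)
print(cloud.shape)  # (307200, 6): one colored 3-D point per pixel
```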
However, most of today’s robots have limited sensing capabilities. Sensors can, for example, give the robot controller information about its surroundings, letting it know the position of the arm or the state of the world around it, but these capabilities are still limited compared to the senses and abilities of even the simplest living things.
One of the largest gaps between the reality a robot can perceive and the reality a human can perceive is perception itself. Today, biological systems vastly outperform conventional digital ones, and overcoming this limitation is an emerging research trend.
The EU-funded research project Emorph, led by the Istituto Italiano di Tecnologia in Genoa, is an example of the effort being made to advance sensor technology. Specific sensors have been developed to give robots a better ability to see and, consequently, to improve their ability to interact with their environment.
Sensors developed by Emorph can detect variations in light at a much higher frequency than the sensors normally used in robotics. This is an important achievement, since light measurements are fundamental to tasks such as catching objects and avoiding obstacles while moving around. In addition, the algorithms the robots need to manage all the relevant data for such tasks have been significantly improved by the team of researchers who participated in the project.
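The principle behind such event-driven sensors is that each pixel reports the instant its brightness changes by more than a threshold, instead of waiting for a full frame at a fixed rate. The sketch below is a minimal simulation of that idea; the threshold value, the log-intensity encoding, and the toy frames are illustrative assumptions, not the project’s actual design:

```python
import numpy as np

THRESHOLD = 0.15  # hypothetical log-intensity change required to fire an event

def frame_to_events(prev_log, frame, t):
    """Emit (t, x, y, polarity) events where log brightness changed enough.

    prev_log : (H, W) per-pixel log-intensity state of the sensor
    frame    : (H, W) new intensity frame, values in [0, 1]
    Returns the list of events and the updated per-pixel state.
    """
    log_frame = np.log(frame + 1e-6)               # avoid log(0)
    diff = log_frame - prev_log
    ys, xs = np.nonzero(np.abs(diff) >= THRESHOLD)
    polarity = np.sign(diff[ys, xs]).astype(int)   # +1 brighter, -1 darker
    events = [(t, int(x), int(y), int(p)) for x, y, p in zip(xs, ys, polarity)]
    prev_log[ys, xs] = log_frame[ys, xs]           # only firing pixels reset state
    return events, prev_log

# Toy usage: a dim scene in which one pixel suddenly brightens.
state = np.log(np.full((4, 4), 0.2))
frame = np.full((4, 4), 0.2)
frame[1, 2] = 0.8
events, state = frame_to_events(state, frame, t=0.001)
print(events)  # [(0.001, 2, 1, 1)]: one "brighter" event at pixel x=2, y=1
```

Because only changing pixels produce output, the data rate stays low in a static scene and the sensor can respond to fast motion far more often than a fixed frame rate would allow.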
Future research on sensor vision should achieve even higher resolution and measure the local gradient: the robot should understand how the light at a single pixel relates to that of the surrounding pixels. This would allow the system to determine whether a specific area of the image contains an edge and whether that edge is moving.
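As a rough sketch of what measuring the local gradient could mean (using plain finite differences, an illustrative choice rather than a method attributed to the project), each pixel is compared with its neighbors: a large gradient magnitude marks an edge, and a change in that magnitude between two moments marks a moving edge:

```python
import numpy as np

def local_gradient(image):
    """Finite-difference gradient of each pixel relative to its neighbors."""
    gy, gx = np.gradient(image.astype(float))
    return np.hypot(gx, gy)               # gradient magnitude per pixel

def moving_edges(frame_a, frame_b, edge_thresh=0.4, motion_thresh=0.25):
    """Flag pixels that lie on an edge AND whose edge strength has changed."""
    mag_a = local_gradient(frame_a)
    mag_b = local_gradient(frame_b)
    is_edge = mag_b > edge_thresh
    has_moved = np.abs(mag_b - mag_a) > motion_thresh
    return is_edge & has_moved

# Toy usage: a vertical bright bar that shifts one pixel to the right.
frame_a = np.zeros((5, 8))
frame_a[:, 3] = 1.0
frame_b = np.zeros((5, 8))
frame_b[:, 4] = 1.0
print(moving_edges(frame_a, frame_b).astype(int))
# Columns 3 and 5 (the shifted bar's two edges) are flagged as moving.
```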