Visual Perception and Sensory Fusion in Robotics
This project aims to advance techniques for the fusion of sensory data from heterogeneous and homogeneous sources in order to improve both localization and guided inference. The proposals made are also validated on a robotic platform that makes it possible to assess the improvements.
The project also proposes the development of a system that facilitates experimentation in mobile robot environments, with the following specific objectives:
One of the problems when working with robots in hazardous or remote environments is sensory deprivation. Imagine that a probe is sent to Mars; the probe carries a set of sensors specialized in extracting information from the Martian crust. If one of these sensors breaks, i.e., a sensory element fails, the perception of the object of interest (in our case the planet's crust, or at least part of it) is lost. Sensory fusion can compensate for the failed sensor.
Another problem is limited spatial coverage. A single sensor normally covers a reduced spatial range; by combining data from several sensors we can obtain wider coverage. Limited temporal coverage arises when a sensor needs a certain amount of time to obtain and transmit a measurement. If the information from several sensors can be merged efficiently, both limitations shrink as the number of available sensors grows.
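As a toy illustration of the coverage argument, the following Python sketch (the sensor mounting angles are hypothetical, not taken from the project) shows how the union of two partially overlapping fields of view covers a wider arc than either sensor alone.

# Minimal sketch, assuming two range sensors with partially overlapping
# angular fields of view (hypothetical mounting angles).
def covered_bearings(start_deg, end_deg):
    """Integer bearings (degrees) that a sensor's field of view spans."""
    return set(range(start_deg, end_deg + 1))

left_sensor = covered_bearings(-90, 30)    # covers 121 bearings
right_sensor = covered_bearings(-30, 90)   # covers 121 bearings

fused_coverage = left_sensor | right_sensor
print(len(fused_coverage))                 # 181 bearings: wider than either sensor alone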
Sensory imprecision stems from the very nature of the sensor: the measurements obtained by an individual sensor are limited by its accuracy. The more sensors of the same type we have, the higher the precision we can obtain by merging their data.
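The precision gain can be made concrete: for N independent sensors of the same type with noise standard deviation sigma, the average of their readings has standard deviation roughly sigma / sqrt(N). Below is a minimal Python sketch with synthetic data; the noise level is a hypothetical value chosen only for illustration.

import random
import statistics

random.seed(0)
TRUE_VALUE = 10.0   # the quantity all sensors observe
SIGMA = 0.5         # hypothetical per-sensor noise (standard deviation)

def fused_error(n_sensors, trials=2000):
    """Empirical standard deviation of the error of the averaged reading."""
    errors = []
    for _ in range(trials):
        readings = [random.gauss(TRUE_VALUE, SIGMA) for _ in range(n_sensors)]
        errors.append(statistics.fmean(readings) - TRUE_VALUE)
    return statistics.pstdev(errors)

for n in (1, 4, 16):
    print(n, round(fused_error(n), 3))   # error shrinks roughly as 1/sqrt(n)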
Another problem in the field of robotics is uncertainty. Uncertainty depends on the object being observed rather than on the sensor. It arises when certain features are present (such as occlusion), when the sensor cannot measure all the attributes relevant to perception, or when an observation is ambiguous. A single sensor is unable to reduce the uncertainty in its perception because of its limited view of the object.
The objectives of the project focus on advancing techniques for the fusion of sensory data from both heterogeneous and homogeneous sources to improve localization and guided inference. The proposals made are validated on a robotic platform that makes it possible to assess the improvements in terms of:
Multiple sensors have an inherent redundancy that helps provide information even in the case of a partial failure.
A sensor can observe where others cannot and can take measurements when others cannot.
A measurement from one sensor can be confirmed by means of other sensors.
Merging information reduces the set of possible interpretations of a particular measurement.
Increasing the dimensionality of the measurement space (for example, a property determined with two types of sensors) makes the system less vulnerable to interference (e.g., using sound + vision).
When independent measurements of the same property are merged, the resolution of the result is better than that of a measurement from a single sensor.
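As an illustration of this last point, the sketch below applies the standard inverse-variance weighting to two independent measurements of the same property (the noise figures are hypothetical): the fused variance 1 / (1/var1 + 1/var2) is always smaller than either individual variance, so the merged result has better resolution than any single sensor.

def fuse(z1, var1, z2, var2):
    """Inverse-variance fusion of two independent measurements of one property."""
    w1, w2 = 1.0 / var1, 1.0 / var2
    fused_value = (w1 * z1 + w2 * z2) / (w1 + w2)
    fused_var = 1.0 / (w1 + w2)
    return fused_value, fused_var

# e.g. a precise laser range and a coarse sonar range of the same wall
value, var = fuse(2.02, 0.01, 2.30, 0.09)
print(round(value, 3), round(var, 4))   # 2.048 0.009 -> variance below either sensor's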
This project also involves the development of a platform that facilitates experimentation in mobile robot environments, setting the following objectives:
Industrial Computing and Artificial Intelligence (i3a)
Universidad de Alicante
Dpto. Ciencia de la Computación e Inteligencia Artificial
Grupo Informática Industrial e Inteligencia Artificial
Carretera San Vicente s/n
03690 San Vicente del Raspeig
Tel: (+34) 96 590 3400
Fax: (+34) 96 590 3464