
Visual perception and sensory fusion in Robotics


Summary

This project aims to advance techniques for fusing sensory data from both heterogeneous and homogeneous sources, in order to improve localization and guided inference. The proposals are also validated on a robotic platform that makes it possible to assess the improvements.

This project also proposes the development of a system that facilitates experimentation in mobile-robot environments, with the following specific objectives:

  1. Integration of the control architecture with different perception systems (sonar and vision).
  2. Development of an agent environment that allows easy evaluation of, and experimentation with, various intelligent techniques for carrying out sensory fusion and for tuning the different components of the architecture, such as behaviour rules, learning of multi-objective meta-controllers for behaviour fusion, and application of problem-solving algorithms for performance evaluation functions.

Objectives and their relevance

One of the problems when working with robots in hazardous or remote environments is sensory deprivation. Imagine that a probe is sent to Mars with a set of sensors specialized in extracting information from the Martian crust. If one of these sensors breaks, the loss of that sensory element implies the loss of perception of the desired object (in our case the planet's crust, or at least part of it). Sensory fusion can compensate for the loss of the damaged sensor.

Another problem is limited spatial coverage. A single sensor normally covers a reduced spatial range; by combining data from several sensors we can obtain wider coverage. Limited temporal coverage arises when a sensor needs a certain time to obtain and transmit a measurement. Clearly, if the information from several sensors can be merged efficiently, both limitations are reduced as the number of available sensors grows.

Sensory imprecision is inherent to the nature of the sensor itself: measurements obtained by an individual sensor are limited by the sensor's accuracy. The more sensors of the same type we merge, the higher the precision we can obtain from the fused data.
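The gain in precision from merging same-type sensors can be made concrete with standard inverse-variance weighting, in which the fused variance shrinks as sensors are added. The following is a minimal sketch (the function name and the example figures are illustrative, not from the project):

```python
import numpy as np

def fuse_inverse_variance(measurements, variances):
    """Fuse independent measurements of the same quantity.

    Each measurement is weighted by the inverse of its variance; the
    variance of the fused estimate is 1 / sum(1/variance_i), which
    decreases as more sensors are merged.
    """
    measurements = np.asarray(measurements, dtype=float)
    variances = np.asarray(variances, dtype=float)
    weights = 1.0 / variances
    fused_value = np.sum(weights * measurements) / np.sum(weights)
    fused_variance = 1.0 / np.sum(weights)
    return fused_value, fused_variance

# Three range sensors of the same type, each with variance 0.04 m^2:
value, var = fuse_inverse_variance([2.1, 1.9, 2.0], [0.04, 0.04, 0.04])
# With equal variances the fused variance is 0.04 / 3: precision
# improves linearly with the number of merged sensors.
```

With sensors of unequal quality the same formula automatically trusts the more precise sensor more, which is why this weighting is the usual starting point for fusing homogeneous measurements.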

Another problem in the field of robotics is uncertainty. Uncertainty depends on the object being observed rather than on the sensor. It occurs when certain features are missing (for example, under occlusion), when the sensor cannot measure all the attributes relevant to perception, or when the observation is ambiguous. A single sensor cannot reduce the uncertainty in its perception because of its limited view of the object.

The project's objectives focus on advancing techniques for fusing sensory data from both heterogeneous and homogeneous sources, so as to improve localization and guided inference. The proposals are also validated on a robotic platform that makes it possible to assess the improvements in terms of:

  • Robustness and reliability.

    Multiple sensors have an inherent redundancy that helps provide information even in the case of a partial failure.

  • Increasing the spatial and temporal coverage.

    One sensor can look where others cannot, and can act when others cannot.

  • Increased confidence.

    A measurement from one sensor can be confirmed by the measurements of other sensors.

  • Reduced ambiguity and uncertainty.

    Merging information reduces the set of possible interpretations of a given measurement.

  • Robustness to interference.

    By increasing the dimensionality of the measurement space (for example, measuring a given quality with two different types of sensors), the system becomes less vulnerable to interference (e.g., using sound together with vision).

  • Improved resolution.

    When several independent measurements of the same property are merged, the resolution of the result is better than that of a single sensor's measurement.
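The robustness and reliability benefit above can be illustrated with a simple redundancy scheme: median fusion over redundant same-type sensors, which tolerates a failed sensor reporting an out-of-range value. This is a minimal sketch with illustrative names and thresholds, not part of the project's architecture:

```python
import statistics

def robust_range(readings, max_range=5.0):
    """Median-fuse redundant range readings (e.g. from sonars).

    Readings outside the sensor's physical range are treated as
    failures and discarded, so a partial failure does not corrupt
    the fused estimate.
    """
    valid = [r for r in readings if 0.0 <= r <= max_range]
    if not valid:
        raise ValueError("all sensors failed")
    return statistics.median(valid)

# One of three redundant sonars has failed and reports -1.0;
# the fused estimate still tracks the true distance (~2.0 m).
estimate = robust_range([1.98, -1.0, 2.02])
```

The median (rather than the mean) also bounds the influence of a sensor that fails by returning a plausible but wrong in-range value, as long as a majority of sensors remain healthy.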

This project also proposes the development of a platform that facilitates experimentation in mobile-robot environments, setting the following objectives:

  1. Integration of the control architecture with different perception systems (sonar and vision). This serves as a basis for task planning, through the following sub-objectives:
    1. Integration of vision techniques into the architecture as an additional deliberative agent. This agent will coordinate with the task-and-objective scheduler so that, depending on the context of the task, the planner selects objectives, places, or positions based on visual information. Reactive controllers can then use this information to perform tasks such as tracking objects or people.
    2. Spatio-temporal segmentation using statistical techniques: efficient extraction of colour and motion information; spatio-temporal segmentation of regions with consistent motion and colour; motion detection and clustering; colour filtering and grouping; strategies for fusing visual cues; statistical modelling; robustness.
    3. Visual feedback and control: efficient tracking using the information extracted in the previous tasks; prediction and update; extraction of prior knowledge about the planned trajectories and the environment in which tracking takes place; inference of the new position and orientation of the camera.
  2. Development of an agent environment that allows easy evaluation of, and experimentation with, various intelligent techniques for carrying out sensory fusion and for tuning the different components of the architecture, such as behaviour rules, learning of multi-objective meta-controllers for behaviour fusion, and application of problem-solving algorithms for performance evaluation functions.
    1. Deployment of sensory-fusion techniques based on stochastic methods. A key issue in behaviour-based control is how to coordinate efficiently the conflicts and competition between different types of behaviours to achieve good performance. Learning techniques that use stochastic models based on fusion rules may improve the robot's performance. One advantage of this approach is that the robot can be trained with only a few behaviours; if new behaviours are added later, only the base of fusion rules needs to be retrained, preserving the previous behaviours.
    2. Design, implementation, and validation of multi-objective algorithms for robust path planning, within the integration of the vision and control layers. In particular, we propose a multi-objective evolutionary approach to search for the best path according to certain criteria. In addition, path robustness will be introduced as an extra criterion based on cumulative Gaussian noise.
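The last sub-objective describes a multi-objective search over paths with a cumulative-Gaussian-noise robustness criterion. The fitness and Pareto-dominance pieces of such a search might look like the following sketch (all names are illustrative and the evolutionary loop itself is omitted; the project does not specify these details):

```python
import math
import random

def path_length(path):
    """Total Euclidean length of a 2-D waypoint path."""
    return sum(math.dist(a, b) for a, b in zip(path, path[1:]))

def robustness_cost(path, sigma=0.1, trials=50, rng=None):
    """Mean extra length when waypoints drift under cumulative
    Gaussian noise -- one possible reading of the robustness criterion."""
    rng = rng or random.Random(0)
    base = path_length(path)
    total = 0.0
    for _ in range(trials):
        drift_x = drift_y = 0.0
        noisy = []
        for (x, y) in path:
            # Noise accumulates along the path rather than resetting.
            drift_x += rng.gauss(0.0, sigma)
            drift_y += rng.gauss(0.0, sigma)
            noisy.append((x + drift_x, y + drift_y))
        total += path_length(noisy) - base
    return total / trials

def dominates(a, b):
    """Pareto dominance: a is no worse in every objective, better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(paths):
    """Non-dominated paths under (length, robustness-cost)."""
    scored = [(p, (path_length(p), robustness_cost(p))) for p in paths]
    return [p for p, s in scored
            if not any(dominates(t, s) for _, t in scored if t != s)]
```

An evolutionary algorithm would mutate and recombine waypoint lists and keep the non-dominated set each generation; the point of the sketch is that robustness enters as a second objective rather than being folded into a single weighted score.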

Industrial Computing and Artificial Intelligence (i3a)


Universidad de Alicante
Dpto. Ciencia de la Computación e Inteligencia Artificial
Grupo Informática Industrial e Inteligencia Artificial

Carretera San Vicente s/n
03690 San Vicente del Raspeig
Alicante (Spain)

Tel: (+34) 96 590 3400

Fax: (+34) 96 590 3464
