Project: Reconfigurable Ultra-Autonomous Novel Robots
In the general definition of a robot, Sense-Plan-Act, the sense part requires highly efficient and detailed perception. The most important challenge for autonomous and effective robots in automatic environment recognition/sensing is their capacity to see, understand and interact with the three-dimensional real world. It is not a coincidence that the visual cortex is the largest sensory part of the human brain.

The problem domain of autonomous robotics comprises two major tasks with different key aspects. The first covers exploration issues while creating accurate three-dimensional maps. Here, relatively high-resolution, precise 3D data as well as fast and accurate matching algorithms are required to create consistent scenes. Compare this with the performance of a human running through a crowded area or an airport, or driving a car. Recognizing and identifying an object from a video input turns out to be a very difficult problem. The difficulty stems from the fact that a single object can be viewed in an effectively infinite number of ways: by rotating, obscuring, or scaling a single object, one can create multiple representations of it, which makes matching the object against a database of objects very hard. Depending on the organization of the database, the problem grows either linearly or exponentially when numerous objects must be identified simultaneously (the first sketch at the end of this description illustrates the linear case).

The second task covers exploration and navigation in known and unknown terrains. Real-time 3D computation of the scene in the moving direction of the robot is required to ensure obstacle avoidance, whereas precision is secondary. Real-time capability is also mandatory for mapping and surveying tasks if environment dynamics are considered. Today, even though cameras provide more than 30 frames per second, existing CPU-based systems cannot execute the necessary cue-extraction and object-recognition algorithms at a rate of more than 5 fps when several cues must be extracted simultaneously. The main reason for the low rates is that the various cue-extraction and object-recognition schemes are very CPU-intensive; for example, it has been reported that robust approaches just for 3D object detection based on stereo processing in dense environments require the combined performance of more than five high-end CPUs (the second sketch at the end of this description illustrates where this cost comes from). Things become even more difficult when the CPU load of the navigation tasks is also considered.

To obtain a high-performance 3D-perception system, a significant amount of parallel computation is required. Given the availability of high-performance image sensors at low cost, the challenge is now to create a low-cost, high-performance Artificial Visual Cortex.

RUNNER aims to provide a framework on which highly autonomous robots, with much better perception than existing solutions, will be created.
This innovative infrastructure will utilize state-of-the-art reconfigurable devices (FPGAs); Nikitakis, Wyland and Rajan have shown in their papers that these devices allow for much higher performance and more power-efficient processing when implementing data-manipulation methods such as 3D sensing/matching schemes as well as template- and feature-based object-recognition algorithms, while they can be reconfigured in real time.

In order to achieve its aims, RUNNER will:
* Design and implement a family of innovative cue-extraction modules supporting very high rates by taking full advantage of the processing power provided by high-end FPGAs.
* Design and implement real-time reconfigurable object-sensing mechanisms, which will take advantage of the accurate and fast cue-extraction schemes and the processing power provided by high-end FPGAs.
* Design and implement a novel navigation scheme based on the advanced perception provided by the proposed reconfigurable system.
* Design and implement a sophisticated 3D reconstruction system, tailored to the needs of the cue-extraction modules, which will be implemented in FPGAs.
* Develop and implement the middleware for the seamless programming, configuration and management of the RUNNER infrastructure.
* Prototype and validate RUNNER's complete infrastructure and demonstrate its efficiency and wide applicability in two real-world trials.

The ultimate objective of RUNNER is to deliver a reconfigurable prototype with extensive cross-domain applicability. In RUNNER, we believe that in a few years there will be millions of robots in various application areas, all navigated autonomously on the basis of 3D video capture; such robots can be built efficiently and inexpensively on top of the innovative, highly flexible infrastructure provided here.

To achieve the above, the consortium mobilizes a significant European cross-sectoral force from five different countries that covers the whole chain of robotics, vision and embedded systems.
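As a rough illustration of the matching cost discussed above, the following C++ sketch (our own illustrative example, not part of the RUNNER design; the descriptor length and database size are arbitrary assumptions) performs brute-force nearest-neighbour matching of one observed feature descriptor against a database of stored object descriptors. The single pass over the database is what makes recognition cost grow linearly with the number of stored objects; matching several observed objects simultaneously multiplies such passes, which is where the combinatorial growth comes from.

```cpp
// Illustrative only: brute-force nearest-neighbour matching of one feature
// descriptor against a database of object descriptors. Descriptor length and
// database size below are arbitrary assumptions, not RUNNER parameters.
#include <array>
#include <cstddef>
#include <iostream>
#include <limits>
#include <vector>

constexpr std::size_t kDescriptorSize = 64;   // assumed SURF-like descriptor length
using Descriptor = std::array<float, kDescriptorSize>;

// Squared Euclidean distance between two descriptors.
float SquaredDistance(const Descriptor& a, const Descriptor& b) {
    float sum = 0.0f;
    for (std::size_t i = 0; i < kDescriptorSize; ++i) {
        const float d = a[i] - b[i];
        sum += d * d;
    }
    return sum;
}

// One full pass over the database per query descriptor:
// cost is O(database.size() * kDescriptorSize), i.e. linear in the database.
std::size_t MatchAgainstDatabase(const Descriptor& query,
                                 const std::vector<Descriptor>& database) {
    std::size_t best = 0;
    float bestDist = std::numeric_limits<float>::max();
    for (std::size_t i = 0; i < database.size(); ++i) {
        const float dist = SquaredDistance(query, database[i]);
        if (dist < bestDist) {
            bestDist = dist;
            best = i;
        }
    }
    return best;
}

int main() {
    std::vector<Descriptor> database(1000);   // toy database of 1000 stored descriptors
    Descriptor query{};                       // toy query descriptor (all zeros)
    std::cout << "best match: " << MatchAgainstDatabase(query, database) << "\n";
}
```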
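Similarly, the second sketch below (again purely illustrative; image size, disparity range and window radius are assumptions of ours, not RUNNER parameters) shows dense stereo block matching over a rectified image pair, the kind of computation behind the real-time 3D scene reconstruction mentioned above. The four nested loops (rows x columns x disparity candidates x correlation window) explain why dense stereo is so CPU-intensive on general-purpose processors and why it maps well onto the fine-grained parallelism of an FPGA.

```cpp
// Illustrative only: dense stereo block matching (sum of absolute differences)
// over a rectified grayscale image pair. Image size, disparity range and
// window radius are assumptions of ours, not RUNNER parameters.
#include <climits>
#include <cstdint>
#include <cstdlib>
#include <vector>

struct Image {
    int width, height;
    std::vector<std::uint8_t> pixels;   // row-major grayscale
    std::uint8_t at(int x, int y) const { return pixels[y * width + x]; }
};

// Matching cost of disparity d at pixel (x, y): SAD over a (2r+1)x(2r+1) window.
int WindowSAD(const Image& left, const Image& right, int x, int y, int d, int r) {
    int sad = 0;
    for (int dy = -r; dy <= r; ++dy)
        for (int dx = -r; dx <= r; ++dx)
            sad += std::abs(left.at(x + dx, y + dy) - right.at(x + dx - d, y + dy));
    return sad;
}

// For every pixel, keep the disparity with the lowest matching cost.
// Per frame this costs roughly width * height * maxDisparity * window_area
// operations; depth then follows from Z = f * B / d (focal length f, baseline B).
std::vector<int> DenseDisparity(const Image& left, const Image& right,
                                int maxDisparity, int r) {
    std::vector<int> disparity(left.width * left.height, 0);
    for (int y = r; y < left.height - r; ++y) {
        for (int x = maxDisparity + r; x < left.width - r; ++x) {
            int bestCost = INT_MAX, bestD = 0;
            for (int d = 0; d < maxDisparity; ++d) {
                const int cost = WindowSAD(left, right, x, y, d, r);
                if (cost < bestCost) { bestCost = cost; bestD = d; }
            }
            disparity[y * left.width + x] = bestD;
        }
    }
    return disparity;
}

int main() {
    Image left{64, 48, std::vector<std::uint8_t>(64 * 48, 0)};
    Image right = left;   // identical toy images -> disparity 0 everywhere
    const std::vector<int> d = DenseDisparity(left, right, /*maxDisparity=*/16, /*r=*/2);
    return d.empty() ? 1 : 0;
}
```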
Acronym | RUNNER (Reference Number: 5527) |
Duration | 01/12/2010 - 01/05/2014 |
Project Topic | RUNNER aims to provide an innovative infrastructure to be exploited for the creation of highly autonomous robots. It will utilize high-end reconfigurable devices to allow for extremely high-performance and power-efficient processing when implementing 3D sensing/matching schemes. |
Project Results (after finalisation) |
The RUNNER project resulted in significant contributions in the fields of embedded systems and robotics, in particular in applications in computer vision, object detection, obstacle avoidance, and FPGA prototyping and emulation. More specifically, the project results included:
1. The design and implementation of innovative 3D reconstruction and object-detection algorithms and architectures able to deliver accurate perception to the RUNNER vision system
2. Design and development of a state-of-the-art Field Programmable Gate Array (FPGA) board
3. Development of initial prototypes of the architectures on the FPGA board
4. Design and development of software tools for the synchronization of the different system modules and communication with external sensors and devices
5. Evaluation of the 3D reconstruction and object-detection sub-systems in terms of processing speed, accuracy and hardware overheads
6. Validation of the 3D reconstruction and object-detection sub-systems using real-time data acquisition
7. Validation of the 3D reconstruction and object-detection sub-systems on the RUNNER FPGA board
8. Validation of the 3D reconstruction and object-detection sub-systems in robotics environments |
Network | Eurostars |
Call | Eurostars Cut-Off 4 |
Project partners (9)
Name | Role | Country |
---|---|---|
Aldebaran Robotics | Partner | France |
Algosystems S.A. | Coordinator | Greece |
Ingenieria de Sistemas Intensivos en Software, S.L | Partner | Spain |
Mälardalen University | Partner | Sweden |
MEEQ AB (publ) | Observer | Sweden |
SignalGeneriX | Partner | Cyprus |
Telecommunication Systems Institute / Technical University of Crete | Partner | Greece |
Universidad Politecnica de Madrid | Partner | Spain |
University of Cyprus | Partner | Cyprus |