Projects

Imagine that deploying mobile robots in your production scenario worked this way: the mobile manipulation system you bought is equipped with state-of-the-art techniques for navigation, perception and manipulation. You can set up your application in just a few simple steps, and the built-in intelligence does the rest.

Even if the first version of your robot application does not perform optimally, its performance increases after each shift, without manual tuning or repeated visits from your system integrator. This scenario is not a dream; it is the envisioned goal of the RobDREAM action, in which leading scientists and technology providers will help robots improve their daily work overnight.
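As a minimal sketch of this "improvement between shifts" idea, the snippet below re-tunes the parameters of a hypothetical navigation stack overnight by random search against a stand-in replay score. The parameter names (`max_velocity`, `obstacle_margin`) and the scoring function are invented for illustration; RobDREAM's actual optimization machinery is not shown here.

```python
import random

# Hypothetical tunable parameters of a navigation stack and their ranges.
PARAM_RANGES = {
    "max_velocity":    (0.2, 1.5),   # m/s
    "obstacle_margin": (0.05, 0.5),  # m
}

def replay_score(params):
    """Stand-in for replaying the day's logged episodes with new parameters.
    A real system would re-evaluate logged runs; here a toy objective."""
    return -(params["max_velocity"] - 1.0) ** 2 \
           - (params["obstacle_margin"] - 0.2) ** 2

def dream_overnight(current, iterations=500):
    """Random-search tuning between shifts: perturb the current parameters
    and keep whatever scores best on the logged data."""
    best = dict(current)
    best_score = replay_score(best)
    for _ in range(iterations):
        candidate = {
            k: min(max(best[k] + random.gauss(0.0, 0.1 * (hi - lo)), lo), hi)
            for k, (lo, hi) in PARAM_RANGES.items()
        }
        score = replay_score(candidate)
        if score > best_score:
            best, best_score = candidate, score
    return best

tuned = dream_overnight({"max_velocity": 0.5, "obstacle_margin": 0.4})
print(tuned)
```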

Clutter in an open world is a challenge for many aspects of robotic systems, especially for autonomous robots deployed in unstructured domestic settings; it affects navigation, manipulation, vision, human-robot interaction and planning.
SQUIRREL addresses these issues by actively controlling clutter and incrementally learning to extend the robot's capabilities while doing so. We term this the B3 (bit by bit) approach, as the robot tackles clutter one bit at a time and also extends its knowledge continuously as new bits of information become available. SQUIRREL is inspired by a user-driven scenario that exhibits all the rich complexity required to convincingly drive research, yet allows tractable solutions with high potential for exploitation. We propose a toy cleaning scenario, where a robot learns to collect toys scattered in loose clumps or tangled heaps on the floor in a child's room, and to stow them in designated target locations.
We will advance science in several respects: incrementally learning grasp affordances for manipulation with a dexterous hand; segmenting and learning objects and object category models from cluttered scenes; localisation and navigation in crowded and changing scenes based on incrementally built 3D environment models; iterative task planning in an open world; and engaging with multiple users in a dynamic collaborative task.
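As an illustration of the bit-by-bit idea (not SQUIRREL's actual method), the sketch below keeps a Beta posterior over grasp success for each object category and grasp type and updates it after every attempt, so the robot's grasp choices improve as new bits of experience arrive. All names are hypothetical.

```python
from collections import defaultdict

class GraspAffordanceModel:
    """Illustrative bit-by-bit learner: a Beta(alpha, beta) posterior over
    grasp success for each (object category, grasp type) pair, updated online."""

    def __init__(self):
        # Start from a uniform prior Beta(1, 1) for every pair.
        self.alpha = defaultdict(lambda: 1.0)
        self.beta = defaultdict(lambda: 1.0)

    def update(self, category, grasp, success):
        """Incorporate one more bit of experience from a grasp attempt."""
        key = (category, grasp)
        if success:
            self.alpha[key] += 1.0
        else:
            self.beta[key] += 1.0

    def success_probability(self, category, grasp):
        key = (category, grasp)
        return self.alpha[key] / (self.alpha[key] + self.beta[key])

    def best_grasp(self, category, grasps):
        """Pick the grasp with the highest estimated success for this category."""
        return max(grasps, key=lambda g: self.success_probability(category, g))

model = GraspAffordanceModel()
model.update("toy_block", "top", success=True)
model.update("toy_block", "side", success=False)
print(model.best_grasp("toy_block", ["top", "side"]))  # -> "top"
```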

This project aims to develop a system that enables a robot to learn tasks by observing a human user interacting with objects. With current techniques, a robot learns a task such as moving a bottle from the shelf to the table by kinesthetic training, manual scripting, or lengthy programming procedures. This project aims for a system that makes the teaching process easy for non-experts as well. It emphasizes the idea of learning through observation in a largely unconstrained environment. The robot observes the actions of the human operator by tracking their hands and by automatically detecting and tracking the manipulated object. From this data, the robot learns a model that allows it to repeat the task in an unsupervised way. To this end, new methods from computer vision as well as robot learning and manipulation will be developed, and the performance of existing methods will be improved. The project is funded by the Baden-Württemberg Stiftung gGmbH.
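A minimal sketch of one possible learning step, assuming the vision pipeline already yields tracked 3D object positions per demonstration: the time-normalised trajectories are averaged into a single reference path the robot could then follow. This averaging scheme is illustrative, not the project's actual learning method.

```python
import numpy as np

def learn_from_demonstrations(demos, n_waypoints=10):
    """Average several observed object trajectories into one reference path.
    Each demo is a (T_i, 3) array of tracked object positions; demos are
    time-normalised by resampling to a fixed number of waypoints."""
    resampled = []
    for demo in demos:
        t = np.linspace(0.0, 1.0, len(demo))
        t_new = np.linspace(0.0, 1.0, n_waypoints)
        resampled.append(
            np.stack([np.interp(t_new, t, demo[:, d]) for d in range(3)], axis=1)
        )
    return np.mean(resampled, axis=0)  # the learned reference trajectory

# Two noisy demonstrations of moving an object from a shelf to a table.
rng = np.random.default_rng(0)
start, goal = np.array([0.0, 0.5, 1.2]), np.array([0.6, 0.0, 0.8])
demos = [
    np.linspace(start, goal, n) + rng.normal(0, 0.005, (n, 3))
    for n in (40, 55)
]
reference = learn_from_demonstrations(demos)
print(reference[0], reference[-1])  # approximately start and goal
```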

Part handling during the assembly stages in the automotive industry is the only task with automation levels below 30%, due to the variability of production and the diversity of suppliers and parts. Fully automating this task will not only have a huge impact on the automotive industry but will also act as a cornerstone in the development of advanced mobile robotic manipulators capable of dealing with unstructured environments, opening up new possibilities for manufacturing SMEs in general. The STAMINA project takes a holistic approach by partnering with experts in each of the necessary key fields, thus building on previous R&D to develop a fleet of autonomous and mobile industrial robots with different sensory, planning and physical capabilities for jointly solving three logistics and handling tasks: de-palletizing, bin picking and kitting. The robot and orchestration systems will be developed in a lean manner using an iterative series of development and validation tests that will not only assess the performance and usability of the system but also allow goal-driven research. STAMINA pays special attention to system integration, promoting and assessing the development of a sustainable and scalable robotic system to ensure a clear path for the future exploitation of the developed technologies. In addition to the technological outcome, STAMINA will give an impression of how humans and robots could share work and workspace in the future.
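As a rough illustration of how such a heterogeneous fleet could be orchestrated, consider a simple capability-based dispatcher. The robot names, capabilities and task labels below are invented for the example and do not describe STAMINA's actual orchestration system.

```python
# Invented fleet: each robot advertises a set of capabilities.
ROBOTS = {
    "mobile_arm_1": {"navigate", "grasp", "bin_picking"},
    "mobile_arm_2": {"navigate", "grasp", "de_palletizing"},
    "carrier_1":    {"navigate", "transport"},
}

# Invented tasks: each needs a set of capabilities to be executed.
TASKS = [
    ("de-palletize part A", {"navigate", "grasp", "de_palletizing"}),
    ("bin-pick part B",     {"navigate", "grasp", "bin_picking"}),
    ("deliver kit 7",       {"navigate", "transport"}),
]

def dispatch(tasks, robots):
    """Assign each task to the first idle robot whose capabilities cover it."""
    idle = dict(robots)
    plan = []
    for name, required in tasks:
        match = next((r for r, caps in idle.items() if required <= caps), None)
        if match is not None:
            plan.append((name, match))
            del idle[match]  # robot is now busy
        else:
            plan.append((name, None))  # no capable robot currently free
    return plan

for task, robot in dispatch(TASKS, ROBOTS):
    print(f"{task} -> {robot}")
```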

Urban areas are highly dynamic and complex, and they introduce numerous challenges for autonomous robots. Such robots require solutions to several hard problems: perceiving the environment, representing the robot's workspace, modeling the expected interaction with users in order to plan actions, estimating the robot's state as well as the states of all dynamic objects, properly interpreting the gathered information including its semantics, and operating over long periods of time. The goal of the EUROPA2 project, which builds on the results of the successfully completed FP7 project EUROPA, is to address these challenges and to develop the foundations for robots that autonomously navigate in urban environments, outdoors as well as in shopping malls and shops, for example, to provide various services to humans. Based on the combination of publicly available maps and the data gathered with its sensors, the robot will acquire, maintain, and revise a detailed model of the environment including semantic information, detect and track moving objects, adapt its navigation behavior to the current situation, and anticipate interactions with users during navigation. A central aspect of the project is life-long operation with reduced deployment effort, avoiding the need to build maps with the robot before it can operate. EUROPA2 is targeted at developing novel technologies that will open new perspectives for commercial applications of service robots in the future.
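One standard building block for fusing a prior map with the robot's own observations is a log-odds occupancy update. The sketch below is the generic textbook version of that update, not EUROPA2's actual mapping system; the sensor model probabilities are illustrative.

```python
import numpy as np

def logit(p):
    return np.log(p / (1.0 - p))

def fuse_observation(logodds_map, observed_cells, p_hit=0.7, p_miss=0.4):
    """Log-odds occupancy update: cells the sensor reports as occupied gain
    evidence, cells reported as free lose it."""
    for (i, j), occupied in observed_cells:
        logodds_map[i, j] += logit(p_hit if occupied else p_miss)
    return logodds_map

# Prior from a public map: unknown (0.5) everywhere except one building cell.
prior = np.full((5, 5), 0.5)
prior[2, 2] = 0.9
m = logit(prior)

# The robot observes the building cell occupied and a neighbouring cell free.
m = fuse_observation(m, [((2, 2), True), ((2, 3), False)])
posterior = 1.0 / (1.0 + np.exp(-m))
print(posterior[2, 2], posterior[2, 3])  # evidence strengthened / weakened
```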

The LifeNav project will develop the fundamental approaches required to design mobile robot systems that can reliably operate over extended periods of time in complex and dynamically changing environments.
To achieve this, robots need the ability to learn and update appropriate models of their environment, including its dynamic aspects, and to effectively incorporate all this information into their decision-making processes.
Within LifeNav we will develop effective and object-oriented three-dimensional representations that cover all aspects of the dynamic environment required for reliable and long-term mobile robot navigation. The outcome of this research will be relevant for all applications that are based on autonomous navigation in real-world scenarios, including autonomous robots, mobile manipulation, transportation systems, and autonomous cars.
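One illustrative way to capture the dynamic aspects of such a model (not necessarily LifeNav's actual representation) is a per-cell two-state Markov chain whose transition probabilities are estimated from repeated observations, so the robot learns how likely each part of the environment is to change between visits.

```python
import numpy as np

class DynamicCell:
    """Per-cell two-state Markov model (free/occupied): an illustrative way
    to capture how often a part of the environment changes over time."""

    def __init__(self):
        # Transition counts counts[s, s'] for s -> s', with a weak uniform prior.
        self.counts = np.ones((2, 2))

    def observe_transition(self, was_occupied, is_occupied):
        self.counts[int(was_occupied), int(is_occupied)] += 1.0

    def change_probability(self, occupied_now):
        """Probability that this cell's state flips before the next visit."""
        row = self.counts[int(occupied_now)]
        return row[1 - int(occupied_now)] / row.sum()

cell = DynamicCell()
# A parking spot that is repeatedly seen occupied by day and free at night:
for _ in range(20):
    cell.observe_transition(True, False)
    cell.observe_transition(False, True)
print(cell.change_probability(True))  # high: expect the spot to become free
```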
LifeNav will demonstrate its capabilities in three different scenarios: dynamically changing office and factory environments as well as urban settings and rough terrain. As a challenging test bench, we will send the robot from downtown Freiburg to the nearby mountain Schauinsland, with an elevation of 1,260 m, which corresponds to a height difference of 1,000 m. The overall path is more than 20 km long and includes highly challenging footpaths through the forest with severe GPS outages.

Robotics-enabled Logistics and Assistive Services for the Transformable Factory of the Future (TAPAS) is a project funded by the European Commission within FP7. The goal of TAPAS is to pave the way for a new generation of transformable automation and logistics solutions for small and large series production that are economically viable and flexible, regardless of changes in volumes and product type.

TAPAS pioneers and validates key components to realize this vision: mobile robots with manipulation arms will automate logistics tasks more flexibly and more completely by not only transporting needed parts, but also collecting them and delivering them right to the place where they are needed. TAPAS robots will even go beyond moving parts around the shop floor to create additional value: they will automate assistive tasks that naturally extend the logistics tasks, such as preparatory and post-processing work, e.g., pre-assembly or machine tending with inherent quality control. TAPAS robots might initially be more expensive than other solutions, but through this additional creation of value, and by adapting faster to changes with new levels of robustness, availability, and completeness of jobs, they promise to yield an earlier return on investment.

Visually impaired or blind people have a reduced ability to perceive their environment. Appliances such as canes extend their access to the surroundings, but at the cost of a drastically constrained perception radius. Devices equipped with ultrasound sensors reduce collisions but do not solve the problem of protecting the upper parts of the body. Furthermore, most available assistive systems use the auditory channel for information transfer. This is not desirable for visually impaired persons, as acoustics play an important role in their process of acquiring spatial orientation. The iVIEW project pursues fast 3D obstacle detection in conjunction with priority-based information reduction, as used in autonomous mobile robotics, to generate a virtual representation of the surroundings. This information is transferred to the cognitive system of the human via the stimulation of skin receptors. Elaborate training schemes will be established to achieve a substitute spatial perception for blind and non-impaired people. The project is funded by the German Federal Ministry of Education and Research (Bundesministerium für Bildung und Forschung).
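A minimal sketch of such a detection-to-stimulation pipeline, assuming depth images as input: the depth image is reduced to a coarse grid, and the priority rule is simply that the nearest obstacle in each region dominates and drives the vibration intensity. Grid size and maximum range are invented parameters, not iVIEW's actual design.

```python
import numpy as np

def depth_to_tactile(depth, rows=3, cols=3, max_range=3.0):
    """Reduce a depth image to a coarse grid of nearest-obstacle distances
    and map each region to a vibration intensity in [0, 1]: the closer the
    obstacle, the stronger the stimulus."""
    h, w = depth.shape
    intensities = np.zeros((rows, cols))
    for r in range(rows):
        for c in range(cols):
            region = depth[r * h // rows:(r + 1) * h // rows,
                           c * w // cols:(c + 1) * w // cols]
            nearest = np.min(region)  # closest obstacle has priority
            intensities[r, c] = max(0.0, 1.0 - nearest / max_range)
    return intensities

# Synthetic depth image: an obstacle 0.5 m away in the upper-left region.
depth = np.full((120, 160), 3.0)
depth[:40, :50] = 0.5
print(depth_to_tactile(depth))  # strong stimulus only in the top-left cell
```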

Flexible Skill Acquisition and Intuitive Robot Tasking for Mobile Manipulation in the Real World (First-MM) is a project funded by the European Commission within FP7. The goal of First-MM is to build the basis for a new generation of autonomous mobile manipulation robots that can flexibly be instructed to perform complex manipulation and transportation tasks. The project will develop a novel robot programming environment that allows even non-expert users to specify complex manipulation tasks in real-world environments. In addition to a task specification language, the environment includes concepts for probabilistic inference and for learning manipulation skills from demonstration and from experience. The project builds upon and extends recent results in robot programming, navigation, manipulation, perception, learning by instruction, and statistical relational learning to develop advanced technology for mobile manipulation robots.
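To give a flavour of what compiling a task specification into primitive actions could look like, here is a toy pattern-based sketch; the grammar and the primitive action names are invented for the example and are much simpler than First-MM's actual language.

```python
import re

# Toy task grammar: "move <object> from <source> to <destination>".
RULE = re.compile(r"move (?P<obj>\w+) from (?P<src>\w+) to (?P<dst>\w+)")

def compile_task(spec):
    """Translate one high-level instruction into primitive robot actions."""
    m = RULE.fullmatch(spec.strip().lower())
    if m is None:
        raise ValueError(f"cannot parse task: {spec!r}")
    obj, src, dst = m.group("obj", "src", "dst")
    return [
        ("navigate_to", src),
        ("detect", obj),
        ("grasp", obj),
        ("navigate_to", dst),
        ("place", obj, dst),
    ]

print(compile_task("move bottle from shelf to table"))
```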

Spatial Cognition is concerned with the acquisition, organization, utilization and revision of knowledge about spatial environments, be they real or abstract, in humans or machines. Research issues range from the investigation of human spatial cognition to mobile robot navigation. The goal of the SFB/TR 8 is to investigate the cognitive foundations for human-centered spatial assistance systems. The SFB/TR 8 Spatial Cognition comprises several projects, structured into three research areas: Reasoning, Action, and Interaction. Reasoning projects are concerned with internal and external representations of space and with inference processes using these representations. Action projects are concerned with the acquisition of information from spatial environments and with actions and behavior in these environments. Interaction projects are concerned with communication about space by means of language and maps.

The goal of the RoboCare project is to build a multi-agent system that generates user services for human assistance. The system is to be implemented on a distributed and heterogeneous platform consisting of a hardware and software prototype. The use of autonomous robotics and distributed computing technologies forms the basis for implementing such a service-generating system in a closed environment such as a health-care institution or a domestic setting. The fact that robotic components, intelligent systems and human beings are to act in a cooperative setting is what makes the study of such a system challenging, both from a research and from a technology-integration point of view.