Current robotic systems are pre-programmed to perform very specific tasks on a limited set of objects that are well-defined in terms of location, shape, size and material properties. To deal with variability and to enable flexibility, robotic systems need to reason about the objects in their environment, or world. To that end, they need to build a knowledge base of that environment, a so-called world model. A world model is the digital, structured representation of the robot’s external environment, which allows the robot to reason about this environment and to interact with it. In this project, the scientific challenge is to develop a world model with different, task-centric levels of abstraction, while sensor data and prior models are uncertain and incomplete. Semantic representations of objects and their affordances will be tracked in space and time to facilitate active perception and task planning in the presence of variation.
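To make this concrete, the sketch below illustrates one possible shape of such a world model: semantic objects carrying affordances, geometric and material properties, and explicit uncertainty, tracked over time. It is a minimal illustration in Python under our own assumptions; all class, field and function names are hypothetical and do not prescribe the project’s eventual design.

    # Hypothetical sketch of a world model entry; names and fields are illustrative only.
    from dataclasses import dataclass, field
    from typing import Dict, List
    import numpy as np

    @dataclass
    class ObjectEstimate:
        """Semantic object tracked in space and time, with explicit uncertainty."""
        label: str                          # semantic class, e.g. "tomato"
        affordances: List[str]              # e.g. ["graspable", "cuttable"]
        pose_mean: np.ndarray               # 4x4 homogeneous transform
        pose_covariance: np.ndarray         # 6x6 covariance on (x, y, z, roll, pitch, yaw)
        material: Dict[str, float]          # e.g. {"stiffness": 0.3}
        last_observed: float                # timestamp of last supporting measurement
        existence_probability: float = 1.0  # belief that the object is still present

    @dataclass
    class WorldModel:
        """Container of tracked objects, queried at task-centric levels of abstraction."""
        objects: Dict[str, ObjectEstimate] = field(default_factory=dict)

        def graspable_objects(self, min_belief: float = 0.8) -> List[ObjectEstimate]:
            # Task-level query used by planning: only sufficiently certain, graspable objects.
            return [o for o in self.objects.values()
                    if "graspable" in o.affordances and o.existence_probability >= min_belief]

The task-centric query (here, graspable_objects) indicates how different levels of abstraction could be exposed to planning without duplicating the underlying representation.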
Program coherence: The world model (P2) is based on sensor input provided by perception (P1). At the same time, the world model (P2) provides information to guide active perception (P1). Planning and control of the robot (P3) is based on the world model (P2). The world model (P2) provides information for gripping and manipulation (P4), in terms of geometric and material properties of the manipulated object. The world model will be built on prior knowledge provided by the three use-case projects (P5, P6, P7). During integration, the world model will be instantiated in the three use cases (P5, P6, P7).
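For illustration only, these information flows could be read as the following minimal interfaces, reusing the hypothetical ObjectEstimate and WorldModel classes from the sketch above. The function names and the naive fusion logic are placeholders under our own assumptions, not the actual interfaces of P1–P4.

    # Illustrative information flow: P1 pushes detections into the world model (P2),
    # P2 returns targets to guide active perception (P1), and P3/P4 query it for
    # geometric and material properties. Depends on the sketch above.
    from typing import Optional

    def perception_update(world: WorldModel, detection: ObjectEstimate) -> None:
        # P1 -> P2: fuse a new detection into the world model (here: naive overwrite by label).
        world.objects[detection.label] = detection

    def next_view_target(world: WorldModel) -> Optional[ObjectEstimate]:
        # P2 -> P1: direct active perception at the most uncertain tracked object.
        if not world.objects:
            return None
        return max(world.objects.values(), key=lambda o: float(np.trace(o.pose_covariance)))

    def grasp_parameters(world: WorldModel, label: str) -> dict:
        # P2 -> P3/P4: geometric and material properties needed for gripping and manipulation.
        obj = world.objects[label]
        return {"pose": obj.pose_mean, "material": obj.material}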
Research team: TUe-CS, WUR-FT
User involvement: Priva, Marel, Aris, Houdijk, BluePrint Automation, Cerescon