Most current robotic systems follow a traditional sense-plan-act cycle, in which perception and action are treated as separate processes. This approach, however, cannot cope with the challenges posed by the agro-food environment. Because perception and action are tightly coupled through the robot's interactions with the environment, the scientific challenge is to develop and study active perception methodologies that enable robots to resolve uncertainty by actively gathering new sensory input, for example by changing perspective or by manipulating objects. Building on progress in Deep Learning (DL), new DL architectures will be developed for 3D reconstruction, semantic object/scene segmentation, and estimation of the 3D geometric object properties needed for gripping and manipulation.
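To make the idea of resolving uncertainty through new viewpoints concrete, the following is a minimal, purely illustrative sketch of an active perception loop: the robot maintains a belief over an object property and selects the next viewpoint with the largest expected entropy reduction (information gain). All names, the two-class setup, and the sensor models are assumptions for illustration, not the program's actual method.

```python
import math

def entropy(p):
    """Shannon entropy (bits) of a discrete distribution."""
    return -sum(q * math.log2(q) for q in p if q > 0.0)

def normalize(p):
    s = sum(p)
    return [q / s for q in p]

def posterior(prior, likelihood):
    """Bayesian update of the belief given an observation likelihood."""
    return normalize([pr * li for pr, li in zip(prior, likelihood)])

def expected_entropy(prior, sensor_model):
    """Expected posterior entropy after observing from one viewpoint.

    sensor_model[c][o] = P(observation o | true class c) for that view.
    """
    n_classes = len(prior)
    n_obs = len(sensor_model[0])
    exp_h = 0.0
    for o in range(n_obs):
        # Marginal probability of observation o under the current belief.
        p_o = sum(prior[c] * sensor_model[c][o] for c in range(n_classes))
        if p_o == 0.0:
            continue
        post = posterior(prior, [sensor_model[c][o] for c in range(n_classes)])
        exp_h += p_o * entropy(post)
    return exp_h

def next_best_view(prior, views):
    """Pick the viewpoint promising the largest expected information gain."""
    gains = {name: entropy(prior) - expected_entropy(prior, model)
             for name, model in views.items()}
    return max(gains, key=gains.get)

# Two hypothetical candidate viewpoints: the 'side' view discriminates
# the two classes well, while the 'top' view is nearly uninformative.
views = {
    "side": [[0.9, 0.1], [0.1, 0.9]],
    "top":  [[0.55, 0.45], [0.45, 0.55]],
}
prior = [0.5, 0.5]
print(next_best_view(prior, views))  # prints "side"
```

In a real system the belief would live over 3D occupancy or object pose rather than two classes, and the gain computation would account for the cost of moving the sensor, but the selection principle is the same.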
Program coherence: P1 is closely connected to P2, as perception feeds the world model and the world model guides perception. P1 also connects to P3, since active perception requires suitable robot planning and control. Perception builds on multiple sensing modalities: vision, 3D, and tactile; the tactile input is provided by the gripper developed in P4. P1 provides the active perception capabilities for the use-case projects P5, P6, and P7.
Research team: UvA-II, WUR-FT
User involvement: Marel, Priva, 3D Universum, Aris, Houdijk, BluePrint Automation, Cerescon.