
 Tue, 01. Feb 2022   Liebers, Carina

Master's Project Group: MultiVeRse (SoSe2022)

In the summer semester 2022 we are once again offering a project group for Master's students. MultiVeRse addresses a well-known problem in machine learning: simulation environments often fail to capture all the possibilities of reality, creating a discrepancy between training and later deployment. In the project group, we will investigate how objects in virtual reality can be augmented, through human interaction, with their movement possibilities and their relationships to one another. These annotations then serve as the basis for automatically generating a wide variety of simulation environments. A short English description of the project group follows.

MultiVeRse - Creating Possible Real-world Environments in Virtual Reality for Robot Training

Machine learning can be used to train robots for different tasks. For this purpose, training environments that simulate future deployment scenarios are often used. However, such a simulation represents only one possible state of reality, which leads to erroneous behavior of the trained robot if the real environment differs from the simulated one. To prevent such errors, the project group will investigate how the simulated environments can be diversified through expert input in virtual reality.

The goal is to enable users to enter different possible states of reality in VR through appropriate interactions and to display them. This includes the segmentation and naming of relevant objects, the relationships among objects, and their movement possibilities. For segmentation and naming, a 3D scan of the environment acquired by sensors is divided into individual relevant objects, which are assigned corresponding labels. Subsequently, the relationships of the objects to one another are entered (e.g., the PC monitor is assigned to the table). As a last step, it should be possible to enter the movement possibilities of the individual objects (e.g., a drawer can be pulled out). The environment representations obtained in this way can then be used for robot training.
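The three annotation steps above (labeled objects, inter-object relations, movement possibilities) can be sketched as a minimal data model. This is an illustrative assumption of how such scene annotations might be structured, not the project's actual code; all class, field, and label names below are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class SceneObject:
    # Hypothetical representation of one segmented object from the 3D scan.
    label: str                                         # name assigned during segmentation
    relations: dict = field(default_factory=dict)      # relation type -> label of related object
    movements: list = field(default_factory=list)      # possible movements, e.g. "pull_out"

# Example scene from the text: a table with a monitor on it and a drawer that can be pulled out.
table = SceneObject(label="table")
monitor = SceneObject(label="pc_monitor", relations={"assigned_to": "table"})
drawer = SceneObject(label="drawer", relations={"part_of": "table"}, movements=["pull_out"])

# The annotated scene is a plain list of objects; a generator could vary
# object states (e.g. drawer open vs. closed) to produce diverse training environments.
scene = [table, monitor, drawer]
```

A simulation generator could then iterate over each object's `movements` to enumerate possible environment states for robot training.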