Team:

Bio:

Max is a research assistant at TU Dortmund University (Prof. Dr. Jens Gerken), working in cooperation with the University of Duisburg-Essen, where he is pursuing his PhD in the field of Human-Robot Interaction. His research interests and experience cover a broad range of topics, including novel interaction techniques for human-robot collaboration, intervention strategies in autonomous robot tasks, distributed systems, virtual/augmented/mixed reality, and artificial intelligence.

Curriculum Vitae:

PDF version

Fields of Research:

  • Human-Robot Collaboration
  • Intervention Strategies / Interfaces
  • Multimodal Input & Feedback Technologies
  • Augmented/Mixed/Virtual Reality
  • Assistive Technologies

Projects:

Publications:

  • Kassem, Khaled; Saad, Alia; Pascher, Max; Schett, Martin; Michahelles, Florian: Push Me: Evaluating Usability and User Experience in Nudge-based Human-Robot Interaction through Embedded Force and Torque Sensors. In: Proceedings of Mensch und Computer 2024. Association for Computing Machinery, New York, NY, USA 2024, p. 399-407. doi:10.1145/3670653.3677487

    Robots are expected to be integrated into human workspaces, which makes the development of effective and intuitive interaction crucial. While vision- and speech-based robot interfaces have been well studied, direct physical interaction has been less explored. However, HCI research has shown that direct manipulation interfaces provide more intuitive and satisfying user experiences, compared to other interaction modes. This work examines how built-in force/torque sensors in robots can facilitate direct manipulation through nudge-based interactions. We conducted a user study (N = 23) to compare this haptic approach with traditional touchscreen interfaces, focusing on workload, user experience, and usability. Our results show that haptic interactions are more engaging and intuitive but also more physically demanding compared to touchscreen interaction. These findings have implications for the design of physical human-robot interaction interfaces. Given the benefits of physical interaction highlighted in our study, we recommend that designers incorporate this interaction method for human-robot interaction, especially at close quarters.

  • Goldau, Felix Ferdinand; Pascher, Max; Baumeister, Annalies; Tolle, Patrizia; Gerken, Jens; Frese, Udo: Adaptive Control in Assistive Application - A Study Evaluating Shared Control by Users with Limited Upper Limb Mobility. In: RO-MAN 2024: 33rd IEEE International Conference on Robot and Human Interactive Communication. IEEE, Pasadena, California, USA 2024. doi:10.48550/arXiv.2406.06103

    Shared control in assistive robotics blends human autonomy with computer assistance, thus simplifying complex tasks for individuals with physical impairments. This study assesses an adaptive Degrees of Freedom control method specifically tailored for individuals with upper limb impairments. It employs a between-subjects analysis with 24 participants, conducting 81 trials across three distinct input devices in a realistic everyday-task setting. Given the diverse capabilities of the vulnerable target demographic and the known challenges in statistical comparisons due to individual differences, the study focuses primarily on subjective qualitative data. The results reveal consistently high success rates in trial completions, irrespective of the input device used. Participants appreciated their involvement in the research process, displayed a positive outlook, and quick adaptability to the control system. Notably, each participant effectively managed the given task within a short time frame.

  • Pascher, Max: An Interaction Design for AI-enhanced Assistive Human-Robot Collaboration. PhD thesis, 2024. (ISBN 978-3-00-079648-7) doi:10.17185/duepublico/82229

    The global population of individuals with motor impairments faces substantial challenges, including reduced mobility, social exclusion, and increased caregiver dependency. While advances in assistive technologies can augment human capabilities, independence, and overall well-being by alleviating caregiver fatigue and care receiver weariness, the involvement of target users, their needs, and their lived experiences in the ideation, development, and evaluation process is often neglected. Further, current interaction design concepts often prove unsatisfactory, posing challenges to user autonomy and system usability and resulting in additional stress for end users. Here, Artificial Intelligence (AI) offers the potential to enhance the accessibility of assistive technology. As such, a notable research gap exists in the development and evaluation of interaction design concepts for AI-enhanced assistive robotics.

    This thesis addresses the gap by streamlining the development and evaluation of shared control approaches while enhancing user integration through three key contributions. Firstly, it identifies user needs for assistive technologies and explores concepts related to robot motion intent communication. Secondly, it introduces the innovative shared control approach Adaptive DoF Mapping Control (ADMC), which generates mappings of a robot’s Degrees-of-Freedom (DoFs) based on situational Human-Robot Interaction (HRI) tasks and suggests them to users. Thirdly, it presents and evaluates the Extended Reality (XR) framework AdaptiX for in-silico development and evaluation of multi-modal interaction designs and feedback methods for shared control applications.

    In contrast to existing goal-oriented shared control approaches, my work highlights the development of a novel concept that does not rely on computing trajectories for known movement goals. Instead of pre-determined goals, ADMC utilises its inherent rule engine, for example a Convolutional Neural Network (CNN), fed with the robot arm’s posture and a colour-and-depth camera feed of the robot’s gripper surroundings. This approach facilitates a more flexible and situationally aware shared control system.

    The evaluations within this thesis demonstrate that the ADMC approach significantly reduces task completion time, the average number of necessary switches between DoF mappings, and the perceived workload of users, compared to a non-adaptive input method utilising cardinal DoFs. Further, the effectiveness of AdaptiX for in-silico as well as real-world evaluations has been shown in one remote and two laboratory user studies.

    The thesis emphasises the transformative impact of assistive technologies for individuals with motor impairments, stressing the importance of user-centred design and legible AI-enhanced shared control applications, as well as the benefits of in-silico testing. It also outlines future research opportunities with a focus on refining communication methods, extending the application of approaches like ADMC, and enhancing tools like AdaptiX to accommodate diverse tasks and scenarios. Addressing these challenges can further advance AI-enhanced assistive robotics, promoting the full inclusion of individuals with physical impairments in social and professional spheres.

  • Pascher, Max; Goldau, Felix Ferdinand; Kronhardt, Kirill; Frese, Udo; Gerken, Jens: AdaptiX – A Transitional XR Framework for Development and Evaluation of Shared Control Applications in Assistive Robotics. In: Proc. ACM Hum.-Comput. Interact., Vol 8 (2024) No EICS, p. 1-28. doi:10.1145/3660243

    With the ongoing efforts to empower people with mobility impairments and the increase in technological acceptance by the general public, assistive technologies, such as collaborative robotic arms, are gaining popularity. Yet, their widespread success is limited by usability issues, specifically the disparity between user input and software control along the autonomy continuum. To address this, shared control concepts provide opportunities to combine the targeted increase of user autonomy with a certain level of computer assistance. This paper presents the free and open-source AdaptiX XR framework for developing and evaluating shared control applications in a high-resolution simulation environment. The initial framework consists of a simulated robotic arm with an example scenario in Virtual Reality (VR), multiple standard control interfaces, and a specialized recording/replay system. AdaptiX can easily be extended for specific research needs, allowing Human-Robot Interaction (HRI) researchers to rapidly design and test novel interaction methods, intervention strategies, and multi-modal feedback techniques, without requiring an actual physical robotic arm during the early phases of ideation, prototyping, and evaluation. Also, a Robot Operating System (ROS) integration enables controlling a real robotic arm in a PhysicalTwin approach without any simulation-reality gap. Here, we review the capabilities and limitations of AdaptiX in detail and present three bodies of research based on the framework. AdaptiX can be accessed at https://adaptix.robot-research.de.

    This research received the Best Paper Award.

  • Lakhnati, Younes; Pascher, Max; Gerken, Jens: Exploring a GPT-based large language model for variable autonomy in a VR-based human-robot teaming simulation. In: Frontiers in Robotics and AI, Vol 11 (2024). doi:10.3389/frobt.2024.1347538

    In a rapidly evolving digital landscape, autonomous tools and robots are becoming commonplace. Recognizing the significance of this development, this paper explores the integration of Large Language Models (LLMs) like the Generative Pre-trained Transformer (GPT) into human-robot teaming environments to facilitate variable autonomy through verbal human-robot communication. We introduce a novel simulation framework for such a GPT-powered multi-robot testbed environment, based on a Unity Virtual Reality (VR) setting. This system allows users to interact with simulated robot agents through natural language, each powered by individual GPT cores. By means of OpenAI’s function calling, we bridge the gap between unstructured natural language input and structured robot actions. A user study with 12 participants explores the effectiveness of GPT-4 and, more importantly, user strategies when given the opportunity to converse in natural language within a simulated multi-robot environment. Our findings suggest that users may have preconceived expectations of how to converse with robots and seldom try to explore the actual language and cognitive capabilities of their simulated robot collaborators. Still, those users who did explore were able to benefit from a much more natural flow of communication and human-like back-and-forth. We provide a set of lessons learned for future research and technical implementations of similar systems.
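
    The study's own codebase is not reproduced here, but the general function-calling pattern it mentions can be sketched briefly. In the following hypothetical Python snippet, the move_robot tool, its parameters, the model choice, and the user utterance are invented stand-ins, not the paper's actual setup; only the OpenAI SDK calls themselves are real API usage.

        # Hypothetical sketch: routing one natural-language instruction
        # to a structured robot action via OpenAI function calling.
        import json
        from openai import OpenAI

        client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

        # One structured robot action exposed to the model (invented schema).
        tools = [{
            "type": "function",
            "function": {
                "name": "move_robot",
                "description": "Move a simulated robot to a named location.",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "robot_id": {"type": "string"},
                        "target": {"type": "string"},
                    },
                    "required": ["robot_id", "target"],
                },
            },
        }]

        response = client.chat.completions.create(
            model="gpt-4",
            messages=[{"role": "user",
                       "content": "Robot two, please go to the red crate."}],
            tools=tools,
        )

        # If the model chose to call the tool, its arguments arrive as JSON text.
        for call in response.choices[0].message.tool_calls or []:
            if call.function.name == "move_robot":
                args = json.loads(call.function.arguments)
                print("dispatching:", args["robot_id"], "->", args["target"])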

  • Nanavati, Amal; Pascher, Max; Ranganeni, Vinitha; Gordon, Ethan K.; Faulkner, Taylor Kessler; Srinivasa, Siddhartha S.; Cakmak, Maya; Alves-Oliveira, Patrícia; Gerken, Jens: Multiple Ways of Working with Users to Develop Physically Assistive Robots. In: A3DE '24: Workshop on Assistive Applications, Accessibility, and Disability Ethics at the ACM/IEEE International Conference on Human-Robot Interaction. 2024. doi:10.48550/arXiv.2403.00489

    Despite the growth of physically assistive robotics (PAR) research over the last decade, nearly half of PAR user studies do not involve participants with the target disabilities. There are several reasons for this, including recruitment challenges, small sample sizes, and transportation logistics, all influenced by systemic barriers that people with disabilities face. However, it is well established that working with end-users results in technology that better addresses their needs and integrates with their lived circumstances. In this paper, we reflect on multiple approaches we have taken to working with people with motor impairments across the design, development, and evaluation of three PAR projects: (a) assistive feeding with a robot arm; (b) assistive teleoperation with a mobile manipulator; and (c) shared control with a robot arm. We discuss these approaches along three dimensions (individual- vs. community-level insight, logistic burden on end-users vs. researchers, and benefit to researchers vs. community) and share recommendations for how other PAR researchers can incorporate users into their work.

  • Wozniak, Maciej K.; Pascher, Max; Ikeda, Bryce; Luebbers, Matthew B.; Jena, Ayesha: Virtual, Augmented, and Mixed Reality for Human-Robot Interaction (VAM-HRI). In: Companion of the 2024 ACM/IEEE International Conference on Human-Robot Interaction (HRI '24 Companion), March 11-14, 2024, Boulder, CO, USA. ACM, Boulder, Colorado, USA 2024. doi:10.1145/3610978.3638158

    The 7th International Workshop on Virtual, Augmented, and Mixed Reality for Human-Robot Interaction (VAM-HRI) seeks to bring together researchers from human-robot interaction (HRI), robotics, and mixed reality (MR) to address the challenges related to mixed reality interactions between humans and robots. Key topics include the development of robots capable of interacting with humans in mixed reality, the use of virtual reality for creating interactive robots, designing augmented reality interfaces for communication between humans and robots, exploring mixed reality interfaces for enhancing robot learning, comparative analysis of the capabilities and perceptions of robots and virtual agents, and sharing best design practices. VAM-HRI 2024 will build on the success of VAM-HRI workshops held from 2018 to 2023, advancing research in this specialized community. The prior year’s website is located at vam-hri.github.io.

  • Pascher, Max; Zinta, Kevin; Gerken, Jens: Exploring of Discrete and Continuous Input Control for AI-enhanced Assistive Robotic Arms. In: Companion of the 2024 ACM/IEEE International Conference on Human-Robot Interaction (HRI '24 Companion), March 11-14, 2024, Boulder, CO, USA. ACM, Boulder, Colorado, USA 2024. doi:10.1145/3610978.3640626

    Robotic arms, integral in domestic care for individuals with motor impairments, enable them to perform Activities of Daily Living (ADLs) independently, reducing dependence on human caregivers. These collaborative robots require users to manage multiple Degrees-of-Freedom (DoFs) for tasks like grasping and manipulating objects. Conventional input devices, typically limited to two DoFs, necessitate frequent and complex mode switches to control individual DoFs. Modern adaptive controls with feed-forward multi-modal feedback reduce the overall task completion time, the number of mode switches, and the cognitive load. Despite the variety of input devices available, their effectiveness in adaptive settings with assistive robotics has yet to be thoroughly assessed. This study explores three different input devices by integrating them into an established XR framework for assistive robotics and evaluating them in a preliminary study, providing empirical insights for future developments.

  • Pascher, Max; Saad, Alia; Liebers, Jonathan; Heger, Roman; Gerken, Jens; Schneegass, Stefan; Gruenefeld, Uwe: Hands-On Robotics: Enabling Communication Through Direct Gesture Control. In: Companion of the 2024 ACM/IEEE International Conference on Human-Robot Interaction (HRI '24 Companion), March 11-14, 2024, Boulder, CO, USA. ACM, Boulder, Colorado, USA 2024. doi:10.1145/3610978.3640635

    Effective Human-Robot Interaction (HRI) is fundamental to seamlessly integrating robotic systems into our daily lives. However, current communication modes require additional technological interfaces, which can be cumbersome and indirect. This paper presents a novel approach using direct motion-based communication by moving a robot's end effector. Our strategy enables users to communicate with a robot through four distinct gestures: two handshakes ('formal' and 'informal') and two letters ('W' and 'S'). As a proof-of-concept, we conducted a user study with 16 participants, capturing subjective experience ratings and objective data for training machine learning classifiers. Our findings show that the four different gestures performed by moving the robot's end effector can be distinguished with close to 100% accuracy. Our research offers implications for the design of future HRI interfaces, suggesting that motion-based interaction can empower human operators to communicate directly with robots, removing the necessity for additional hardware.

  • Pascher, Max: System and Method for Providing an Object-related Haptic Effect. German Patent and Trade Mark Office (DPMA), 2024.

  • Saad, Alia; Pascher, Max; Kassem, Khaled; Heger, Roman; Liebers, Jonathan; Schneegass, Stefan; Gruenefeld, Uwe: Hand-in-Hand: Investigating Mechanical Tracking for User Identification in Cobot Interaction. In: Proceedings of the International Conference on Mobile and Ubiquitous Multimedia (MUM). Vienna, Austria 2023. doi:10.1145/3626705.3627771

    Robots play a vital role in modern automation, with applications in manufacturing and healthcare. Collaborative robots integrate human and robot movements, so it is essential to ensure that interactions involve qualified, and thus identified, individuals. This study delves into a new approach: identifying individuals through robot arm movements. Unlike previous methods, users guide the robot, and the robot senses their movements via its joint sensors. We asked 18 participants to perform six gestures, revealing the potential of these movements as unique behavioral traits or biometrics and achieving F1-scores of up to 0.87, which suggests direct robot interaction as a promising avenue for implicit and explicit user identification.
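
    As a rough, hypothetical illustration of this kind of movement-based identification (not the authors' actual pipeline), the sketch below condenses joint-sensor traces into simple per-joint statistics and trains an off-the-shelf classifier. The feature set, data shapes, and synthetic data are all assumptions.

        # Hypothetical sketch: identifying users from robot joint-sensor
        # traces with simple statistical features and a random forest.
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.metrics import f1_score
        from sklearn.model_selection import train_test_split

        def featurize(trace: np.ndarray) -> np.ndarray:
            """Collapse a (timesteps x joints) trace into per-joint statistics."""
            return np.concatenate([trace.mean(axis=0), trace.std(axis=0),
                                   np.abs(np.diff(trace, axis=0)).mean(axis=0)])

        rng = np.random.default_rng(0)
        # Synthetic stand-in data: 18 users x 30 recordings, 7 joint channels.
        X = np.array([featurize(rng.normal(loc=u, scale=1.0, size=(100, 7)))
                      for u in range(18) for _ in range(30)])
        y = np.repeat(np.arange(18), 30)

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
        clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
        print("macro F1:", f1_score(y_te, clf.predict(X_te), average="macro"))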

  • Pascher, Max; Kronhardt, Kirill; Goldau, Felix Ferdinand; Frese, Udo; Gerken, Jens: In Time and Space: Towards Usable Adaptive Control for Assistive Robotic Arms. In: RO-MAN 2023 - IEEE International Conference on Robot and Human Interactive Communication. IEEE, Busan, Korea 2023, p. 2300-2307. doi:10.1109/RO-MAN57019.2023.10309381

    Robotic solutions, in particular robotic arms, are becoming more frequently deployed for close collaboration with humans, for example in manufacturing or domestic care environments. These robotic arms require the user to control several Degrees-of-Freedom (DoFs) to perform tasks, primarily involving grasping and manipulating objects. Standard input devices predominantly have two DoFs, requiring time-consuming and cognitively demanding mode switches to select individual DoFs. Contemporary Adaptive DoF Mapping Controls (ADMCs) have been shown to decrease the necessary number of mode switches but were up to now not able to significantly reduce the perceived workload. Users still bear the mental workload of incorporating abstract mode switching into their workflow. We address this by providing feed-forward multimodal feedback using updated recommendations of ADMC, allowing users to visually compare the current and the suggested mapping in real-time. We contrast the effectiveness of two new approaches that a) continuously recommend updated DoF combinations or b) use discrete thresholds between current robot movements and new recommendations. Both are compared in a Virtual Reality (VR) in-person study against a classic control method. Significant results for lowered task completion time, fewer mode switches, and reduced perceived workload conclusively establish that in combination with feed-forward, ADMC methods can indeed outperform classic mode switching. A lack of apparent quantitative differences between Continuous and Threshold reveals the importance of user-centered customization options. Including these implications in the development process will improve usability, which is essential for successfully implementing robotic technologies with high user acceptance.
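
    To make the Threshold variant more concrete, here is a minimal sketch under assumed values: a newly recommended DoF mapping is surfaced only once it diverges sufficiently from the robot's current movement direction. The 30-degree threshold and the direction vectors are invented for illustration, not taken from the study.

        # Hypothetical sketch of threshold-triggered ADMC recommendations.
        import numpy as np

        THRESHOLD_DEG = 30.0  # assumed divergence threshold, not the study's value

        def angle_deg(a: np.ndarray, b: np.ndarray) -> float:
            """Angle between two direction vectors, in degrees."""
            cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
            return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

        def maybe_recommend(current_dir: np.ndarray, suggested_dir: np.ndarray):
            """Discrete (Threshold) variant: surface the suggestion only when
            it deviates enough from the current movement; a Continuous
            variant would simply return every updated suggestion."""
            if angle_deg(current_dir, suggested_dir) >= THRESHOLD_DEG:
                return suggested_dir
            return None

        # Current motion along x; the adaptive suggestion points diagonally.
        print(maybe_recommend(np.array([1.0, 0.0, 0.0]), np.array([1.0, 1.0, 0.0])))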

  • Pascher, Max; Grünefeld, Uwe; Schneegass, Stefan; Gerken, Jens: How to Communicate Robot Motion Intent: A Scoping Review. In: ACM (Ed.): Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems (CHI ’23). 2023. doi:10.1145/3544548.3580857

    Robots are becoming increasingly omnipresent in our daily lives, supporting us and carrying out autonomous tasks. In Human-Robot Interaction, human actors benefit from understanding the robot's motion intent to avoid task failures and foster collaboration. Finding effective ways to communicate this intent to users has recently received increased research interest. However, no common language has been established to systematize robot motion intent. This work presents a scoping review aimed at unifying existing knowledge. Based on our analysis, we present an intent communication model that depicts the relationship between robot and human through different intent dimensions (intent type, intent information, intent location). We discuss these different intent dimensions and their interrelationships with different kinds of robots and human roles. Throughout our analysis, we classify the existing research literature along our intent communication model, allowing us to identify key patterns and possible directions for future research.

  • Pascher, Max; Franzen, Til; Kronhardt, Kirill; Grünefeld, Uwe; Schneegass, Stefan; Gerken, Jens: HaptiX: Vibrotactile Haptic Feedback for Communication of 3D Directional Cues. In: ACM (Ed.): Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems - Extended Abstracts (CHI ’23). 2023. doi:10.1145/3544549.3585601

    In Human-Computer Interaction, vibrotactile haptic feedback offers the advantage of being independent of any visual perception of the environment. Most importantly, the user's field of view is not obscured by user interface elements, and the visual sense is not unnecessarily strained. This is especially advantageous when the visual channel is already busy, or the visual sense is limited. We developed three design variants based on different vibrotactile illusions to communicate 3D directional cues. In particular, we explored two variants based on the vibrotactile illusion of the cutaneous rabbit and one based on apparent vibrotactile motion. To communicate gradient information, we combined these with pulse-based and intensity-based mapping. A subsequent study showed that the pulse-based variants building on the cutaneous rabbit illusion are suitable for communicating both directional and gradient characteristics. The results further show that representing 3D directions via vibrations can be effective and beneficial.
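
    For intuition, a cutaneous-rabbit cue can be thought of as a timed pulse train swept across adjacent vibration motors, with the pulse count available to encode gradient information. The following sketch is a hypothetical illustration of that idea; the motor names, pulse counts, and timings are assumptions rather than the study's parameters.

        # Hypothetical sketch of a cutaneous-rabbit pulse schedule: evenly
        # timed bursts swept across adjacent vibromotors suggest motion
        # along them; pulses per actuator could encode a gradient.
        def rabbit_schedule(actuators, pulses_per_actuator=3, gap_s=0.06):
            """Return (time, actuator) pairs for a sequential pulse train."""
            schedule, t = [], 0.0
            for act in actuators:
                for _ in range(pulses_per_actuator):
                    schedule.append((round(t, 3), act))
                    t += gap_s
            return schedule

        # Three motors along the forearm, pulsed toward the wrist.
        for t, act in rabbit_schedule(["elbow", "mid", "wrist"]):
            print(f"{t:.3f}s -> pulse {act}")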

  • Pascher, Max; Kronhardt, Kirill; Franzen, Til; Gerken, Jens: Adaptive DoF: Concepts to Visualize AI-generated Movements in Human-Robot Collaboration. In: Proceedings of the 2022 International Conference on Advanced Visual Interfaces (AVI 2022). ACM, New York, NY, USA 2022. doi:10.1145/3531073.3534479

    Nowadays, robots collaborate closely with humans in a growing number of areas. Enabled by lightweight materials and safety sensors, these cobots are gaining increasing popularity in domestic care, supporting people with physical impairments in their everyday lives. However, when cobots perform actions autonomously, it remains challenging for human collaborators to understand and predict their behavior. This, however, is crucial for achieving trust and user acceptance. One significant aspect of predicting cobot behavior is understanding their motion intent and comprehending how they "think" about their actions. We work on solutions that communicate the cobot's AI-generated motion intent to a human collaborator. Effective communication enables users to proceed with the most suitable option. We present a design exploration with different visualization techniques to optimize this user understanding, ideally resulting in increased safety and end-user acceptance.

  • Pascher, Max; Kronhardt, Kirill; Franzen, Til; Gruenefeld, Uwe; Schneegass, Stefan; Gerken, Jens: My Caregiver the Cobot: Comparing Visualization Techniques to Effectively Communicate Cobot Perception to People with Physical Impairments. In: MDPI Sensors, Vol 22 (2022). doi:10.3390/s22030755

    Nowadays, robots are found in a growing number of areas where they collaborate closely with humans. Enabled by lightweight materials and safety sensors, these cobots are gaining increasing popularity in domestic care, where they support people with physical impairments in their everyday lives. However, when cobots perform actions autonomously, it remains challenging for human collaborators to understand and predict their behavior, which is crucial for achieving trust and user acceptance. One significant aspect of predicting cobot behavior is understanding their perception and comprehending how they "see" the world. To tackle this challenge, we compared three different visualization techniques for Spatial Augmented Reality. All of these communicate cobot perception by visually indicating which objects in the cobot's surroundings have been identified by their sensors. We compared the well-established visualizations Wedge and Halo against our proposed visualization Line in a remote user experiment with participants who have physical impairments. In a second remote experiment, we validated these findings with a broader non-specific user base. Our findings show that Line, a lower complexity visualization, results in significantly faster reaction times compared to Halo, and lower task load compared to both Wedge and Halo. Overall, users prefer Line as a more straightforward visualization. In Spatial Augmented Reality, with its known disadvantage of limited projection area size, established off-screen visualizations are not effective in communicating cobot perception, and Line presents an easy-to-understand alternative.

  • Kronhardt, Kirill; Rübner, Stephan; Pascher, Max; Goldau, Felix Ferdinand; Frese, Udo; Gerken, Jens: Adapt or Perish? Exploring the Effectiveness of Adaptive DoF Control Interaction Methods for Assistive Robot Arms. In: Technologies, Vol 10 (2022). doi:10.3390/technologies10010030

    Robot arms are one of many assistive technologies used by people with motor impairments. Assistive robot arms can allow people to perform activities of daily living (ADL) involving grasping and manipulating objects in their environment without the assistance of caregivers. Suitable input devices (e.g., joysticks) mostly have two Degrees of Freedom (DoF), while most assistive robot arms have six or more. This results in time-consuming and cognitively demanding mode switches to change the mapping of DoFs to control the robot. One option to decrease the difficulty of controlling a high-DoF assistive robot arm using a low-DoF input device is to assign different combinations of movement-DoFs to the device’s input DoFs depending on the current situation (adaptive control). To explore this method of control, we designed two adaptive control methods for a realistic virtual 3D environment. We evaluated our methods against a commonly used non-adaptive control method that requires the user to switch controls manually. This was conducted in a simulated remote study that used Virtual Reality and involved 39 non-disabled participants. Our results show that the number of mode switches necessary to complete a simple pick-and-place task decreases significantly when using an adaptive control type. In contrast, the task completion time and workload stay the same. A thematic analysis of qualitative feedback of our participants suggests that a longer period of training could further improve the performance of adaptive control methods.

  • Arevalo Arboleda, Stephanie; Pascher, Max; Lakhnati, Younes; Gerken, Jens: Understanding Human-Robot Collaboration for People with Mobility Impairments at the Workplace, a Thematic Analysis. In: 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN). IEEE, 2020. doi:10.1109/RO-MAN47096.2020.9223489

    Assistive technologies, such as human-robot collaboration, have the potential to ease the life of people with physical mobility impairments in social and economic activities. Currently, this group of people has lower rates of economic participation due to the lack of adequate environments adapted to their capabilities. We take a closer look at the needs and preferences of people with physical mobility impairments in a human-robot cooperative environment at the workplace. Specifically, we aim to design how to control a robotic arm in manufacturing tasks for people with physical mobility impairments. We present a case study of a sheltered workshop as a prototype for an institution that employs people with disabilities in manufacturing jobs. Here, we collected data from potential end-users with physical mobility impairments, social workers, and supervisors using a participatory design technique (Future-Workshop). These stakeholders were divided into two groups, primary (end-users) and secondary users (social workers, supervisors), which were run across two separate sessions. The gathered information was analyzed using thematic analysis to reveal underlying themes across stakeholders. We identified concepts that highlight underlying concerns related to the robot fitting in the social and organizational structure, human-robot synergy, and human-robot problem management. In this paper, we present our findings and discuss the implications of each theme when shaping an inclusive human-robot cooperative workstation for people with physical mobility impairments.

  • Pascher, Max; Baumeister, Annalies; Schneegass, Stefan; Klein, Barbara; Gerken, Jens: Recommendations for the Development of a Robotic Drinking and Eating Aid - An Ethnographic Study. In: Ardito, Carmelo; Lanzilotti, Rosa; Malizia, Alessio; Petrie, Helen; Piccinno, Antonio; Desolda, Giuseppe; Inkpen, Kori (Ed.): Human-Computer Interaction – INTERACT 2021. Springer International Publishing, Cham 2021, p. 331-351.

    Being able to live independently and self-determined in one's own home is a crucial factor for human dignity and the preservation of self-worth. For people with severe physical impairments who cannot use their limbs for everyday tasks, living in their own home is only possible with assistance from others. The inability to move arms and hands makes it hard to take care of oneself, e.g. to drink and eat independently. In this paper, we investigate how 15 participants with disabilities consume food and drinks. We report on interviews and participatory observations, and we analyze the aids they currently use. Based on our findings, we derive a set of recommendations that supports researchers and practitioners in designing future robotic drinking and eating aids for people with disabilities.

  • Arevalo Arboleda, Stephanie; Pascher, Max; Baumeister, Annalies; Klein, Barbara; Gerken, Jens: Reflecting upon Participatory Design in Human-Robot Collaboration for People with Motor Disabilities: Challenges and Lessons Learned from Three Multiyear Projects. In: The 14th PErvasive Technologies Related to Assistive Environments Conference. Association for Computing Machinery, New York, NY, USA 2021, p. 147-155. doi:10.1145/3453892.3458044

    Human-robot technology has the potential to positively impact the lives of people with motor disabilities. However, current efforts have mostly been oriented towards technology (sensors, devices, modalities, interaction techniques), thus relegating the user and their valuable input to the wayside. In this paper, we aim to present a holistic perspective on the role of participatory design in Human-Robot Collaboration (HRC) for People with Motor Disabilities (PWMD). We have been involved in several multiyear projects related to HRC for PWMD, where we encountered different challenges related to planning and participation, preferences of stakeholders, using certain participatory design techniques, technology exposure, as well as ethical, legal, and social implications. These challenges helped us derive five lessons learned that could serve as a guideline to researchers using participatory design with vulnerable groups, in particular early-career researchers who are starting to explore HRC research for people with disabilities.

  • Borsum, Florian; Pascher, Max; Auda, Jonas; Schneegass, Stefan; Lux, Gregor; Gerken, Jens: Stay on Course in VR: Comparing the Precision of Movement between Gamepad, Armswinger, and Treadmill (Kurs halten in VR: Vergleich der Bewegungspräzision von Gamepad, Armswinger und Laufstall). In: Mensch und Computer 2021 (MuC '21). Association for Computing Machinery, New York, NY, USA 2021. (ISBN 9781450386456) doi:10.1145/3473856.3473880

    This paper investigates the extent to which different locomotion techniques in Virtual Reality environments influence the precision of interaction. A total of three techniques were examined: two of them incorporate physical activity to create a high degree of realism in the movement (Armswinger, treadmill), while a gamepad served as the baseline. In a study with 18 participants, the precision of these three locomotion techniques was evaluated across six different obstacles on a VR course. The results show that for individual obstacles that either require a combination of forward and sideways movement (slalom, cliff) or target speed (rail), the treadmill enables significantly more precise control than the Armswinger. Across the course as a whole, however, no input device is significantly more precise than another. Using the treadmill also takes significantly more time than the gamepad or the Armswinger. Furthermore, the goal of reproducing a real walking motion 1:1 is still not achieved even with a treadmill, yet the movement is nevertheless perceived as intuitive and immersive.

  • Auda, Jonas; Pascher, Max; Schneegass, Stefan: Around the (Virtual) World - Infinite Walking in Virtual Reality Using Electrical Muscle Stimulation. In: ACM (Ed.): Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (CHI ’19). Glasgow, UK 2019. doi:10.1145/3290605.3300661

    Virtual worlds are infinite environments in which the user can move around freely. When shifting from controller-based movement to regular walking as an input, the limitations of the real world also limit the virtual world. Tackling this challenge, we propose the use of electrical muscle stimulation to limit the necessary real-world space and create an unlimited walking experience. We thereby actuate the users' legs in a way that they deviate from their straight route and thus walk in circles in the real world while still walking straight in the virtual world. We report on a study comparing this approach to vision shift, the state-of-the-art approach, as well as to a combination of both. The results show that particularly the combination of both approaches yields high potential to create an infinite walking experience.

  • Pascher, Max; Schneegass, Stefan; Gerken, Jens: SwipeBuddy. In: Lamas, David; Loizides, Fernando; Nacke, Lennart; Petrie, Helen; Winckler, Marco; Zaphiris, Panayiotis (Ed.): Human-Computer Interaction – INTERACT 2019. Springer International Publishing, Cham 2019, p. 568-571.

    Mobile devices are the core computing platform we use in our everyday life to communicate with friends, watch movies, or read books. For people with severe physical disabilities, such as tetraplegics, who cannot use their hands to operate such devices, these devices are barely usable. Tackling this challenge, we propose SwipeBuddy, a teleoperated robot allowing for touch interaction with a smartphone, tablet, or ebook-reader. The mobile device is mounted on top of the robot and can be teleoperated by a user through head motions and gestures that control a stylus simulating touch input. Further, the user can control the position and orientation of the mobile device. We demonstrate the SwipeBuddy robot device and its different interaction capabilities.

  • Arévalo-Arboleda, Stephanie; Pascher, Max; Gerken, Jens: Opportunities and Challenges in Mixed-Reality for an Inclusive Human-Robot Collaboration Environment. In: Proceedings of the 2018 International Workshop on Virtual, Augmented, and Mixed Reality for Human-Robot Interactions (VAM-HRI) as part of the ACM/IEEE Conference on Human-Robot Interaction. Chicago, USA 2018.

    This paper presents an approach to enhancing robot control using Mixed-Reality. It highlights the opportunities and challenges in interaction design for achieving a Human-Robot Collaboration environment. In fact, Human-Robot Collaboration is a perfect space for social inclusion: it enables people with severe physical impairments to interact with the environment by providing them with movement control of an external robotic arm. When discussing robot control, it is important to reduce the visual split that different input and output modalities introduce. Therefore, Mixed-Reality is of particular interest when trying to ease communication between humans and robotic systems.

  • Pascher, Max: Praxisbeispiel Digitalisierung konkret: Wenn der Stromzähler weiß, ob es Oma gut geht. Beschreibung des minimalinvasiven Frühwarnsystems „ZELIA“. In: Wege in die digitale Zukunft - Was bedeuten Smart Living, Big Data, Robotik & Co für die Sozialwirtschaft?, p. 137-148. Nomos Verlagsgesellschaft mbH & Co. KG.
  • Pascher, Max; Baumeister, Annalies; Klein, Barbara; Schneegass, Stefan; Gerken, Jens: Little Helper: A Multi-Robot System in Home Health Care Environments. In: Ecole Nationale de l'Aviation Civile [ENAC].

    Being able to live independently and self-determined in one's own home is a crucial factor for social participation. For people with severe physical impairments, such as tetraplegia, who cannot use their hands to manipulate materials or operate devices, life in their own home is only possible with assistance from others. The inability to operate buttons and other interfaces also results in not being able to utilize most assistive technologies on their own. In this paper, we present an ethnographic field study with 15 tetraplegics to better understand their living environments and needs. Results show the potential for robotic solutions but emphasize the need to support activities of daily living (ADL), such as grabbing and manipulating objects or opening doors. Based on this, we propose Little Helper, a teleoperated pack of robot drones collaborating in a divide-and-conquer paradigm to fulfill several tasks using a unique interaction method. The drones can be teleoperated by a user through gaze-based selection combined with head motions and gestures, manipulating materials and applications.

Memberships:

  • ACM
  • SIGCHI