Individuals with disabilities and persons operating in inaccessible environments can greatly benefit from the aid of robotic manipulators in performing activities of daily living (ADLs) and other remote tasks. Users who rely on robotic manipulators to interact with their environment are constrained by the limited sensory information available through traditional operator interfaces. These interfaces typically provide only visual access to the task and deprive users of the somatosensory feedback that would be available through direct contact. Multimodal sensory feedback can bridge these perceptual gaps effectively. Given a set of object properties (e.g., temperature, weight) to be conveyed and a set of available sensory modalities (e.g., visual, haptic), an effective interface design requires determining which modality should be assigned to each property. However, the effectiveness of assigning properties to modalities has varied with application and context. The goal of this study was to develop an effective multisensory interface for robot-assisted pouring tasks that delivers nuanced sensory feedback while accommodating the high visual demand of precise teleoperation. To that end, an optimization approach was employed to generate a combination of property-to-modality assignments that maximizes feedback perception and minimizes cognitive load. A set of screening experiments tested the twelve possible individual assignments from which the combination could be formed. The resulting perceptual accuracy, cognitive load, and user preference measures were used to construct a cost function. By formulating and solving the problem as a linear assignment problem, a minimum-cost combination was obtained. Results from experiments evaluating efficacy in practical pouring use cases indicate that the optimized design is significantly more effective than no feedback and offers a considerable advantage over an arbitrary design.
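To illustrate the kind of formulation the abstract describes, the following is a minimal sketch of solving a property-to-modality assignment as a linear assignment problem. The property and modality names, the cost weights, and all numeric values are hypothetical placeholders, not data or methods from the thesis; the sketch only shows how screening measures could be combined into a cost matrix and minimized with the Hungarian algorithm.

```python
# Illustrative sketch only (not the thesis implementation): assigning object
# properties to feedback modalities by minimizing a combined cost.
import numpy as np
from scipy.optimize import linear_sum_assignment

properties = ["temperature", "weight", "fill level"]            # rows
modalities = ["visual", "vibrotactile", "thermal", "auditory"]  # columns

# Hypothetical per-assignment measures from screening experiments,
# each shaped (len(properties), len(modalities)); higher error/load is worse,
# higher preference is better.
perceptual_error = np.array([[0.30, 0.25, 0.05, 0.20],
                             [0.15, 0.10, 0.40, 0.25],
                             [0.10, 0.20, 0.45, 0.15]])
cognitive_load   = np.array([[0.6, 0.4, 0.3, 0.5],
                             [0.7, 0.3, 0.6, 0.4],
                             [0.5, 0.5, 0.7, 0.3]])
preference_score = np.array([[0.4, 0.6, 0.9, 0.5],
                             [0.5, 0.8, 0.2, 0.6],
                             [0.8, 0.6, 0.1, 0.7]])

# Combine into a single cost matrix; the weights are assumed for illustration.
cost = 1.0 * perceptual_error + 0.5 * cognitive_load - 0.5 * preference_score

# Hungarian-algorithm solution: each property gets exactly one modality.
rows, cols = linear_sum_assignment(cost)
for r, c in zip(rows, cols):
    print(f"{properties[r]:12s} -> {modalities[c]:12s} (cost {cost[r, c]:+.2f})")
```

Under these assumed numbers, the solver would, for example, pair each property with the modality that yields the lowest combined cost while keeping the assignment one-to-one, which mirrors the "minimum-cost combination" described above.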
Advisor/Supervisor/Committee Chair: Bradley Duerstock
Advisor/Supervisor/Committee Co-chair: Juan Wachs
Additional Committee Member 2: Ramses Martinez