CONTROLLING COGNITIVE DEMANDS WITH SEMI-AUTONOMOUS SUCTION FRAMEWORK FOR ROBOTIC-ASSISTED SURGERY
Robotic-assisted minimally invasive surgery (RMIS) has been steadily increasing in adoption since its introduction in the early 2000s and has become a standard of care in multiple surgical specialties. In RMIS, the leading surgeon teleoperates a surgical robot from a console distant from the patient, while on the patient side at least one surgical assistant supports the procedure by handling surgical instruments. One of the most important tasks performed by the surgical assistant is blood suction and irrigation. This task is critical to maintaining a clear view of the surgical field and avoiding contamination and infection. When several tasks compete for the surgical assistant's attention, attending to blood suction means leaving other assistive tasks unattended, such as exchanging robotic instruments and handling sutures. An alternative approach to handling bleeding events is to have the leading surgeon teleoperate the suction tool; however, this diverts the surgeon's attention from the main task and increases their cognitive load.
This thesis describes a semi-autonomous suction assistant that relieves the leading surgeon of blood suction during a procedure, avoiding the cognitive demands associated with this task. At the heart of the system is a deep learning algorithm that segments and locates blood pools in the endoscopic camera images. From the segmented images, the system extracts navigational information to provide automatic suction, allowing the leading surgeon to focus exclusively on the main task. The system was integrated into a da Vinci Research Kit (dVRK) robot. Additionally, an augmented reality (AR) module and a real-time cognitive workload assessment module were developed to improve human-robot work dynamics. The AR module displayed semi-transparent annotations indicating the robot's next target location, allowing the user to better coordinate their actions with the surgical robot. The cognitive workload assessment module classified the user's mental state as either low or high cognitive workload; using this information, the robotic assistant provided suction during periods of high mental demand.
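The step from segmented images to navigational information can be sketched as follows. This is an illustrative sketch only, not the implementation developed in the thesis: the function name is hypothetical, and the use of SciPy's connected-component labeling to pick the largest blood pool and its centroid as the next suction target is an assumption.

```python
import numpy as np
from scipy import ndimage

def next_suction_target(mask: np.ndarray):
    """Given a binary blood-segmentation mask (H x W, 1 = blood),
    return the pixel centroid (row, col) of the largest blood pool,
    or None if no blood is detected."""
    # Label connected blood regions in the mask.
    labels, n_regions = ndimage.label(mask)
    if n_regions == 0:
        return None
    # Pixel area of each labeled region (label 0 is background).
    areas = ndimage.sum(mask, labels, index=range(1, n_regions + 1))
    largest = int(np.argmax(areas)) + 1
    # Centroid of the largest pool, in image coordinates.
    return ndimage.center_of_mass(mask, labels, largest)
```

In a full system, the returned image-space centroid would still have to be mapped into the robot's workspace (e.g., via camera calibration and depth estimation) before commanding the suction tool.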
To evaluate the proposed framework, a computational experiment and two user studies were conducted. The computational experiment assessed the prediction performance of the proposed cognitive workload detection system under two modalities: single-user models and multi-user models. In the single-user modality, an average classification accuracy of $76\%$ was achieved, demonstrating that EEG and eye-tracker features can be used effectively to predict cognitive states in RMIS procedures. The first user study evaluated the capability of the autonomous framework to improve the user's surgical performance by comparing the autonomous system against manual teleoperation of the suction tool. Its main finding was a reduction in completion time and in reported workload demands when using the autonomous system. The second study evaluated the integration of the autonomous system with the cognitive workload framework: in this setting, the robotic assistant acted only when the user's mental state was classified as high cognitive workload. The autonomous system was again assessed against manual teleoperation of the suction tool, and the main results show a reduction in completion time and improved human-robot collaboration fluency. Overall, the experimental results show how objective, real-time assessment of cognitive load can be combined with surgical autonomy to enhance RMIS surgical outcomes.
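The single-user workload classification described above can be illustrated with a minimal sketch. This is not the thesis model: the synthetic features stand in for the EEG and eye-tracker measurements (e.g., band powers, pupil diameter), their distributions are invented for the example, and the choice of a standardized logistic-regression pipeline is an assumption.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Synthetic stand-ins for physiological feature windows:
# 4 features per window, two well-separated classes.
n = 400
low = rng.normal(loc=0.0, scale=1.0, size=(n, 4))   # low-workload windows
high = rng.normal(loc=2.0, scale=1.0, size=(n, 4))  # high-workload windows
X = np.vstack([low, high])
y = np.concatenate([np.zeros(n), np.ones(n)])        # 0 = low, 1 = high

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y
)

# Standardize features, then fit a binary classifier.
clf = make_pipeline(StandardScaler(), LogisticRegression())
clf.fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
```

In the framework described above, the resulting low/high prediction would gate the robotic assistant, triggering autonomous suction only during windows classified as high workload.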