Measuring the Effect of Task-Irrelevant Visuals in Augmented Reality
Augmented reality (AR) allows people to view digital information overlaid onto real-world objects. Although the technology is still new, it is already used in settings such as the military and industrial assembly, typically through head-mounted displays (HMDs), ocular devices worn over the eyes. HMDs keep AR information in the wearer's field of view regardless of where the head is positioned. Studies have shown that HMDs displaying information directly related to the immediate task can decrease cognitive workload and increase the speed and accuracy of task performance. However, task-irrelevant information has been shown to degrade the performance and accuracy of the primary task and to hinder the efficiency of processing the irrelevant information itself. This effect has been investigated in industrial settings but less so in everyday consumer contexts. This study proposes comparing two types of visual information (text and shapes) displayed in AR on an HMD to answer the following questions: 1) when content is of importance, which visual notification (text or shapes) is processed faster while degrading performance of the primary task the least? and 2) when presence is of importance, which visual notification (text or shapes) is processed faster while degrading performance of the primary task the least?