Title: The Role of Focus in Advanced Visual Interfaces
Authors: Orlosky, Jason; Toyama, Takumi; Sonntag, Daniel; Kiyokawa, Kiyoshi
Date issued: 2016
Date available: 2018-01-08
URI: https://dl.gi.de/handle/20.500.12116/11541
Type: Text/Journal Article
ISSN: 1610-1987
Keywords: Attentive interface; Eye tracking; Head mounted display; Mixed reality; Safety

Abstract: Developing more natural and intelligent interaction methods for head-mounted displays (HMDs) has been an important goal in augmented reality for many years. Recently, small-form-factor eye tracking interfaces and wearable displays have become small enough to be used simultaneously and for extended periods of time. In this paper, we describe the combination of monocular HMDs with an eye tracking interface and show how they can be used to automatically reduce interaction requirements for displays with both single and multiple focal planes. We then present the results of preliminary and primary experiments that test the accuracy of eye tracking for a number of different displays, such as Google Glass and Brother’s AiRScouter. Results show that our focal plane classification algorithm works with over 98% accuracy for classifying the correct distance of virtual objects in our multi-focal-plane display prototype and with over 90% accuracy for classifying physical and virtual objects in commercial monocular displays. Additionally, we describe a methodology for integrating our system into augmented reality applications and attentive interfaces.
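Note: This record names a focal plane classification algorithm driven by eye tracking but does not include its details. One common way to map gaze to a focal plane is vergence-based depth estimation: the angle between the two eyes' gaze rays determines the fixation distance, which is then matched to the nearest display plane. The Python sketch below is a minimal illustration of that general idea only; the plane depths in FOCAL_PLANES_M, the symmetric-vergence geometry, and the nearest-neighbor rule are assumptions of this sketch, not the authors' published method (which also covers commercial monocular displays such as Google Glass).

    import math

    # Hypothetical focal plane depths in meters, sorted nearest to farthest.
    # The prototype's actual plane depths are not given in this record.
    FOCAL_PLANES_M = [0.5, 1.0, 2.0]

    def vergence_depth(ipd_m, vergence_angle_rad):
        """Estimate fixation depth from the vergence angle between the two
        eyes' gaze rays, assuming symmetric convergence on the midline:
        depth = (ipd / 2) / tan(vergence_angle / 2)."""
        if vergence_angle_rad <= 0.0:
            return math.inf  # parallel or diverging gaze: optical infinity
        return (ipd_m / 2.0) / math.tan(vergence_angle_rad / 2.0)

    def classify_focal_plane(ipd_m, vergence_angle_rad):
        """Return the index of the plane closest to the estimated fixation
        depth (a nearest-neighbor classifier along the depth axis)."""
        depth = vergence_depth(ipd_m, vergence_angle_rad)
        if math.isinf(depth):
            return len(FOCAL_PLANES_M) - 1  # parallel gaze: farthest plane
        return min(range(len(FOCAL_PLANES_M)),
                   key=lambda i: abs(FOCAL_PLANES_M[i] - depth))

    # Example: with a 64 mm IPD, the vergence angle for a 1 m fixation is
    # 2 * atan(0.032 / 1.0), about 3.7 degrees, selecting the middle plane.
    if __name__ == "__main__":
        angle = 2.0 * math.atan(0.032 / 1.0)
        print(classify_focal_plane(0.064, angle))  # -> 1

For a monocular display, where vergence is unavailable from a single tracked eye, a similar nearest-plane rule would have to be driven by other gaze features; the paper's approach for that case is not described in this record.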