
The Role of Focus in Advanced Visual Interfaces

dc.contributor.author: Orlosky, Jason
dc.contributor.author: Toyama, Takumi
dc.contributor.author: Sonntag, Daniel
dc.contributor.author: Kiyokawa, Kiyoshi
dc.date.accessioned: 2018-01-08T09:23:21Z
dc.date.available: 2018-01-08T09:23:21Z
dc.date.issued: 2016
dc.description.abstract: Developing more natural and intelligent interaction methods for head mounted displays (HMDs) has been an important goal in augmented reality for many years. Recently, small form factor eye tracking interfaces and wearable displays have become small enough to be used simultaneously and for extended periods of time. In this paper, we describe the combination of monocular HMDs and an eye tracking interface and show how they can be used to automatically reduce interaction requirements for displays with both single and multiple focal planes. We then present the results of preliminary and primary experiments which test the accuracy of eye tracking for a number of different displays such as Google Glass and Brother’s AiRScouter. Results show that our focal plane classification algorithm works with over 98 % accuracy for classifying the correct distance of virtual objects in our multi-focal plane display prototype and with over 90 % accuracy for classifying physical and virtual objects in commercial monocular displays. Additionally, we describe methodology for integrating our system into augmented reality applications and attentive interfaces.
dc.identifier.pissn: 1610-1987
dc.identifier.uri: https://dl.gi.de/handle/20.500.12116/11541
dc.publisher: Springer
dc.relation.ispartof: KI - Künstliche Intelligenz: Vol. 30, No. 0
dc.relation.ispartofseries: KI - Künstliche Intelligenz
dc.subject: Attentive interface
dc.subject: Eye tracking
dc.subject: Head mounted display
dc.subject: Mixed reality
dc.subject: Safety
dc.title: The Role of Focus in Advanced Visual Interfaces
dc.type: Text/Journal Article
gi.citation.endPage: 310
gi.citation.startPage: 301