Listing by keyword "High-dimensional data"
- Journal article: Identification of Explainable Structures in Data with a Human-in-the-Loop (KI - Künstliche Intelligenz: Vol. 36, No. 0, 2022). Thrun, Michael C.
  Explainable AIs (XAIs) often do not provide relevant or understandable explanations for a domain-specific human-in-the-loop (HIL). In addition, internally used metrics have biases that might not match existing structures in the data. The habilitation thesis presents an alternative solution approach that derives explanations from high-dimensional structures in the data rather than from predetermined classifications. Detecting such density- or distance-based structures has so far entailed choosing appropriate algorithms and their parameters, which places a considerable number of complex decisions on the HIL. Central steps of the solution approach are a parameter-free methodology for the estimation and visualization of probability density functions (PDFs), followed by a hypothesis for selecting an appropriate distance metric independent of the data context, in combination with projection-based clustering (PBC). PBC allows for subsequent interactive identification of separable structures in the data. Hence, the HIL does not need deep knowledge of the underlying algorithms to identify structures in data. The complete data-driven XAI approach involving the HIL is based on a decision tree guided by distance-based structures in data (DSD). This data-driven XAI shows initial success in applications to multivariate time series and non-sequential high-dimensional data. It generates meaningful and relevant explanations that are evaluated by Grice's maxims. (A rough code sketch of this workflow follows the listing below.)
- Journal article: Slow Feature Analysis: Perspectives for Technical Applications of a Versatile Learning Algorithm (KI - Künstliche Intelligenz: Vol. 26, No. 4, 2012). Escalante-B., Alberto N.; Wiskott, Laurenz
  Slow Feature Analysis (SFA) is an unsupervised learning algorithm based on the slowness principle and was originally developed to learn invariances in a model of the primate visual system. Although developed for computational neuroscience, SFA has turned out to be a versatile algorithm for technical applications as well, since it can be used for feature extraction, dimensionality reduction, and invariance learning. With minor adaptations, SFA can also be applied to supervised learning problems such as classification and regression. In this work, we review several illustrative examples of possible applications, including the estimation of driving forces, nonlinear blind source separation, traffic sign recognition, and face processing. (A minimal SFA sketch follows the listing below.)
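For the first entry (Thrun 2022), the following Python sketch is a loose illustration of the described workflow only: estimate and visualize per-feature densities, project the high-dimensional data to two dimensions, and look for separable structures in the projection. It uses standard substitutes (gaussian_kde, PCA, DBSCAN) rather than the thesis's parameter-free PDF estimation, distance hypothesis, or projection-based clustering; all names and the toy data are assumptions for illustration.

```python
# Loose sketch of the workflow sketched in the abstract above.
# Standard substitutes (Gaussian KDE, PCA, DBSCAN) stand in for the
# thesis's parameter-free PDF estimation and projection-based clustering.
import numpy as np
from scipy.stats import gaussian_kde
from sklearn.decomposition import PCA
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(0)
# Toy high-dimensional data with two distance-based clusters.
X = np.vstack([rng.normal(0.0, 1.0, (100, 20)),
               rng.normal(4.0, 1.0, (100, 20))])

# Step 1: estimate a density per feature (substitute for the
# parameter-free estimation and visualization of PDFs).
densities = [gaussian_kde(X[:, j]) for j in range(X.shape[1])]
grid = np.linspace(X.min(), X.max(), 200)
pdf_matrix = np.array([d(grid) for d in densities])  # one PDF per feature

# Step 2: project to 2D and cluster the projection (a stand-in for
# interactive, projection-based identification of separable structures).
proj = PCA(n_components=2).fit_transform(X)
labels = DBSCAN(eps=1.0, min_samples=5).fit_predict(proj)
print("clusters found:", sorted(set(labels) - {-1}))
```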
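For the second entry (Escalante-B. and Wiskott 2012), here is a minimal linear SFA sketch in NumPy that follows the standard formulation of the slowness principle: whiten the signal, then take the directions in which the whitened signal changes most slowly, i.e. the smallest eigenvalues of the time-derivative covariance. The function name linear_sfa and the toy signals are assumptions for illustration, not code from the article.

```python
# Minimal linear Slow Feature Analysis (SFA): whiten the signal, then find
# the directions with the smallest variance of the discrete time derivative.
import numpy as np

def linear_sfa(x, n_features=1):
    """x: array of shape (T, D), a multivariate time series.
    Returns the n_features slowest output signals, shape (T, n_features)."""
    x = x - x.mean(axis=0)                     # center
    cov = np.cov(x, rowvar=False)
    eigval, eigvec = np.linalg.eigh(cov)
    whiten = eigvec @ np.diag(1.0 / np.sqrt(eigval)) @ eigvec.T
    z = x @ whiten                             # whitened signal, unit covariance
    dz = np.diff(z, axis=0)                    # discrete time derivative
    dcov = np.cov(dz, rowvar=False)
    _, deigvec = np.linalg.eigh(dcov)          # eigenvalues in ascending order
    w = deigvec[:, :n_features]                # slowest directions
    return z @ w

# Toy example: recover a slow sine hidden in a fast, mixed signal.
t = np.linspace(0, 2 * np.pi, 1000)
slow, fast = np.sin(t), np.sin(29 * t)
mixed = np.column_stack([slow + fast, fast - 0.5 * slow])
recovered = linear_sfa(mixed, n_features=1)
print("abs. correlation with slow source:",
      abs(np.corrcoef(recovered[:, 0], slow)[0, 1]))
```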