
Künstliche Intelligenz 35(1) - March 2021


Listed by:

Most recent publications

1 - 10 of 11
  • Journal article
    Prediction Error-Driven Memory Consolidation for Continual Learning: On the Case of Adaptive Greenhouse Models
    (KI - Künstliche Intelligenz: Vol. 35, No. 1, 2021) Schillaci, Guido; Schmidt, Uwe; Miranda, Luis
    This work presents an adaptive architecture that performs online learning and addresses catastrophic forgetting by means of an episodic memory system and prediction-error-driven memory consolidation. In line with evidence from brain sciences, memories are retained depending on their congruence with the prior knowledge stored in the system. In this work, congruence is estimated in terms of the prediction error produced by a deep neural model. The proposed AI system is transferred onto an innovative application in the horticulture industry: the learning and transfer of greenhouse models. This work presents models trained on data recorded at research facilities and transferred to a production greenhouse.
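    A minimal sketch of prediction-error-gated episodic memory consolidation is given below. It is not the authors' implementation; the forward-model interface, the memory capacity, and the rule that more surprising samples displace less surprising ones are assumptions made for illustration.

      # Episodic memory whose retention decision is driven by prediction error.
      import numpy as np

      class EpisodicMemory:
          def __init__(self, capacity=500):
              self.capacity = capacity
              self.samples = []   # stored (input, target) pairs
              self.errors = []    # prediction error recorded when each pair was stored

          def consider(self, x, y, model):
              """Keep a sample depending on how surprising it is to the current model."""
              err = float(np.mean((model.predict(x) - y) ** 2))
              if len(self.samples) < self.capacity:
                  self.samples.append((x, y)); self.errors.append(err)
                  return
              i = int(np.argmin(self.errors))      # least surprising stored sample
              if err > self.errors[i]:             # new sample is more surprising: consolidate it
                  self.samples[i] = (x, y); self.errors[i] = err

          def replay(self, size=32):
              idx = np.random.choice(len(self.samples), min(size, len(self.samples)), replace=False)
              return [self.samples[i] for i in idx]  # rehearsal batch against forgetting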
  • Journal article
    Assessing the Attitude Towards Artificial Intelligence: Introduction of a Short Measure in German, Chinese, and English Language
    (KI - Künstliche Intelligenz: Vol. 35, No. 1, 2021) Sindermann, Cornelia; Sha, Peng; Zhou, Min; Wernicke, Jennifer; Schmitt, Helena S.; Li, Mei; Sariyska, Rayna; Stavrou, Maria; Becker, Benjamin; Montag, Christian
    In the context of (digital) human–machine interaction, people are increasingly dealing with artificial intelligence in everyday life. Some embrace these technological advances with a positive attitude; others are particularly sceptical and claim to foresee substantial problems arising from such uses of technology. The aim of the present study was to introduce a short measure to assess the Attitude Towards Artificial Intelligence (ATAI scale) in the German, Chinese, and English languages. Participants from Germany (N = 461; 345 females), China (N = 413; 145 females), and the UK (N = 84; 65 females) completed the ATAI scale, for which the factorial structure was tested and compared between the samples. Participants from Germany and China were additionally asked about their willingness to interact with/use self-driving cars, Siri, Alexa, the social robot Pepper, and the humanoid robot Erica, which are representatives of popular artificial intelligence products. The results showed that the five-item ATAI scale comprises two negatively associated factors assessing (1) acceptance and (2) fear of artificial intelligence. The factor structure was found to be similar across the German, Chinese, and UK samples. Additionally, the ATAI scale was validated: the items on the willingness to use specific artificial intelligence products were positively associated with the ATAI Acceptance scale and negatively with the ATAI Fear scale, in both the German and Chinese samples. In conclusion, we introduce a short, reliable, and valid measure of the attitude towards artificial intelligence in the German, Chinese, and English languages.
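    The scoring and validation logic described above can be illustrated with a short sketch; the item-to-factor assignment, the 11-point response scale, and the placeholder data are hypothetical, and the article defines the actual five items.

      # Score two ATAI-style subscales and check their correlation with willingness ratings.
      import numpy as np

      def score_scale(responses, acceptance_items, fear_items):
          """responses: (n_participants, 5) array of item ratings."""
          r = np.asarray(responses, dtype=float)
          return r[:, acceptance_items].mean(axis=1), r[:, fear_items].mean(axis=1)

      def pearson(a, b):
          return float(np.corrcoef(np.asarray(a, float), np.asarray(b, float))[0, 1])

      # Validation pattern from the abstract: willingness to use AI products should
      # correlate positively with acceptance and negatively with fear.
      responses = np.random.randint(0, 11, size=(200, 5))      # placeholder data
      acceptance, fear = score_scale(responses, [0, 1], [2, 3, 4])
      willingness = np.random.randint(0, 11, size=200)          # placeholder ratings
      print(pearson(willingness, acceptance), pearson(willingness, fear))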
  • Journal article
    Intelligent Behavior Depends on the Ecological Niche
    (KI - Künstliche Intelligenz: Vol. 35, No. 1, 2021) Eppe, Manfred; Oudeyer, Pierre-Yves
  • Journal article
    Towards Strong AI
    (KI - Künstliche Intelligenz: Vol. 35, No. 1, 2021) Butz, Martin V.
    Strong AI—artificial intelligence that is in all respects at least as intelligent as humans—is still out of reach. Current AI lacks common sense, that is, it is not able to infer, understand, or explain the hidden processes, forces, and causes behind data. Mainstream machine learning research on deep artificial neural networks (ANNs) may even be characterized as behavioristic. In contrast, various sources of evidence from cognitive science suggest that human brains engage in the active development of compositional generative predictive models (CGPMs) from their self-generated sensorimotor experiences. Guided by evolutionarily shaped inductive learning and information-processing biases, they exhibit the tendency to organize the gathered experiences into event-predictive encodings. Meanwhile, they infer and optimize behavior and attention by means of both epistemic- and homeostasis-oriented drives. I argue that AI research should set a stronger focus on learning CGPMs of the hidden causes that lead to the registered observations. Endowed with suitable information-processing biases, AI may develop that is able to explain the reality it is confronted with, reason about it, and find adaptive solutions, making it Strong AI. Seeing that such Strong AI can be equipped with a mental capacity and computational resources that exceed those of humans, the resulting system may have the potential to guide our knowledge, technology, and policies into sustainable directions. Clearly, though, Strong AI may also be used to manipulate us even more. Thus, it will be on us to put good, far-reaching, long-term, homeostasis-oriented purpose into these machines.
  • Journal article
    Developmental Robotics and its Role Towards Artificial General Intelligence
    (KI - Künstliche Intelligenz: Vol. 35, No. 1, 2021) Eppe, Manfred; Wermter, Stefan; Hafner, Verena V.; Nagai, Yukie
  • Journal article
    Sensorimotor Representation Learning for an “Active Self” in Robots: A Model Survey
    (KI - Künstliche Intelligenz: Vol. 35, No. 1, 2021) Nguyen, Phuong D. H.; Georgie, Yasmin Kim; Kayhan, Ezgi; Eppe, Manfred; Hafner, Verena Vanessa; Wermter, Stefan
    Safe human-robot interaction requires robots to learn how to behave appropriately in spaces populated by people and thus to cope with the challenges posed by our dynamic and unstructured environment, rather than being provided with a rigid set of rules for operation. In humans, these capabilities are thought to be related to our ability to perceive our body in space, to sense the location of our limbs during movement, to be aware of other objects and agents, and to control our body parts in order to interact with them intentionally. Toward the next generation of robots with bio-inspired capacities, in this paper we first review the developmental processes of the underlying mechanisms of these abilities: the sensory representations of body schema, peripersonal space, and the active self in humans. Second, we provide a survey of robotics models of these sensory representations and of robotics models of the self, and we compare these models with their human counterparts. Finally, we analyze what is missing from these robotics models and propose a theoretical computational framework that aims to allow the emergence of the sense of self in artificial agents by developing sensory representations through self-exploration.
  • Journal article
    Robots Learn Increasingly Complex Tasks with Intrinsic Motivation and Automatic Curriculum Learning
    (KI - Künstliche Intelligenz: Vol. 35, No. 1, 2021) Nguyen, Sao Mai; Duminy, Nicolas; Manoury, Alexandre; Duhaut, Dominique; Buche, Cedric
    Multi-task learning by robots poses the challenge of domain knowledge: the complexity of the tasks, the complexity of the required actions, and the relationships between tasks for transfer learning. We demonstrate that this domain knowledge can be learned to address the challenges of life-long learning. Specifically, the hierarchy between tasks of various complexities is key to inferring a curriculum from simple to composite tasks. We propose a framework for robots to learn sequences of actions of unbounded complexity in order to achieve multiple control tasks of varying complexity. Our hierarchical reinforcement learning framework, named SGIM-SAHT, offers a new direction of research and tries to unify partial implementations on robot arms and mobile robots. We outline our contributions to enable robots to map multiple control tasks to sequences of actions: representations of task dependencies, intrinsically motivated exploration to learn task hierarchies, and active imitation learning. While learning the hierarchy of tasks, the robot infers its curriculum by deciding which tasks to explore first, how to transfer knowledge, and when, how, and whom to imitate.
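    One common way to realize such an intrinsically motivated curriculum is to prefer the task whose competence is currently improving fastest. The sketch below illustrates that idea only and is not the SGIM-SAHT algorithm; the window size and the epsilon-greedy choice are assumptions.

      # Learning-progress-based task selection for an automatic curriculum.
      import random
      from collections import defaultdict, deque

      class CurriculumSelector:
          def __init__(self, tasks, window=20, epsilon=0.2):
              self.tasks = list(tasks)
              self.epsilon = epsilon
              self.history = defaultdict(lambda: deque(maxlen=window))  # recent competence per task

          def record(self, task, competence):
              self.history[task].append(float(competence))

          def learning_progress(self, task):
              h = list(self.history[task])
              if len(h) < 4:
                  return float("inf")        # sample rarely tried tasks first
              half = len(h) // 2
              return abs(sum(h[half:]) / (len(h) - half) - sum(h[:half]) / half)

          def select(self):
              if random.random() < self.epsilon:
                  return random.choice(self.tasks)                # keep exploring
              return max(self.tasks, key=self.learning_progress)  # fastest recent progress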
  • Journal article
    Goal-Directed Exploration for Learning Vowels and Syllables: A Computational Model of Speech Acquisition
    (KI - Künstliche Intelligenz: Vol. 35, No. 1, 2021) Philippsen, Anja
    Infants learn to speak rapidly during their first years of life, gradually progressing from simple vowel-like sounds to larger consonant-vowel complexes. Learning to control their vocal tract in order to produce meaningful speech sounds is a complex process which requires learning the relationship between motor and sensory processes. In this paper, a computational framework is proposed that models the problem of learning articulatory control for a physiologically plausible 3-D vocal tract model using a developmentally inspired approach. The system babbles and explores efficiently in a low-dimensional space of goals that are relevant to the learner in its synthetic environment. The learning process is goal-directed and self-organized, and yields an inverse model of the mapping between sensory space and motor commands. This study provides a unified framework that can be used for learning static as well as dynamic motor representations. The successful learning of vowel and syllable sounds as well as the benefit of active and adaptive learning strategies are demonstrated. Categorical perception is found in the acquired models, suggesting that the framework has the potential to replicate phenomena of human speech acquisition.
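    A compact goal-babbling loop conveys the core of such goal-directed exploration; the forward function below merely stands in for the article's 3-D vocal tract model, and the nearest-neighbour inverse model and noise level are assumptions.

      # Goal-directed exploration: sample goals, act via the current inverse model, learn from outcomes.
      import numpy as np

      def forward(m):
          # placeholder articulatory-to-acoustic mapping (assumption): 4-D motor command -> 2-D outcome
          return np.array([np.tanh(m[0] + 0.5 * m[1]), np.tanh(m[2] - 0.5 * m[3])])

      rng = np.random.default_rng(0)
      motor_memory, outcome_memory = [], []

      for step in range(2000):
          goal = rng.uniform(-1, 1, size=2)          # goal in the low-dimensional goal space
          if outcome_memory:
              # nearest-neighbour inverse model: reuse the command whose outcome was closest to the goal
              d = np.linalg.norm(np.array(outcome_memory) - goal, axis=1)
              m = motor_memory[int(np.argmin(d))] + rng.normal(0.0, 0.1, size=4)  # local exploration noise
          else:
              m = rng.uniform(-1, 1, size=4)         # random motor babbling to bootstrap
          outcome = forward(m)
          motor_memory.append(m); outcome_memory.append(outcome)   # refine the inverse model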
  • Journal article
    Unintended Nuclear War
    (KI - Künstliche Intelligenz: Vol. 35, No. 1, 2021) Bläsius, Karl-Hans; Siekmann, Jörg
    With this article, we want to use 22 January 2021, the day on which the “Treaty on the Prohibition of Nuclear Weapons” (TPNW) enters into force, as an opportunity to honor the treaty.
  • Journal article
    Build Back Better with Responsible AI
    (KI - Künstliche Intelligenz: Vol. 35, No. 1, 2021) Kunze, Lars