Listing by keyword "Compositionality"
1 - 2 of 2
- Conference paper: Compositional Analyses of Highly-Configurable Systems with Feature-Model Interfaces (Software Engineering 2017, 2017) Schröter, Reimar; Krieter, Sebastian; Thüm, Thomas; Benduhn, Fabian; Saake, Gunter. Today's software systems are often customizable by means of load-time or compile-time configuration options. These options are typically not independent, and their dependencies can be specified by means of feature models. As many industrial systems contain thousands of options, the maintenance and utilization of feature models are a challenge for all stakeholders. In the last two decades, numerous approaches have been presented to support stakeholders in analyzing feature models. Such analyses are commonly reduced to satisfiability problems, which suffer from the growing number of options. While first attempts have been made to decompose feature models into smaller parts, they still require composing all parts for analysis. We proposed the concept of a feature-model interface that consists only of a subset of features and hides all other features and dependencies. Based on a formalization of feature-model interfaces, we proved compositionality properties. We evaluated feature-model interfaces using a three-month history of an industrial feature model with 18,616 features. Our results indicate performance benefits, especially under evolution, as often only parts of the feature model need to be analyzed again. (A minimal sketch of the satisfiability encoding such analyses rely on follows the listing.)
- Journal article: Towards Strong AI (KI - Künstliche Intelligenz: Vol. 35, No. 1, 2021) Butz, Martin V. Strong AI—artificial intelligence that is in all respects at least as intelligent as humans—is still out of reach. Current AI lacks common sense, that is, it is not able to infer, understand, or explain the hidden processes, forces, and causes behind data. Mainstream machine learning research on deep artificial neural networks (ANNs) may even be characterized as behavioristic. In contrast, various sources of evidence from cognitive science suggest that human brains engage in the active development of compositional generative predictive models (CGPMs) from their self-generated sensorimotor experiences. Guided by evolutionarily shaped inductive learning and information-processing biases, they exhibit the tendency to organize the gathered experiences into event-predictive encodings. Meanwhile, they infer and optimize behavior and attention by means of both epistemic- and homeostasis-oriented drives. I argue that AI research should set a stronger focus on learning CGPMs of the hidden causes that lead to the registered observations. Endowed with suitable information-processing biases, an AI may develop that is able to explain the reality it is confronted with, reason about it, and find adaptive solutions, making it Strong AI. Seeing that such Strong AI can be equipped with a mental capacity and computational resources that exceed those of humans, the resulting system may have the potential to guide our knowledge, technology, and policies into sustainable directions. Clearly, though, Strong AI may also be used to manipulate us even more. Thus, it will be on us to put good, far-reaching, long-term, homeostasis-oriented purpose into these machines.
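The first entry reduces feature-model analyses to satisfiability problems. The following is a minimal, illustrative sketch of that idea, not the authors' implementation: a tiny, hypothetical feature model (the feature names "Base", "Gui", "Cli", "Logging" and all constraints are assumptions made for illustration) whose valid configurations are found by brute-force enumeration; real tools encode the model as a propositional formula and hand it to a SAT solver instead.

```python
from itertools import product

# Illustrative feature names; not taken from the paper.
FEATURES = ["Base", "Gui", "Cli", "Logging"]

def satisfies(cfg):
    """Return True if the configuration satisfies the hypothetical feature model."""
    if not cfg["Base"]:                    # Base is the mandatory root feature.
        return False
    if cfg["Gui"] == cfg["Cli"]:           # Gui and Cli are alternatives: exactly one is selected.
        return False
    if cfg["Logging"] and not cfg["Gui"]:  # Cross-tree constraint: Logging requires Gui.
        return False
    return True

def valid_configurations():
    """Enumerate all satisfying configurations (feasible only for tiny models)."""
    for values in product([False, True], repeat=len(FEATURES)):
        cfg = dict(zip(FEATURES, values))
        if satisfies(cfg):
            yield cfg

if __name__ == "__main__":
    # Typical feature-model analyses (void model, dead features, counting)
    # boil down to queries like this one.
    configs = list(valid_configurations())
    print(f"{len(configs)} valid configurations")
    for cfg in configs:
        print(sorted(f for f, sel in cfg.items() if sel))
```

A feature-model interface in the paper's sense would expose only a subset of these features (say, everything except the hypothetical Logging feature) and hide the rest, so that analyses of composed models can work against the smaller projection; the formal construction and its compositionality properties are given in the paper itself.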