Listing of KI - German Conference on Artificial Intelligence, by title
Entries 1 - 10 of 13
- Adapting Natural Language Processing Strategies for Stock Price Prediction (DC@KI2023: Proceedings of Doctoral Consortium at KI 2023, 2023), Frederic Voigt. Due to the parallels between Natural Language Processing (NLP) and stock price prediction (SPP) as a time series problem, an attempt is made to interpret SPP as an NLP problem. Word vector representations, pre-trained language models, advanced recurrent neural networks, unsupervised learning methods, and multimodal methods are introduced as adaptable techniques, and it is outlined how they can be transferred to the stock prediction domain. (An illustrative sketch follows this listing.)
- Automatic German Easy Language (Leichte Sprache) Simplification: Data, Requirements and Approaches (DC@KI2023: Proceedings of Doctoral Consortium at KI 2023, 2023), Thorben Schomacker. With the rise of the internet, it has become convenient and often free to access an abundance of texts. However, not everyone who has access can actually read and understand these texts, even though they speak the language the texts are written in. Most often, this problem originates in the overly complex nature of the texts. Text simplification can help to overcome this barrier. In my dissertation, I want to focus specifically on Leichte Sprache (German Easy Language), a simplified variety of German that is tailored to the needs of people with cognitive disabilities. (An illustrative sketch follows this listing.)
- Continuous Image Classification on Data Streams using Contrastive Learning and Cluster Analysis (DC@KI2023: Proceedings of Doctoral Consortium at KI 2023, 2023), Andreas Schliebitz. This PhD proposal presents a concept for an AI-based computer vision system trained on image data streams for real-time classification of unlabeled objects. The required AI models are trained through an underexplored combination of self-supervised and incremental learning. Emphasis will be placed on contrastive learning using the SimCLR framework and its successors. The need for such a system is motivated by the observation that supervised learning approaches often require large labeled datasets. The labeling process, which is usually performed manually, is not only time-consuming but also inherently prone to errors. For sufficiently large image data streams, timely labeling of samples becomes impossible, leading to sporadic data annotation cycles and the capture of only temporarily representative features. Such an approach might also render the resulting classifier vulnerable to domain shift and concept drift. The image data stream used in this proposal consists of unlabeled color images of clean potatoes, which are to be sorted into several defect classes by a self-supervised classifier. Contrastive transfer learning is performed on this image data stream for the selection of a feature extractor. In this approach, different pre-trained backbone architectures are adapted and evaluated using the SimCLR framework. The classifiers are evaluated based on their generated feature vectors using cluster analysis. This involves searching for novel evaluation methods that do not require labels and are more suitable for judging model performance than existing methods. Furthermore, by clustering the feature vectors, an automatic and adaptive classification might be achievable without the use of labels. In a subsequent step, the self-supervised classifiers are continuously improved using incremental learning methods. For this, the models are incrementally trained on the image data stream over a longer period of time. Potential adjustments to the data stream could increase the classifier's accuracy as well as make it more robust to domain adaptation problems. A final validation of the incrementally self-learning classification system can be performed with smaller, manually annotated datasets. (An illustrative sketch follows this listing.)
- Evaluating Dangerous Capabilities of Large Language Models: An Examination of Situational Awareness (DC@KI2023: Proceedings of Doctoral Consortium at KI 2023, 2023), Dipendra Yadav. The focal point of this research proposal pertains to a thorough examination of the inherent risks and potential challenges associated with the use of Large Language Models (LLMs). Emphasis has been laid on the facet of situational awareness, an attribute signifying a model's understanding of its environment, its own state, and the implications of its actions. The proposed research aims to design a robust and reliable metric system and a methodology to gauge situational awareness, followed by an in-depth analysis of major LLMs using this developed metric. The intention is to pinpoint any latent hazards and suggest effective strategies to mitigate these issues, with the ultimate goal of promoting the responsible and secure advancement of artificial intelligence technologies.
- Exploring Adversarial Transferability in Real-World Scenarios: Understanding and Mitigating Security Risks (DC@KI2023: Proceedings of Doctoral Consortium at KI 2023, 2023), Abhishek Shrestha. Deep Neural Networks (DNNs) are known to be vulnerable to artificially generated samples known as adversarial examples. Such adversarial samples aim to cause misclassifications by optimizing the input data for a specifically matched perturbation. Interestingly, these adversarial examples are transferable from the source network where they were created to a black-box target network. The transferability property means that attackers no longer require white-box access to models, nor are they bound to query the target model repeatedly to craft an effective attack. Given the rising popularity of DNNs in various domains, it is crucial to understand the vulnerability of these networks to such attacks. On this premise, the thesis intends to study transferability under a more realistic scenario, where source and target models can differ in various aspects such as accuracy, capacity, bitwidth, and architecture, among others. Furthermore, the goal is also to investigate defensive strategies that can be utilized to minimize the effectiveness of these attacks. (An illustrative sketch follows this listing.)
- Learning the Generation of Balanced Game Levels (DC@KI2023: Proceedings of Doctoral Consortium at KI 2023, 2023), Florian Rupp. Games, including board games and video games, are widely popular and serve as a platform for entertainment while also challenging various cognitive abilities. A game, especially in a competitive multiplayer setting, needs to be balanced in order to provide an enjoyable experience for all players. The balancing process for such game levels, however, requires a lot of work and manual testing. To address this shortcoming, this thesis aims to implement methods to automatically generate balanced game levels. To this end, four research questions with ideas for problem-solving approaches are presented: (1) the development and evaluation of metrics to measure the balancing state of a game, (2) research on methods for learning the procedural generation of balanced levels, (3) the balancing of levels for players with different strategies, and (4) an examination of how the findings can be applied to other research areas. Methods from the field of procedural content generation, especially in combination with machine learning, are promising for answering these questions. In a first paper, I already introduced a reinforcement-learning-based approach to create balanced levels for two players. (An illustrative sketch follows this listing.)
- Lightweight Federated Learning Based Detection of Malicious Activity in Distributed Networks (DC@KI2023: Proceedings of Doctoral Consortium at KI 2023, 2023), Kai Hendrik Wöhnert. In an increasingly complex cyber threat landscape, traditional malware detection methods often fall short, particularly within resource-limited distributed networks like smart grids. This research project aims to develop an efficient malware detection system for such distributed networks, focusing on three elements: feature extraction, feature selection, and classification. For classification, a lightweight and accurate machine-learning model needs to be developed. (An illustrative sketch follows this listing.)
- Proceedings of Doctoral Consortium at KI 2023 – Preface (DC@KI2023: Proceedings of Doctoral Consortium at KI 2023, 2023), Frieder Stolzenburg.
- Self-Supervised Learning of Speech Representation via Redundancy Reduction (DC@KI2023: Proceedings of Doctoral Consortium at KI 2023, 2023), Yusuf Brima. Our proposed research aims to contribute to the field of self-supervised learning (SSL) for speech processing by developing representations that effectively capture latent speaker statistics. A comprehensive evaluation on various downstream tasks will provide a thorough assessment of the representations' suitability and performance. The outcomes of this research will advance our understanding and utilization of SSL in speech representation learning, ultimately enhancing speaker-related applications and their practical implications. (An illustrative sketch follows this listing.)
- Students' Acceptance of Explainable, AI-based Learning Path Recommendations in an Adaptive Learning System (DC@KI2023: Proceedings of Doctoral Consortium at KI 2023, 2023), Marc Normann. The field of AI often provides opaque methods and algorithms that are based on personal data. Challenges could therefore arise regarding acceptance and trust within an adaptive learning environment that provides learning recommendations based on students' interaction with learning content. To prevent users from rejecting such systems and to ensure fair usage in the educational sector, this research project aims at identifying the most important factors and conditions for the acceptance of adaptive learning systems. The contribution of the PhD project will be at the intersection of engineering and human-machine interfaces, with a focus on social science. A particular focus is on the trustworthy and responsible handling of educational AI systems. The role of explainable AI methods in relation to trust and acceptance will be studied, and consequent changes in students' motivation could be examined.
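
The following minimal sketches illustrate techniques named in several of the abstracts above. Each is a hedged example under stated assumptions, not the respective author's implementation.

For the entry by Voigt on adapting NLP strategies to stock price prediction, this sketch shows one possible transfer the abstract hints at: treating a price series like a token sequence and feeding learned embeddings into a recurrent network. The discretization into movement "tokens", the window length, and all hyperparameters are illustrative assumptions.

```python
# Sketch: interpret a price series as a "sentence" of discrete movement tokens
# (down / flat / up) and predict the next movement with an embedding + LSTM,
# mirroring a word-embedding + RNN pipeline from NLP. Purely illustrative.
import torch
import torch.nn as nn

class MovementLSTM(nn.Module):
    def __init__(self, vocab_size=3, embed_dim=16, hidden_dim=32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)   # "word vectors" for movements
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, vocab_size)      # logits for the next movement

    def forward(self, tokens):                  # tokens: (batch, seq_len), int64
        h, _ = self.lstm(self.embed(tokens))
        return self.head(h[:, -1])

def to_tokens(prices, flat_band=1e-3):
    """Discretize returns into tokens: 0 = down, 1 = flat, 2 = up."""
    returns = (prices[1:] - prices[:-1]) / prices[:-1]
    return torch.where(returns > flat_band, 2,
           torch.where(returns < -flat_band, 0, 1))

prices = torch.cumsum(torch.randn(65), dim=0) + 100.0   # synthetic price series
tokens = to_tokens(prices).unsqueeze(0)                  # shape (1, 64)
model = MovementLSTM()
logits = model(tokens[:, :-1])                           # predict the final movement
loss = nn.functional.cross_entropy(logits, tokens[:, -1])
loss.backward()
```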
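For the entry by Schomacker on automatic Leichte Sprache simplification, the sketch below frames simplification as sequence-to-sequence generation with the Hugging Face transformers library. The model name is a hypothetical placeholder, and the example does not reflect the dissertation's actual data, requirements, or approaches.

```python
# Sketch: text simplification as text-to-text generation with a pre-trained
# seq2seq model. "example-org/german-easy-language" is a hypothetical
# placeholder checkpoint, not a real or endorsed model.
from transformers import pipeline

simplifier = pipeline(
    "text2text-generation",
    model="example-org/german-easy-language",  # placeholder for a German simplification model
)

complex_sentence = (
    "Die Inanspruchnahme der Leistung setzt die fristgerechte Einreichung "
    "des vollständig ausgefüllten Antragsformulars voraus."
)
simplified = simplifier(complex_sentence, max_new_tokens=64)
print(simplified[0]["generated_text"])
```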
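For the entry by Schliebitz on contrastive learning and cluster analysis, this sketch illustrates the label-free evaluation idea: feature vectors from a backbone are clustered and judged with an internal criterion such as the silhouette score. The backbone (here randomly initialized rather than SimCLR-pretrained), the cluster count, and the use of the silhouette score are assumptions for illustration only.

```python
# Sketch: cluster backbone feature vectors and evaluate them without labels
# using an internal criterion (silhouette score). Backbone, cluster count,
# and the random input batch are illustrative stand-ins.
import torch
import torchvision
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

backbone = torchvision.models.resnet18(weights=None)  # stand-in for a SimCLR-pretrained encoder
backbone.fc = torch.nn.Identity()                     # expose the 512-d feature vector
backbone.eval()

images = torch.rand(32, 3, 224, 224)                  # stand-in batch from the image stream
with torch.no_grad():
    features = backbone(images).numpy()               # shape (32, 512)

kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(features)
score = silhouette_score(features, kmeans.labels_)    # in [-1, 1]; higher = better-separated clusters
print(f"silhouette score: {score:.3f}")
```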
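For the entry by Shrestha on adversarial transferability, the sketch below crafts an FGSM perturbation on a white-box source model and checks whether it also fools a target model of a different architecture. FGSM, the two (untrained) architectures, and the epsilon value are illustrative choices; the thesis considers a broader range of attacks and model differences.

```python
# Sketch: craft an FGSM adversarial example on a white-box source model and
# test whether it transfers to a black-box target model with a different
# architecture. Models, input, and epsilon are illustrative assumptions.
import torch
import torchvision

source = torchvision.models.resnet18(weights=None).eval()      # white-box source
target = torchvision.models.mobilenet_v2(weights=None).eval()  # black-box target

x = torch.rand(1, 3, 224, 224, requires_grad=True)  # stand-in input image
y = torch.tensor([7])                                # stand-in ground-truth label
eps = 8 / 255                                        # perturbation budget

# FGSM: a single signed-gradient step on the source model's loss.
loss = torch.nn.functional.cross_entropy(source(x), y)
loss.backward()
x_adv = (x + eps * x.grad.sign()).clamp(0, 1).detach()

with torch.no_grad():
    pred_source = source(x_adv).argmax(dim=1)
    pred_target = target(x_adv).argmax(dim=1)
print("fools source model:", (pred_source != y).item())
print("transfers to target model:", (pred_target != y).item())
```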
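For the entry by Rupp on balanced game levels, this sketch gives a toy instance of research question (1): a balance metric defined as the deviation of simulated win rates from an even split. The coin-flip "simulation" and the deviation-based metric are illustrative assumptions standing in for real agent playthroughs.

```python
# Sketch: a toy balance metric for a two-player level, defined as how far the
# simulated win rate deviates from a 50/50 split (0.0 = perfectly balanced,
# 1.0 = one side always wins). The random match is a placeholder for real
# agent playthroughs on a generated level.
import random

def simulate_match(level_bias: float) -> int:
    """Return the winner (0 or 1); level_bias > 0.5 favors player 0."""
    return 0 if random.random() < level_bias else 1

def balance_metric(level_bias: float, n_matches: int = 1000) -> float:
    wins_p0 = sum(simulate_match(level_bias) == 0 for _ in range(n_matches))
    win_rate_p0 = wins_p0 / n_matches
    return abs(win_rate_p0 - 0.5) * 2          # rescale deviation to [0, 1]

for bias in (0.5, 0.6, 0.9):
    print(f"level bias {bias}: imbalance = {balance_metric(bias):.2f}")
```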
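For the entry by Wöhnert on lightweight federated malware detection, the sketch below shows federated averaging (FedAvg) of lightweight local classifiers: each node fits a model on its own features and only model parameters are aggregated, never raw data. Logistic regression, the synthetic features, and equal client weighting are assumptions made for illustration.

```python
# Sketch: FedAvg over lightweight local classifiers. Each client fits a
# logistic-regression model on its own (synthetic) features; the server
# averages coefficients instead of collecting raw data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def local_update(n_samples=200, n_features=10):
    """One client's round: train locally, return only the model parameters."""
    X = rng.normal(size=(n_samples, n_features))
    y = (X[:, 0] + 0.1 * rng.normal(size=n_samples) > 0).astype(int)  # toy "malicious" label
    clf = LogisticRegression(max_iter=200).fit(X, y)
    return clf.coef_, clf.intercept_

# Server-side aggregation: plain, equally weighted FedAvg.
client_updates = [local_update() for _ in range(5)]
global_coef = np.mean([coef for coef, _ in client_updates], axis=0)
global_intercept = np.mean([b for _, b in client_updates], axis=0)

def global_predict(X):
    """Classify with the aggregated global model."""
    return (X @ global_coef.T + global_intercept > 0).astype(int).ravel()

X_test = rng.normal(size=(5, 10))
print(global_predict(X_test))
```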
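For the entry by Brima on speech representation learning via redundancy reduction, the sketch below shows a Barlow Twins-style objective: the cross-correlation matrix between embeddings of two augmented views is pushed toward the identity, keeping dimensions informative (diagonal) while reducing redundancy (off-diagonal). The mapping of "redundancy reduction" to a Barlow Twins-style loss, as well as the encoder and the synthetic "views", are my assumptions.

```python
# Sketch: Barlow Twins-style redundancy-reduction loss. Embeddings of two
# augmented views of the same utterance are standardized, their cross-
# correlation matrix is computed, and it is driven toward the identity.
# Encoder and "views" are illustrative stand-ins for a speech pipeline.
import torch
import torch.nn as nn

def barlow_twins_loss(z1, z2, lam=5e-3):
    n, _ = z1.shape
    z1 = (z1 - z1.mean(0)) / (z1.std(0) + 1e-6)
    z2 = (z2 - z2.mean(0)) / (z2.std(0) + 1e-6)
    c = (z1.T @ z2) / n                                   # (d, d) cross-correlation matrix
    on_diag = (torch.diagonal(c) - 1).pow(2).sum()        # invariance term
    off_diag = (c - torch.diag(torch.diagonal(c))).pow(2).sum()  # redundancy-reduction term
    return on_diag + lam * off_diag

encoder = nn.Sequential(nn.Linear(80, 256), nn.ReLU(), nn.Linear(256, 128))
view1 = torch.randn(32, 80)                   # stand-in for augmented log-mel frames
view2 = view1 + 0.05 * torch.randn(32, 80)    # second "augmentation" of the same frames
loss = barlow_twins_loss(encoder(view1), encoder(view2))
loss.backward()
```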