Listing by keyword "computer vision"
1 - 10 of 17
- Conference paper: Bathing in lightness: an interactive light and sound installation (Mensch und Computer 2020 - Tagungsband, 2020). Stimberg, Simon; Brennecke, Angela.
  Bathing in Lightness is an interactive light and sound installation that seems to be enlivened by a swarm entity trying to explore its inner world and communicate with the outer one. Consisting of 52 filament light bulbs, it visualizes the movement of a particle swarm that is driven by the presence of the viewer and by its own inner urge. Visitors can interact with the installation by moving in front of it; their movement is followed by the swarm and thus translated into light and sound, visible inside the cluster of light bulbs and audible via nearby speakers or headphones.
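A minimal sketch, not taken from the installation's code, of how a 2D particle swarm driven by a viewer position could be mapped onto the brightness of 52 bulbs. The swarm parameters, the 13x4 bulb grid, and the Gaussian density mapping are assumptions for illustration only.

```python
import numpy as np

N_BULBS = 52          # number of filament bulbs in the installation
N_PARTICLES = 200     # size of the simulated swarm (assumed)

rng = np.random.default_rng(0)
positions = rng.uniform(0.0, 1.0, size=(N_PARTICLES, 2))   # particles in a unit square
velocities = np.zeros_like(positions)

# Bulbs arranged on a 13x4 grid (layout assumed for illustration).
gx, gy = np.meshgrid(np.linspace(0, 1, 13), np.linspace(0, 1, 4))
bulbs = np.stack([gx.ravel(), gy.ravel()], axis=1)[:N_BULBS]

def step(viewer_xy, dt=0.05, attraction=0.8, jitter=0.15, damping=0.9):
    """Advance the swarm: pull towards the viewer plus a random inner 'urge'."""
    global positions, velocities
    pull = viewer_xy - positions
    velocities = damping * velocities + dt * (attraction * pull + jitter * rng.normal(size=positions.shape))
    positions = np.clip(positions + dt * velocities, 0.0, 1.0)

def bulb_brightness(sigma=0.08):
    """Brightness per bulb = density of nearby particles (Gaussian kernel)."""
    d2 = ((bulbs[:, None, :] - positions[None, :, :]) ** 2).sum(axis=-1)
    density = np.exp(-d2 / (2 * sigma ** 2)).sum(axis=1)
    return density / density.max()   # normalised 0..1 dimming values

step(viewer_xy=np.array([0.5, 0.2]))
print(bulb_brightness()[:5])
```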
- Journal article: Best low-cost methods for real-time detection of the eye and gaze tracking (i-com: Vol. 23, No. 1, 2024). Khaleel, Amal Hameed; Abbas, Thekra H.; Ibrahim, Abdul-Wahab Sami.
  The study of gaze tracking is a significant research area in computer vision. It focuses on real-world applications and the interface between humans and computers. Recently, new eye-tracking applications have boosted the need for low-cost methods. The eye region is a crucial aspect of tracking the direction of the gaze. In this paper, several new eye-tracking methods are proposed that first determine the eye region and then find the direction of gaze. Unmodified webcams can be used for eye tracking without the need for specialized equipment or software. Two methods were used to determine the eye region: facial landmarks or the Haar cascade technique. To determine the direction of the eye, a direct method based on a convolutional neural network model and an engineering method based on distances delimiting the iris region were used. The paper uses two engineering techniques: drawing perpendicular lines on the iris region to identify the gaze-direction junction point, and dividing the eye region into five regions, with the darkest region indicating the gaze direction. The proposed network model has proven effective in determining gaze direction under limited mobility, while the engineering methods are more effective under wide mobility.
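A minimal OpenCV sketch, not taken from the paper, of the second engineering technique described above: locate the eye region with a Haar cascade and pick the darkest of five vertical sub-regions as a coarse gaze direction. The cascade files are those bundled with OpenCV; the darkness threshold and direction labels are assumptions.

```python
import cv2
import numpy as np

# Haar cascades shipped with OpenCV (one of the two eye-region methods in the abstract).
face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")

def gaze_direction(frame_bgr):
    """Return a coarse gaze label by finding the darkest of five eye sub-regions."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, 1.3, 5)
    for (fx, fy, fw, fh) in faces:
        roi = gray[fy:fy + fh, fx:fx + fw]
        eyes = eye_cascade.detectMultiScale(roi, 1.1, 5)
        for (ex, ey, ew, eh) in eyes[:1]:
            eye = roi[ey:ey + eh, ex:ex + ew]
            # Split the eye into five vertical strips; the strip containing the
            # most dark pixels (iris/pupil) indicates the gaze direction.
            strips = np.array_split(eye, 5, axis=1)
            darkness = [np.mean(s < 60) for s in strips]   # threshold 60 is an assumption
            labels = ["far left", "left", "centre", "right", "far right"]
            return labels[int(np.argmax(darkness))]
    return "no eye found"

cap = cv2.VideoCapture(0)          # unmodified webcam, as in the abstract
ok, frame = cap.read()
if ok:
    print(gaze_direction(frame))
cap.release()
```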
- Text document: Chameleon: A Semi-AutoML framework targeting quick and scalable development and deployment of production-ready ML systems for SMEs (INFORMATIK 2021, 2021). Otterbach, Johannes; Wollmann, Thomas.
  Developing, scaling, and deploying modern machine learning solutions remains challenging for small and medium-sized enterprises (SMEs). This is due to the high entry barrier of building and maintaining a dedicated IT team as well as the difficulties of real-world data (RWD) compared to standard benchmark data. To address this challenge, we discuss the implementation and concepts of Chameleon, a semi-AutoML framework. The goal of Chameleon is the fast and scalable development and deployment of production-ready machine learning systems into the workflow of SMEs. We first discuss the RWD challenges faced by SMEs. Afterwards, we outline the central part of the framework, a model and loss-function zoo with RWD-relevant defaults. Subsequently, we present how a templatable framework can be used to automate the experiment iteration cycle and to close the gap between development and deployment. Finally, we touch on our testing framework component, which allows us to investigate common model failure modes and supports best practices of model deployment governance.
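A hypothetical sketch of what a "model and loss-function zoo with RWD-relevant defaults" plus a templatable experiment could look like as a Python registry. Names such as register_model, ExperimentTemplate, and build are illustrative and not taken from Chameleon.

```python
from dataclasses import dataclass, field
from typing import Any, Callable, Dict

# Hypothetical zoo registries: map names to model/loss factories with defaults.
_MODEL_ZOO: Dict[str, Callable[..., Any]] = {}
_LOSS_ZOO: Dict[str, Callable[..., Any]] = {}

def register_model(name: str):
    def wrap(factory):
        _MODEL_ZOO[name] = factory
        return factory
    return wrap

def register_loss(name: str):
    def wrap(factory):
        _LOSS_ZOO[name] = factory
        return factory
    return wrap

@dataclass
class ExperimentTemplate:
    """Templatable experiment: defaults can be overridden per SME use case."""
    model: str = "image_classifier"
    loss: str = "focal"              # class imbalance is a common real-world-data issue
    overrides: dict = field(default_factory=dict)

    def build(self):
        return _MODEL_ZOO[self.model](**self.overrides), _LOSS_ZOO[self.loss]()

@register_model("image_classifier")
def _small_cnn(num_classes: int = 2):
    return f"<CNN with {num_classes} outputs>"     # placeholder for a real model

@register_loss("focal")
def _focal_loss(gamma: float = 2.0):
    return f"<focal loss, gamma={gamma}>"          # placeholder for a real loss

model, loss = ExperimentTemplate(overrides={"num_classes": 5}).build()
print(model, loss)
```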
- Conference paper: Conceptualizing a holistic smart dairy farming system (43. GIL-Jahrestagung, Resiliente Agri-Food-Systeme, 2023). Gravemeier, Laura Sophie; Dittmer, Anke; Jakob, Martina; Kümper, Daniel; Thomas, Oliver.
  With the increasing use of sensor technology and the resulting diverse data streams in dairy farming, the potential for the use of AI rises. Beyond AI-based solutions to individual problems, a holistic approach to smart dairy farming is necessary. In this contribution, we identify and analyse a set of diverse use cases for smart dairy farming: lying behaviour analysis, heat stress monitoring, work diary, barn and herd monitoring, and animal health tracking. These focus on animal health and welfare as well as on assistance for farmers. Based on the requirements of these use cases, we design a holistic smart dairy farming system in an iterative development process.
- Conference paper: Demonstrating ScreenshotMatcher: Taking Smartphone Photos to Capture Screenshots (Mensch und Computer 2021 - Tagungsband, 2021). Schmid, Andreas; Fischer, Thomas; Weichart, Alexander; Hartmann, Alexander; Wimmer, Raphael.
  Taking screenshots is a common way of capturing screen content to share it with others or save it for later. Even though all major desktop operating systems come with a screenshot function, a lot of people also use smartphone cameras to photograph screen contents instead. While users see this method as faster and more convenient, image quality is significantly lower. This paper is a demonstration of ScreenshotMatcher, a system that allows capturing a high-fidelity screenshot by taking a smartphone photo of (part of) the screen. A smartphone application sends a photo of the screen region of interest to a program running on the PC, which retrieves the corresponding screen region with a feature matching algorithm. The result is sent back to the smartphone. As phone and PC communicate via Wi-Fi, ScreenshotMatcher can also be used with any PC in the same network running the application, for example to capture screenshots from a colleague's PC. Released as open-source code, ScreenshotMatcher may be used as a basis for applications and research prototypes that bridge the gap between PC and smartphone.
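A minimal sketch of the kind of feature-matching step described above, using ORB keypoints and a RANSAC homography in OpenCV to locate a photographed region inside a full screenshot. This illustrates the general technique, not ScreenshotMatcher's actual implementation; the file names are placeholders.

```python
import cv2
import numpy as np

def match_photo_to_screenshot(photo_path: str, screenshot_path: str):
    """Locate the photographed screen region inside a full screenshot and crop it."""
    photo = cv2.imread(photo_path, cv2.IMREAD_GRAYSCALE)
    screen = cv2.imread(screenshot_path, cv2.IMREAD_GRAYSCALE)

    orb = cv2.ORB_create(2000)
    kp1, des1 = orb.detectAndCompute(photo, None)
    kp2, des2 = orb.detectAndCompute(screen, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:100]

    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

    # Project the photo's corners into screenshot coordinates and crop that region.
    h, w = photo.shape
    corners = np.float32([[0, 0], [w, 0], [w, h], [0, h]]).reshape(-1, 1, 2)
    projected = cv2.perspectiveTransform(corners, H).reshape(-1, 2)
    x0, y0 = projected.min(axis=0).astype(int)
    x1, y1 = projected.max(axis=0).astype(int)
    return screen[max(y0, 0):y1, max(x0, 0):x1]

# Hypothetical usage:
# crop = match_photo_to_screenshot("phone_photo.jpg", "full_screenshot.png")
```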
- Conference paper: Detection rate and spraying accuracy of Ecorobotix ARA (42. GIL-Jahrestagung, Künstliche Intelligenz in der Agrar- und Ernährungswirtschaft, 2022). Anken, Thomas; Latsch, Annett.
  Machine learning enabled the long hoped-for breakthrough in the field of automated single-plant weed control. Ecorobotix ARA (Ecorobotix, Yverdon, Switzerland) was the first commercially available spot sprayer allowing automated single-plant detection and control of broad-leaved dock (Rumex obtusifolius) in meadows. Cameras are used to record the vegetation, and machine-learning-based algorithms detect the plants in real time. This makes it possible to selectively treat only the target plants. The aim of the present research was to investigate the accuracy of the detection and spraying of the plants in comparison to manual treatment with a knapsack sprayer. With a detection rate of over 85 % in most cases and a slightly better spraying accuracy compared to manual treatment, this first spot sprayer for meadows showed good performance in practical use.
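A small illustrative computation of how a per-plant detection rate and a spraying hit rate could be derived from field counts. The numbers are hypothetical and not the study's data.

```python
# Hypothetical field counts for illustration only (not the study's data).
docks_present = 200        # Rumex plants marked by hand in the test plots
docks_detected = 172       # plants the spot sprayer recognised
spots_sprayed = 180        # spray spots placed by the machine
spots_on_target = 168      # spray spots that actually covered a target plant

detection_rate = docks_detected / docks_present          # share of plants found
spraying_accuracy = spots_on_target / spots_sprayed      # share of spots on target

print(f"detection rate:    {detection_rate:.1%}")
print(f"spraying accuracy: {spraying_accuracy:.1%}")
```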
- Text document: Evaluation of CNN-based algorithms for human pose analysis of persons in red carpet scenarios (INFORMATIK 2017, 2017). Kowerko, Danny; Richter, Daniel; Heinzig, Manuel; Kahl, Stefan; Helmert, Stefan; Brunnett, Guido.
  We evaluate two CNN-based algorithms for keypoint-based human pose analysis on two image test sets containing red carpet scenarios: one taken under controlled conditions in a TV studio environment and a more heterogeneous data set taken from Flickr with no restriction other than containing a red carpet. We focus on the pose of persons standing directly on the red carpet. A web application is presented that allows collaborative work to confirm or modify body keypoints pre-localised by the method presented in [Ca17]. These annotations helped to quickly define ground truth for the subsequent evaluation of several hundred persons standing on a red carpet. A custom evaluation formalism is presented that adapts to the size of the respective keypoints. The TV studio data set includes coarsely defined body and head poses. Using the angular information, we are able to quantitatively determine the optimum head pose angle range and the limitations of facial keypoint determination.
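A sketch of a size-adaptive keypoint evaluation in the spirit described above, modelled on the widely used Object Keypoint Similarity (OKS). The per-keypoint tolerance constant, the example coordinates, and the person scale are assumptions; this is not the paper's own formalism.

```python
import numpy as np

def keypoint_similarity(pred, gt, scale, k=0.1):
    """OKS-style score: pixel error normalised by person scale and a per-keypoint tolerance k."""
    d2 = np.sum((np.asarray(pred) - np.asarray(gt)) ** 2, axis=-1)   # squared pixel errors
    return np.exp(-d2 / (2.0 * (scale * k) ** 2))                    # 1 = perfect, -> 0 with distance

# Two hypothetical keypoints (e.g. nose and left shoulder) of one person on the carpet.
gt = [[410, 220], [380, 300]]
pred = [[414, 224], [395, 310]]
person_scale = 180.0          # e.g. bounding-box height in pixels (assumed)

scores = keypoint_similarity(pred, gt, person_scale)
print(scores, "mean:", scores.mean())
```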
- Conference paper: Image-based activity monitoring of pigs (44. GIL-Jahrestagung, Biodiversität fördern durch digitale Landwirtschaft, 2024). Witte, Jan-Hendrik; Marx Gómez, Jorge.
  In modern pig livestock farming, animal well-being is of paramount importance. Monitoring activity is crucial for early detection of potential health or behavioral anomalies. Traditional object tracking methods such as DeepSort often falter due to the pigs' similar appearances, frequent overlaps, and close-proximity movements, making consistent long-term tracking challenging. To address this, our study presents a novel methodology that eliminates the need for conventional tracking to capture activity at pen level. Instead, we segment video frames into predefined sectors, where pig postures are determined using YOLOv8 for pig detection and EfficientNetV2 for posture classification. Activity levels are then assessed by comparing sector counts between consecutive frames. Preliminary results indicate discernible variations in pig activity throughout the day, highlighting the efficacy of our method in capturing activity patterns. While promising, this approach remains a proof of concept, and its practical implications for real-world agricultural settings warrant further investigation.
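A simplified sketch of the sector-counting idea described above: detection centroids are binned into predefined sectors, and activity is taken as the change in per-sector counts between consecutive frames. The YOLOv8/EfficientNetV2 inference is stubbed out with hypothetical centroids, and the 4x2 sector grid is an assumption.

```python
from collections import Counter
from typing import List, Tuple

N_SECTORS_X, N_SECTORS_Y = 4, 2     # predefined pen sectors (layout assumed)

def sector_of(cx: float, cy: float, width: int, height: int) -> int:
    """Map a detection centroid (e.g. from a YOLOv8 pig detector) to a sector index."""
    sx = min(int(cx / width * N_SECTORS_X), N_SECTORS_X - 1)
    sy = min(int(cy / height * N_SECTORS_Y), N_SECTORS_Y - 1)
    return sy * N_SECTORS_X + sx

def sector_counts(centroids: List[Tuple[float, float]], width: int, height: int) -> Counter:
    return Counter(sector_of(cx, cy, width, height) for cx, cy in centroids)

def activity(prev: Counter, curr: Counter) -> int:
    """Activity = total absolute change in per-sector pig counts between two frames."""
    sectors = set(prev) | set(curr)
    return sum(abs(curr[s] - prev[s]) for s in sectors)

# Hypothetical centroids of detected pigs in two consecutive frames (1280x720 video).
frame_t0 = [(100, 200), (400, 300), (900, 600)]
frame_t1 = [(150, 210), (800, 310), (910, 610)]

c0, c1 = sector_counts(frame_t0, 1280, 720), sector_counts(frame_t1, 1280, 720)
print("activity:", activity(c0, c1))
```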
- Conference paper: The Omniscope - Multimedia Streaming and Computer Vision for Applications in the Virtuality Continuum (SKILL 2018 - Studierendenkonferenz Informatik, 2018). Melles, Gerald.
  Researching applications within the Virtuality Continuum (VC) is a process involving combinations of many different technologies. Media streaming and computer vision in particular are important aspects of many VC applications. This paper introduces the Omniscope library as a way to integrate both in an efficient, user-friendly and extensible manner. It achieves this by combining GStreamer and OpenCV in a C/C++ library as well as a plugin for the Unity IDE.
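A minimal example, independent of the Omniscope code base, of the general pattern of feeding a GStreamer pipeline into OpenCV for processing. It requires an OpenCV build with GStreamer support; the test pipeline string and the Canny step are assumptions for illustration.

```python
import cv2

# A GStreamer pipeline handed to OpenCV's VideoCapture (requires OpenCV built with GStreamer).
# 'videotestsrc' is used so the example runs without a camera or network stream.
pipeline = (
    "videotestsrc num-buffers=60 ! "
    "videoconvert ! video/x-raw,format=BGR ! "
    "appsink"
)

cap = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)
if not cap.isOpened():
    raise RuntimeError("OpenCV was not built with GStreamer support")

while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Any OpenCV processing can happen here, e.g. simple edge detection on the stream.
    edges = cv2.Canny(frame, 100, 200)

cap.release()
```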
- Text document: A pipeline for analysing image and video material in a forensic context with intelligent systems (INFORMATIK 2022, 2022). Preuß, Svenja; Labudde, Dirk.
  Shows like CSI seem to convey a certain view of the capabilities of forensic science, based on the vast progress of digitalisation and the new technology that goes along with it. But those depictions can be misleading and hardly reflect reality. Nevertheless, this representation influences the public view of digital forensic analysis, a phenomenon also known as the CSI effect. To present a more realistic view of practices in digital forensics, we introduce typical image and video analysis methods used in tackling real-life forensic challenges and point to their capabilities as well as their limits. An important area in this context is image and video enhancement. With methods such as super resolution, images can be scaled up, their resolution enhanced, and image noise reduced. During the subsequent image analysis, methods ranging from purely cognitive analysis to the extraction of raw pixel values and various semantic information from the image or video, utilizing AI frameworks, are used. This allows, for example, the detection and analysis of faces and whole persons as well as objects in images or videos.
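A brief sketch of the super-resolution enhancement step mentioned above, using OpenCV's dnn_superres module with a pre-trained EDSR model. The model file, scale factor, and input/output file names are assumptions; this is an illustration of the technique, not the authors' pipeline.

```python
import cv2

# Requires opencv-contrib-python and a pre-trained EDSR model file (e.g. EDSR_x4.pb),
# available from the OpenCV contrib model collection.
sr = cv2.dnn_superres.DnnSuperResImpl_create()
sr.readModel("EDSR_x4.pb")        # path to the model file is an assumption
sr.setModel("edsr", 4)            # 4x upscaling

image = cv2.imread("cctv_frame.png")          # hypothetical low-resolution input
upscaled = sr.upsample(image)                 # resolution enhanced by the network
cv2.imwrite("cctv_frame_x4.png", upscaled)
```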