Listing of Künstliche Intelligenz 33(2) - June 2019, sorted by publication date
- Journal article: A Semantic-Based Method for Teaching Industrial Robots New Tasks (KI - Künstliche Intelligenz: Vol. 33, No. 2, 2019)
  Ramirez-Amaro, Karinne; Dean-Leon, Emmanuel; Bergner, Florian; Cheng, Gordon
  This paper presents the results of the Artificial Intelligence (AI) method developed during the European project "Factory-in-a-day". Advanced AI solutions, such as the one proposed, enable natural human–robot collaboration, an important capability for robots in industrial warehouses. This new generation of robots is expected to work in heterogeneous production lines by efficiently interacting and collaborating with human co-workers in open, unstructured, dynamic environments. For this, robots need to understand and recognize demonstrations from different operators. Therefore, a flexible and modular process for programming industrial robots has been developed based on semantic representations. This novel learning-by-demonstration method enables non-expert operators to program new tasks on industrial robots.
- Journal article: Episodic Memories for Safety-Aware Robots (KI - Künstliche Intelligenz: Vol. 33, No. 2, 2019)
  Bartels, Georg; Beßler, Daniel; Beetz, Michael
  In the factories and distribution centers of the future, humans and robots shall work together in close proximity and even physically interact. This shift to joint human–robot teams raises the question of how to ensure worker safety. In this manuscript, we present a novel episodic memory system for safety-aware robots. Using this system, the robots can answer questions about their actions at the level of safety concepts. We built this system as an extension of the KnowRob framework and its notion of episodic memories. We evaluated the system in a safe physical human–robot interaction (pHRI) experiment, in which a robot had to sort surgical instruments while also ensuring the safety of its human co-workers. Our experimental results show the efficacy of the system as a robot's belief state for online reasoning, as well as its ability to support offline safety analysis through its fast and flexible query interface. To this end, we demonstrate the system's ability to reconstruct its geometric environment, course of action, and motion parameters from descriptions of safety-relevant events. We also showcase the system's capability to conduct statistical analysis. (An illustrative sketch of such a query interface appears after this listing.)
- Journal article: From Research to Market: Building the Perception Systems for the Next Generation of Industrial Robots (KI - Künstliche Intelligenz: Vol. 33, No. 2, 2019)
  Bartels, Georg; Beetz, Michael
- Journal article: Plug, Plan and Produce as Enabler for Easy Workcell Setup and Collaborative Robot Programming in Smart Factories (KI - Künstliche Intelligenz: Vol. 33, No. 2, 2019)
  Wojtynek, Michael; Steil, Jochen Jakob; Wrede, Sebastian
  The transformation of today's manufacturing lines into truly adaptive systems facilitating individualized mass production requires new approaches for the efficient integration, configuration and control of robotics and automation components. Recently, various types of Plug-and-Produce architectures have been proposed that support the discovery, integration and configuration of field devices, automation equipment or industrial robots during commissioning or even during operation of manufacturing systems. However, in many of these approaches the configuration possibilities are limited, which is a particular problem if robots operate in dynamic environments with constrained workspaces and exchangeable automation components, as typically required for flexible manufacturing processes. In this article, we introduce an extended Plug-and-Produce concept based on dynamic motion planning, co-simulation and a collaborative human–robot interaction scheme that facilitates the quick adaptation of robot behaviors in the context of a modular production system. To confirm our hypothesis on the efficiency and usability of this concept, we carried out a feasibility study in which participants performed a flexible workcell setup. The results indicate that the assistance and planning features effectively support users in tasks of different complexity and that quick adaptation is indeed possible. Based on our observations, we identify further research challenges in the context of Plug, Plan and Produce applied to smart manufacturing. (An illustrative Plug-and-Produce sketch appears after this listing.)
- Journal article: News (KI - Künstliche Intelligenz: Vol. 33, No. 2, 2019)
- Journal article: Special Issue on Smart Production (KI - Künstliche Intelligenz: Vol. 33, No. 2, 2019)
  Ruskowski, Martin; Legler, Tatjana; Beetz, Michael; Bartels, Georg
- Journal article: Correction to: A Jumpstart Framework for Semantically Enhanced OPC-UA (KI - Künstliche Intelligenz: Vol. 33, No. 2, 2019)
  Katti, Badarinath; Plociennik, Christiane; Schweitzer, Michael
  The original article can be found online.
- Journal article: A Service-Based Production Ecosystem Architecture for Industrie 4.0 (KI - Künstliche Intelligenz: Vol. 33, No. 2, 2019)
  Kuhn, Thomas; Sadikow, Siwara; Antonino, Pablo
  Changeability is one major goal of Industrie 4.0. Existing production architectures limit changeability because the programmable logic controllers (PLCs) that are responsible for executing real-time production steps also define the order of production steps executed for every product. PLC programming therefore implicitly defines the production process. Consequently, any change to a production process requires changes in PLC code, causes potential side effects due to unknown controller dependencies, and requires extensive testing. We propose a service-based architecture approach that encapsulates production steps into reusable services. Production cells invoke services and, comparable to multi-agent systems, autonomously decide on optimal service invocations based on shared information. In this article, we outline our service-based architecture concept and describe a use case that illustrates the decentralized organization of production systems and the cooperative optimization of production steps. (An illustrative service-registry sketch appears after this listing.)
- Journal article: Towards Explainable Process Predictions for Industry 4.0 in the DFKI-Smart-Lego-Factory (KI - Künstliche Intelligenz: Vol. 33, No. 2, 2019)
  Rehse, Jana-Rebecca; Mehdiyev, Nijat; Fettke, Peter
  With the advent of digitization on the shop floor and the developments of Industry 4.0, companies are faced with opportunities and challenges alike. This can be illustrated by the example of AI-based process predictions, which can be valuable for real-time process management in a smart factory. However, to collaborate constructively with such a prediction, users need to establish confidence in its decisions. Explainable artificial intelligence (XAI) has emerged as a new research area to enable humans to understand, trust, and manage the AI they work with. In this contribution, we illustrate the opportunities and challenges of process predictions and XAI for Industry 4.0 with the DFKI-Smart-Lego-Factory. This fully automated factory prototype built out of LEGO® bricks demonstrates the potentials of Industry 4.0 in an innovative yet easily accessible way. It includes a showcase that predicts likely process outcomes and uses state-of-the-art XAI techniques to explain them to its workers and visitors. (An illustrative sketch of an explainable prediction appears after this listing.)
- Journal article: Vision-Based Solutions for Robotic Manipulation and Navigation Applied to Object Picking and Distribution (KI - Künstliche Intelligenz: Vol. 33, No. 2, 2019)
  Roa-Garzón, Máximo A.; Gambaro, Elena F.; Florek-Jasinska, Monika; Endres, Felix; Ruess, Felix; Schaller, Raphael; Emmerich, Christian; Muenster, Korbinian; Suppa, Michael
  This paper presents a robotic demonstrator for manipulation and distribution of objects. The demonstrator relies on robust 3D vision-based solutions for navigation, object detection, and detection of graspable surfaces using the rc_visard, a self-registering stereo vision sensor. Suitable software modules were developed for SLAM and for model-free suction gripping. The modules run onboard the sensor, which enables building the presented demonstrator as a standalone application that does not require an additional host PC. The modules are interfaced with ROS, which allows quick implementation of a fully functional robotic application.
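As an illustration of the episodic-memory idea in "Episodic Memories for Safety-Aware Robots", the following is a minimal sketch of querying a logged episode for safety-relevant events and computing a simple offline statistic. The Event schema, its field names, and the SAFETY_STOP / SPEED_REDUCTION labels are assumptions made here for illustration; the paper itself builds on the KnowRob framework and its query interface, which this sketch does not reproduce.

```python
from dataclasses import dataclass
from typing import List, Optional

# Hypothetical episodic-memory entry; field names are illustrative only,
# not the KnowRob representation used in the paper.
@dataclass
class Event:
    t: float                        # timestamp in seconds since episode start
    action: str                     # e.g. "pick", "place", "retract"
    safety_concept: Optional[str]   # e.g. "SAFETY_STOP", "SPEED_REDUCTION", or None
    min_human_distance: float       # closest observed human distance, in meters

def safety_events(episode: List[Event], concept: str) -> List[Event]:
    """Return all events of the episode annotated with the given safety concept."""
    return [e for e in episode if e.safety_concept == concept]

def stop_ratio(episode: List[Event]) -> float:
    """Fraction of events that triggered a safety stop (a simple offline statistic)."""
    stops = safety_events(episode, "SAFETY_STOP")
    return len(stops) / len(episode) if episode else 0.0

if __name__ == "__main__":
    episode = [
        Event(0.5, "pick",  None,              1.40),
        Event(2.1, "place", "SPEED_REDUCTION", 0.80),
        Event(3.7, "pick",  "SAFETY_STOP",     0.25),
    ]
    for e in safety_events(episode, "SAFETY_STOP"):
        print(f"t={e.t:.1f}s action={e.action} human at {e.min_human_distance} m")
    print(f"stop ratio: {stop_ratio(episode):.2f}")
```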
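The Plug-and-Produce concept in "Plug, Plan and Produce as Enabler for Easy Workcell Setup ..." can be pictured as components that announce their capabilities to a workcell, which then (re)plans against whatever is currently plugged in. The sketch below shows only that pattern in outline; the Workcell and Component classes and the naive capability assignment are invented for illustration and are not the article's architecture.

```python
from dataclasses import dataclass, field
from typing import Dict, List

# Hypothetical Plug-and-Produce registry: components register the capabilities they
# offer, and the workcell re-plans whenever the set of components changes.
@dataclass
class Component:
    name: str
    capabilities: List[str]   # e.g. ["grasp", "screw", "transport"]

@dataclass
class Workcell:
    components: Dict[str, Component] = field(default_factory=dict)

    def plug(self, component: Component) -> None:
        """Register a newly attached component (discovery step)."""
        self.components[component.name] = component

    def unplug(self, name: str) -> None:
        """Remove a component, e.g. when a module is swapped out."""
        self.components.pop(name, None)

    def plan(self, required: List[str]) -> Dict[str, str]:
        """Naive planning step: assign each required capability to some component."""
        assignment: Dict[str, str] = {}
        for capability in required:
            providers = [c.name for c in self.components.values() if capability in c.capabilities]
            if not providers:
                raise RuntimeError(f"no component provides capability '{capability}'")
            assignment[capability] = providers[0]
        return assignment

if __name__ == "__main__":
    cell = Workcell()
    cell.plug(Component("robot_arm", ["grasp", "transport"]))
    cell.plug(Component("screwdriver_module", ["screw"]))
    print(cell.plan(["grasp", "screw", "transport"]))
```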
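The service-based architecture in "A Service-Based Production Ecosystem Architecture for Industrie 4.0" decouples the production process from controller code by encapsulating production steps as reusable services that production cells invoke. A minimal sketch of that separation, with invented service names and a deliberately simplified registry, might look as follows; it is not the authors' implementation.

```python
from typing import Callable, Dict, List

# Hypothetical service layer: each production step is a named, reusable service,
# and the production process is just an ordered list of service names. Changing the
# process then means changing the list, not the controller code.
class ServiceRegistry:
    def __init__(self) -> None:
        self._services: Dict[str, Callable[[dict], dict]] = {}

    def register(self, name: str, service: Callable[[dict], dict]) -> None:
        self._services[name] = service

    def invoke(self, name: str, product: dict) -> dict:
        return self._services[name](product)

def drill(product: dict) -> dict:
    product.setdefault("steps", []).append("drilled")
    return product

def paint(product: dict) -> dict:
    product.setdefault("steps", []).append("painted")
    return product

if __name__ == "__main__":
    registry = ServiceRegistry()
    registry.register("drill", drill)
    registry.register("paint", paint)

    process: List[str] = ["drill", "paint"]   # product-specific process definition
    product: dict = {"id": "A-42"}
    for step in process:
        product = registry.invoke(step, product)
    print(product)   # {'id': 'A-42', 'steps': ['drilled', 'painted']}
```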
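For the explainable process predictions described in "Towards Explainable Process Predictions for Industry 4.0 in the DFKI-Smart-Lego-Factory", the general pattern is a predictor whose output comes with an explanation a worker can act on. The sketch below uses a hand-weighted linear model whose per-feature contributions serve as a simple local explanation; the feature names, weights, and predicted outcome are assumptions for illustration, not the factory's actual model or XAI technique.

```python
import math
from typing import Dict, Tuple

# Hypothetical explainable predictor: a linear model over process features whose
# per-feature contributions double as a simple, local explanation.
WEIGHTS: Dict[str, float] = {
    "queue_length": 0.8,          # longer queues make a delay more likely
    "machine_downtime_min": 0.5,
    "rework_count": 1.2,
}
BIAS = -2.0

def predict_delay(features: Dict[str, float]) -> Tuple[float, Dict[str, float]]:
    """Return (probability of a delayed process outcome, per-feature contributions)."""
    contributions = {name: WEIGHTS[name] * value for name, value in features.items()}
    score = BIAS + sum(contributions.values())
    probability = 1.0 / (1.0 + math.exp(-score))
    return probability, contributions

if __name__ == "__main__":
    case = {"queue_length": 3.0, "machine_downtime_min": 1.0, "rework_count": 1.0}
    p, why = predict_delay(case)
    print(f"predicted delay probability: {p:.2f}")
    for name, contribution in sorted(why.items(), key=lambda kv: -abs(kv[1])):
        print(f"  {name}: {contribution:+.2f}")
```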