Listing by keyword "Semantic mapping"
1 - 2 of 2
- Journal article: From Object Recognition to Activity Interpretation and Back, Based on Point Cloud Data (KI - Künstliche Intelligenz: Vol. 27, No. 2, 2013)
  Albrecht, Sven; Wiemann, Thomas; Hertzberg, Joachim; Guesgen, Hans W.; Marsland, Stephen
  Semantic mapping of static environments has become a hot topic in robotics. The aim of the Mermaid project was to investigate the transfer of a sensor data interpretation approach for mapping to the problem of activity recognition in smart home applications such as elderly care. The basic structure of the semantic mapping approach, i.e., to assemble hypotheses of object aggregates in a closed-loop process of bottom-up raw data interpretation and top-down expectation generation from a domain ontology, can be extended to the temporal domain to include activity interpretation. This paper reports initial results, based on a study using point clouds from depth (RGB-D) sensor data. (An illustrative sketch of this closed-loop interpretation cycle follows the listing.)
- Journal article: SocRob@Home (KI - Künstliche Intelligenz: Vol. 33, No. 4, 2019)
  Lima, Pedro U.; Azevedo, Carlos; Brzozowska, Emilia; Cartucho, João; Dias, Tiago J.; Gonçalves, João; Kinarullathil, Mithun; Lawless, Guilherme; Lima, Oscar; Luz, Rute; Miraldo, Pedro; Piazza, Enrico; Silva, Miguel; Veiga, Tiago; Ventura, Rodrigo
  This paper describes the SocRob@Home robot system, consisting of a mobile robot (MBOT) equipped with several sensors and actuators, including a manipulator arm, and several software modules that provide the skills and capabilities to perform domestic tasks while interacting with humans in a domestic environment. We describe the whole system holistically, explaining how it integrates the contributing modules, and then we focus on the most relevant sub-systems, pointing out the original contributions of our research and development on the system over the last 5 years. The robot system includes metric and semantic mapping, several navigation modes (way-point navigation, person following, and multi-sensor obstacle detection and avoidance), vision-based object detection, recognition, servoing and grasping, speech understanding, task planning, and task execution. The robot system is mostly activated by speech commands from a human; after being interpreted, these commands are executed by the robot sub-systems, coordinated by a task executor. Lessons learned during the development and use of this system, which are useful as guidelines for the development of similar robot systems, are provided. MBOT's performance is assessed using the task benchmark scoring system of the European Robotics League competitions for consumer service robots. (An illustrative sketch of the command-to-skill dispatch follows the listing.)
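The first entry above describes semantic mapping as a closed loop of bottom-up hypothesis formation and top-down expectation generation from a domain ontology. The following is a minimal Python sketch of that general idea only; the toy ontology, the class and function names, and the planar-segment input are illustrative assumptions, not the authors' actual implementation.

```python
# Sketch of a closed-loop interpretation cycle: bottom-up hypotheses from
# segmented point-cloud data are matched against top-down expectations drawn
# from a small domain "ontology". Everything here is illustrative.

from dataclasses import dataclass

# Toy domain knowledge: each aggregate concept expects certain parts (assumption).
ONTOLOGY = {
    "table": {"parts": {"tabletop", "leg"}, "min_parts": 2},
    "shelf": {"parts": {"board"}, "min_parts": 3},
}

@dataclass
class Segment:
    label: str      # bottom-up classification of a planar patch
    height: float   # height above the floor in metres

def bottom_up(segments):
    """Group raw segment labels into candidate object aggregates."""
    return [{"parts": {s.label for s in segments}, "support": len(segments)}]

def top_down(hypothesis):
    """Return ontology concepts consistent with the hypothesis, plus missing parts."""
    matches = []
    for concept, spec in ONTOLOGY.items():
        if spec["parts"] & hypothesis["parts"] and hypothesis["support"] >= spec["min_parts"]:
            # Expectation: which parts are still missing and should be searched for next.
            matches.append((concept, spec["parts"] - hypothesis["parts"]))
    return matches

if __name__ == "__main__":
    segments = [Segment("tabletop", 0.75), Segment("leg", 0.35), Segment("leg", 0.35)]
    for hyp in bottom_up(segments):
        for concept, missing in top_down(hyp):
            print(f"hypothesis: {concept}, still expected: {missing or 'nothing'}")
```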
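The second entry describes speech commands that are interpreted and then executed by sub-systems coordinated by a task executor. Below is a minimal sketch of such a dispatch pattern under stated assumptions: the skill registry, the command "grammar", and all function names are hypothetical and do not reflect MBOT's real executor or skill set.

```python
# Sketch of a speech-command -> task-executor -> sub-system dispatch loop.
# All skills and the toy "interpretation" step are illustrative assumptions.

from typing import Callable, Dict, List

def go_to(args: List[str]) -> None:
    print(f"[navigation] way-point navigation to '{' '.join(args)}'")

def grasp(args: List[str]) -> None:
    print(f"[manipulation] detect, servo to and grasp '{' '.join(args)}'")

def follow(args: List[str]) -> None:
    print("[navigation] person-following mode engaged")

# Task executor: maps interpreted command verbs to sub-system skills (assumed registry).
SKILLS: Dict[str, Callable[[List[str]], None]] = {
    "go": go_to,
    "grasp": grasp,
    "follow": follow,
}

def interpret(utterance: str):
    """Very rough stand-in for speech understanding: split into verb + arguments."""
    verb, *args = utterance.lower().split()
    return verb, args

def execute(utterance: str) -> None:
    """Coordinate execution of one interpreted command."""
    verb, args = interpret(utterance)
    skill = SKILLS.get(verb)
    if skill is None:
        print(f"[executor] no skill registered for '{verb}'")
        return
    skill(args)

if __name__ == "__main__":
    execute("go kitchen table")
    execute("grasp red cup")
```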