RoboCare and RoboCup


Description

Adi is a context-aware domestic robot developed by the RoboCare team at ISTC-CNR. The team combines the efforts of two partners of the RoboCare project, namely PST (coordinated by Amedeo Cesta of ISTC-CNR) and SPQR (coordinated by Daniele Nardi of DIS-Uniroma1). The robot is aimed at demonstrating the feasibility of a "robotically rich" environment for supporting elderly people. Specifically, Adi is part of a multi-agent system composed of sensors and software agents whose overall purpose is to

  • predict/prevent possibly hazardous behavior
  • monitor adherence to behavioral constraints defined by a caregiver
  • provide basic services for user interaction

The system, which is deployed in a mock-up domestic environment at the RoboCare laboratory in Rome, was partially re-created in the RoboCup@Home domestic environment during the RoboCup 2006 competition in Bremen.


Adi the Robotic Mediator. Adi was built to explore the added value of an "embodied" companion in an intelligent home. The robot's mobility also provides the basis for developing added-value services that require physical presence. Specifically, the robot carries out topological path planning, and employs reactive navigation for obstacle avoidance and scan matching for robust localization. Adi is also endowed with verbal user interaction skills: speech recognition is achieved with the Sonic speech recognition system (University of Colorado), while speech synthesis is provided by the Lucia talking head developed at ISTC-CNR-Padua.
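
To make the navigation layer concrete, the following is a minimal sketch of topological path planning in Python: the domestic environment is abstracted as a graph of places and a shortest route is found with Dijkstra's algorithm. The map, place names, and edge costs are illustrative assumptions, not the robot's actual representation; reactive navigation and scan matching would operate below this layer.

import heapq

# Hypothetical topological map of a domestic environment: nodes are places,
# edges carry approximate travel costs in metres (not the actual RoboCare map).
TOPO_MAP = {
    "kitchen":     {"hallway": 3.0},
    "hallway":     {"kitchen": 3.0, "living_room": 4.0, "bedroom": 5.0},
    "living_room": {"hallway": 4.0},
    "bedroom":     {"hallway": 5.0},
}

def topological_path(start, goal, graph=TOPO_MAP):
    """Dijkstra search over the place graph; returns the list of waypoints."""
    frontier = [(0.0, start, [start])]
    visited = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for neighbour, edge_cost in graph[node].items():
            if neighbour not in visited:
                heapq.heappush(frontier, (cost + edge_cost, neighbour, path + [neighbour]))
    return None  # goal unreachable

print(topological_path("kitchen", "bedroom"))  # ['kitchen', 'hallway', 'bedroom']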


Sensory Subsystem. A stereo-vision-based People Localization and Tracking (PLT) service provides the means to locate the assisted person. This environmental sensor was deployed in Bremen in the form of an "intelligent coat-hanger", demonstrating the easy setup and general applicability of vision-based systems for indoor applications. The system is scalable, as multiple cameras can be used to improve area coverage and precision. A height map is used for plan-view segmentation and plan-view tracking, and a color-based person model keeps track of different people and distinguishes them from (moving) objects such as the domestic robot. In addition, vision-based Posture Recognition (PR) can be cascaded to the PLT computation to provide further information on what the assisted person is doing.
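
As an illustration of the plan-view idea only, the sketch below thresholds a height map at roughly person height, labels the connected blobs, and reports blob centroids as person candidates. The thresholds and the use of scipy are assumptions for the example; the actual PLT service adds the color-based person model and tracking over time.

import numpy as np
from scipy import ndimage

def segment_planview(height_map, min_height=1.2, min_area=30):
    """Toy plan-view segmentation: threshold the height map, label connected
    blobs, and return the centroid of each blob large enough to be a person
    candidate.  Thresholds are illustrative, not the PLT service's values."""
    mask = height_map > min_height                 # keep cells tall enough to be a person
    labels, n_blobs = ndimage.label(mask)          # 4-connected components
    candidates = []
    for blob_id in range(1, n_blobs + 1):
        cells = np.argwhere(labels == blob_id)
        if len(cells) >= min_area:
            candidates.append(cells.mean(axis=0))  # (row, col) centroid in grid cells
    return candidates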


Multi-Agent Coordination Infrastructure. Coordination of multiple services is achieved by solving a Multi-Agent Coordination (MAC) problem. The MAC problem is cast as a distributed constraint optimization problem and solved with ADOPT (the Asynchronous Distributed Optimization algorithm). An interesting challenge that arises from this application of distributed constraint reasoning techniques is the development of high-level, domain-specific formalisms for MAC scenarios.
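
The sketch below is only meant to show the shape of such a coordination problem: two hypothetical services each choose a value, and pairwise cost functions encode coordination preferences. The optimum is found here by centralized enumeration purely for illustration; ADOPT would solve the same problem asynchronously among the agents. Service names and costs are invented for the example.

from itertools import product

# Toy multi-agent coordination cast as a constraint optimization problem:
# each agent/service picks a value for its variable, and pairwise cost
# functions express coordination preferences.
variables = {
    "robot_task": ["goto_user", "idle"],
    "plt_camera": ["track_user", "scan_room"],
}

def pairwise_cost(assignment):
    cost = 0
    # prefer that the camera tracks the user while the robot approaches them
    if assignment["robot_task"] == "goto_user" and assignment["plt_camera"] != "track_user":
        cost += 10
    # scanning the room while the robot is idle is mildly penalized
    if assignment["robot_task"] == "idle" and assignment["plt_camera"] == "scan_room":
        cost += 1
    return cost

names = list(variables)
best = min(
    (dict(zip(names, values)) for values in product(*(variables[n] for n in names))),
    key=pairwise_cost,
)
print(best)  # e.g. {'robot_task': 'goto_user', 'plt_camera': 'track_user'}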



Monitoring Activities of Daily Living. Continuous feedback from the sensors makes it possible to build a symbolic representation of the state of the environment and of the assisted elder. This information is employed by a CSP-based schedule execution monitoring tool (O-Oscar) to follow the occurrence of Activities of Daily Living (ADLs). The aspects of daily life to be monitored are specified by a caregiver in the form of complex temporal constraints among activities. Constraint violations lead to system intervention (e.g., Adi suggests "how about having lunch?", or warns "don't take your medication on an empty stomach!"). Given the high level of uncertainty in the expected behavior of the assisted person, domestic supervision represents a novel benchmark for complex schedule monitoring.
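
A toy illustration of this kind of monitoring, assuming a simple-temporal-network style encoding of the caregiver's constraints as minimum/maximum distances between activity events: observed times that fall outside the allowed bounds trigger a warning. Activity names, times, and bounds are invented for the example and do not reflect the O-Oscar model.

from datetime import datetime, timedelta

# Toy execution monitor for ADLs: constraints are min/max temporal distances
# between activity events; violations yield the caregiver-defined warning.
observed = {                      # observed start times reported by the sensors
    "lunch":      datetime(2006, 6, 15, 14, 10),
    "medication": datetime(2006, 6, 15, 14, 15),
}

constraints = [
    # (from_event, to_event, min_gap, max_gap, warning issued on violation)
    ("day_start", "lunch", timedelta(hours=12), timedelta(hours=14),
     "How about having lunch?"),
    ("lunch", "medication", timedelta(minutes=30), timedelta(hours=3),
     "Don't take your medication on an empty stomach!"),
]

def check_constraints(observed, constraints, day_start=datetime(2006, 6, 15, 0, 0)):
    events = dict(observed, day_start=day_start)
    for src, dst, lo, hi, warning in constraints:
        if src in events and dst in events:
            gap = events[dst] - events[src]
            if not (lo <= gap <= hi):
                yield warning

for msg in check_constraints(observed, constraints):
    print("Adi says:", msg)   # both constraints are violated in this example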



People

Core Development Team: G. Riccardo Leone (DIS-Uniroma1 & ISTC-CNR), Federico Pecora (ISTC-CNR), Riccardo Rasconi (ISTC-CNR)

Contributors: Luca Iocchi (DIS-Uniroma1), Daniele Calisi (DIS-Uniroma1), Carsten Weber (Univ. of Erlangen-Nuremberg), Marco Zaratti (DIS-Uniroma1), Amedeo Cesta (ISTC-CNR), Daniele Nardi (DIS-Uniroma1)

Acknowledgments: Special thanks go to Piero Cosi (ISTC-CNR-Padua) for providing and assisting in the use of the Lucia talking head.

Contact: Amedeo Cesta (ISTC-CNR), RoboCare Coordinator
             amedeo.cesta@istc.cnr.it, Tel: +39-06-44595-320, Fax: +39-06-44595-243
             --
             Institute for Cognitive Science and Technology
             Italian National Research Council
             Via San Martino della Battaglia, 44
             I-00185 Rome
             Italy