computer vision, health, life sciences, machine learning, robotics
A comprehensive multimodal dataset capturing real-world caregiving routines from 21 occupational therapists performing 15 daily caregiving tasks across 315 sessions, totaling 19.8 hours of expert demonstrations. Data modalities include synchronized, anonymized RGB images, depth maps, 44-sensor tactile readings, eye-gaze tracking, 2D/3D pose tracking, temporal action annotations, and first- and third-person video, enabling research in robot learning from demonstration, multimodal perception, and safe human-robot interaction for caregiving applications.
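As an illustration of how the synchronized per-frame modalities listed above might be grouped in code, here is a minimal Python sketch. The container class, field names, and array shapes are assumptions for exposition, not the dataset's documented schema; consult the documentation link below for the actual file layout.

from dataclasses import dataclass
import numpy as np

# Hypothetical per-frame container for the synchronized modalities described
# above. Field names and shapes are illustrative assumptions, not the
# dataset's documented schema.
@dataclass
class CaregivingFrame:
    rgb: np.ndarray          # anonymized RGB image, shape (H, W, 3)
    depth: np.ndarray        # depth map aligned to the RGB frame, shape (H, W)
    tactile: np.ndarray      # 44 tactile sensor readings, shape (44,)
    gaze: np.ndarray         # 2D eye-gaze point in image coordinates
    pose_2d: np.ndarray      # 2D joint keypoints, shape (num_joints, 2)
    pose_3d: np.ndarray      # 3D joint positions, shape (num_joints, 3)
    action_label: str        # temporal action annotation for this frame
    timestamp: float         # synchronization timestamp in seconds

# Example: a dummy frame populated with placeholder data.
frame = CaregivingFrame(
    rgb=np.zeros((480, 640, 3), dtype=np.uint8),
    depth=np.zeros((480, 640), dtype=np.float32),
    tactile=np.zeros(44, dtype=np.float32),
    gaze=np.zeros(2),
    pose_2d=np.zeros((17, 2)),
    pose_3d=np.zeros((17, 3)),
    action_label="placeholder_action",  # placeholder, not a real task label
    timestamp=0.0,
)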
Static dataset - no regular updates planned
BSD-3-Clause license - Academic and non-commercial use permitted. See documentation for full terms.
https://emprise.cs.cornell.edu/robo-care/docs
EmPRISE Lab at Cornell University
https://emprise.cs.cornell.edu/robo-care/
OpenRoboCare Multi-Modal Expert Demonstration Dataset for Robot-Assisted Caregiving was accessed on DATE from https://registry.opendata.aws/open-robo-care. Liang, X., Liu, Z., Lin, K., Gu, E., Ye, R., Nguyen, T., Hsu, C., Wu, Z., Yang, X., Cheung, C.S.Y., Soh, H., Dimitropoulou, K., & Bhattacharjee, T. (2025). OpenRoboCare: A Multimodal Multi-Task Expert Demonstration Dataset for Robot Caregiving. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).
Amazon Resource Name (ARN): arn:aws:s3:::open-robo-care
AWS Region: us-west-2
AWS CLI Access (no AWS account required): aws s3 ls --no-sign-request s3://open-robo-care/
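For programmatic access, anonymous reads also work from Python. The sketch below uses boto3 with unsigned requests, mirroring the --no-sign-request flag of the CLI command above; the bucket name and region come from the ARN and region fields, while the commented-out object key is a placeholder to be replaced with a real key from the listing.

import boto3
from botocore import UNSIGNED
from botocore.config import Config

# Unsigned requests allow reading the public bucket without AWS credentials,
# equivalent to the CLI's --no-sign-request flag.
s3 = boto3.client(
    "s3",
    region_name="us-west-2",
    config=Config(signature_version=UNSIGNED),
)

# List the first few objects at the bucket root.
resp = s3.list_objects_v2(Bucket="open-robo-care", MaxKeys=20)
for obj in resp.get("Contents", []):
    print(obj["Key"], obj["Size"])

# Download a single object to a local file (the key below is a placeholder;
# substitute a real key from the listing above).
# s3.download_file("open-robo-care", "path/to/object", "local_file")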