Researchers have developed a robot capable of replicating the two-handed techniques care workers use to dress people. Until now, robots designed to assist with dressing, aimed at supporting elderly individuals or those with disabilities, have typically been built with a single arm, and feedback indicated that such designs could be uncomfortable or impractical for users.
To address this, Dr Jihong Zhu of the University of York's Institute for Safe Autonomy introduced a two-armed assistive dressing scheme. Departing from prior research, the approach draws on caregivers' own methods, which show that specific two-handed manoeuvres are crucial to minimizing discomfort and distress for the care recipient.
This technology could transform the social care sector by allowing care workers to devote more time to the health and mental well-being of those in their care, rather than focusing solely on physical tasks.
Dr Zhu and his team studied the movement patterns of care workers during the dressing process. A robot observed and learned from these human actions, and AI was then applied to build a model that replicates the task. The work underscored the necessity of using both hands for dressing rather than one, and it also highlighted the importance of arm angles and the occasional need for human intervention to adjust or halt specific robot movements.
Dr Zhu, of the Institute for Safe Autonomy and the School of Physics, Engineering and Technology at the University of York, said that practical tasks such as dressing could be automated by robots, freeing care workers to focus on fostering companionship and monitoring the overall well-being of those in their care. He emphasized that understanding how care workers perform these tasks in real time is essential to applying the technology successfully outside the laboratory.
The research team used a method known as 'learning from demonstration', which removes the need for expert programming: the robot instead learns by observing the required human motion. The demonstrations confirmed that care workers use both arms to attend effectively to individuals with varying levels of mobility. In the dual-arm approach, one hand guides the individual's arm through a garment while the other adjusts the garment itself, overcoming the limitations of previous one-armed designs that demanded too much effort from patients.
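The paper's actual learning pipeline is not reproduced here, but the core idea of learning from demonstration can be sketched in a few lines: record several human demonstrations of the same motion, time-normalize them, and distill a reference trajectory the robot can replay. The function names, the toy 2-D "dressing stroke" data, and the simple averaging model below are all illustrative assumptions, not the authors' method.

```python
import numpy as np

def resample(traj, n=50):
    """Linearly resample a (T, d) trajectory to n time steps."""
    t_old = np.linspace(0.0, 1.0, len(traj))
    t_new = np.linspace(0.0, 1.0, n)
    return np.column_stack(
        [np.interp(t_new, t_old, traj[:, k]) for k in range(traj.shape[1])]
    )

def learn_from_demonstrations(demos, n=50):
    """Average several time-normalized demonstrations into one reference path."""
    return np.mean([resample(d, n) for d in demos], axis=0)

# Three noisy 2-D demonstrations of the same (hypothetical) dressing stroke.
rng = np.random.default_rng(0)
base = np.column_stack([np.linspace(0, 1, 60), np.linspace(0, 0.5, 60)])
demos = [base + 0.01 * rng.standard_normal(base.shape) for _ in range(3)]

reference = learn_from_demonstrations(demos)
print(reference.shape)  # (50, 2): fifty waypoints in two dimensions
```

Real systems typically fit richer models (for example, dynamic movement primitives or Gaussian mixtures) rather than a plain average, but the workflow of demonstrate, normalize, and generalize is the same.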
Moreover, the team developed algorithms that allow the robotic arms to act flexibly, pulling and lifting as needed, while stopping or altering their movements in response to a gentle human touch or guidance, without the robot resisting.
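The article does not give the team's control law, so purely as an illustration, here is a minimal admittance-style rule of the kind often used for this behavior: a strong push halts the arm mid-task, while a gentle touch steers it, so the robot yields rather than resists. The parameter names and thresholds are assumptions for the sketch.

```python
def admittance_step(planned_velocity, external_force,
                    compliance=0.2, stop_force=5.0):
    """One control tick along a single axis (velocity in m/s, force in N).

    - A push at or above stop_force pauses the planned motion entirely.
    - A gentler touch is blended in, deflecting the arm in the push's
      direction instead of fighting it.
    """
    if abs(external_force) >= stop_force:
        return 0.0  # halt mid-task
    return planned_velocity + compliance * external_force  # yield to guidance

print(admittance_step(0.1, 0.0))   # no touch: follow the plan
print(admittance_step(0.1, 1.0))   # gentle touch: deflect with the person
print(admittance_step(0.1, 10.0))  # firm push: stop
```

In practice such a rule runs per axis at the controller rate using a wrist force/torque sensor, with the gains tuned so that everyday garment forces never trigger a stop.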
Dr Zhu highlighted the significance of human modelling in making interactions between humans and robots more efficient and safe. He stressed that beyond performing tasks, the robot must be able to pause or modify its actions mid-task should the user wish. Building trust in this human-robot interaction is paramount, and future research will test the robot's safety boundaries and its acceptance by those who stand to benefit from it most.
More information: Jihong Zhu et al, Do You Need a Hand? – A Bimanual Robotic Dressing Assistance Scheme, IEEE Transactions on Robotics. DOI: 10.1109/TRO.2024.3366008
Journal information: IEEE Transactions on Robotics

Provided by University of York
