HERCULES aims at:
The main scientific assumption is that the biomechanical and psychosocial costs of workers’ gestures are related, can be quantified via ergonomic indicators (EIs), and can be modeled in a single cost function to be minimized by a collaborative robot (cobot).
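As a purely illustrative formalization (the actual cost function will be co-designed by the consortium; the notation below is an assumption rather than the project’s definition), the EIs could be combined into a single weighted cost to be minimized by the cobot:

\[
J(\mathbf{q}, \dot{\mathbf{q}}, \boldsymbol{\tau}) \;=\; \sum_{i=1}^{N} w_i \,\mathrm{EI}_i(\mathbf{q}, \dot{\mathbf{q}}, \boldsymbol{\tau}),
\qquad w_i \ge 0, \;\; \sum_{i=1}^{N} w_i = 1,
\]

where each \(\mathrm{EI}_i\) is a biomechanical ergonomic indicator computed from the worker’s kinodynamic state (joint configuration \(\mathbf{q}\), velocities \(\dot{\mathbf{q}}\), torques \(\boldsymbol{\tau}\)), and the weights \(w_i\) encode the psychosocial importance of the corresponding gesture for the worker.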
Thanks to its active assistance, the cobot should objectively reduce workers’ physical strain while maintaining the task performance requirements. Yet, reducing the mechanical load on the worker is not the only criterion to be accounted for, with a view to long-term cobot acceptance. Indeed, the cobot should also account for workers’ psychosocial factors, such as the worker’s perception of the importance of his/her role in performing the task, the level of precision and flexibility allowed by the robot, or the boredom induced by a repetitive task. For example, Schoose et al. (2022) have shown that, while reducing physical strain, full weight compensation may reduce cobot acceptance. Workers often prefer to keep performing the gestures that they feel are important and valorising for them (Bobillier-Chaumon, 2021).
To achieve these goals, HERCULES will address the scientific challenges associated with
We will rely on the HERCULES interdisciplinary consortium to design the cost function that will be minimized by the cobot controller and, more generally, used to assess industrial tasks. Indeed, the cobot will assist the worker so as to minimize worker-specific biomechanical EIs, weighted according to their psychosocial importance. HERCULES will also address the technological challenges of automatically segmenting an industrial task into subtasks and of accurately estimating the worker’s kinodynamic state, from affordable and minimally invasive multimodal sensors and from a capacitive skin placed on the cobot, acquired via the PIA TIRREX. Industrial tasks are composed of several subtasks, each requiring a different assistive mode. Within HERCULES, we will segment each manufacturing task into subtasks, in real time and with high robustness, by combining kinodynamic measures, a human biomechanical model and a bidirectional long short-term memory (BiLSTM) deep network (Kumar Sing, 2022), as sketched below. The kinodynamic measures should be as accurate as those output by gold-standard systems. To this end, we will fuse raw data from inertial measurement units (IMUs), from RGB-D camera(s) with skeleton-tracking algorithms, and from a new capacitive skin mounted on the cobot BAZAR (Cherubini, 2019). Sensor fusion will rely on a new optimization framework that copes with the advantages and drawbacks of each sensor (Futamure, 2016; Mallat, 2021).
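The sketch below illustrates the segmentation step only; it is a minimal example under stated assumptions, not the HERCULES implementation. The class name, feature count and number of subtasks are hypothetical, and the input is assumed to be an already-fused sequence of kinodynamic features. A bidirectional LSTM then labels each frame of the sequence with a subtask.

```python
# Illustrative sketch (hypothetical names and dimensions): per-frame subtask
# labelling of a manufacturing task with a bidirectional LSTM, assuming the
# fused kinodynamic measures are available as a feature sequence.
import torch
import torch.nn as nn

class SubtaskSegmenter(nn.Module):
    def __init__(self, n_features: int, n_subtasks: int, hidden: int = 64):
        super().__init__()
        # Bidirectional LSTM over the kinodynamic feature sequence
        # (e.g. joint angles, velocities, interaction forces per frame).
        self.lstm = nn.LSTM(n_features, hidden,
                            batch_first=True, bidirectional=True)
        # One subtask label per frame.
        self.head = nn.Linear(2 * hidden, n_subtasks)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, n_features) -> logits: (batch, time, n_subtasks)
        out, _ = self.lstm(x)
        return self.head(out)

# Hypothetical usage: 30 kinodynamic features, 5 subtasks, 200-frame window.
model = SubtaskSegmenter(n_features=30, n_subtasks=5)
window = torch.randn(1, 200, 30)            # placeholder sensor data
subtask_per_frame = model(window).argmax(dim=-1)
```

In the project, the input features would come from the fused IMU, RGB-D and capacitive-skin measurements, and the per-frame subtask labels would select the corresponding assistive mode of the cobot.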