Vision- and tactile-based continuous multimodal intention and attention recognition for safer physical human-robot interaction

06/22/2022
by Christopher Yee Wong, et al.

Employing skin-like tactile sensors on robots enhances both the safety and usability of collaborative robots by adding the capability to detect human contact. Unfortunately, simple binary tactile sensors alone cannot determine the context of the human contact: whether it is a deliberate interaction or an unintended collision that requires safety manoeuvres. Many published methods classify discrete interactions using more advanced tactile sensors or by analysing joint torques. Instead, we propose to augment the intention recognition capabilities of simple binary tactile sensors by adding a robot-mounted camera for human posture analysis. Different interaction characteristics, including touch location, human pose, and gaze direction, are used to train a supervised machine learning algorithm to classify whether a touch is intentional or not with 92% accuracy. We demonstrate that multimodal intention recognition is significantly more accurate than monomodal analysis with the collaborative robot Baxter. Furthermore, our method can also continuously monitor interactions that fluidly change between intentional and unintentional by gauging the user's attention through gaze. If a user stops paying attention mid-task, the proposed intention and attention recognition algorithm can activate safety features to prevent unsafe interactions. In addition, the proposed method is agnostic to the robot and touch sensor layout and is complementary to other methods.
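To make the pipeline concrete, here is a minimal sketch of the multimodal classification step in Python. The abstract only states that a supervised learning algorithm is trained on touch location, human pose, and gaze direction, so the feature layout, the random-forest model, the attention threshold, and all function names below are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch: supervised touch-intention classification from
# multimodal features (touch location + pose keypoints + gaze direction),
# assuming scikit-learn. Synthetic data stands in for real recordings.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split


def make_feature_vector(touch_location, pose_keypoints, gaze_direction):
    """Concatenate multimodal cues into one flat feature vector.

    touch_location : (3,) xyz of the contact point on the robot's skin
    pose_keypoints : (K, 3) human joint positions from the camera
    gaze_direction : (3,) unit vector of the user's gaze
    (All names and shapes here are illustrative, not from the paper.)
    """
    return np.concatenate([
        np.asarray(touch_location, dtype=float),
        np.asarray(pose_keypoints, dtype=float).ravel(),
        np.asarray(gaze_direction, dtype=float),
    ])


def user_attentive(gaze_direction, robot_direction, max_angle_deg=30.0):
    """Crude attention gate: is the gaze roughly toward the robot?

    The angular threshold is an assumption; the paper gauges attention
    through gaze but does not specify a rule in the abstract.
    """
    cos = np.dot(gaze_direction, robot_direction) / (
        np.linalg.norm(gaze_direction) * np.linalg.norm(robot_direction))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))) <= max_angle_deg


# X: stacked feature vectors; y: 1 = intentional touch, 0 = collision.
# Replace this synthetic data with real labelled interaction recordings.
rng = np.random.default_rng(0)
n_features = 3 + 17 * 3 + 3  # e.g. touch xyz + 17 pose keypoints + gaze
X = rng.normal(size=(200, n_features))
y = rng.integers(0, 2, size=200)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
print("held-out accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```

At runtime, the same classifier could be re-evaluated on every new frame, with `user_attentive` acting as a continuous gate: if gaze drifts away mid-task, the system would fall back to safety behaviour rather than trusting the last intention label.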
