Determining Exception Context in Assembly Operations from Multimodal Data
Blog Article
Robot assembly tasks can fail due to unpredictable errors and can only continue with the manual intervention of a human operator. Recently, we proposed an exception strategy learning framework based on statistical learning and context determination, which can successfully resolve such situations. This paper deals with context determination from multimodal data, which is the key component of our framework. We propose a novel approach that generates unified low-dimensional context descriptions from image and force-torque data. For this purpose, we combine a state-of-the-art neural network model for image segmentation with contact point estimation based on force-torque measurements.
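To make the force-torque side of this pipeline concrete, the sketch below shows one common way to localize a point contact from a single wrench measurement: for a pure contact force, the measured torque about the sensor origin is the cross product of the contact position and the force, so the contact point lies on the force's line of action. This is a generic illustration, not the exact estimator used in our framework; the function name and tolerance are hypothetical.

```python
import numpy as np

def contact_line_from_wrench(force, torque, eps=1e-9):
    """Estimate the line of action of a point contact from one
    force-torque sample, assuming torque = r x force about the
    sensor origin.

    Returns (point, direction): the point on the contact line closest
    to the sensor origin and the unit direction of the line, or None
    if the measured force is negligible.
    """
    f = np.asarray(force, dtype=float)
    m = np.asarray(torque, dtype=float)
    f_norm_sq = float(f @ f)
    if f_norm_sq < eps:
        return None  # no significant contact force measured
    # Point on the line of action closest to the sensor origin:
    # (f x m) / |f|^2 is the component of r perpendicular to f.
    r_perp = np.cross(f, m) / f_norm_sq
    direction = f / np.sqrt(f_norm_sq)
    return r_perp, direction
```

Features derived from this estimate (for example, the offset of the contact line from the expected insertion axis) can then be combined with cues extracted from the segmented image.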
An ensemble of decision trees is used to combine features from the two modalities. To validate the proposed approach, we collected datasets of deliberately induced insertion failures, both for the classic peg-in-hole insertion task and for the industrially relevant task of car starter assembly. We demonstrate that the proposed approach generates reliable low-dimensional descriptors, suitable as queries in the statistical learning framework.
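The following sketch illustrates the fusion step under stated assumptions: scikit-learn's RandomForestClassifier stands in for the ensemble of decision trees, the image and force-torque feature vectors are hypothetical placeholders generated synthetically, and the predicted class probabilities play the role of the low-dimensional context descriptor. It is not the exact model configuration from the paper.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Hypothetical per-trial feature vectors: one block derived from the image
# segmentation output, one from the force-torque contact estimate.
rng = np.random.default_rng(0)
n_trials = 400
img_features = rng.normal(size=(n_trials, 6))   # e.g. segmentation-derived cues
ft_features = rng.normal(size=(n_trials, 4))    # e.g. contact point and force stats
# Synthetic failure-context labels, loosely tied to the features for the demo.
context_labels = (img_features[:, 0] + 0.5 * ft_features[:, 0] > 0).astype(int)

# Early fusion: concatenate both modalities and train a tree ensemble.
X = np.hstack([img_features, ft_features])
X_train, X_test, y_train, y_test = train_test_split(
    X, context_labels, test_size=0.25, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)

# The class-probability vector acts as a compact context descriptor that can
# be used to query an exception strategy.
descriptor = clf.predict_proba(X_test[:1])
print("context descriptor:", descriptor)
print(classification_report(y_test, clf.predict(X_test)))
```

Using class probabilities rather than a hard label keeps the descriptor low-dimensional while still expressing uncertainty, which is convenient when it serves as a query to a statistical learning component.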