Where is my hand? Deep hand segmentation for visual self-recognition in humanoid robots

The perception of self, that is, the ability of a robot to detect its own body and distinguish it from the background, is beneficial both for egocentric actions and for interaction with other agents. Full spatial information about the hand must be known to perform difficult tasks, such as object grasping. There, simple approaches such as 2D hand keypoints are not sufficient.

Robonaut. Image credit: NASA via Pixabay

Therefore, a recent paper proposes to use hand segmentation for visual self-recognition: all the pixels belonging to a real robot hand are segmented using RGB images from the robot's cameras.

The approach relies on convolutional neural networks trained entirely on simulated data, which sidesteps the absence of pre-existing training datasets. In order to fit the model to the real domain, the pre-trained weights and the hyperparameters are fine-tuned. The proposed solution achieves an intersection-over-union (IoU) accuracy better than the state of the art.
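As a reminder of the metric, IoU compares the set of pixels predicted as "hand" with the ground-truth mask: the size of their intersection divided by the size of their union. A minimal sketch of the computation (the mask_iou helper and the toy masks below are illustrative, not code from the paper):

```python
import numpy as np

def mask_iou(pred: np.ndarray, target: np.ndarray) -> float:
    """Intersection over union between two binary segmentation masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    # If both masks are empty there is nothing to segment; treat it as a perfect match.
    return float(intersection) / union if union > 0 else 1.0

# Toy example: two overlapping 4x4 masks.
pred = np.zeros((4, 4), dtype=bool)
pred[1:3, 1:4] = True          # predicted "hand" pixels
gt = np.zeros((4, 4), dtype=bool)
gt[1:3, 0:3] = True            # ground-truth "hand" pixels
print(mask_iou(pred, gt))      # 0.5 -> 4 shared pixels out of 8 in the union
```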

The ability to distinguish between the self and the background is of paramount importance for robotic tasks. The particular case of hands, as the end effectors of a robotic system that most often come into contact with other elements of the environment, must be perceived and tracked with precision to execute the intended tasks with dexterity and without colliding with obstacles. They are fundamental for several applications, from Human-Robot Interaction tasks to object manipulation. Modern humanoid robots are characterized by a high number of degrees of freedom, which makes their forward kinematics models very sensitive to uncertainty. Thus, resorting to vision sensing can be the only solution to endow these robots with a good perception of the self, being able to localize their body parts with precision. In this paper, we propose the use of a Convolutional Neural Network (CNN) to segment the robot hand from an image in an egocentric view. It is known that CNNs require a large amount of data to be trained. To overcome the challenge of labeling real-world images, we propose the use of simulated datasets exploiting domain randomization techniques. We fine-tuned the Mask-RCNN network for the specific task of segmenting the hand of the humanoid robot Vizzy. We focus our attention on developing a methodology that requires low amounts of data to achieve reasonable performance, while giving detailed insight on how to properly generate variability in the training dataset. Moreover, we analyze the fine-tuning process within the complex model of Mask-RCNN, understanding which weights should be transferred to the new task of segmenting robot hands. Our final model was trained solely on synthetic images and achieves an average IoU of 82% on synthetic validation data and 56.3% on real test data. These results were obtained with only 1000 training images and 3 hours of training time using a single GPU.
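To give a concrete sense of what fine-tuning Mask-RCNN for a single "hand" class typically involves, here is a brief sketch using torchvision's Mask R-CNN implementation. It is not the authors' code, and the choice of which layers to freeze is a hypothetical example of the weight-transfer decisions the paper analyzes:

```python
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor

# Load Mask R-CNN pre-trained on COCO.
model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")

num_classes = 2  # background + robot hand

# Replace the box classification/regression head for the two-class problem.
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)

# Replace the mask prediction head as well.
in_channels = model.roi_heads.mask_predictor.conv5_mask.in_channels
model.roi_heads.mask_predictor = MaskRCNNPredictor(in_channels, 256, num_classes)

# Example weight-transfer choice: keep the COCO backbone frozen and
# fine-tune only the region proposal network and the new heads.
for p in model.backbone.parameters():
    p.requires_grad = False

trainable = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.SGD(trainable, lr=0.005, momentum=0.9, weight_decay=0.0005)
```

Training then loops over the synthetic, domain-randomized images with their rendered masks; deciding which parameter groups to freeze or transfer is exactly the kind of design choice the paper studies.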

Research paper: Almeida, A., Vicente, P., and Bernardino, A., “Where is my hand? Deep hand segmentation for visual self-recognition in humanoid robots”, 2021. Link: https://arxiv.org/abs/2102.04750


