Hand-Segmentation-in-the-Wild


Analysis of Hand Segmentation in the Wild

Abstract

A large number of works in egocentric vision have concentrated on action and object recognition. Detection and segmentation of hands in first-person videos, however, has been less explored. For many applications in this domain, it is necessary to accurately segment not only the hands of the camera wearer but also the hands of others with whom they are interacting. Here, we take an in-depth look at the hand segmentation problem. In the quest for robust hand segmentation methods, we evaluated the performance of state-of-the-art semantic segmentation methods, both off-the-shelf and fine-tuned, on existing datasets. We fine-tune RefineNet, a leading semantic segmentation method, for hand segmentation and find that it does much better than the best contenders. Existing hand segmentation datasets are collected in laboratory settings. To overcome this limitation, we contribute two new datasets: a) EgoYouTubeHands, containing egocentric videos with hands in the wild, and b) HandOverFace, for analyzing the performance of our models in the presence of similar-appearance occlusions. We further explore whether conditional random fields can help refine the generated hand segmentations. To demonstrate the benefit of accurate hand maps, we train a CNN for hand-based activity recognition and achieve higher accuracy when the CNN is trained using hand maps produced by the fine-tuned RefineNet. Finally, we annotate a subset of the EgoHands dataset for fine-grained action recognition and show that an accuracy of 58.6% can be achieved by looking at a single hand pose, which is much better than the chance level (12.5%).

Code

We have uploaded the additional files needed to train, test, and evaluate our models' performance. Code for multiscale evaluation is also provided. See the folder refinenet_files.
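Segmentation quality in this line of work is typically reported as Intersection-over-Union (IoU) between predicted and ground-truth hand masks. The helper below is a minimal pure-Python sketch (not taken from the repository) of how IoU can be computed for binary masks:

```python
# Minimal sketch (hypothetical helper, not from refinenet_files):
# Intersection-over-Union between two binary hand masks,
# represented as nested lists of 0/1 values of equal shape.

def mask_iou(pred, gt):
    """Return IoU of two binary masks; 1.0 if both masks are empty."""
    inter = sum(p & g for row_p, row_g in zip(pred, gt)
                for p, g in zip(row_p, row_g))
    union = sum(p | g for row_p, row_g in zip(pred, gt)
                for p, g in zip(row_p, row_g))
    return inter / union if union else 1.0

pred = [[1, 1, 0],
        [0, 1, 0]]
gt   = [[1, 0, 0],
        [0, 1, 1]]
print(mask_iou(pred, gt))  # intersection = 2, union = 4 -> 0.5
```

In practice one would average this score over all test images (mean IoU), which is the convention followed when comparing segmentation models on these datasets.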


Models

You can download our RefineNet-based hand segmentation models using the links given below:

Datasets

We used four hand segmentation datasets in our work, two of which (EgoYouTubeHands and HandOverFace) were collected as part of our contribution:

Warning!

Thanks to Rafael Redondo Tejedor for pointing out some minor mistakes in the dataset:

NEW!

02/23/2021: Hand masks for GTEA (cropped at the wrist) have been uploaded under the data directory.

Links to the videos used for the EYTH dataset are given below. Each video is 3-6 minutes long. We cleaned the dataset before annotation and discarded unnecessary frames (e.g., frames containing text or with hands out of view for a long time).

vid4 vid6 vid9

NEW!

The test set for the HandOverFace dataset is uploaded here.

Example images from the EgoYouTubeHands dataset: EYTH

Example images from the HandOverFace dataset: HOF

Results

Qualitative Results

Hand segmentation results for all datasets:

CVPR Poster

cvpr-poster

License

We only provide the annotations for the videos in EYTH and the images in HOF. The copyright to the videos belongs to YouTube, and the images were collected from the internet. The license for our work is in the license.txt file.

Acknowledgements

We would like to thank undergraduate students Cristopher Matos and Jose-Valentin Sera-Josef, and MS student Shiven Goyal, for helping us with data annotation.

Citation

If this work and/or the datasets are useful for your research, please cite our paper.

Questions?

Please contact aishaurooj@gmail.com.