
Abstract

Humans have the amazing ability to perform very subtle manipulation tasks using a closed-loop control system with imprecise mechanics (i.e., our body parts) but rich sensory information (e.g., vision, tactile sensing, etc.). In such a closed-loop system, the ability to monitor the state of the task via rich sensory information is important but often less studied. In this work, we take liquid pouring as a concrete example and aim at learning to continuously monitor whether liquid pouring is successful (e.g., no spilling) via rich sensory inputs. We mimic humans’ rich senses using synchronized observations from a chest-mounted camera and a wrist-mounted IMU sensor. Given many success and failure demonstrations of liquid pouring, we train a hierarchical LSTM with late fusion for monitoring. To improve the robustness of the system, we propose two auxiliary tasks during training: (1) inferring the initial state of containers and (2) forecasting the one-step future 3D trajectory of the hand with an adversarial training procedure. These tasks encourage our method to learn representations sensitive to container states and to how objects are manipulated in 3D. With these novel components, our method achieves ~8% and ~11% better monitoring accuracy than a baseline method without auxiliary tasks on unseen containers and unseen users, respectively.
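To make the architecture described above concrete, here is a minimal sketch. It is not the authors' released implementation: the use of PyTorch, the feature dimensions, and the head sizes are all assumptions for illustration. Per-modality LSTMs first encode the camera features and IMU readings separately, a higher-level LSTM then fuses them late, and small linear heads produce the monitoring output plus the two auxiliary predictions (the adversarial discriminator used when training the trajectory head is omitted here).

```python
# Hypothetical sketch of a hierarchical LSTM monitor with late fusion.
# Dimensions and head sizes are illustrative, not from the paper.
import torch
import torch.nn as nn

class PouringMonitor(nn.Module):
    def __init__(self, vis_dim=2048, imu_dim=6, hid=128):
        super().__init__()
        # Low-level LSTMs: one per sensory stream.
        self.vis_lstm = nn.LSTM(vis_dim, hid, batch_first=True)
        self.imu_lstm = nn.LSTM(imu_dim, hid, batch_first=True)
        # High-level LSTM fuses the two streams late, after per-modality encoding.
        self.fuse_lstm = nn.LSTM(2 * hid, hid, batch_first=True)
        # Main task: per-timestep success/failure monitoring.
        self.monitor_head = nn.Linear(hid, 2)
        # Auxiliary task 1: initial container state (hypothetical class count).
        self.state_head = nn.Linear(hid, 4)
        # Auxiliary task 2: one-step future 3D hand position.
        self.traj_head = nn.Linear(hid, 3)

    def forward(self, vis_feats, imu_seq):
        # vis_feats: (B, T, vis_dim) per-frame CNN features from the chest camera;
        # imu_seq:   (B, T, imu_dim) synchronized wrist IMU readings.
        v, _ = self.vis_lstm(vis_feats)
        m, _ = self.imu_lstm(imu_seq)
        h, _ = self.fuse_lstm(torch.cat([v, m], dim=-1))
        return self.monitor_head(h), self.state_head(h), self.traj_head(h)

model = PouringMonitor()
vis = torch.randn(2, 50, 2048)   # 2 clips, 50 synchronized frames
imu = torch.randn(2, 50, 6)      # e.g., accelerometer + gyroscope channels
monitor_logits, state_logits, next_pos = model(vis, imu)
```

Late fusion of this kind lets each LSTM specialize in the temporal dynamics of its own modality before the fused LSTM combines them, which is one plausible reading of the "hierarchical LSTM with late fusion" phrase in the abstract.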

Video

Paper

ECCV 2018

Liquid Pouring Monitoring via Rich Sensory Inputs

Tz-Ying Wu*, Juan-Ting Lin*, Tsun-Hsuan Wang, Chan-Wei Hu, Juan Carlos Niebles, Min Sun (* indicates equal contribution)
Paper (arXiv) Poster
@inproceedings{WuLinECCV18,
  title     = {Liquid Pouring Monitoring via Rich Sensory Inputs},
  author    = {Tz-Ying Wu and Juan-Ting Lin and Tsun-Hsuan Wang and Chan-Wei Hu and Juan Carlos Niebles and Min Sun},
  year      = {2018},
  booktitle = {European Conference on Computer Vision (ECCV)}
}

Acknowledgement