AVOT: Audio-Visual Object Tracking of Multiple Objects for Robotics

Justin Wilson1 and Ming C. Lin1,2

1University of North Carolina at Chapel Hill, 2University of Maryland, College Park




(Left) Audio-Visual Object Tracker (AVOT) neural network architecture. AVOT is a feed-forward convolutional neural network that classifies and scales a fixed number of anchor bounding boxes to track objects in a video. Here, we define an object by its geometry and material. Convolutional layers from the visual and audio inputs are fused using an add merge layer before being fed into a base network of convolutional layers similar to standard classification networks. The base network is followed by predictor layers for detection, as in SSD, but designed and optimized for our audio-visual dataset and task. The single best detection for each object is then selected using non-maximum suppression.

(Right) An example failure case improved by our audio-visual object tracker. (Top row) The best baseline, CSRT in this case, incorrectly latches onto the wrong object after a collision. (Middle row) Our AVOT method continues to correctly track the object post-collision. (Bottom row) Ground truth annotated by the experimenter. For clarity, we show the bounding box for only one of the objects being tracked, although the methods track both objects. Please see the Supplementary Video for more demonstrations.
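To make the data flow in the figure concrete, below is a minimal sketch of an AVOT-style fusion network in PyTorch. The channel counts, the 300x300 input resolution, and the anchor count are illustrative assumptions rather than the published configuration; only the overall structure follows the caption: separate visual and audio convolutional branches, an element-wise add merge, a shared base network, and SSD-style per-anchor class and box heads.

    # Sketch of an AVOT-style audio-visual fusion network (PyTorch).
    # All layer sizes and counts below are illustrative assumptions.
    import torch
    import torch.nn as nn

    class AVOTSketch(nn.Module):
        def __init__(self, num_classes=20, num_anchors=6):
            super().__init__()
            # Visual branch: convolutions over the video frame.
            self.visual = nn.Sequential(
                nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
                nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            )
            # Audio branch: convolutions over a spectrogram, pooled to the
            # visual feature-map size so the two can be added element-wise.
            self.audio = nn.Sequential(
                nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d((300, 300)),  # assumed frame resolution
            )
            # Base network applied to the fused (added) feature maps.
            self.base = nn.Sequential(
                nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(128, 256, 3, stride=2, padding=1), nn.ReLU(),
            )
            # SSD-style predictor heads: per-anchor class scores and box offsets.
            self.cls_head = nn.Conv2d(256, num_anchors * num_classes, 3, padding=1)
            self.loc_head = nn.Conv2d(256, num_anchors * 4, 3, padding=1)

        def forward(self, frame, spectrogram):
            fused = self.visual(frame) + self.audio(spectrogram)  # add merge
            feats = self.base(fused)
            return self.cls_head(feats), self.loc_head(feats)

At inference, the per-anchor scores and offsets would be decoded into boxes and filtered with non-maximum suppression (e.g., torchvision.ops.nms) so that only the single best detection per object is kept, as described in the caption.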

ABSTRACT

Existing state-of-the-art object trackers can run into challenges when objects collide, occlude, or come close to one another. These visually based trackers may also fail to differentiate between objects with the same appearance but different materials. When such failures occur, existing methods may stop tracking or incorrectly begin tracking another object, and they are difficult to recover from since trackers often rely on results from previous frames. By using the audio of impact sounds from object collisions, rolling, and other interactions, our audio-visual object tracking (AVOT) neural network can reduce tracking error and drift. We train AVOT end to end and use audio-visual inputs over all frames. Our audio-based technique may be used in conjunction with other neural networks to augment visually based object detection and tracking methods. We evaluate AVOT's runtime performance in frames per second (FPS) and its accuracy in intersection over union (IoU) against OpenCV object tracking implementations and a deep learning method. Our experiments on the synthetic Sound-20K audio-visual dataset demonstrate that AVOT outperforms single-modality deep learning methods when there is audio from object collisions. A proposed scheduler network that switches between AVOT and other methods based on audio onset further maximizes accuracy and performance over all frames in multimodal object tracking.
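As a rough illustration of the scheduling idea, the sketch below routes each frame to one of two caller-supplied trackers based on audio onsets detected with librosa. The avot_track and visual_track callables and the 0.25 s onset window are hypothetical stand-ins, and this simple heuristic only approximates the proposed scheduler, which is itself a network.

    # Sketch of onset-based scheduling between trackers. The tracker
    # callables and the onset window are assumptions for illustration.
    import librosa

    def schedule_trackers(video_frames, audio_path, fps,
                          avot_track, visual_track, window=0.25):
        y, sr = librosa.load(audio_path)
        # Times (in seconds) of detected audio onsets, e.g., impact sounds.
        onset_times = librosa.onset.onset_detect(y=y, sr=sr, units="time")
        boxes = []
        for i, frame in enumerate(video_frames):
            t = i / fps
            # Near an onset, sound carries information about the collision,
            # so use the audio-visual tracker; otherwise fall back to vision.
            if any(abs(t - onset) <= window for onset in onset_times):
                boxes.append(avot_track(frame))
            else:
                boxes.append(visual_track(frame))  # e.g., OpenCV CSRT
        return boxes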


PUBLICATION


AVOT: Audio-Visual Object Tracking of Multiple Objects for Robotics

International Conference on Robotics and Automation (ICRA) 2020
Justin Wilson and Ming C. Lin
@InProceedings{Wilson2020,
  title     = {AVOT: Audio-Visual Object Tracking of Multiple Objects for Robotics},
  author    = {Wilson, Justin and Lin, Ming C.},
  booktitle = {2020 International Conference on Robotics and Automation (ICRA)},
  year      = {2020}
}

DEMO VIDEO

Video download


DATASETS

Dataset download