Real-time underwater StereoFusion

Matija Rossi, Petar Trslic, Satja Sivcev, James Riordan, Daniel Toal, Gerard Dooly

Research output: Contribution to journal › Article

5 Citations (Scopus)
78 Downloads (Pure)

Abstract

Many current and future applications of underwater robotics require real-time sensing and interpretation of the environment. As the vast majority of robots are equipped with cameras, computer vision is playing an increasingly important role in this field. This paper presents the implementation and experimental results of underwater StereoFusion, an algorithm for real-time dense 3D reconstruction and camera tracking. Unlike KinectFusion, on which it is based, StereoFusion relies on a stereo camera as its main sensor. The algorithm uses the depth map obtained from the stereo camera to incrementally build a volumetric 3D model of the environment, while simultaneously using the model for camera tracking. It has been successfully tested both in a lake and in the ocean, using two different state-of-the-art underwater Remotely Operated Vehicles (ROVs). Ongoing work focuses on applying the same algorithm to acoustic sensors, and on implementing a vision-based monocular system with the same capabilities.
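The fusion step the abstract describes — incrementally averaging each new depth map into a volumetric model — follows the truncated signed distance function (TSDF) update scheme that KinectFusion introduced. The sketch below is an illustrative CPU/NumPy version of that update, not the authors' real-time GPU implementation; the function name, the dense volume layout, and the constant weight of 1 per observation are assumptions made for clarity.

```python
import numpy as np

def integrate_depth(tsdf, weights, depth, K, cam_pose, voxel_size, trunc):
    """Fuse one depth map into a TSDF volume (KinectFusion-style update).

    tsdf, weights : (X, Y, Z) arrays holding the running signed-distance
                    average and the per-voxel integration weight.
    depth         : (H, W) depth image in metres (e.g. from stereo matching).
    K             : 3x3 camera intrinsics; cam_pose : 4x4 camera-to-world pose.
    voxel_size    : edge length of one voxel in metres.
    trunc         : truncation distance of the signed-distance field.
    """
    X, Y, Z = tsdf.shape
    # World coordinates of every voxel centre.
    ix, iy, iz = np.meshgrid(np.arange(X), np.arange(Y), np.arange(Z),
                             indexing="ij")
    pts_w = np.stack([ix, iy, iz], axis=-1).reshape(-1, 3) * voxel_size
    # Transform voxel centres into the camera frame.
    world_to_cam = np.linalg.inv(cam_pose)
    pts_c = pts_w @ world_to_cam[:3, :3].T + world_to_cam[:3, 3]
    z = pts_c[:, 2]
    z_safe = np.where(z > 1e-6, z, 1.0)        # avoid divide-by-zero behind camera
    # Project each voxel into the depth image.
    uv = pts_c @ K.T
    u = np.round(uv[:, 0] / z_safe).astype(int)
    v = np.round(uv[:, 1] / z_safe).astype(int)
    H, W = depth.shape
    valid = (z > 1e-6) & (u >= 0) & (u < W) & (v >= 0) & (v < H)
    d = np.zeros_like(z)
    d[valid] = depth[v[valid], u[valid]]
    valid &= d > 0
    # Signed distance along the viewing ray, truncated and normalised to [-1, 1].
    sdf = d - z
    valid &= sdf > -trunc
    tsdf_new = np.clip(sdf / trunc, -1.0, 1.0)
    # Weighted running average (weight 1 per observation).
    flat_t = tsdf.reshape(-1)
    flat_w = weights.reshape(-1)
    idx = np.where(valid)[0]
    flat_t[idx] = (flat_t[idx] * flat_w[idx] + tsdf_new[idx]) / (flat_w[idx] + 1)
    flat_w[idx] += 1
    return tsdf, weights
```

In this scheme the running average makes the surface estimate converge as more depth maps are fused, and the same volume can then be ray-cast to produce the model-based depth prediction used for camera tracking.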
Original language: English
Article number: 3936
Number of pages: 17
Journal: Sensors
Volume: 18
Issue number: 11
Early online date: 14 Nov 2018
DOI: https://doi.org/10.3390/s18113936
Publication status: E-pub ahead of print - 14 Nov 2018

Keywords

  • stereo
  • underwater
  • ROV
  • GPU
  • real-time
  • 3D
  • fusion
  • camera
  • tracking
  • vision


Cite this

Rossi, M., Trslic, P., Sivcev, S., Riordan, J., Toal, D., & Dooly, G. (2018). Real-time underwater StereoFusion. Sensors, 18(11), 3936. https://doi.org/10.3390/s18113936