A Robust Thermal-Inertial Odometry with Deep ThermalPoint

About TP-TIO

To achieve robust motion estimation in visually degraded environments, thermal odometry has attracted growing interest in the robotics community. However, most thermal odometry methods rely purely on classical feature extractors, which makes it difficult to establish robust correspondences between successive frames due to sudden photometric changes and large thermal noise. To solve this problem, we propose ThermalPoint, a lightweight feature detection network specifically tailored for producing keypoints on thermal images, providing notable anti-noise improvements compared with other state-of-the-art methods. We then combine ThermalPoint with a novel radiometric feature tracking method, which directly makes use of the full radiometric data and establishes reliable correspondences between sequential frames. Finally, taking advantage of an optimization-based visual-inertial framework, a deep feature-based thermal-inertial odometry (TP-TIO) framework is proposed and evaluated thoroughly in various visually degraded environments. Experiments show that our method outperforms state-of-the-art visual and laser odometry methods in smoke-filled environments and achieves competitive accuracy in normal environments.
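The radiometric tracking idea can be illustrated with a minimal sketch (not the paper's implementation): rather than feeding the tracker frames rescaled by the camera's automatic gain control, each raw 14-bit radiometric frame is mapped to 8-bit with a single frame-wise min/max mapping, so pixel intensities stay photometrically consistent for frame-to-frame tracking. The function name and parameters below are illustrative, assuming a 14-bit raw frame stored as `uint16`.

```python
import numpy as np

def normalize_radiometric(raw):
    """Map a raw radiometric frame to 8-bit using its own min/max range.

    Unlike per-region automatic gain control, a single frame-wise mapping
    keeps pixel intensities comparable across the image, which is what
    gradient-based feature tracking needs.
    """
    raw = raw.astype(np.float32)
    lo, hi = raw.min(), raw.max()
    scaled = (raw - lo) / max(hi - lo, 1.0)  # avoid divide-by-zero on flat frames
    return (scaled * 255.0).astype(np.uint8)

# Toy 14-bit frame: a warm blob on a cooler background.
frame = np.full((64, 64), 7000, dtype=np.uint16)
frame[20:30, 20:30] = 9000
img8 = normalize_radiometric(frame)  # background -> 0, blob -> 255
```

The resulting 8-bit image can then be handed to a standard KLT-style tracker; the key point is that the mapping is derived from the radiometric data itself, not from a gain curve that changes unpredictably between frames.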


Zhao, Shibo, et al. "TP-TIO: A Robust Thermal-Inertial Odometry with Deep ThermalPoint." 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2020. Paper

@inproceedings{zhao2020tptio,
  title={TP-TIO: A Robust Thermal-Inertial Odometry with Deep ThermalPoint},
  author={Zhao, Shibo and Wang, Peng and Zhang, Hengrui and Fang, Zheng and Scherer, Sebastian},
  booktitle={2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
  year={2020},
  organization={IEEE}
}


The authors would like to express sincere thanks to Professor Michael Kaess, Professor Ji Zhang, Dr. Shehryar Khattak, and Dr. Chen Wang for their constructive advice. Meanwhile, we would like to thank the following great works, from which we learned while developing TP-TIO.

KTIO: Keyframe-based Thermal-Inertial Odometry, Journal of Field Robotics 37.4 (2020): 552-579.

VINS-Mono: A Robust and Versatile Monocular Visual-Inertial State Estimator, Tong Qin, Peiliang Li, Zhenfei Yang, Shaojie Shen, IEEE Transactions on Robotics.

GTSAM Georgia Tech Smoothing and Mapping Library


If you have any questions or want to contribute to this work, please feel free to send an email to Shibo Zhao. Thank you! :)