TechRxiv

Targetless Lidar-camera Calibration via Cross-modality Structure Consistency

Version 5 2023-10-25, 20:46
Version 4 2023-10-12, 02:28
Version 3 2023-09-05, 15:56
Version 2 2023-09-01, 19:03
Version 1 2023-08-23, 02:49
preprint
posted on 2023-10-25, 20:46, authored by Ni Ou

Lidar and cameras serve as essential sensors for automated vehicles and intelligent robots, and they are frequently fused in complex tasks. Precise extrinsic calibration is a prerequisite for Lidar-camera fusion. Hand-eye calibration is among the most commonly used targetless calibration approaches. This paper identifies a degeneration problem of hand-eye calibration that arises when sensor motions lack rotation. This situation is common for ground vehicles, especially those traveling on urban roads, and leads to a significant deterioration in translational calibration performance. To address this problem, we propose a novel motion-based Lidar-camera calibration framework based on cross-modality structure consistency. It is globally convergent within the specified search range and achieves satisfactory translation calibration accuracy in degenerate scenarios. To verify the effectiveness of our framework, we compare its performance to one motion-based method and two appearance-based methods on six Lidar-camera data sequences from the KITTI dataset. Additionally, an ablation study demonstrates the effectiveness of each module within our framework.
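The degeneracy described above can be seen in the translation part of the hand-eye equation AX = XB: each motion pair contributes a constraint of the form (R_A − I) t_X = R_X t_B − t_A, so when the sensor barely rotates, R_A ≈ I and the coefficient matrix vanishes, leaving the extrinsic translation t_X nearly unobservable. The following NumPy sketch is our own illustration of this effect (it is not code from the paper; the two synthetic motions and function names are assumptions), showing how the smallest singular value of the stacked translation system shrinks with the rotation angle:

```python
import numpy as np

def rot_z(theta):
    """Rotation matrix about the z-axis."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def rot_x(theta):
    """Rotation matrix about the x-axis."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]])

def translation_conditioning(theta):
    """Smallest singular value of the stacked hand-eye translation
    system [(R_A1 - I); (R_A2 - I)] t_X = b, built from two sensor
    motions rotating by `theta` about non-parallel axes."""
    A = np.vstack([rot_z(theta) - np.eye(3),
                   rot_x(theta) - np.eye(3)])
    return np.linalg.svd(A, compute_uv=False)[-1]

# With ample rotation, the translation is well constrained...
sigma_large = translation_conditioning(0.5)   # ≈ 0.49

# ...but with near-pure-translation motion (typical of straight
# urban driving) the system is almost singular, so sensor noise is
# amplified enormously in the recovered translation t_X.
sigma_small = translation_conditioning(0.01)  # ≈ 0.01

print(sigma_large, sigma_small)
```

For rotation by angle θ, the smallest singular value here equals 2·sin(θ/2), i.e. it decays linearly with θ, which is why translation accuracy collapses for low-rotation trajectories.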


Our code is now available on GitHub for reproduction.


Funding

National Natural Science Foundation of China under Grant 62173038

History

Email Address of Submitting Author

3120205431@bit.edu.cn

ORCID of Submitting Author

0000-0002-2643-8989

Submitting Author's Institution

Beijing Institute of Technology

Submitting Author's Country

China
