Camera calibration for the surround-view system: a benchmark and dataset
- Author
- Qin, Leidong; Lin, Chunyu; Huang, Shujuan; Yang, Shangrong; Zhao, Yao
- Subjects
- *DRIVER assistance systems, *PARAMETER estimation, *RESEARCH personnel, *CAMERAS, *CALIBRATION
- Abstract
The surround-view system (SVS) is widely used in advanced driver assistance systems (ADAS). An SVS uses four fish-eye cameras to monitor the scene around the vehicle in real time, but the system only functions properly when accurate intrinsic and extrinsic parameters are available. At present, intrinsic calibration can be carried out in a standard pipeline using checkerboard-based algorithms, while extrinsic calibration remains immature. We therefore propose a dedicated calibration pipeline that robustly estimates the extrinsic parameters. The scheme takes a driving sequence from the four cameras as input. It first uses lane lines to roughly estimate each camera's pose. Because environmental conditions differ between cameras, we select, for each camera, one of two strategies to refine the extrinsic parameters. For the front and rear cameras, we propose a method that alternates between line detection and pose estimation. For the two side cameras, we iteratively adjust the camera pose and position by minimizing the texture and edge error between the ground projections of adjacent cameras. Once the extrinsic parameters are estimated, the surround-view image can be synthesized by homography-based transformation. The proposed pipeline robustly estimates the extrinsic parameters of all four SVS cameras in real driving environments. In addition, to evaluate the proposed scheme, we build a surround-view fish-eye dataset containing 40 videos with 32,000 frames acquired in different real traffic scenarios. All frames in each video are manually labeled with lane annotations, together with the GT extrinsic parameters. This surround-view dataset can also be used by other researchers to evaluate their own methods. The dataset will be available soon. [ABSTRACT FROM AUTHOR]
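The homography-based ground projection mentioned in the abstract follows a standard construction: for ground-plane points, the camera projection reduces to a 3x3 homography built from the intrinsics and the first two rotation columns plus the translation. The sketch below is purely illustrative and is not the authors' implementation; the intrinsics `K`, extrinsics `R`, `t`, the fisheye-rectified input image, and the bird's-eye-view scale and size are all assumed example values.

```python
# Illustrative sketch (not the paper's code): warping one fisheye-rectified camera
# image onto the common ground plane, the building block of surround-view synthesis.
import numpy as np
import cv2

def ground_homography(K, R, t, px_per_m=50.0, bev_size=(800, 800)):
    """Homography mapping camera pixels to BEV pixels via the ground plane z = 0."""
    # Camera-from-ground: for ground points [x, y, 0, 1], p ~ K [r1 r2 t] [x, y, 1]^T
    H_cam_from_ground = K @ np.column_stack((R[:, 0], R[:, 1], t))
    # BEV-from-ground: metric ground coordinates -> BEV pixels (vehicle at centre);
    # px_per_m and the centring are assumed layout choices for this example.
    H_bev_from_ground = np.array([[px_per_m, 0.0, bev_size[0] / 2.0],
                                  [0.0, -px_per_m, bev_size[1] / 2.0],
                                  [0.0, 0.0, 1.0]])
    return H_bev_from_ground @ np.linalg.inv(H_cam_from_ground)

def project_to_bev(undistorted_img, K, R, t, bev_size=(800, 800)):
    """Warp one rectified camera image onto the bird's-eye-view ground plane."""
    H = ground_homography(K, R, t, bev_size=bev_size)
    return cv2.warpPerspective(undistorted_img, H, bev_size)

# Usage sketch: apply project_to_bev to each of the four cameras with its
# estimated extrinsics, then blend the overlapping regions of the four warps.
```

With accurate extrinsics, the four warped images line up on the ground plane, which is why the abstract can use texture and edge consistency in the overlap regions of adjacent cameras as the refinement objective for the side cameras.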
- Published
- 2024