1. Multiple object tracking with segmentation and interactive multiple model.
- Author
- Qi, Ke; Xu, Wenhao; Chen, Wenbin; Tao, Xi; Chen, Peijia
- Subjects
- *OBJECT tracking (Computer vision), *COMPUTER vision, *KALMAN filtering, *PIXELS, *OBJECT recognition (Computer vision)
- Abstract
- Multiple object tracking (MOT) is a sophisticated computer vision task that aims to detect and track the trajectories of all objects within a given scene. MOT requires assigning a unique identifier to each object, and the majority of current MOT works adopt tracking-by-detection, using re-identification techniques to associate objects based on appearance or motion features. However, traditional MOT methods may yield suboptimal results when appearance features are unreliable or geometric features are confounded by irregular motion. Consequently, this paper proposes a cascaded matching tracker called IMMSegTrack, which replaces detection boxes with segmentation contours for IoU matching and generates convincing predictions by blending multiple adaptive Kalman filter models. Guided by the interactive multiple model (IMM) and pixel-level matching, better performance can be achieved through a well-designed cascaded association. Our contributions focus on the prediction and matching stages of the MOT framework; experiments on the DanceTrack and SoccerNet datasets were conducted with the original YOLOv8 models and obtained the desired results. The code for the experiments is available at https://github.com/xwh129/IMMSegTrack.git. Retraining the model weights and extending the experiments to additional datasets remain promising directions for further improvement.
  • Employment of IMM to perform better in irregular motion estimation.
  • Three-stage cascaded matching to achieve reasonable associations.
  • Use of segmentation contours for pixel-level comparisons in IoU matching.
  • Removal of trajectories near the image border to reduce redundant cached frames.
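- The sketch below is not the authors' implementation; it is a minimal illustration, assuming NumPy and hypothetical function names (`imm_blend`, `mask_iou`), of the two ideas the abstract highlights: probability-weighted blending of predictions from several Kalman filter motion models (the IMM step) and pixel-level IoU computed on segmentation masks instead of detection boxes.

  ```python
  # Hypothetical sketch (not the IMMSegTrack code): blend per-model Kalman
  # predictions IMM-style, then score a track/detection pair with mask IoU.
  import numpy as np

  def imm_blend(predictions, model_probs):
      """Combine per-model state predictions into one estimate.

      predictions : (M, D) array, one predicted state vector per motion model
      model_probs : (M,) array of current model probabilities
      """
      model_probs = np.asarray(model_probs, dtype=float)
      model_probs = model_probs / model_probs.sum()        # normalize defensively
      return model_probs @ np.asarray(predictions, float)  # probability-weighted mix

  def mask_iou(mask_a, mask_b):
      """Pixel-level IoU between two boolean segmentation masks of equal shape."""
      inter = np.logical_and(mask_a, mask_b).sum()
      union = np.logical_or(mask_a, mask_b).sum()
      return inter / union if union > 0 else 0.0

  # Toy usage: two motion models (e.g. steady motion vs. an abrupt turn)
  preds = np.array([[10.0, 5.0, 1.0, 0.0],   # model 1 predicted [x, y, vx, vy]
                    [12.0, 6.0, 2.0, 1.0]])  # model 2 predicted [x, y, vx, vy]
  probs = np.array([0.7, 0.3])
  blended_state = imm_blend(preds, probs)

  a = np.zeros((8, 8), bool); a[2:6, 2:6] = True
  b = np.zeros((8, 8), bool); b[3:7, 3:7] = True
  print(blended_state, mask_iou(a, b))
  ```

  In the paper's framework the blended prediction would feed the cascaded association stages, and the mask-based IoU replaces box IoU so that overlap is measured on object contours rather than axis-aligned rectangles.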
- Published
- 2024