
Abstract

Visual odometry (VO) estimates camera motion from a monocular video stream. Existing methods suffer localization errors caused by dynamic objects in the scene. To mitigate this, a novel monocular visual odometry approach, Segmentation Mask Dynamic Elimination, is introduced: a dual-branch segmentation network generates masks that exclude dynamic regions from feature matching. In addition, a coordinate attention module is integrated into the optical-flow and depth networks to improve feature representation, capturing long-range dependencies while preserving precise positional information. Evaluated on the KITTI dataset, the method achieves translation errors of 5.09% on sequence 09 and 3.22% on sequence 10, with rotation errors of 0.44°/100 m and 0.62°/100 m, respectively. Compared with other VO methods, the translation error is reduced by approximately 30%, demonstrating stronger robustness.
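The core masking idea described in the abstract can be illustrated with a minimal NumPy sketch. This is not the paper's implementation; the function name, array shapes, and toy data are illustrative assumptions. The sketch shows how keypoints falling inside a predicted dynamic-object mask would be dropped before feature matching:

```python
import numpy as np

def filter_dynamic_keypoints(keypoints, dynamic_mask):
    """Drop keypoints that land on dynamic-object pixels.

    keypoints    : (N, 2) array of (x, y) pixel coordinates
    dynamic_mask : (H, W) boolean array, True where the segmentation
                   network predicts a dynamic object (e.g. car, pedestrian)
    Returns the subset of keypoints lying on static scene content.
    """
    xs = np.clip(keypoints[:, 0].astype(int), 0, dynamic_mask.shape[1] - 1)
    ys = np.clip(keypoints[:, 1].astype(int), 0, dynamic_mask.shape[0] - 1)
    keep = ~dynamic_mask[ys, xs]  # keep only keypoints outside the mask
    return keypoints[keep]

# Toy example: a 4x4 image whose right half is flagged as dynamic.
mask = np.zeros((4, 4), dtype=bool)
mask[:, 2:] = True
kps = np.array([[0, 0], [3, 1], [1, 2], [2, 3]], dtype=float)
static_kps = filter_dynamic_keypoints(kps, mask)
print(static_kps)  # only the keypoints with x < 2 survive
```

Only the surviving static keypoints would then be passed to the matching stage, so moving objects no longer contaminate the motion estimate.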


Keywords

visual odometry; segmentation; localization errors
