Abstract
<jats:p>Deep reinforcement learning (DRL) has emerged as a prominent framework for autonomous robot navigation, enabling agents to acquire complex decision-making capabilities and learn optimal policies through continuous interaction with their environment. This chapter provides a comprehensive review of recent DRL-based robot navigation research in real-time dynamic environments, addressing a gap left by the limited number of existing reviews in this area. It begins with fundamental concepts, highlights current trends, discusses key challenges, and concludes with insights into future research directions. Current studies emphasize a shift from static to dynamic environments, improvements in sample efficiency, integration with visual perception, multi-agent systems, multi-objective navigation, and bridging the gap between simulation and real-world deployment. These trends underscore the importance of enhancing robot adaptability, learning efficiency, robustness, and scalability so that robots can reach their targets while avoiding obstacles effectively. Significant challenges remain, including handling continuous action spaces, designing reward functions that balance exploration and exploitation, and addressing learning difficulties in both dynamic and real-world settings; these challenges are examined in detail in this review. The chapter also explores future research directions, such as handling dynamic and actively changing obstacle configurations, integrating DRL with other artificial intelligence techniques, improving learning efficiency across varying scales, and developing strategies for cooperative multi-agent systems. Throughout this review, key limitations and research gaps are identified, with the aim of advancing toward more autonomous, reliable, and scalable DRL-based navigation systems capable of operating effectively and efficiently in real-time environments.</jats:p>