Advancements in Autonomous Underwater Vehicle Path Planning: A Comparative Analysis of Classical and Machine Learning Approaches
Abstract
Autonomous Underwater Vehicles (AUVs) have become crucial for ocean exploration, but planning their optimal paths remains a significant challenge. This research addresses this issue by developing an efficient and intelligent navigation system for AUVs. Current AUV navigation methods rely on traditional approaches, which often struggle with dynamic environments and sensor noise. This research fills the gap by introducing a novel Deep Reinforcement Learning (DRL) framework that enables AUVs to learn optimal navigation paths in both simulated and real-world scenarios. Our approach involves training AUVs with DRL in simulated environments, optimizing the reward function, and fine-tuning in real-world deployments. This paper compares the performance of classical methods with machine learning (ML) approaches, including reinforcement learning and convolutional neural networks (CNNs). Our results show that DRL significantly improves AUV navigation, enabling vehicles to adapt to dynamic environments and sensor noise. The proposed framework outperforms traditional methods in both simulation and real-world tests, demonstrating its potential for efficient and intelligent navigation. This work has significant implications for ocean exploration, enabling AUVs to operate more effectively in complex environments. Future work will focus on integrating our framework with other AI techniques and exploring its applications in various underwater tasks, such as ocean mapping and monitoring.
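To make the reinforcement-learning idea behind such a framework concrete, the sketch below is a minimal, purely illustrative example (not the paper's actual DRL framework): tabular Q-learning on a small 2-D grid standing in for an underwater environment. The grid size, reward values, and hyperparameters are all assumptions chosen for the toy problem; a real AUV system would use a deep network over continuous sensor inputs.

```python
import random

# Toy "underwater" world: the agent starts at (0, 0) and must reach GOAL.
# Each step costs -1 and reaching the goal yields +10, so the learned
# policy favors short paths. All values here are illustrative assumptions.
GRID = 5
GOAL = (GRID - 1, GRID - 1)
ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]  # right, left, down, up

def step(state, action):
    """Apply an action, clamping the agent inside the grid."""
    x, y = state
    dx, dy = ACTIONS[action]
    nxt = (min(max(x + dx, 0), GRID - 1), min(max(y + dy, 0), GRID - 1))
    if nxt == GOAL:
        return nxt, 10.0, True
    return nxt, -1.0, False

def train(episodes=2000, alpha=0.5, gamma=0.95, epsilon=0.1, seed=0):
    """Tabular Q-learning with epsilon-greedy exploration."""
    rng = random.Random(seed)
    q = {(x, y): [0.0] * len(ACTIONS) for x in range(GRID) for y in range(GRID)}
    for _ in range(episodes):
        state, done = (0, 0), False
        while not done:
            if rng.random() < epsilon:
                action = rng.randrange(len(ACTIONS))
            else:
                action = max(range(len(ACTIONS)), key=lambda a: q[state][a])
            nxt, reward, done = step(state, action)
            target = reward + (0.0 if done else gamma * max(q[nxt]))
            q[state][action] += alpha * (target - q[state][action])
            state = nxt
    return q

def greedy_path(q, max_steps=50):
    """Roll out the greedy policy from the start state."""
    state, path = (0, 0), [(0, 0)]
    for _ in range(max_steps):
        action = max(range(len(ACTIONS)), key=lambda a: q[state][a])
        state, _, done = step(state, action)
        path.append(state)
        if done:
            break
    return path

q_table = train()
path = greedy_path(q_table)
# With enough episodes the greedy path approaches the shortest route
# (8 moves corner-to-corner on a 5x5 grid).
```

The same learning loop generalizes to the deep setting: replacing the Q-table with a neural network over sonar or camera observations yields the DRL formulation the abstract describes, where the reward function encodes goals such as reaching waypoints while avoiding obstacles.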