Deep Learning-Based Exploration Path Planning
Author: Reinhart, Russell E
In this thesis, two deep learning-based path planning methods for the autonomous exploration of subterranean environments using aerial robots are presented. The first approach utilizes imitation learning: training samples are generated by a state-of-the-art sampling-based exploration path planner, and a model is trained to propose trajectories comparable to those of the expert planner across a variety of underground tunnel environments. This imitation learning-based method uses a small window of recent LiDAR measurements to infer trajectories at a fraction of the computational cost of the expert planner, while also removing the requirement for an online map reconstruction of the environment. The second proposed approach utilizes a deep reinforcement learning algorithm applicable to continuous state and action spaces and partially observable Markov decision processes; the agent's reward is contingent upon its efficient exploration of the environment. Both methods are evaluated in simulated and real-world environments.
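The imitation-learning pipeline described above can be sketched in miniature: stack a small window of recent LiDAR scans as the input, and regress the expert planner's next waypoint as the target. The sketch below is a toy stand-in only; all shapes, names, and sample counts are assumptions, random data stands in for real scans and expert demonstrations, and linear least squares stands in for the deep model used in the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (assumptions, not the thesis configuration):
WINDOW, BEAMS, WAYPOINT_DIM = 5, 32, 3  # scans per window, beams per scan, waypoint size
N = 200                                 # number of expert demonstrations

# Training data: each sample pairs a window of recent LiDAR scans with the
# expert planner's proposed waypoint (random placeholders here).
scans = rng.random((N, WINDOW, BEAMS))
expert_waypoints = rng.random((N, WAYPOINT_DIM))

# Behavioral cloning reduced to linear least squares for illustration;
# the actual method trains a deep network on the same input/target pairing.
X = scans.reshape(N, WINDOW * BEAMS)
W, *_ = np.linalg.lstsq(X, expert_waypoints, rcond=None)

# Inference is a single forward pass over the recent scan window --
# no online map reconstruction of the environment is required.
new_window = rng.random((WINDOW, BEAMS))
predicted_waypoint = new_window.reshape(-1) @ W
print(predicted_waypoint.shape)  # (3,)
```

The key property this sketch mirrors is that inference consumes only a short history of raw range measurements, which is what lets the learned planner run at a fraction of the expert planner's computational cost.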