Sub-Goal Trees – a Framework for Goal-Based Reinforcement Learning

by Tom Jurgenson, et al.

Many AI problems, in robotics and other domains, are goal-based, essentially seeking trajectories leading to various goal states. Reinforcement learning (RL), building on Bellman's optimality equation, naturally optimizes for a single goal, yet can be made multi-goal by augmenting the state with the goal. Instead, we propose a new RL framework, derived from a dynamic programming equation for the all-pairs shortest path (APSP) problem, which naturally solves multi-goal queries. We show that this approach has computational benefits for both standard and approximate dynamic programming. Interestingly, our formulation prescribes a novel protocol for computing a trajectory: instead of predicting the next state given its predecessor, as in standard RL, a goal-conditioned trajectory is constructed by first predicting an intermediate state between start and goal, partitioning the trajectory into two segments, and then recursively predicting intermediate points on each sub-segment until a complete trajectory is obtained. We call this trajectory structure a sub-goal tree. Building on it, we additionally extend the policy gradient methodology to recursively predict sub-goals, resulting in novel goal-based algorithms. Finally, we apply our method to neural motion planning, where we demonstrate significant improvements compared to standard RL on navigating a 7-DoF robot arm between obstacles.
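The recursive trajectory-construction protocol described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: `predict_subgoal` is a hypothetical stand-in for the learned goal-conditioned sub-goal predictor, replaced here by a simple midpoint for demonstration.

```python
# Sketch of sub-goal tree trajectory construction: instead of rolling out
# state-by-state, recursively predict an intermediate state between start
# and goal, then recurse on each half until the trajectory is complete.

def predict_subgoal(start, goal):
    # Placeholder for a learned sub-goal predictor; here, the midpoint.
    return tuple((s + g) / 2 for s, g in zip(start, goal))

def subgoal_tree_trajectory(start, goal, depth):
    """Build a trajectory of 2**depth segments via recursive sub-goal prediction."""
    if depth == 0:
        return [start, goal]
    mid = predict_subgoal(start, goal)
    left = subgoal_tree_trajectory(start, mid, depth - 1)
    right = subgoal_tree_trajectory(mid, goal, depth - 1)
    return left[:-1] + right  # drop the duplicated midpoint

traj = subgoal_tree_trajectory((0.0,), (8.0,), depth=3)
print(traj)  # 9 states from start to goal
```

Note that all sub-goals at a given depth can be predicted in parallel, which is one source of the computational benefit the abstract mentions over sequential next-state prediction.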


