Poster
in
Workshop: Goal-Conditioned Reinforcement Learning

Automata Conditioned Reinforcement Learning with Experience Replay

Niklas Lauffer · Beyazit Yalcinkaya · Marcell Vazquez-Chanlatte · Sanjit Seshia

Keywords: [ Reinforcement Learning ] [ formal methods ] [ goal-conditioned reinforcement learning ] [ experience replay ]


Abstract:

We explore the problem of goal-conditioned reinforcement learning (RL) where goals are represented as deterministic finite automata (DFAs). Because the reward signal induced by automata-based goals is sparse and binary, we hypothesize that experience replay can help an RL agent learn more quickly in this setting. To enable the use of experience replay, we use an end-to-end neural policy that includes a graph neural network (GNN) to summarize the DFA goal before feeding it to a policy network. Experimental results in a gridworld domain demonstrate the efficacy of the model architecture and highlight the significant role of experience replay in enhancing the learning speed and reducing the variance of RL agents on DFA tasks.
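The architecture described in the abstract can be sketched roughly as follows: a GNN produces a fixed-size embedding of the DFA goal, which is concatenated with the environment observation and fed to a policy network. The code below is a minimal illustrative sketch, not the authors' implementation; the toy DFA, feature encoding, single round of mean-aggregation message passing, and layer sizes are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy DFA: 3 states. Node features mark the initial state
# (first column) and the accepting state (second column).
num_states, feat_dim, hid = 3, 2, 8
node_feats = np.array([[1.0, 0.0],   # state 0: initial
                       [0.0, 0.0],   # state 1: intermediate
                       [0.0, 1.0]])  # state 2: accepting
# State-to-state reachability (edge labels omitted for simplicity).
adj = np.array([[0, 1, 0],
                [0, 0, 1],
                [0, 0, 1]], dtype=float)

# One round of mean-aggregation message passing, then mean-pooling,
# yields a fixed-size summary of the DFA goal.
W_self = rng.normal(size=(feat_dim, hid))
W_nbr = rng.normal(size=(feat_dim, hid))
deg = np.maximum(adj.sum(axis=1, keepdims=True), 1.0)
msgs = (adj @ node_feats) / deg               # neighbor averages
h = np.tanh(node_feats @ W_self + msgs @ W_nbr)  # (num_states, hid)
dfa_embedding = h.mean(axis=0)                # goal summary, shape (hid,)

# Policy head: concatenate the observation with the DFA embedding and
# map to action probabilities with a (randomly initialized) linear layer.
obs_dim, num_actions = 4, 3
W_pi = rng.normal(size=(obs_dim + hid, num_actions))
obs = rng.normal(size=obs_dim)                # placeholder observation
logits = np.concatenate([obs, dfa_embedding]) @ W_pi
probs = np.exp(logits - logits.max())
probs /= probs.sum()                          # softmax over actions
```

Because the whole pipeline is differentiable from DFA input to action probabilities, transitions (observation, DFA goal, action, reward) can be stored in a replay buffer and re-used for off-policy updates, which is what makes experience replay applicable here.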