

Poster in Workshop: Mathematics of Modern Machine Learning (M3L)

Information-Theoretic Generalization Bounds for Batch Reinforcement Learning

Xingtu Liu

Keywords: Reinforcement Learning; Learning Theory; Generalization; Mutual Information


Abstract:

We analyze the generalization properties of batch reinforcement learning (batch RL) with value function approximation from an information-theoretic perspective. We derive generalization bounds for batch RL using (conditional) mutual information. In addition, we show how certain structural assumptions on the value function space can be connected to conditional mutual information. As a by-product, we derive a high-probability generalization bound via conditional mutual information, resolving a question left open by Steinke and Zakynthinou (2020); this result may be of independent interest.
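For context, the following is a minimal sketch of the standard information-theoretic bounds this line of work builds on, stated in their usual supervised-learning form; the batch-RL bounds presented in the poster are not reproduced here and may take a different form.

\[
% Input-output mutual information bound (Xu & Raginsky, 2017),
% assuming the loss is sigma-sub-Gaussian:
\bigl|\,\mathbb{E}[\mathrm{gen}(W,S)]\,\bigr| \;\le\; \sqrt{\frac{2\sigma^2\, I(W;S)}{n}},
\]
\[
% Conditional mutual information (CMI) bound (Steinke & Zakynthinou, 2020),
% assuming the loss is bounded in [0,1]:
\bigl|\,\mathbb{E}[\mathrm{gen}(W,S)]\,\bigr| \;\le\; \sqrt{\frac{2\, I\bigl(W;\,U \mid \tilde{Z}\bigr)}{n}},
\]

where \(\mathrm{gen}(W,S)\) is the gap between population and empirical risk, \(W\) is the output of the learning algorithm, \(S\) is the training set of \(n\) i.i.d. samples, \(\tilde{Z}\) is a supersample of \(2n\) i.i.d. points, and \(U \in \{0,1\}^n\) indexes which half of \(\tilde{Z}\) was used for training.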
