Poster
Privacy-Preserving Q-Learning with Functional Noise in Continuous Spaces
Baoxiang Wang · Nidhi Hegde
East Exhibition Hall B, C #96
Keywords: [ Reinforcement Learning ] [ Reinforcement Learning and Planning ] [ Applications ] [ Privacy, Anonymity, and Security ]
We consider differentially private algorithms for reinforcement learning in continuous spaces, such that neighboring reward functions are indistinguishable. This protects the reward information from being exploited by methods such as inverse reinforcement learning. Existing studies that guarantee differential privacy do not extend to infinite state spaces, as the noise level required to ensure privacy scales with the number of states and thus diverges. Our aim is to protect the value function approximator regardless of the number of states at which it is queried. We achieve this by iteratively adding functional noise to the value function during training. We show rigorous privacy guarantees through a series of analyses of the kernel of the noise space, the probabilistic bound on such noise samples, and the composition over iterations. We gain insight into the utility analysis by proving the algorithm's approximate optimality when the state space is discrete. Experiments corroborate our theoretical findings and show improvement over existing approaches.
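The key idea — perturbing the value function as a whole rather than adding independent noise at each queried state — can be sketched with a lazily sampled Gaussian-process noise function. The sketch below is illustrative only, not the authors' implementation: the RBF kernel, the noise scale `sigma`, and the length scale are assumed choices, and the actual privacy calibration in the paper depends on its kernel and composition analyses.

```python
import numpy as np


def rbf_kernel(x1, x2, length_scale=0.1):
    # Squared-exponential kernel on scalar states (assumed kernel choice).
    return np.exp(-0.5 * ((x1[:, None] - x2[None, :]) / length_scale) ** 2)


class FunctionalNoise:
    """Lazily sample one draw g ~ GP(0, sigma^2 * k).

    Each call g(x) is sampled from the GP posterior conditioned on all
    previous calls, so the realized values are consistent with a single
    noise *function*. Adding g to the value function therefore perturbs
    it globally, independent of how many states are queried.
    """

    def __init__(self, sigma=0.5, length_scale=0.1, seed=0):
        self.sigma = sigma
        self.length_scale = length_scale
        self.rng = np.random.default_rng(seed)
        self.xs = np.empty(0)  # states queried so far
        self.ys = np.empty(0)  # noise values already committed at those states

    def __call__(self, x):
        x = np.atleast_1d(float(x))
        if self.xs.size == 0:
            mean, var = 0.0, self.sigma ** 2
        else:
            jitter = 1e-8 * np.eye(self.xs.size)  # numerical stability
            k_xx = rbf_kernel(self.xs, self.xs, self.length_scale) + jitter
            k_sx = rbf_kernel(x, self.xs, self.length_scale)
            mean = float(k_sx @ np.linalg.solve(k_xx, self.ys))
            var = self.sigma ** 2 * max(
                1.0 - float(k_sx @ np.linalg.solve(k_xx, k_sx.T)), 0.0
            )
        y = mean + np.sqrt(var) * self.rng.standard_normal()
        self.xs = np.append(self.xs, x)
        self.ys = np.append(self.ys, y)
        return float(y)
```

In a training loop one would release `q(s) + g(s)` instead of the raw value estimate `q(s)`, resampling a fresh noise function `g` at each iteration; the paper's privacy guarantee then follows from composing these releases, under its own calibration of the noise level.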