

Poster

Spectral-Risk Safe Reinforcement Learning with Convergence Guarantees

Dohyeong Kim · Taehyun Cho · Seungyub Han · Hojun Chung · Kyungjae Lee · Songhwai Oh

Thu 12 Dec 11 a.m. PST — 2 p.m. PST

Abstract:

The field of risk-constrained reinforcement learning (RCRL) has been developed to effectively reduce the likelihood of worst-case scenarios by explicitly handling risk-measure-based constraints. However, the nonlinearity of risk measures makes it challenging to achieve convergence and optimality. To overcome the difficulties posed by this nonlinearity, we propose a spectral-risk-measure-constrained RL algorithm, spectral-risk-constrained policy optimization (SRCPO), a bilevel optimization approach that exploits the duality of spectral risk measures. In this bilevel structure, the outer problem optimizes dual variables derived from the risk measures, while the inner problem finds an optimal policy given these dual variables. To the best of our knowledge, the proposed method is the first to guarantee convergence to an optimum in the tabular setting. Furthermore, it has been evaluated on continuous control tasks, where it achieved the best performance among RCRL algorithms that satisfy the constraints.
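To illustrate the bilevel structure described above, here is a minimal, hypothetical sketch in Python. It is not the authors' SRCPO implementation: it uses CVaR (a special case of a spectral risk measure) and its standard dual form, CVaR_alpha(C) = min_eta { eta + E[(C - eta)_+] / alpha }, so the outer problem optimizes the dual variable eta while the inner problem optimizes a toy one-parameter "policy" given eta. The toy environment, the fixed Lagrange weight, and all function names are assumptions made for illustration.

```python
# A toy sketch of the bilevel risk-constrained structure (NOT the paper's method).
# Outer problem: optimize the dual variable eta of the CVaR dual representation.
# Inner problem: optimize a one-parameter policy given eta.
import numpy as np

rng = np.random.default_rng(0)
alpha = 0.1          # CVaR level (worst 10% of cost outcomes); assumed
lam = 5.0            # fixed Lagrange weight on the risk constraint; assumed
cost_limit = 1.0     # constraint threshold; assumed

def rollout_costs(theta, n=2048):
    """Toy environment: per-episode cost decreases with effort theta,
    while a control penalty grows with theta."""
    noise = rng.standard_normal(n)
    return 2.0 - theta + 0.5 * noise, theta ** 2  # (costs, control penalty)

def inner_objective(theta, eta):
    """Surrogate minimized by the inner problem for a fixed dual variable eta:
    control penalty + lam * violation of the CVaR dual surrogate."""
    costs, penalty = rollout_costs(theta)
    cvar_surrogate = eta + np.mean(np.maximum(costs - eta, 0.0)) / alpha
    return penalty + lam * max(cvar_surrogate - cost_limit, 0.0)

def solve_inner(eta, grid=np.linspace(0.0, 3.0, 61)):
    """Inner problem: grid search stands in for policy optimization."""
    values = [inner_objective(t, eta) for t in grid]
    return grid[int(np.argmin(values))]

# Outer problem: descend on the dual variable eta of the CVaR dual form.
eta = 1.0
for step in range(50):
    theta = solve_inner(eta)
    costs, _ = rollout_costs(theta)
    # Subgradient of eta + E[(C - eta)_+]/alpha with respect to eta.
    grad_eta = 1.0 - np.mean(costs > eta) / alpha
    eta -= 0.05 * grad_eta

print(f"eta={eta:.3f}, theta={solve_inner(eta):.3f}")
```

In this sketch the alternation between the eta update and the inner policy search mirrors the outer/inner split in the abstract; a general spectral risk measure would replace the single scalar eta with a richer set of dual variables.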
