Poster in the Workshop on Machine Learning and Compression
EAMQ: Environment-based Adaptive Model Quantization on Federated Reinforcement Learning
YU CHENYUE
Federated Reinforcement Learning (FRL) enables agents to collaboratively train models across distributed environments without sharing raw data. However, existing quantization methods such as QuARL, ReLeQ, and VQQL struggle in environments with varying state-transition dynamics and reward structures, which degrades model robustness. In this paper, we introduce Environment-based Adaptive Model Quantization (EAMQ), a method that dynamically adjusts compression ratios based on environmental variability. EAMQ uses a reward-weighted sensitivity analysis to assign lower compression ratios to sensitive parameters in sparse-reward environments, while applying higher compression in dense-reward settings. We also propose a learnable quantization technique that adapts its quantization parameters via a Temporal-Difference (TD) loss function. Experiments show that EAMQ outperforms traditional methods across diverse environments, reducing communication and storage costs while maintaining performance, even under heterogeneous conditions.
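The abstract does not specify how the reward-weighted sensitivity analysis maps onto compression ratios. The sketch below is a hypothetical illustration of the general idea, not the authors' implementation: per-layer gradient magnitudes are weighted by the rewards observed along a rollout, and layers with higher weighted sensitivity are assigned more quantization bits (i.e., a lower compression ratio). The function names, the gradient-norm sensitivity proxy, and the 4-to-8-bit range are all assumptions made for illustration.

```python
import numpy as np

def reward_weighted_sensitivity(per_step_grad_norms, rewards, eps=1e-8):
    """Hypothetical sensitivity score per layer.

    per_step_grad_norms: array of shape (T, n_layers) holding the gradient
        norm of each layer at each of T rollout steps.
    rewards: array of shape (T,) with the reward received at each step.
    Steps that carry reward signal (e.g., the rare nonzero rewards of a
    sparse-reward environment) dominate the weighted average, so the layers
    that matter at those steps are flagged as sensitive.
    """
    w = np.abs(rewards) + eps
    w = w / w.sum()                      # normalize reward weights
    return (w[:, None] * per_step_grad_norms).sum(axis=0)

def allocate_bits(sensitivity, low_bits=4, high_bits=8):
    """Map sensitivity scores to per-layer bit widths.

    Low-sensitivity layers get low_bits (higher compression); the most
    sensitive layers get high_bits (lower compression).
    """
    s = (sensitivity - sensitivity.min()) / (np.ptp(sensitivity) + 1e-8)
    return np.round(low_bits + s * (high_bits - low_bits)).astype(int)
```

For example, with a sparse-reward rollout in which only the final step is rewarded, a layer whose gradients spike at that step would receive 8 bits while an insensitive layer would be compressed down to 4 bits. A learnable variant, as the paper proposes, could further treat the quantization step size as a parameter updated to reduce the TD loss.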