Poster in Workshop on Open-World Agents: Synergizing Reasoning and Decision-Making in Open-World Environments (OWA-2024)
Thermal and Energy Management with Fan Control Through Offline Meta-Reinforcement Learning
Shao-Yu Yen · Yen Lai · Fu-Chieh Chang · Pei-Yuan Wu
Keywords: [ Reinforcement Learning; Control System; Open World Agents ]
Reinforcement learning has garnered significant attention across various fields, including computer vision, natural language processing, and robotics. In this work, we explore the potential of applying reinforcement learning to open-world agents through an empirical study of three distinct offline meta-reinforcement learning approaches for fan control, with a focus on thermal and energy management. Our models enable adaptive fan speed control, which not only protects devices from overheating but also effectively reduces power consumption. To better evaluate performance in open-world scenarios, we go beyond the industry-standard steady-state test by conducting a CPU-stress test that simulates a more dynamic and unpredictable deployment environment. Compared to commercially available techniques, our solution achieves up to a 21% reduction in power consumption on a real 2U server under the worst thermal conditions. This approach demonstrates the broader applicability of meta-reinforcement learning to the thermal and energy management of server systems, particularly in open-world settings.
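The abstract frames fan control as a trade-off between fan power draw and overheating risk. A minimal sketch of how such a trade-off might be encoded as an RL reward is shown below; the reward form, the cubic fan-power model, and all thresholds and coefficients here are illustrative assumptions, not details from the poster.

```python
# Sketch of a fan-control RL reward (all constants are assumptions for
# illustration, not the authors' actual formulation).

def fan_power(duty: float) -> float:
    """Fan power grows roughly with the cube of fan speed (fan affinity laws).
    Assumes a 10 W fan at full duty cycle."""
    return 10.0 * duty ** 3  # watts

def reward(duty: float, cpu_temp_c: float,
           temp_limit_c: float = 85.0,
           alpha: float = 1.0, beta: float = 5.0) -> float:
    """Negative cost: fan-power penalty plus a hinge penalty above a
    thermal safety threshold, mirroring the power/overheating trade-off."""
    overheat = max(0.0, cpu_temp_c - temp_limit_c)
    return -(alpha * fan_power(duty) + beta * overheat)

# When the CPU is cool, a lower duty cycle earns more reward;
# overheating is penalized regardless of fan setting.
assert reward(0.3, 70.0) > reward(1.0, 70.0)
assert reward(0.5, 90.0) < reward(0.5, 80.0)
```

An agent trained against a reward of this shape would learn to keep the fan slow whenever thermal headroom allows, which is the mechanism behind the power savings the abstract reports.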