The increasing prevalence of high-fidelity industrial digital twins is creating a range of opportunities to apply Reinforcement Learning techniques outside of traditional academic examples and video games. While this trend is now well established, most real-world RL developments and deployments are carried out on an ad hoc basis, with little consideration given to how similar initiatives can be repeated and scaled efficiently.
In this session we will address these shortcomings, illustrating them through our experience of using RL to optimise the design of a state-of-the-art sailing boat for a prominent competition. Building an agent to control the boat is a very complex RL task for several reasons: imperfect information, loosely defined goals with delayed rewards, and highly dynamic state and action spaces. In racing conditions, it takes a team of Olympic-level athletes to sail the boat and make it “fly” on its underwater wings (i.e. hydrofoiling).
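To make those challenges concrete, the sketch below shows how a boat simulator of this kind could be wrapped as a standard RL environment. This is a minimal illustration, not the project's actual digital twin: the class name, observation fields, toy dynamics, and action semantics are all hypothetical stand-ins.

```python
# Minimal sketch of a digital-twin wrapper as a Gymnasium environment.
# BoatEnv, its dynamics, and its sensors are hypothetical illustrations
# of the three challenges named above: imperfect information, continuous
# control, and delayed reward.
import numpy as np
import gymnasium as gym
from gymnasium import spaces


class BoatEnv(gym.Env):
    def __init__(self, race_length_s: float = 120.0, dt: float = 0.1):
        self.dt = dt
        self.max_steps = int(race_length_s / dt)
        # Imperfect information: the agent sees only noisy sensor
        # readings, not the simulator's full internal state.
        self.observation_space = spaces.Box(low=-np.inf, high=np.inf, shape=(6,))
        # Continuous controls, e.g. rudder angle and foil cant (normalised).
        self.action_space = spaces.Box(low=-1.0, high=1.0, shape=(2,))

    def reset(self, *, seed=None, options=None):
        super().reset(seed=seed)
        self.t = 0
        self.distance = 0.0
        self.state = self.np_random.normal(size=6)
        return self._observe(), {}

    def step(self, action):
        self.t += 1
        # Toy dynamics standing in for the high-fidelity digital twin.
        self.state = 0.99 * self.state + 0.01 * self.np_random.normal(size=6)
        speed = float(np.clip(1.0 + action[0] - action[1] ** 2, 0.0, 2.0))
        self.distance += speed * self.dt
        terminated = self.t >= self.max_steps
        # Delayed reward: nothing until the race leg is finished.
        reward = self.distance if terminated else 0.0
        return self._observe(), reward, terminated, False, {}

    def _observe(self):
        # Sensor noise models the imperfect-information aspect.
        noise = self.np_random.normal(scale=0.1, size=6)
        return (self.state + noise).astype(np.float32)
```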
Beyond describing solutions to traditional RL considerations, such as the construction of the learning algorithm and the design of the reward function, we will focus on the underlying workflows and technology stack required to carry out a project of this technical complexity in a scalable way. We will use facets of Software 2.0, such as higher-level APIs and the automation of end-to-end model development tasks, to highlight the iterative choices we made and the optimisation opportunities along the machine learning pipeline and, ultimately, in the production system.
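As one illustration of the higher-level-API point, the sketch below uses Stable-Baselines3 to train an agent on the hypothetical environment above. Nothing here is implied to be the stack or the hyperparameters used on the project; it simply shows how such libraries collapse the learning algorithm construct into a few declarative lines, leaving the team free to iterate on the reward function and the pipeline around it.

```python
# Illustrative only: trains PPO on the hypothetical BoatEnv sketched
# earlier. The algorithm choice and hyperparameters are assumptions,
# not the ones used on the project described in this session.
from stable_baselines3 import PPO
from stable_baselines3.common.env_util import make_vec_env

# Vectorise the environment to parallelise rollouts, one of the
# scaling levers a repeatable RL workflow has to automate.
vec_env = make_vec_env(BoatEnv, n_envs=8)

model = PPO(
    "MlpPolicy",
    vec_env,
    gamma=0.999,         # long horizon: reward only arrives at the finish
    learning_rate=3e-4,
    verbose=1,
)
model.learn(total_timesteps=1_000_000)
model.save("boat_ppo")   # artefact handed to the downstream pipeline
```

The point of a sketch like this is not the specific library: it is that once environment, algorithm, and training loop sit behind stable interfaces, the same scaffolding can be versioned, automated, and reused across initiatives rather than rebuilt ad hoc each time.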