Poster in Workshop: D3S3: Data-driven and Differentiable Simulations, Surrogates, and Solvers
Learning Generative Interactive Environments By Trained Agent Exploration
Naser Kazemi · Nedko Savov · Danda Pani Paudel · Luc V Gool
Keywords: [ world models ] [ generative models ] [ action prediction ] [ environment simulators ] [ autonomous agents ]
World models are increasingly important for interpreting and simulating the rules and actions of complex environments. Genie, a recent model, excels at learning from visually diverse environments but relies on costly human-collected data. We observe that Genie's alternative of collecting data with random agents explores environments too narrowly. We propose improving the model by employing reinforcement-learning-based agents for data generation. This approach produces diverse datasets that enhance the model's ability to adapt and perform well across varied scenarios and realistic in-environment actions. In this paper, we first build, evaluate, and release GenieRedux, a complete reproduction of Genie. We additionally introduce GenieRedux-G, a variant that uses the agent's readily available actions to factor out action-prediction uncertainty during validation. Our evaluation, including a replication of the Coinrun case study, shows that GenieRedux-G achieves superior visual fidelity and controllability when trained with agent-driven exploration. The proposed approach is reproducible, scalable, and adaptable to new types of environments. Our codebase is available at https://github.com/insait-institute/GenieRedux.
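The core idea of trained-agent exploration can be illustrated with a minimal sketch. The toy `ToyGrid` environment, the stand-in policies, and the `collect` helper below are all hypothetical illustrations, not the paper's actual pipeline: a purposeful (trained) agent visits a broader slice of the state space than a random one, yielding a more diverse (observation, action) dataset of the kind a Genie-style world model is trained on.

```python
import random

class ToyGrid:
    """Hypothetical 1-D corridor: agent starts at 0, interesting states lie far right."""
    def __init__(self, size=20):
        self.size = size
        self.pos = 0

    def reset(self):
        self.pos = 0
        return self.pos

    def step(self, action):
        # action: 0 = move left, 1 = move right; position is clamped to the corridor
        self.pos = max(0, min(self.size - 1, self.pos + (1 if action == 1 else -1)))
        return self.pos

def collect(policy, env, episodes=10, horizon=30):
    """Roll out a policy and record (observation, action) pairs,
    the kind of data a generative interactive environment is trained on."""
    data = []
    for _ in range(episodes):
        obs = env.reset()
        for _ in range(horizon):
            a = policy(obs)
            data.append((obs, a))
            obs = env.step(a)
    return data

random_policy = lambda obs: random.randint(0, 1)   # Genie-style random exploration
trained_policy = lambda obs: 1                     # stand-in for an RL agent heading to the goal

env = ToyGrid()
random.seed(0)
d_rand = collect(random_policy, env)
d_trained = collect(trained_policy, env)

coverage = lambda d: len({obs for obs, _ in d})    # distinct states seen in the dataset
```

Here `coverage(d_trained)` reaches all 20 corridor states, while the random walk tends to linger near the start; in the paper's setting the same effect means the world model sees a richer set of frames and actions during training.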