Learning a new task often requires exploration: gathering data to learn about the environment and how to solve the task. But how do we explore efficiently, and how can an agent make the best use of the prior knowledge it has about the environment? Meta-reinforcement learning allows us to learn inductive biases for exploration from data, which play a crucial role in enabling agents to pick up new tasks rapidly. In the first part of this talk, I look at the different meta-learning problem settings that exist in the literature and the type of exploratory behaviour each requires. This generally depends on how much time the agent has to interact with the environment before its performance is evaluated. In the second part of the talk, we take a step back and consider how to meta-learn exploration strategies in the first place, which might itself require a different type of exploration during meta-learning. Throughout the talk, I will focus on the "online adaptation" setting, where the agent has to perform well from the very first time step in a new environment. In this setting, the agent must trade off exploration and exploitation very carefully, since each action counts towards its final performance.