[ Ballroom C ]
An introduction by Isabelle Guyon and Evelyne Viegas followed by poster presentations of competition results with the main organizers.
[ Virtual ]
Given the impressive capabilities demonstrated by pre-trained foundation models, we must now grapple with how to harness these capabilities towards useful tasks. Since many such tasks are hard to specify programmatically, researchers have turned towards a different paradigm: fine-tuning from human feedback. The MineRL BASALT competition aims to spur research on this important class of techniques, in the domain of the popular video game Minecraft. The competition consists of a suite of four tasks with hard-to-specify reward functions. We define these tasks by a paragraph of natural language: for example, "create a waterfall and take a scenic picture of it", with additional clarifying details. Participants train a separate agent for each task, using any method they want; we expect participants will choose to fine-tune the provided pre-trained models. Agents are then evaluated by humans who have read the task description. To help participants get started, we provide a dataset of human demonstrations of the four tasks, as well as an imitation learning baseline that leverages these demonstrations. We believe this competition will improve our ability to build AI systems that do what their designers intend them to do, even when intent cannot be easily formalized. This achievement will allow AI to solve more …
[ Virtual ]
In this competition, participants will address two fundamental causal challenges in machine learning in the context of education using time-series data. The first is to identify the causal relationships between different constructs, where a construct is defined as the smallest element of learning. The second challenge is to predict the impact of learning one construct on the ability to answer questions on other constructs. Addressing these challenges will enable optimisation of students' knowledge acquisition, which can be deployed in a real edtech solution impacting millions of students. Participants will run these tasks in both an idealised environment with synthetic data and a real-world scenario with evaluation data collected from a series of A/B tests.
[ Virtual ]
Human intelligence has the remarkable ability to quickly adapt to new tasks and environments. Starting from a very young age, humans acquire new skills and learn how to solve new tasks either by imitating the behavior of others or by following provided natural language instructions. To facilitate research in this direction, we propose IGLU: Interactive Grounded Language Understanding in a Collaborative Environment. The primary goal of the competition is to approach the problem of how to develop interactive embodied agents that learn to solve a task while provided with grounded natural language instructions in a collaborative environment. Understanding the complexity of the challenge, we split it into sub-tasks to make it feasible for participants. This research challenge is naturally related, but not limited, to two fields of study that are highly relevant to the NeurIPS community: Natural Language Understanding and Generation (NLU/G) and Reinforcement Learning (RL). Therefore, the suggested challenge can bring the two communities together to approach one of the important challenges in AI. Another important aspect of the challenge is our commitment to a human-in-the-loop evaluation as the final evaluation of the agents developed by contestants.
[ Virtual ]
Meta-learning aims to leverage experience from previous tasks to solve new tasks using only a little training data, to train faster, and/or to achieve better performance. The proposed challenge focuses on "cross-domain meta-learning" for few-shot image classification using a novel "any-way" and "any-shot" setting. The goal is to meta-learn a good model that can quickly learn tasks from a variety of domains, with any number of classes, also called "ways" (within the range 2-20), and any number of training examples per class, also called "shots" (within the range 1-20). We carve such tasks from various "mother datasets" selected from diverse domains, such as healthcare, ecology, biology, manufacturing, and others. By using mother datasets from these practical domains, we aim to maximize the humanitarian and societal impact. The competition requires code submission and is fully blind-tested on the CodaLab challenge platform. A single (final) submission will be evaluated during the final phase, using ten datasets previously unused by the meta-learning community. After the competition is over, it will remain active as a long-lasting benchmark resource for research in this field. The scientific and technical motivations of this challenge include scalability, robustness to domain changes, and generalization ability to tasks (a.k.a. episodes) …
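As an illustration of the episode format described above, here is a minimal sketch of sampling an "any-way, any-shot" episode from a labelled image pool with NumPy. The array names, the query-set size, and the assumption that every class has enough examples are illustrative choices, not the official challenge API.

```python
# A minimal sketch of "any-way, any-shot" episode sampling; `images` and
# `labels` are assumed arrays with one label per image, not the official API.
import numpy as np

def sample_episode(images, labels, rng, min_way=2, max_way=20,
                   min_shot=1, max_shot=20, query_per_class=5):
    """Draw one few-shot episode with a random number of ways and shots."""
    classes = np.unique(labels)
    n_way = rng.integers(min_way, min(max_way, len(classes)) + 1)
    n_shot = rng.integers(min_shot, max_shot + 1)
    chosen = rng.choice(classes, size=n_way, replace=False)

    support_x, support_y, query_x, query_y = [], [], [], []
    for new_label, c in enumerate(chosen):
        idx = rng.permutation(np.flatnonzero(labels == c))
        support_x.append(images[idx[:n_shot]])          # "shots" for this class
        support_y.append(np.full(n_shot, new_label))
        q = idx[n_shot:n_shot + query_per_class]         # held-out queries
        query_x.append(images[q])
        query_y.append(np.full(len(q), new_label))

    return (np.concatenate(support_x), np.concatenate(support_y),
            np.concatenate(query_x), np.concatenate(query_y))
```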
[ Virtual ]
In this year's Real Robot Challenge, participants will apply offline reinforcement learning (RL) to robotics datasets and evaluate their policies remotely on a cluster of real TriFinger robots. Experimentation on real robots is usually costly and challenging, which is why a large part of the RL community uses simulators to develop and benchmark algorithms. However, insights gained in simulation do not necessarily translate to real robots, in particular for tasks involving complex interaction with the environment. The purpose of this competition is to alleviate this problem by allowing participants to experiment remotely with a real robot, as easily as in simulation. In the last two years, offline RL algorithms have become increasingly popular and capable. This year's Real Robot Challenge provides a platform for evaluating, comparing, and showcasing the performance of these algorithms on real-world data. In particular, we propose a dexterous manipulation problem that involves pushing, grasping, and in-hand orientation of blocks.
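For orientation, a common starting point for offline RL on such data is plain behavior cloning. The sketch below regresses actions onto observations from an offline dataset with PyTorch; the array names, shapes, and network sizes are assumptions for illustration, not the challenge's provided interface or baseline.

```python
# A minimal behavior-cloning sketch on an offline dataset of (observation,
# action) pairs; `observations` and `actions` are assumed 2-D float arrays.
import torch
import torch.nn as nn

def behavior_cloning(observations, actions, epochs=10, batch_size=256, lr=1e-3):
    obs = torch.as_tensor(observations, dtype=torch.float32)
    act = torch.as_tensor(actions, dtype=torch.float32)
    policy = nn.Sequential(
        nn.Linear(obs.shape[1], 256), nn.ReLU(),
        nn.Linear(256, 256), nn.ReLU(),
        nn.Linear(256, act.shape[1]),
    )
    opt = torch.optim.Adam(policy.parameters(), lr=lr)
    loader = torch.utils.data.DataLoader(
        torch.utils.data.TensorDataset(obs, act),
        batch_size=batch_size, shuffle=True)
    for _ in range(epochs):
        for o, a in loader:
            loss = nn.functional.mse_loss(policy(o), a)  # imitate dataset actions
            opt.zero_grad()
            loss.backward()
            opt.step()
    return policy
```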
[ Virtual ]
The global trends of urbanization and increased personal mobility force us to rethink the way we live and use urban space. The Traffic4cast competition series tackles this problem in a data-driven way, advancing the latest methods in modern machine learning for modelling complex spatial systems over time. This year, our dynamic road graph data combine information from road maps, 10^12 location probe data points, and car loop counters in three entire cities for two years. While loop counters are the most accurate way to capture traffic volume, they are only available in some locations. Traffic4cast 2022 explores models that can generalize from loosely related temporal vertex data on just a few nodes to predict dynamic future traffic states on the edges of the entire road graph. Specifically, in our core challenge we invite participants to predict, for three cities and for the entire road graph, the congestion classes known from the red, yellow, or green colouring of roads on a common traffic map 15 min into the future. We provide car count data from spatially sparse loop counters in these three cities in 15-min aggregated time bins for one hour prior to the prediction time slot. For our …
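As a small illustration of working with such counter data, here is a sketch of aggregating raw loop-counter readings into 15-minute bins with pandas. The column names are assumptions for the sketch, not the competition's actual data schema.

```python
# A minimal sketch of 15-minute aggregation of loop-counter readings;
# the columns 'timestamp', 'counter_id', and 'count' are assumed names.
import pandas as pd

def to_15min_counts(df):
    """df: raw readings with columns ['timestamp', 'counter_id', 'count']."""
    df = df.assign(timestamp=pd.to_datetime(df["timestamp"]))
    return (df.set_index("timestamp")
              .groupby("counter_id")["count"]
              .resample("15min")      # sum counts within each 15-minute bin
              .sum()
              .reset_index())
```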
[ Virtual ]
Driving SMARTS is a regular competition designed to tackle problems caused by the distribution shift in dynamic interaction contexts that are prevalent in real-world autonomous driving (AD). The proposed competition supports methodologically diverse solutions, such as reinforcement learning (RL) and offline learning methods, trained on a combination of naturalistic AD data and the open-source simulation platform SMARTS. The two-track structure allows focusing on different aspects of the distribution shift. Track 1 is open to any method and will give ML researchers with different backgrounds an opportunity to solve a real-world autonomous driving challenge. Track 2 is designed for strictly offline learning methods, so direct comparisons can be made between different methods with the aim of identifying new promising research directions. The proposed setup consists of 1) realistic traffic generated using real-world data and micro-simulators to ensure the fidelity of the scenarios, 2) a framework accommodating diverse methods for solving the problem, and 3) a baseline method. As such, it provides a unique opportunity for principled investigation into various aspects of autonomous vehicle deployment.
[ Virtual ]
The experimental study of neural information processing in the biological visual system is challenging due to the nonlinear nature of neuronal responses to visual input. Artificial neural networks play a dual role in improving our understanding of this complex system, not only allowing computational neuroscientists to build predictive digital twins for novel hypothesis generation in silico, but also allowing machine learning to progressively bridge the gap between biological and machine vision. The mouse has recently emerged as a popular model system to study visual information processing, but no standardized large-scale benchmark to identify state-of-the-art models of the mouse visual system has been established. To fill this gap, we propose the sensorium benchmark competition. We collected a large-scale dataset from mouse primary visual cortex containing the responses of more than 28,000 neurons across seven mice stimulated with thousands of natural images. Using this dataset, we will host two benchmark tracks to find the best predictive models of neuronal responses on a held-out test set. The two tracks differ in whether measured behavior signals are made available or not. We provide code, tutorials, and pre-trained baseline models to lower the barrier for entering the competition. Beyond this proposal, our goal is to keep …
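As one illustration of how such predictive models are commonly scored, the sketch below computes the per-neuron Pearson correlation between predicted and recorded responses, averaged over neurons. The benchmark's exact metric may differ; this is a standard measure, not necessarily the official one.

```python
# A minimal sketch of mean per-neuron correlation between predicted and
# observed responses; inputs are arrays of shape (n_images, n_neurons).
import numpy as np

def mean_per_neuron_correlation(predicted, observed, eps=1e-8):
    p = predicted - predicted.mean(axis=0, keepdims=True)
    o = observed - observed.mean(axis=0, keepdims=True)
    corr = (p * o).sum(axis=0) / (
        np.sqrt((p ** 2).sum(axis=0) * (o ** 2).sum(axis=0)) + eps)
    return corr.mean()
```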
[ Virtual ]
Manual dexterity has been considered one of the critical components of human evolution. The ability to perform movements as simple as holding and rotating an object in the hand without dropping it requires the coordination of more than 35 muscles, which act synergistically or antagonistically on multiple joints. They control the flexion and extension of the joints connecting the bones, which in turn allows the manipulation to happen. This complexity of control is markedly different from the typical pre-specified movements or torque-based controls used in robotics. In this competition, MyoChallenge, participants will develop controllers for a realistic hand to solve a series of dexterous manipulation tasks. Participants will be provided with a physiologically accurate and efficient neuromusculoskeletal human hand model developed in the (free) MuJoCo physics simulator. In addition, the provided model supports rich contact interactions. Participants will interface with a standardized training environment to help build their controllers. The final score will then be based on an environment with unknown parameters. This challenge builds on three previous NeurIPS challenges on controlling leg musculoskeletal models for locomotion, which attracted about 1,300 participants, generated 8,000 submissions, and produced 9 academic publications. This challenge will leverage …
[ Virtual ]
Solving vehicle routing problems (VRPs) is an essential task for many industrial applications. While VRPs have been traditionally studied in the operations research (OR) domain, they have lately been the subject of extensive work in the machine learning (ML) community. Both the OR and ML communities have begun to integrate ML into their methods, but in vastly different ways. While the OR community mostly relies on simplistic ML methods, the ML community generally uses deep learning, but fails to outperform OR baselines. To address this gap, this competition, a joint effort of several previous competitions, brings together the OR and ML communities to solve a challenging VRP variant on real-world data provided by ORTEC, a leading provider of vehicle routing software. The challenge focuses on both a 'classic' deterministic VRP with time windows (VRPTW) and a dynamic version in which new orders arrive over the course of a day. As a baseline, we will provide a state-of-the-art VRPTW solver and a simple strategy to use it to solve the dynamic variant, thus ensuring that all competition participants have the tools necessary to solve both versions of the problem. We anticipate that the winning method will significantly advance the state-of-the-art for …
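To make the time-window aspect concrete, here is a minimal sketch of checking feasibility of a single VRPTW route under a simple "wait if early" rule. Variable names and conventions are illustrative; this is not the competition's evaluation code.

```python
# A minimal feasibility check for one VRPTW route; `route` is a sequence of
# customer indices (depot excluded), the other arguments are lookup tables.
def route_is_feasible(route, travel_time, earliest, latest, service_time, depot=0):
    t = 0.0
    prev = depot
    for c in route:
        t += travel_time[prev][c]
        t = max(t, earliest[c])   # wait if arriving before the window opens
        if t > latest[c]:         # too late: the time window has closed
            return False
        t += service_time[c]
        prev = c
    return True
```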
[ Virtual ]
As more areas beyond the traditional AI domains (e.g., computer vision and natural language processing) seek to take advantage of data-driven tools, the need for developing ML systems that can adapt to a wide range of downstream tasks in an efficient and automatic way continues to grow. The AutoML for the 2020s competition aims to catalyze research in this area and establish a benchmark for the current state of automated machine learning. Unlike previous challenges which focus on a single class of methods such as non-deep-learning AutoML, hyperparameter optimization, or meta-learning, this competition proposes to (1) evaluate automation on a diverse set of small and large-scale tasks, and (2) allow the incorporation of the latest methods such as neural architecture search and unsupervised pretraining. To this end, we curate 20 datasets that represent a broad spectrum of practical applications in scientific, technological, and industrial domains. Participants are given a set of 10 development tasks selected from these datasets and are required to come up with automated programs that perform well on as many problems as possible and generalize to the remaining private test tasks. To ensure efficiency, the evaluation will be conducted under a fixed computational budget. To ensure robustness, …
[ Virtual ]
Cell segmentation is usually the first step for downstream single-cell analysis in microscopy image-based biology and biomedical research. Deep learning has been widely used for image segmentation, but it is hard to collect a large number of labelled cell images to train models because manually annotating cells is extremely time-consuming and costly. Furthermore, the datasets used are often limited to one modality and lack diversity, leading to poor generalization of trained models. This competition aims to benchmark cell segmentation methods that could be applied to various microscopy images across multiple imaging platforms and tissue types. We frame the cell segmentation problem as a weakly supervised learning task to encourage models that use limited labelled and many unlabelled images for cell segmentation, as unlabelled images are relatively easy to obtain in practice. We will implement a U-Net model as a baseline owing to its established success in biomedical image segmentation. This competition could serve as an important step toward universal and fully automatic cell image analysis tools, greatly accelerating the rate of discovery from image-based biological and biomedical research.
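For reference, a minimal sketch of the Dice coefficient, a standard overlap measure for binary segmentation masks, is given below; the competition's official evaluation metric may differ from this simple pixel-level score.

```python
# A minimal Dice coefficient for binary masks; inputs are same-shaped arrays.
import numpy as np

def dice_coefficient(pred_mask, true_mask, eps=1e-8):
    pred = pred_mask.astype(bool)
    true = true_mask.astype(bool)
    intersection = np.logical_and(pred, true).sum()
    return (2.0 * intersection + eps) / (pred.sum() + true.sum() + eps)
```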
[ Virtual ]
In this workshop, we will hear presentations from winners and competitors in the Multimodal Single-Cell Integration Challenge. For more information about the competition, see our competition page: https://www.kaggle.com/competitions/open-problems-multimodal/
Schedule - all times UTC
Presentation Name Start Stop Team Name Presenter Name
Competition Overview 13:00 13:10 Hosts Daniel Burkhardt
First place winner 13:10 13:30 Shuji Suzuki Shuji Suzuki
Third place winner 13:30 13:50 Makotu makoto hyodo
Fifth place 13:50 14:00 Lucky Shake Jeroen Cerpentier
Second place winner 14:00 14:20 senkin & tmp Jin Zhan
Fourth place 14:20 14:30 Oliver Wang Guoxuan Wang
Seventh place 14:30 14:40 chromosom Yury Shapchyts
Eighth place 14:40 14:50 vialactea Fernando Goncalves
Hosts choice 14:50 15:00 Kha | MT | B | Ambros Ambros Marzetta
Top Shake-up 15:00 15:15 One&Only Tianyu Liu
Top Shake-up 15:15 15:30 DANCE Hongzhi Wen
Hosts choice 15:30 15:45 sB2 Alexander Chervov
Wrap Up 15:45 15:50 Hosts Daniel Burkhardt
[ Virtual ]
Reconnaissance Blind Chess (RBC) is like chess except that, in general, a player cannot see her opponent's pieces. Rather, each turn each player chooses a 3x3 square of the board to privately observe. State-of-the-art algorithms, including those used to create agents for games like chess, Go, and poker, break down in Reconnaissance Blind Chess for several reasons, including the imperfect information, the absence of obvious abstractions, and the lack of common knowledge. Build the best bot for this challenge of making strong decisions under uncertainty in competitive multi-agent scenarios!
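As a toy illustration of the sensing decision, the sketch below enumerates the centers of 3x3 windows that fit on the 8x8 board and picks the one covering the most squares whose occupancy the bot is currently unsure about. The `uncertain` grid is a hypothetical data structure maintained by the bot, not part of any official API.

```python
# A minimal sketch of choosing a 3x3 sense window; `uncertain` is an assumed
# 8x8 boolean NumPy array marking squares with unknown occupancy.
import numpy as np

def choose_sense_center(uncertain):
    best_center, best_score = None, -1
    for row in range(1, 7):          # keep the 3x3 window fully on the board
        for col in range(1, 7):
            score = uncertain[row - 1:row + 2, col - 1:col + 2].sum()
            if score > best_score:
                best_center, best_score = (row, col), score
    return best_center
```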
[ Virtual ]
Reinforcement learning has gained popularity as a model-free and adaptive controller for the built environment in demand-response applications. However, a lack of standardization in previous research has made it difficult to compare different RL algorithms with each other. It is also unclear how much effort is required to solve each specific problem in the building domain and how well a trained RL agent will scale up to new environments. The CityLearn Challenge 2022 provides an avenue to address these problems by leveraging CityLearn, an OpenAI Gym environment for the implementation of RL agents for demand response. The challenge utilizes operational electricity demand data to develop an equivalent digital twin model of 20 buildings. Participants are to develop energy management agents for battery charge and discharge control in each building with the goal of minimizing electricity demand from the grid, electricity bills, and greenhouse gas emissions. We provide a baseline rule-based control (RBC) agent for evaluating the RL agents' performance and rank participants according to their solutions' ability to outperform the baseline.
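As a rough illustration of this kind of control problem, here is a Gym-style rollout with a trivial rule-based charge/discharge heuristic. The environment interface, per-building observation layout, hour index, and action scaling shown here are assumptions for the sketch, not the CityLearn API or the provided baseline.

```python
# A minimal sketch of a Gym-style episode with a toy rule-based battery
# controller; the observation layout and hour index are assumptions.
def rule_based_action(hour):
    if 9 <= hour <= 21:
        return [-0.1]   # discharge a fraction of capacity during the day
    return [0.1]        # charge overnight

def run_episode(env, hour_index=0):
    obs = env.reset()
    done, total_reward = False, 0.0
    while not done:
        # one action per building, based on its (assumed) hour-of-day feature
        action = [rule_based_action(int(b_obs[hour_index])) for b_obs in obs]
        obs, reward, done, info = env.step(action)
        total_reward += sum(reward)
    return total_reward
```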
[ Virtual ]
AmericasNLP aims to encourage and increase the visibility of research on machine learning approaches for Indigenous languages of the Americas, which until recently have often been overlooked by researchers. For the Second AmericasNLP Competition: Speech-to-Text Translation for Indigenous Languages of the Americas, we ask participants to develop or contribute to the development of speech-to-text translation systems for five Indigenous languages of the Americas (Bribri, Guaraní, Kotiria, Quechua and Wa’ikhana), for which available resources are extremely limited. The main task of this competition is speech-to-text translation, and we additionally invite submissions to its two subtasks: automatic speech recognition and text-to-text machine translation.
[ Virtual ]
The study of extra-solar planets, or simply exoplanets (planets outside our own Solar System), is fundamentally a grand quest to understand our place in the Universe. Discoveries in the last two decades have re-defined what we know about planets, and helped us comprehend the uniqueness of our very own Earth. In recent years, however, the focus has shifted from planet detection to planet characterisation, where key planetary properties are inferred from telescope observations using Monte Carlo-based methods. However, the efficiency of sampling-based methodologies is put under strain by the high-resolution observational data from next-generation telescopes, such as the James Webb Space Telescope and the Ariel Space Mission. We propose to host a regular competition with the goal of identifying a reliable and scalable method to perform planetary characterisation. Depending on the chosen track, participants will provide either quartile estimates or the approximate distribution of key planetary properties. They will have access to synthetic spectroscopic data generated from the official simulators for the ESA Ariel Space Mission. The aims of the competition are three-fold. 1) To offer a challenging application for comparing and advancing conditional density estimation methods. 2) To provide a valuable contribution towards reliable and efficient analysis of …
[ Virtual ]
We propose a competition for extracting the meaning and formulation of an optimization problem from its text description. For this competition, we have created the first dataset of linear programming (LP) word problems. A deep understanding of the problem description is an important first step towards generating the problem formulation. Therefore, we present two challenging sub-tasks for the participants. For the first sub-task, the goal is to recognize and label the semantic entities that correspond to the components of the optimization problem. For the second sub-task, the goal is to generate a meaning representation (i.e. a logical form) of the problem from its description and its problem entities. This intermediate representation of an LP problem will be converted to a canonical form for evaluation. The proposed task will be attractive because of its compelling application, the low barrier to entry of the first sub-task, and the new set of challenges the second sub-task brings to semantic analysis and evaluation. The goal of this competition is to increase the accessibility and usability of optimization solvers, allowing non-experts to solve important problems from various industries. In addition, this new task will promote the development of novel machine learning applications and datasets for …
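To make the target of the pipeline concrete, the sketch below takes a toy word problem through its canonical LP form and solves it with SciPy. The word problem and its numbers are invented purely for illustration and are not drawn from the competition dataset.

```python
# Toy word problem: "A factory makes products A and B with profits 20 and 30
# per unit. Each unit needs 4 and 6 machine hours (240 available) and 2 and 1
# labour hours (80 available). Maximize profit."  Canonical LP form below.
from scipy.optimize import linprog

c = [-20, -30]            # maximize 20*A + 30*B  ->  minimize the negation
A_ub = [[4, 6],           # machine hours: 4*A + 6*B <= 240
        [2, 1]]           # labour hours:  2*A + 1*B <= 80
b_ub = [240, 80]
result = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(result.x, -result.fun)   # optimal production plan and profit
```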
[ Virtual ]
The Weather4cast NeurIPS Competition has high practical impact for society: unusual weather is increasing all over the world, reflecting ongoing climate change, and affecting communities in agriculture, transport, public health and safety, etc. Can you predict future rain patterns with modern machine learning algorithms? Apply spatio-temporal modelling to complex dynamic systems. Get access to unique large-scale data and demonstrate temporal and spatial transfer learning under strong distributional shifts. We provide a super-resolution challenge of high relevance to local events: predict future weather as measured by ground-based hi-res rain radar weather stations. In addition to movies comprising rain radar maps, you get large-scale multi-band satellite sensor images for exploiting data fusion. Winning models will advance key areas of methods research in machine learning, of relevance beyond the application domain.
[ Virtual ]
We propose the Habitat Rearrangement Challenge. Specifically, a virtual robot (Fetch mobile manipulator) is spawned in a previously unseen simulation environment and asked to rearrange objects from initial to desired positions -- picking/placing objects from receptacles (counter, sink, sofa, table), opening/closing containers (drawers, fridges) as necessary. The robot operates entirely from onboard sensing -- head- and arm-mounted RGB-D cameras, proprioceptive joint-position sensors (for the arm), and egomotion sensors (for the mobile base) -- and may not access any privileged state information (no prebuilt maps, no 3D models of rooms or objects, no physically-implausible sensors providing knowledge of mass, friction, articulation of containers). This is a challenging embodied AI task involving embodied perception, mobile manipulation, sequential decision making in long-horizon tasks, and (potentially) deep reinforcement and imitation learning. Developing such embodied intelligent systems is a goal of deep scientific and societal value, including practical applications in home assistant robots.
[ Virtual ]
A growing concern for the security of ML systems is the possibility of Trojan attacks on neural networks. There is now a considerable literature on methods for detecting these attacks. We propose the Trojan Detection Challenge to further the community's understanding of methods to construct and detect Trojans. This competition will consist of complementary tracks on detecting/analyzing Trojans and creating evasive Trojans. Participants will be tasked with devising methods to better detect Trojans using a new dataset containing over 6,000 neural networks. Code and evaluations from three established baseline detectors will provide a starting point, and a novel Minimal Trojan attack will challenge participants to push the state of the art in Trojan detection. Ultimately, we hope our competition spurs practical innovations and clarifies deep questions surrounding the offense-defense balance of Trojan attacks.
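As an illustration of the detection setup, the sketch below assigns each candidate network an anomaly score and evaluates the scores against ground-truth Trojan labels with AUROC. The scoring heuristic (response magnitude on random inputs) and the input shape are purely illustrative assumptions; this is not one of the competition baselines.

```python
# A minimal sketch of scoring candidate networks and evaluating the scores
# with AUROC; the heuristic and input shape are assumptions for illustration.
import torch
from sklearn.metrics import roc_auc_score

def trojan_score(model, input_shape=(3, 32, 32), n_samples=64):
    with torch.no_grad():
        x = torch.rand((n_samples,) + input_shape)
        logits = model(x)
        return logits.max(dim=1).values.mean().item()  # crude anomaly statistic

def evaluate_detector(models, labels):
    scores = [trojan_score(m) for m in models]
    return roc_auc_score(labels, scores)   # labels: 1 = Trojaned, 0 = clean
```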
[ Virtual ]
Advancements to renewable energy processes are needed urgently to address climate change and energy scarcity around the world. Many of these processes, including the generation of electricity through fuel cells or fuel generation from renewable resources, are driven by chemical reactions. The use of catalysts in these chemical reactions plays a key role in developing cost-effective solutions by enabling new reactions and improving their efficiency. Unfortunately, the discovery of new catalyst materials is limited due to the high cost of computational atomic simulations and experimental studies. Machine learning has the potential to significantly reduce the cost of computational simulations by orders of magnitude. By filtering potential catalyst materials based on these simulations, candidates of higher promise may be selected for experimental testing and the rate at which new catalysts are discovered could be greatly accelerated. The 2nd edition of the Open Catalyst Challenge invites participants to submit results of machine learning models that simulate the interaction of a molecule on a catalyst's surface. Specifically, the task is to predict the energy of an adsorbate-catalyst system in its relaxed state starting from an arbitrary initial state. From these values, the catalyst's impact on the overall rate of a chemical reaction may be …
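As a small illustration of how predicted relaxed energies might be scored against reference values, the sketch below computes the mean absolute error and the fraction of predictions falling within a small threshold. The exact challenge metric and the threshold value are assumptions here, not the official evaluation.

```python
# A minimal sketch of scoring predicted relaxed energies; the 0.02 eV
# threshold is an illustrative assumption, not the official criterion.
import numpy as np

def energy_metrics(predicted, reference, threshold_ev=0.02):
    errors = np.abs(np.asarray(predicted) - np.asarray(reference))
    return {"mae_ev": errors.mean(),
            "within_threshold": (errors < threshold_ev).mean()}
```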
[ Virtual ]
Efficient post-consumer waste recycling is one of the key challenges of modern society, as countries struggle to find sustainable solutions to rapidly rising waste levels and to avoid increased soil and sea pollution. The US is one of the leading countries in waste generation by volume but recycles less than 35% of its recyclable waste. Recyclable waste is sorted according to material type (paper, plastic, etc.) in material recovery facilities (MRFs), which still heavily rely on manual sorting. Computer vision solutions are an essential component in automating waste sorting and ultimately solving the pollution problem. In this sixth iteration of the VisDA challenge, we introduce a simulation-to-real (Sim2Real) semantic image segmentation competition for industrial waste sorting. We aim to answer the question: can synthetic data augmentation improve performance on this task and help adapt to changing data distributions? Label-efficient and reliable semantic segmentation is essential in this setting, but it differs significantly from existing semantic segmentation datasets: waste objects are typically severely deformed and randomly located, which limits the efficacy of both shape and context priors, and they exhibit long-tailed distributions and high clutter. Synthetic data augmentation can benefit such applications due to the difficulty of obtaining labels and rare categories. However, new …
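For reference, a minimal sketch of mean intersection-over-union (mIoU), a standard semantic segmentation score, is given below; the challenge's exact evaluation protocol may differ from this simple per-class average.

```python
# A minimal mIoU sketch; `pred` and `target` are same-shaped integer label maps.
import numpy as np

def mean_iou(pred, target, num_classes):
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, target == c).sum()
        union = np.logical_or(pred == c, target == c).sum()
        if union > 0:                      # skip classes absent from both maps
            ious.append(inter / union)
    return float(np.mean(ious))
```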
[ Virtual ]
Enabling effective and efficient machine learning (ML) over large-scale graph data (e.g., graphs with billions of edges) can have a huge impact on both industrial and scientific applications. At KDD Cup 2021, we organized the OGB Large-Scale Challenge (OGB-LSC), where we provided large and realistic graph ML tasks. Our KDD Cup attracted huge attention from the graph ML community (more than 500 team registrations across the globe), spurring the development of innovative methods that yielded significant performance breakthroughs. However, the problem of machine learning over large graphs is not solved yet, and it is important for the community to engage in a focused multi-year effort in this area (like ImageNet and MS-COCO). Here we propose an annual ML challenge around large-scale graph datasets, which will drive forward method development and allow for tracking progress. We propose the 2nd OGB-LSC (referred to as OGB-LSC 2022) around the OGB-LSC datasets. Our proposed challenge consists of three tracks, covering core graph ML tasks of node-level prediction (academic paper classification with 240 million nodes), link-level prediction (knowledge graph completion with 90 million entities), and graph-level prediction (molecular property prediction with 4 million graphs). Importantly, we have updated two out of the three datasets based on the …
[ Virtual ]
Neural MMO is an open-source environment for agent-based intelligence research featuring large maps with large populations, long time horizons, and open-ended multi-task objectives. We propose a benchmark on this platform wherein participants train and submit agents to accomplish loosely specified goals -- both as individuals and as part of a team. The submitted agents are evaluated against thousands of other such user-submitted agents. Participants get started with a publicly available code base for Neural MMO, scripted and learned baseline models, and training/evaluation/visualization packages. Our objective is to foster the design and implementation of algorithms and methods for adapting modern agent-based learning methods (particularly reinforcement learning) to a more general setting not limited to few agents, narrowly defined tasks, or short time horizons. Neural MMO provides a convenient setting for exploring these ideas without the computational inefficiency typically associated with larger environments.