

NIPS 2018 Competition Track

This is the second NIPS edition of the Competition Track. We received 21 competition proposals for data-driven and live competitions on different aspects of NIPS. Proposals were reviewed by several highly qualified researchers and experts in challenge organization. The eight top-scoring competitions were accepted to run and to present their results during the NIPS 2018 Competition Track days. Evaluation was based on the quality of the data, the interest and impact of the problem, the promotion of new model designs, and a proper schedule and management procedure. Below you can find the eight accepted competitions. Please visit each competition's webpage to read more about the competition, its schedule, and how to participate; each competition has its own schedule defined by its organizers. The results of the competitions, including talks by organizers and top-ranked participants, will be presented during the two Competition Track days at NIPS 2018. Organizers and participants will be invited to submit their contributions as chapters to the upcoming NIPS 2018 Competition book in the Springer Series on Challenges in Machine Learning.

 
All dates are in 2018 unless noted otherwise.

AutoML for Lifelong Machine Learning
Start: July 23rd | End: Nov 6th
Prizes: 1st place $10,000; 2nd place $3,000; 3rd place $2,000

Adversarial Vision Challenge
Start: July 2nd | End: Oct 10th

The Conversational Intelligence Challenge 2 (ConvAI2)
Start: March 21st | End: Sep 30th
Prize: 1st place $20,000 of MTurk funding

Tracking Machine Learning Challenge
Start: Sep 7th | End: March 12th, 2019 (first phase results already posted)
Prizes: 1st place $7,000; 2nd place $5,000; 3rd place $3,000
Jury prizes: NVIDIA V100 GPU; 2 travel grants to NIPS and CERN

Pommerman
Start: June 1st | End: Nov 21st
Prizes: 1st place $4,000 and 6k GCE credit; 2nd place $2,000 and 4k GCE credit; 3rd place $1,000 and 2k GCE credit; 4th place 3k GCE credit
Top two learning agents: 1 NVIDIA Titan V GPU

InclusiveImages: A challenge of distributional skew, side information, and global inclusion
Start: Sep 5th | End: Nov 9th

The AI Driving Olympics
Start: Oct 1st | End: Dec 1st (final at the live event, Dec 8th)
Prizes: 1st place $5,000 AWS Credits; 2nd place $2,500 AWS Credits; 3rd place $1,000 AWS Credits

AI for prosthetics
Start: June 1st | End: Sep 30th
Prizes: 1st place 2 x NVIDIA Titan V plus travel grants to NIPS, EPFL, and Stanford; 2nd place 1 x NVIDIA Titan V; 3rd place 1 x NVIDIA Titan V
Top 400 participants by 08/15: $250 Google Cloud Credits

More details below!

AutoML for Lifelong Machine Learning

Competition summary: In many real-world machine learning applications, AutoML is strongly needed due to the limited machine learning expertise of developers. Moreover, in many real-world applications batches of data may arrive daily, weekly, monthly, or yearly, for instance, and the data distributions change relatively slowly over time. This presents a continuous-learning, or lifelong machine learning, challenge for an AutoML system. Typical learning problems of this kind include customer relationship management, online advertising, recommendation, sentiment analysis, fraud detection, spam filtering, transportation monitoring, econometrics, patient monitoring, climate monitoring, and manufacturing. In this competition, which we are calling AutoML for Lifelong Machine Learning, large-scale datasets collected from some of these real-world applications will be used. Compared with previous AutoML competitions (http://automl.chalearn.org/), the focus of this competition is on drifting concepts, getting away from the simpler i.i.d. case. Participants are invited to design a computer program capable of autonomously (without any human intervention) developing predictive models that are trained and evaluated in a lifelong machine learning setting.
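
To make the setting concrete, here is a minimal sketch of the prequential ("test-then-train") loop such a lifelong system runs: each arriving batch is first used to evaluate the current model and then to update it. The drift simulation and the linear model are illustrative assumptions, not the competition's actual data or protocol.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
model = SGDClassifier(random_state=0)
classes = np.array([0, 1])

for t in range(10):                          # ten sequential batches
    # Synthetic concept drift: the decision boundary rotates slowly over time.
    theta = 0.1 * t
    w = np.array([np.cos(theta), np.sin(theta)])
    X = rng.normal(size=(500, 2))
    y = (X @ w + 0.1 * rng.normal(size=500) > 0).astype(int)

    if t > 0:                                # evaluate on the new batch first...
        auc = roc_auc_score(y, model.decision_function(X))
        print(f"batch {t}: AUC = {auc:.3f}")
    model.partial_fit(X, y, classes=classes) # ...then train on it
```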

Organizers:

Wei-Wei Tu, 4Paradigm Inc., China, tuww.cn@gmail.com

Hugo Jair Escalante, INAOE (Mexico) & ChaLearn (USA), hugo.jair@gmail.com
Isabelle Guyon, UPSud/INRIA Univ. Paris-Saclay (France) & ChaLearn (USA), guyon@clopinet.com
Daniel L. Silver, Acadia University, Canada, danny.silver@acadiau.ca
Evelyne Viegas, Microsoft, USA, evelynev@microsoft.com
Yuqiang Chen, 4Paradigm Inc., China, chenyuqiang@4paradigm.com
Qiang Yang, 4Paradigm Inc., China, qyang@cse.ust.hk

Webpage: https://www.4paradigm.com/competition/nips2018

 

Adversarial Vision Challenge

Competition summary: This challenge is designed to facilitate measurable progress towards robust machine vision models and more generally applicable adversarial attacks. Modern machine vision algorithms are extremely susceptible to small and almost imperceptible perturbations of their inputs (so-called adversarial examples). This property reveals an astonishing difference in the information processing of humans and machines and raises security concerns for many deployed machine vision systems like autonomous cars. Improving the robustness of vision algorithms is thus important to close the gap between human and machine perception and to enable safety-critical applications. In a robust network, no attack should be able to find imperceptible adversarial perturbations. We thus propose to facilitate an open competition between neural networks and a large variety of strong attacks, including ones that did not exist at the time the networks were proposed. To this end, the competition has one track for robust vision models, one track for targeted adversarial attacks, and one for untargeted adversarial attacks. Submitted models and attacks are continuously pitted against each other on an image classification task. Attacks are able to observe the decisions of models on a restricted number of self-defined inputs in order to craft model-specific minimal adversarial examples.
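
Here is a minimal, self-contained sketch of a decision-based attack in the spirit described above: the attacker sees only the model's predicted label and shrinks the perturbation by binary search along the segment between the original input and a known misclassified starting point. The toy "model" is an assumption for illustration; it is not the challenge's API.

```python
import numpy as np

def toy_model(x):
    """Stand-in classifier: label 1 if the mean pixel exceeds 0.5."""
    return int(x.mean() > 0.5)

def binary_search_attack(x_orig, x_adv_start, label_orig, steps=30):
    """Minimal-norm adversarial example on [x_orig, x_adv_start],
    found using only label queries."""
    lo, hi = 0.0, 1.0                       # interpolation coefficients
    for _ in range(steps):
        mid = (lo + hi) / 2.0
        candidate = (1 - mid) * x_orig + mid * x_adv_start
        if toy_model(candidate) != label_orig:
            hi = mid                        # still adversarial: move closer
        else:
            lo = mid                        # not adversarial: back off
    return (1 - hi) * x_orig + hi * x_adv_start

x = np.full((8, 8), 0.4)                    # original input, label 0
x_start = np.ones((8, 8))                   # misclassified starting point
x_adv = binary_search_attack(x, x_start, toy_model(x))
print("perturbation L2 norm:", np.linalg.norm(x_adv - x))
```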

Organizers:

Wieland Brendel, University of Tübingen, wieland@bethgelab.org
Jonas Rauber, University of Tübingen, jonas@bethgelab.org
Alexey Kurakin, Google Brain, kurakin@google.com
Nicolas Papernot, Pennsylvania State University & Google Brain, ngp5056@cse.psu.edu
Behar Veliqi, University of Tübingen, behar.veliqi@bethgelab.org
Marcel Salathé, École Polytechnique Fédérale de Lausanne, marcel.salathe@epfl.ch
Sharada P. Mohanty, École Polytechnique Fédérale de Lausanne, sharada.mohanty@epfl.ch
Matthias Bethge, University of Tübingen, matthias@bethgelab.org

Webpage: https://www.crowdai.org/challenges/adversarial-vision-challenge

 

The Conversational Intelligence Challenge 2 (ConvAI2)

Competition summary: There are currently few datasets appropriate for training and evaluating models for non-goal-oriented dialogue systems (chatbots); and, equally problematic, there is currently no standard procedure for evaluating such models. Our competition aims to establish a concrete scenario for testing chatbots that engage humans, and to become a standard evaluation tool that makes such systems directly comparable. This is the second Conversational Intelligence (ConvAI) Challenge. This year we introduce several improvements: a) providing a dataset, Persona-Chat, from the beginning; b) making the conversations more engaging for humans; c) a simpler evaluation process (automatic evaluation, followed by human evaluation). Persona-Chat is designed to facilitate research into alleviating some of the issues that traditional chit-chat models face. The training set consists of conversations between crowdworkers who were randomly paired and asked to act the part of a given persona (randomly assigned, and created by another set of crowdworkers). The paired workers were asked to chat naturally and to get to know each other during the conversation. This produces interesting and engaging conversations that learning agents can try to mimic. Models are thus trained to both ask and answer questions about personal topics, and the resulting dialogue can take account of the personas of the speaking partners. Competitors' models will then be compared in three ways: (i) automated evaluation metrics on a new test set hidden from the competitors; (ii) evaluation on Amazon Mechanical Turk; and (iii) 'wild' live evaluation by volunteers having conversations with the bots. The winning dialogue systems will be chosen based on these scores.
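
For a sense of stage (i), here is a minimal sketch of a word-overlap (unigram F1) scorer of the kind commonly used to compare a model response against a gold response; whether this matches the competition's exact automatic metrics is an assumption.

```python
from collections import Counter

def unigram_f1(prediction: str, reference: str) -> float:
    """Unigram precision/recall F1 between two whitespace-tokenized strings."""
    pred, ref = prediction.lower().split(), reference.lower().split()
    common = Counter(pred) & Counter(ref)     # multiset intersection
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred)
    recall = overlap / len(ref)
    return 2 * precision * recall / (precision + recall)

print(unigram_f1("i love hiking with my dog", "i enjoy hiking with my dog"))
```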

Organizers:

Mikhail Burtsev, Moscow Institute of Physics and Technology, burtsev.m@gmail.com
Varvara Logacheva, Moscow Institute of Physics and Technology, varvara.logacheva@gmail.com
Valentin Malykh, Moscow Institute of Physics and Technology, valentin@maly.hk
Iulian Serban, University of Montreal, julianserban@gmail.com
Ryan Lowe, McGill University, lowe.ryan.t@gmail.com
Shrimai Prabhumoye, Carnegie Mellon University, sprabhum@andrew.cmu.edu
Alan W Black, Carnegie Mellon University, awb@cs.cmu.edu
Alexander Rudnicky, Carnegie Mellon University, air@cs.cmu.edu
Jason Williams, Microsoft Research, jason.williams@microsoft.com
Yoshua Bengio, University of Montreal, yoshua.umontreal@gmail.com
Joelle Pineau, Facebook AI Research & McGill University, jpineau@fb.com
Emily Dinan, Facebook AI Research, edinan@fb.com
Douwe Kiela, Facebook AI Research, dkiela@fb.com
Alexander Miller, Facebook AI Research, ahm@fb.com
Kurt Shuster, Facebook AI Research, kshuster@fb.com
Arthur Szlam, Facebook AI Research, aszlam@fb.com
Jack Urbanek, Facebook AI Research, jju@fb.com
Jason Weston, Facebook AI Research, jase@fb.com
 

Webpage: http://convai.io/

 

Tracking Machine Learning Challenge

Competition summary: In the footsteps of the Higgs boson (https://www.kaggle.com/c/higgs-boson) and Flavours of Physics (https://www.kaggle.com/c/flavours-of-physics) challenges, data science is being asked to provide novel ideas on how to advance science. Particle track reconstruction is at the heart of the data processing of the experiments at CERN, and a challenging computational exercise. Contrary to first impressions, clustering hundreds of thousands of sparse 3D points into helicoidal tracks of 10-15 points each is non-trivial, due to the combinatorial explosion during particle following. To fully extract the potential of collider data and enable future scientific discoveries, you will have to overcome this throughput-oriented challenge and provide solutions that run within seconds on hundreds of thousands of points. This truly unique challenge will require all your creativity and computing skill to master. In addition, the submissions will be evaluated by a jury (composed of computer scientists and High Energy Physics tracking experts) to highlight the contributions most promising to the field. A special prize (an NVIDIA V100) from our sponsor will be awarded, along with invitations to NIPS 2018 and to a grand finale workshop at CERN in spring 2019 for the winners and the jury's picks.
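
To make the task's shape concrete, here is a toy sketch that clusters 3D hits into track candidates with an off-the-shelf algorithm (DBSCAN). The straight-line synthetic "hits" are an assumption for brevity; real solutions must exploit the helical geometry and run in seconds on ~100,000 hits.

```python
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(42)
tracks = []
for _ in range(5):                          # five toy particle trajectories
    direction = rng.normal(size=3)
    direction /= np.linalg.norm(direction)
    t = np.linspace(0.5, 1.5, 12)[:, None]  # 12 hits along each trajectory
    tracks.append(t * direction + 0.005 * rng.normal(size=(12, 3)))
hits = np.vstack(tracks)

labels = DBSCAN(eps=0.15, min_samples=3).fit_predict(hits)
print("track candidates found:", len(set(labels) - {-1}))
```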

Organizers:

David Rousseau, LAL, rousseau@lal.in2p3.fr
Sabrina Amrouche, UNIGE, c.amrouche@cern.ch
Paolo Calafiura, LBNL, pcalafiura@lbl.gov
Steve Farrell, LBNL, sfarrell@lbl.gov
Cecile Germain, UPsud & INRIA, cecile.germain@lri.fr
Vladimir Gligorov, LPNHE, vgligoro@lpnhe.in2p3.fr
Tobias Golling, UNIGE, tobias.golling@unige.ch
Heather Gray, LBNL, hgray@lbl.gov
Isabelle Guyon, UPsud & INRIA, Chalearn, guyon@clopinet.com
Mikhail Hushchyn, NRU HSE, mikhail.hushchyn@cern.ch
Vincenzo Innocente, CERN, vincenzo.innocente@cern.ch
Moritz Kiehn, UNIGE, msmk@cern.ch
Andreas Salzburger, CERN, andreas.salzburger@cern.ch
Andrey Ustyuzhanin, NRU HSE, andrey.ustyuzhanin@cern.ch
Jean-Roch Vlimant, Caltech, jvlimant@caltech.edu
Yetkin Yilmaz, LAL, yetkinyilmaz@gmail.com

Webpage: https://sites.google.com/site/trackmlparticle/

 

Pommerman

Competition summary: Train a team of communicative agents to play Bomberman. Compete against other teams.
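
A minimal interaction loop, assuming the interface of the organizers' playground package (pommerman.make and a Gym-style step); the environment id and API details should be checked against the competition page.

```python
import pommerman
from pommerman import agents

# Four scripted agents; a competitor would replace some of these with a
# trained (and, in the radio variants, communicating) team.
agent_list = [agents.SimpleAgent() for _ in range(4)]
env = pommerman.make("PommeTeamCompetition-v0", agent_list)

state = env.reset()
done = False
while not done:
    actions = env.act(state)                      # one action per agent
    state, reward, done, info = env.step(actions)
print("final rewards:", reward)
env.close()
```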

Organizers:

Cinjon Resnick, NYU, cinjon@nyu.edu
David Ha, Google Brain, hadavid@google.com
Denny Britz, Prediction Machines, dennybritz@gmail.com
Jakob Foerster, Oxford, jakobfoerster@gmail.com
Jason Weston, Facebook AI Research, jase@fb.com
Joan Bruna, NYU, bruna@cims.nyu.edu
Julian Togelius, NYU, julian@togelius.com
Kyunghyun Cho, NYU, kyunghyun.cho@nyu.edu

Webpage: https://www.pommerman.com/

 

InclusiveImages: A challenge of distributional skew, side information, and global inclusion

Competition summary: Questions surrounding machine learning fairness and inclusivity have attracted heightened attention in recent years, leading to the rapid emergence of a full area of research within the field of machine learning. To provide additional empirical grounding and a venue for head-to-head comparison of new methods, the InclusiveImages competition encourages researchers to develop modeling techniques that reduce the biases that may be encoded in large datasets. In particular, this competition focuses on the challenge of geographic skew encountered when the geographic distribution of training images does not fully represent the levels of diversity encountered at test or inference time.
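
One generic recipe for the distributional-skew problem described above is importance weighting: up-weight training examples from regions that are under-represented in training relative to the expected inference-time distribution. A minimal sketch, under the assumption that a region label is available per training example (which the competition's rules may or may not permit):

```python
from collections import Counter

# Hypothetical, heavily skewed training regions and a uniform target mix.
train_regions = ["NA"] * 700 + ["EU"] * 250 + ["AF"] * 30 + ["AS"] * 20
target_share = {"NA": 0.25, "EU": 0.25, "AF": 0.25, "AS": 0.25}  # assumed

n = len(train_regions)
train_share = {r: c / n for r, c in Counter(train_regions).items()}
weights = [target_share[r] / train_share[r] for r in train_regions]
# Examples from under-represented regions get large weights; these can be
# passed to most losses, e.g. sample_weight in scikit-learn or per-example
# loss scaling in a deep-learning framework.
print({r: round(target_share[r] / train_share[r], 2) for r in train_share})
```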

Organizers:

James Atwood
Eric Breck
Yoni Halpern
D. Sculley
Erica Greene
Peggy Chi
Anurag Batra
Contact: inclusive-images-nips@google.com

Webpage: https://sites.google.com/view/inclusiveimages/

 

The AI Driving Olympics

Competition summary: Machine learning (ML), deep learning, and deep reinforcement learning have shown remarkable success on a variety of tasks in the very recent past. However, the ability of these methods to supersede classical approaches on physically embodied agents is still unclear. In particular, it remains to be seen whether learning-based approaches can be completely trusted to control safety-critical systems such as self-driving cars. This live competition, presented by the Duckietown Foundation, is designed to explore which approaches work best for which tasks and subtasks in a complex robotic system. The participants will need to design algorithms that implement either part or all of the management and navigation required for a fleet of self-driving miniature taxis. There will be a set of different trials that correspond to progressively more sophisticated behaviors for the cars. These vary in complexity, from the reactive task of lane following to more complex cognitive behaviors, such as obstacle avoidance, point-to-point navigation, and finally coordinating a vehicle fleet while adhering to the entire set of the "rules of the road". We will provide baseline solutions for the tasks based on conventional autonomy architectures; the participants will be free to replace any or all of the components with custom learning-based solutions. The competition will be live at NIPS, but participants will not need to be physically present: they will just need to send their source code packaged as a Docker image. There will be qualifying rounds in simulation, and we will make available the use of "robotariums," facilities that allow remote experimentation in a reproducible setting.
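
The lane-following trial is the most reactive of the behaviors listed above. Here is a minimal sketch of the kind of classical baseline the organizers mention: a PD controller steering on the lane-pose error (lateral offset d and heading error phi). The toy kinematics and gains are illustrative assumptions, not the official Duckietown baseline.

```python
def pd_steering(d, phi, d_prev, phi_prev, dt=0.05,
                k_p=(4.0, 2.0), k_d=(0.5, 0.2)):
    """Steering rate from lateral offset d (m) and heading error phi (rad)."""
    dd, dphi = (d - d_prev) / dt, (phi - phi_prev) / dt
    return -(k_p[0] * d + k_p[1] * phi) - (k_d[0] * dd + k_d[1] * dphi)

# Toy closed-loop rollout with trivial kinematics to show the error shrinking.
v, dt = 0.3, 0.05                      # forward speed (m/s), time step (s)
d, phi = 0.10, 0.20                    # initial lane-pose error
d_prev, phi_prev = d, phi
for step in range(40):
    omega = pd_steering(d, phi, d_prev, phi_prev, dt)
    d_prev, phi_prev = d, phi
    phi += omega * dt                  # heading responds to steering
    d += v * phi * dt                  # offset responds to heading
    if step % 10 == 9:
        print(f"step {step + 1}: d = {d:+.3f} m, phi = {phi:+.3f} rad")
```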

Organizers:

Andrea Censi, nuTonomy and ETH Zürich, acensi@idsc.mavt.ethz.ch
Liam Paull, Université de Montréal, paulll@iro.umontreal.ca
Jacopo Tani, ETH Zürich, tanij@ethz.ch
Scott Livingston, q@rerobots.net
Julian Zilly, ETH Zürich, jzilly@ethz.ch
Ruslan Hristov, nuTonomy, rusi@nutonomy.com
Oscar Beijbom, nuTonomy, oscar@nutonomy.com
Eryk Nice, nuTonomy, eryk.nice@nutonomy.com
Sunil Mallya, Amazon, smallya@amazon.com
Justin De Castri, Amazon, decastri@amazon.com
Hsueh-Cheng (Nick) Wang, National Chiao Tung University, hchengwang@gmail.com
Qing-Shan Jia, Tsinghua, jiaqs@tsinghua.edu.cn
Tao Zhang, Tsinghua, taozhang@tsinghua.edu.cn
Stefano Soatto, UCLA and Amazon, soattos@amazon.com
Magnus Egerstedt, Georgia Tech, magnus.egerstedt@ece.gatech.edu
Yoshua Bengio, Université de Montréal, yoshua.bengio@umontreal.ca
Emilio Frazzoli, ETH Zürich and nuTonomy, emilio.frazzoli@idsc.mavt.ethz.ch

Webpage: https://AI-DO.duckietown.org

 

AI for prosthetics

Competition summary: Recent advancements in material science and device technology have increased interest in creating prosthetics for improving human movement. Designing these devices, however, is difficult, as it is costly and time-consuming to iterate through many designs. In this challenge, we explore using reinforcement learning techniques to train realistic biomechanical models to approximate the movement patterns of a patient with a prosthetic leg. Successful models will be key to better understanding the human-prosthesis interaction, which will help accelerate development in this field.
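
A minimal interaction loop with the competition's biomechanical simulator, assuming the Gym-style interface of the organizers' osim-rl package; the package and class names should be checked against the challenge page. The random policy is a placeholder where an RL agent's action selection would go.

```python
from osim.env import ProstheticsEnv   # pip install osim-rl (assumed name)

env = ProstheticsEnv(visualize=False)
observation = env.reset()
total_reward, done = 0.0, False
while not done:
    action = env.action_space.sample()           # placeholder policy
    observation, reward, done, info = env.step(action)
    total_reward += reward                       # rewards track the target gait
print("episode return:", total_reward)
```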

Organizers:

Łukasz Kidziński, Stanford, lukasz.kidzinski@stanford.edu
Carmichael Ong, Stanford, ongcf@stanford.edu
Sharada Mohanty, EPFL, sharada.mohanty@epfl.ch
Jennifer Hicks, Stanford, jenhicks@stanford.edu
Joy Ku, Stanford, joyku@stanford.edu
Sean Carroll, EPFL, sean.carroll@epfl.ch
Sergey Levine, UC Berkeley, svlevine@eecs.berkeley.edu
Marcel Salathé, EPFL, marcel.salathe@epfl.ch
Scott Delp, Stanford, delp@stanford.edu

Webpage: https://www.crowdai.org/challenges/nips-2018-ai-for-prosthetics-challenge

