Workshop
INTERPOLATE — First Workshop on Interpolation Regularizers and Beyond
Yann Dauphin · David Lopez-Paz · Vikas Verma · Boyi Li
Room 393
Fri 2 Dec, 6:30 a.m. PST
## Goals
Interpolation regularizers are an increasingly popular approach to regularizing deep models. For example, the mixup data augmentation method constructs synthetic examples by linearly interpolating random pairs of training data points (a minimal code sketch of this construction follows the topic list below). In the half-decade since their introduction, interpolation regularizers have become ubiquitous and fuel state-of-the-art results in many domains, including computer vision and medical diagnosis. This workshop brings together researchers and practitioners to foster discussion and advance the understanding of interpolation regularizers, and this inaugural meeting is designed to be highly interactive. Suggested topics include, but are not limited to, the intersection between interpolation regularizers and:
* Domain generalization
* Semi-supervised learning
* Privacy-preserving ML
* Theory
* Robustness
* Fairness
* Vision
* NLP
* Medical applications
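As a concrete illustration of the mixup construction mentioned above, here is a minimal NumPy sketch, not a reference implementation from the workshop. The function name `mixup_batch` and the batch/label shapes are illustrative assumptions; sampling the interpolation coefficient from a Beta(alpha, alpha) distribution follows the original mixup formulation.

```python
import numpy as np

def mixup_batch(x, y, alpha=0.2, rng=None):
    """Construct mixup examples by linearly interpolating random pairs.

    x: batch of inputs, shape (n, ...); y: one-hot labels, shape (n, num_classes).
    alpha: Beta distribution parameter controlling interpolation strength.
    (Names and shapes are illustrative assumptions, not a fixed API.)
    """
    if rng is None:
        rng = np.random.default_rng()
    lam = rng.beta(alpha, alpha)            # interpolation coefficient in [0, 1]
    perm = rng.permutation(len(x))          # random pairing within the batch
    x_mix = lam * x + (1 - lam) * x[perm]   # interpolate inputs
    y_mix = lam * y + (1 - lam) * y[perm]   # interpolate labels the same way
    return x_mix, y_mix
```

Training then proceeds on the mixed batch with the usual loss; because the labels are interpolated, the cross-entropy is equivalent to a lam-weighted combination of the losses against the two original label sets.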
## Important dates
* Paper submission deadline: September 22, 2022
* Paper acceptance notification: October 14, 2022
* Workshop: December 2, 2022
## Call for papers
Authors are invited to submit short papers of up to 4 pages, with an unlimited number of pages for references and supplementary material. Submissions must be anonymized, as the reviewing process is double-blind. Please use the NeurIPS template for submissions. To foster discussion, we also welcome submissions that have already been published during the COVID period; for such papers, the venue of publication should be clearly indicated at submission time. Submission link: https://openreview.net/group?id=NeurIPS.cc/2022/Workshop/INTERPOLATE
## Invited Speakers
* Chelsea Finn (Stanford): "Repurposing Mixup for Robustness and Regression"
* Sanjeev Arora (Princeton): "Using Interpolation Ideas to provide privacy in Federated Learning settings"
* Kenji Kawaguchi (NUS): "The developments of the theory of Mixup"
* Youssef Mroueh (IBM): "Fairness and mixing"
* Alex Lamb (MSR): "What matters in the world? Exploring algorithms for provably ignoring irrelevant details"
## Schedule
All times are in the America/Los_Angeles timezone.
* Fri 6:30 a.m. - 6:45 a.m. | Opening Remarks (Remarks)
* Fri 6:45 a.m. - 7:30 a.m. | Youssef Mroueh on Interpolating for fairness (Invited Talk)
* Fri 7:30 a.m. - 8:15 a.m. | Sanjeev Arora on Using Interpolation to provide privacy in Federated Learning settings (Invited Talk)
* Fri 8:15 a.m. - 9:00 a.m. | Chelsea Finn on Repurposing Mixup for Robustness and Regression (Invited Talk)
* Fri 9:00 a.m. - 10:00 a.m. | Panel discussion I (Discussion Panel)
* Fri 10:30 a.m. - 12:00 p.m. | Lunch with random mixing group and organizers
* Fri 12:00 p.m. - 12:45 p.m. | Kenji Kawaguchi on The developments of the theory of Mixup (Invited Talk)
* Fri 12:45 p.m. - 1:30 p.m. | Alex Lamb on Latent Data Augmentation for Improved Generalization (Invited Talk)
* Fri 1:30 p.m. - 2:15 p.m. | Gabriel Ilharco on Robust and accurate fine-tuning by interpolating weights (Invited Talk)
* Fri 2:15 p.m. - 3:00 p.m. | Panel II (Discussion Panel)
* Fri 3:00 p.m. - 3:45 p.m. | Poster Session (Posters)
* Fri 3:45 p.m. - 4:00 p.m. | Closing Remarks (Remarks)
## Accepted Papers
* Branch-Train-Merge: Embarrassingly Parallel Training of Expert Language Models (Poster). Margaret Li
* On Data Augmentation and Consistency-based Semi-supervised Relation Extraction (Poster). Komal Teru
* Differentially Private CutMix for Split Learning with Vision Transformer (Poster). Seungeun Oh · Jihong Park · Sihun Baek · Hyelin Nam · Praneeth Vepakomma · Ramesh Raskar · Mehdi Bennis · Seong-Lyun Kim
* Improving Domain Generalization with Interpolation Robustness (Poster). Ragja Palakkadavath · Thanh Nguyen-Tang · Sunil Gupta · Svetha Venkatesh
* Pre-train, fine-tune, interpolate: a three-stage strategy for domain generalization (Poster). Alexandre Rame · Jianyu Zhang · Leon Bottou · David Lopez-Paz
* Sample Relationships through the Lens of Learning Dynamics with Label Information (Poster). Shangmin Guo · Yi Ren · Stefano Albrecht · Kenny Smith
* AlignMixup: Improving Representations By Interpolating Aligned Features (Poster). Shashanka Venkataramanan · Ewa Kijak · Laurent Amsaleg · Yannis Avrithis
* LSGANs with Gradient Regularizers are Smooth High-dimensional Interpolators (Poster). Siddarth Asokan · Chandra Seelamantula
* Over-Training with Mixup May Hurt Generalization (Poster). Zixuan Liu · Ziqiao Wang · Hongyu Guo · Yongyi Mao
* Covariate Shift Detection via Domain Interpolation Sensitivity (Poster). Tejas Gokhale · Joshua Feinglass · 'YZ' Yezhou Yang
* Interpolating Compressed Parameter Subspaces (Poster). Siddhartha Datta · Nigel Shadbolt
* Mixup for Robust Image Classification - Application in Continuously Transitioning Industrial Sprays (Poster). Huanyi Shui · Hongjiang Li · Devesh Upadhyay · Praveen Narayanan · Alemayehu Solomon Admasu
* Momentum-based Weight Interpolation of Strong Zero-Shot Models for Continual Learning (Poster). Zafir Stojanovski · Karsten Roth · Zeynep Akata
* Mixed Samples Data Augmentation with Replacing Latent Vector Components in Normalizing Flow (Poster). Genki Osada · Budrul Ahsan · Takashi Nishide
* Overparameterization Implicitly Regularizes Input-Space Smoothness (Poster). Matteo Gamba · Hossein Azizpour · Mårten Björkman
* Effect of mixup Training on Representation Learning (Poster). Arslan Chaudhry · Aditya Menon · Andreas Veit · Sadeep Jayasumana · Srikumar Ramalingam · Sanjiv Kumar
* FedLN: Federated Learning with Label Noise (Poster). Vasileios Tsouvalas · Aaqib Saeed · Tanir Özçelebi · Nirvana Meratnia
* Benefits of Overparameterized Convolutional Residual Networks: Function Approximation under Smoothness Constraint (Poster). Hao Liu · Minshuo Chen · Siawpeng Er · Wenjing Liao · Tong Zhang · Tuo Zhao
* GroupMixNorm Layer for Learning Fair Models (Poster). Anubha Pandey · Aditi Rai · Maneet Singh · Deepak Bhatt · Tanmoy Bhowmik
* SMILE: Sample-to-feature MIxup for Efficient Transfer LEarning (Poster). Xingjian Li · Haoyi Xiong · Cheng-Zhong Xu · Dejing Dou
* Contributed Spotlights (Oral)