Timezone: America/Los_Angeles
SUN 6 DEC
5 a.m.
6 a.m.
Expo Talk Panel:
(ends 7:00 AM)
7 a.m.
Expo Talk Panel:
(ends 8:00 AM)
8 a.m.
Expo Talk Panel:
(ends 9:00 AM)
9 a.m.
10 a.m.
11 a.m.
noon
Expo Demonstration:
(ends 1:00 PM)
1 p.m.
Expo Talk Panel:
(ends 2:00 PM)
2 p.m.
Expo Talk Panel:
(ends 3:00 PM)
3 p.m.
4 p.m.
5 p.m.
Expo Talk Panel:
(ends 6:00 PM)
6 p.m.
Expo Demonstration:
(ends 7:00 PM)
7 p.m.
Expo Demonstration:
(ends 8:00 PM)
8 p.m.
9 p.m.
Expo Demonstration:
(ends 10:00 PM)
MON 7 DEC
midnight
2:30 a.m.
3 a.m.
5:30 a.m.
Tutorial:
(ends 8:00 AM)
6 a.m.
8 a.m.
Tutorial:
(ends 10:30 AM)
Tutorial:
(ends 10:30 AM)
11 a.m.
Tutorial:
(ends 1:30 PM)
12:30 p.m.
1:30 p.m.
Tutorial:
(ends 4:00 PM)
Tutorial:
(ends 4:00 PM)
5 p.m.
Invited Talk: Charles Isbell (ends 7:00 PM)
6 p.m.
Each Oral includes Q&A. Spotlights have joint Q&As.
Orals 6:00-6:45
[6:00]
Learning Physical Graph Representations from Visual Scenes
[6:15]
Multi-label Contrastive Predictive Coding
[6:30]
Equivariant Networks for Hierarchical Structures
Break 6:45-7:00
Spotlights 7:00-7:40
[7:00]
On the Equivalence between Online and Private Learnability beyond Binary Classification
[7:10]
Variational Inference for Graph Convolutional Networks in the Absence of Graph Data and Adversarial Settings
[7:20]
Joint Contrastive Learning with Infinite Possibilities
[7:30]
Neural Methods for Point-wise Dependency Estimation
Q&A 7:40-7:50
[7:40]
Joint Q&A for Preceding Spotlights
Spotlights 7:50-8:30
[7:50]
Design Space for Graph Neural Networks
[8:00]
Debiased Contrastive Learning
[8:10]
The Autoencoding Variational Autoencoder
[8:20]
Unsupervised Representation Learning by Invariance Propagation
Q&A 8:30-8:40
[8:30]
Joint Q&A for Preceding Spotlights
Break 8:40-9:00
(ends 9:00 PM)
Each Oral includes Q&A. Spotlights have joint Q&As.
Orals 6:00-6:45
[6:00]
When and How to Lift the Lockdown? Global COVID-19 Scenario Analysis and Policy Assessment using Compartmental Gaussian Processes
[6:15]
Multi-Task Temporal Shift Attention Networks for On-Device Contactless Vitals Measurement
[6:30]
Neural encoding with visual attention
Break 6:45-7:00
Spotlights 7:00-7:40
[7:00]
Simulating a Primary Visual Cortex at the Front of CNNs Improves Robustness to Image Perturbations
[7:10]
Using noise to probe recurrent neural network structure and prune synapses
[7:20]
Interpretable Sequence Learning for Covid-19 Forecasting
[7:30]
Kalman Filtering Attention for User Behavior Modeling in CTR Prediction
Q&A 7:40-7:50
[7:40]
Joint Q&A for Preceding Spotlights
Spotlights 7:50-8:20
[7:50]
Demixed shared component analysis of neural population data from multiple brain areas
[8:00]
Minimax Optimal Nonparametric Estimation of Heterogeneous Treatment Effects
[8:10]
The Devil is in the Detail: A Framework for Macroscopic Prediction via Microscopic Models
Q&A 8:20-8:30
[8:20]
Joint Q&A for Preceding Spotlights
Break 8:30-9:00
(ends 9:00 PM)
Each Oral includes Q&A. Spotlights have joint Q&As.
Orals 6:00-6:45
[6:00]
Language Models are Few-Shot Learners
[6:15]
Glow-TTS: A Generative Flow for Text-to-Speech via Monotonic Alignment Search
[6:30]
The Cone of Silence: Speech Separation by Localization
Break 6:45-7:00
Spotlights 7:00-7:40
[7:00]
Unsupervised Sound Separation Using Mixture Invariant Training
[7:10]
Investigating Gender Bias in Language Models Using Causal Mediation Analysis
[7:20]
A Simple Language Model for Task-Oriented Dialogue
[7:30]
ConvBERT: Improving BERT with Span-based Dynamic Convolution
Q&A 7:40-7:50
[7:40]
Joint Q&A for Preceding Spotlights
Spotlights 7:50-8:30
[7:50]
Cross-lingual Retrieval for Iterative Self-Supervised Training
[8:00]
DynaBERT: Dynamic BERT with Adaptive Width and Depth
[8:10]
Incorporating Pragmatic Reasoning Communication into Emergent Language
[8:20]
De-Anonymizing Text by Fingerprinting Language Generation
Q&A 8:30-8:40
[8:30]
Joint Q&A for Preceding Spotlights
Break 8:40-9:00
(ends 9:00 PM)
Each Oral includes Q&A. Spotlights have joint Q&As.
Orals 6:00-6:45
[6:00]
An Efficient Asynchronous Method for Integrating Evolutionary and Gradient-based Policy Search
[6:15]
Novelty Search in Representational Space for Sample Efficient Exploration
[6:30]
Emergent Complexity and Zero-shot Transfer via Unsupervised Environment Design
Break 6:45-7:00
Spotlights 7:00-7:40
[7:00]
First Order Constrained Optimization in Policy Space
[7:10]
CoinDICE: Off-Policy Confidence Interval Estimation
[7:20]
DisCor: Corrective Feedback in Reinforcement Learning via Distribution Correction
[7:30]
Risk-Sensitive Reinforcement Learning: Near-Optimal Risk-Sample Tradeoff in Regret
Q&A 7:40-7:50
[7:40]
Joint Q&A for Preceding Spotlights
Spotlights 7:50-8:30
[7:50]
Provably Efficient Exploration for Reinforcement Learning Using Unsupervised Learning
[8:00]
Bayesian Multi-type Mean Field Multi-agent Imitation Learning
[8:10]
Model-Based Multi-Agent RL in Zero-Sum Markov Games with Near-Optimal Sample Complexity
[8:20]
Safe Reinforcement Learning via Curriculum Induction
Q&A 8:30-8:40
[8:30]
Joint Q&A for Preceding Spotlights
Break 8:40-9:00
(ends 9:00 PM)
9 p.m.
(ends 11:00 PM)
TUE 8 DEC
2 a.m.
3 a.m.
5 a.m.
6 a.m.
Tue demos repeat on Wed
Demonstrations 6:00-9:20 (duration 3.3 hr)
Each Oral includes Q&A. Spotlights have joint Q&As.
Orals 6:00-6:45
[6:00]
Exact Recovery of Mangled Clusters with Same-Cluster Queries
[6:15]
Deep Transformation-Invariant Clustering
[6:30]
Partially View-aligned Clustering
Break 6:45-7:00
Spotlights 7:00-7:40
[7:00]
Simple and Scalable Sparse k-means Clustering via Feature Ranking
[7:10]
Simultaneous Preference and Metric Learning from Paired Comparisons
[7:20]
Learning Optimal Representations with the Decodable Information Bottleneck
[7:30]
Manifold structure in graph embeddings
Q&A 7:40-7:50
[7:40]
Joint Q&A for Preceding Spotlights
Spotlights 7:50-8:20
[7:50]
Self-Supervised Learning by Cross-Modal Audio-Video Clustering
[8:00]
Classification with Valid and Adaptive Coverage
[8:10]
On ranking via sorting by estimated expected utility
Q&A 8:20-8:30
[8:20]
Joint Q&A for Preceding Spotlights
Break 8:30-9:00
(ends 9:00 AM)
Each Oral includes Q&A. Spotlights have joint Q&As.
Orals 6:00-6:45
[6:00]
Deep Energy-based Modeling of Discrete-Time Physics
[6:15]
SLIP: Learning to Predict in Unknown Dynamical Systems with Long-Term Memory
[6:30]
Dissecting Neural ODEs
Break 6:45-7:00
Spotlights 7:00-7:40
[7:00]
Robust Density Estimation under Besov IPM Losses
[7:10]
Almost Surely Stable Deep Dynamics
[7:20]
Hausdorff Dimension, Heavy Tails, and Generalization in Neural Networks
[7:30]
A Theoretical Framework for Target Propagation
Q&A 7:40-7:50
[7:40]
Joint Q&A for Preceding Spotlights
Spotlights 7:50-8:30
[7:50]
Training Generative Adversarial Networks by Solving Ordinary Differential Equations
[8:00]
Information theoretic limits of learning a sparse rule
[8:10]
Constant-Expansion Suffices for Compressed Sensing with Generative Priors
[8:20]
Logarithmic Pruning is All You Need
Q&A 8:30-8:40
[8:30]
Joint Q&A for Preceding Spotlights
Break 8:40-9:00
(ends 9:00 AM)
Each Oral includes Q&A. Spotlights have joint Q&As.
Orals 6:00-6:45
[6:00]
Deep Wiener Deconvolution: Wiener Meets Deep Learning for Image Deblurring
[6:15]
Causal Intervention for Weakly-Supervised Semantic Segmentation
[6:30]
Convolutional Generation of Textured 3D Meshes
Break 6:45-7:00
Spotlights 7:00-7:40
[7:00]
DISK: Learning local features with policy gradient
[7:10]
Wasserstein Distances for Stereo Disparity Estimation
[7:20]
Multiview Neural Surface Reconstruction by Disentangling Geometry and Appearance
[7:30]
Learning Semantic-aware Normalization for Generative Adversarial Networks
Q&A 7:40-7:50
[7:40]
Joint Q&A for Preceding Spotlights
Spotlights 7:50-8:30
[7:50]
Neural Sparse Voxel Fields
[8:00]
3D Multi-bodies: Fitting Sets of Plausible 3D Human Models to Ambiguous Image Data
[8:10]
Learning to Detect Objects with a 1 Megapixel Event Camera
[8:20]
A Ranking-based, Balanced Loss Function Unifying Classification and Localisation in Object Detection
Q&A 8:30-8:40
[8:30]
Joint Q&A for Preceding Spotlights
Break 8:40-9:00
(ends 9:00 AM)
Each Oral includes Q&A. Spotlights have joint Q&As.
Orals 6:00-6:45
[6:00]
Multiscale Deep Equilibrium Models
[6:15]
On the Modularity of Hypernetworks
[6:30]
Training Generative Adversarial Networks with Limited Data
Break 6:45-7:00
Spotlights 7:00-7:40
[7:00]
MeshSDF: Differentiable Iso-Surface Extraction
[7:10]
GAIT-prop: A biologically plausible learning rule derived from backpropagation of error
[7:20]
Monotone operator equilibrium networks
[7:30]
What Do Neural Networks Learn When Trained With Random Labels?
Q&A 7:40-7:50
[7:40]
Joint Q&A for Preceding Spotlights
Spotlights 7:50-8:30
[7:50]
H-Mem: Harnessing synaptic plasticity with Hebbian Memory Networks
[8:00]
ExpandNets: Linear Over-parameterization to Train Compact Convolutional Networks
[8:10]
The phase diagram of approximation rates for deep neural networks
[8:20]
Optimal Lottery Tickets via Subset Sum: Logarithmic Over-Parameterization is Sufficient
Q&A 8:30-8:40
[8:30]
Joint Q&A for Preceding Spotlights
Break 8:40-9:00
(ends 9:00 AM)
Each Oral includes Q&A. Spotlights have joint Q&As.
Orals 6:00-6:45
[6:00]
Improved Sample Complexity for Incremental Autonomous Exploration in MDPs
[6:15]
Escaping the Gravitational Pull of Softmax
[6:30]
FLAMBE: Structural Complexity and Representation Learning of Low Rank MDPs
Break 6:45-7:00
Spotlights 7:00-7:40
[7:00]
Interferobot: aligning an optical interferometer by a reinforcement learning agent
[7:10]
On Efficiency in Hierarchical Reinforcement Learning
[7:20]
Finite-Time Analysis for Double Q-learning
[7:30]
Towards Minimax Optimal Reinforcement Learning in Factored Markov Decision Processes
Q&A 7:40-7:50
[7:40]
Joint Q&A for Preceding Spotlights
Spotlights 7:50-8:30
[7:50]
Efficient Model-Based Reinforcement Learning through Optimistic Policy Search and Planning
[8:00]
Model-based Policy Optimization with Unsupervised Model Adaptation
[8:10]
Variational Policy Gradient Method for Reinforcement Learning with General Utilities
[8:20]
Sample-Efficient Reinforcement Learning of Undercomplete POMDPs
Q&A 8:30-8:40
[8:30]
Joint Q&A for Preceding Spotlights
Break 8:40-9:00
(ends 9:00 AM)
Each Oral includes Q&A. Spotlights have joint Q&As.
Orals 6:00-6:45
[6:00]
Adversarially Robust Streaming Algorithms via Differential Privacy
[6:15]
Differentially Private Clustering: Tight Approximation Ratios
[6:30]
Locally private non-asymptotic testing of discrete distributions is faster using interactive mechanisms
Break 6:45-7:00
Spotlights 7:00-7:40
[7:00]
Multi-Robot Collision Avoidance under Uncertainty with Probabilistic Safety Barrier Certificates
[7:10]
Private Identity Testing for High-Dimensional Distributions
[7:20]
Permute-and-Flip: A new mechanism for differentially private selection
[7:30]
Smoothed Analysis of Online and Differentially Private Learning
Q&A 7:40-7:50
[7:40]
Joint Q&A for Preceding Spotlights
Spotlights 7:50-8:20
[7:50]
Optimal Private Median Estimation under Minimal Distributional Assumptions
[8:00]
Assisted Learning: A Framework for Multi-Organization Learning
[8:10]
Higher-Order Certification For Randomized Smoothing
Q&A 8:20-8:30
[8:20]
Joint Q&A for Preceding Spotlights
Break 8:30-9:00
(ends 9:00 AM)
Each Oral includes Q&A. Spotlights have joint Q&As.
Orals 6:00-6:45
[6:00]
No-Regret Learning Dynamics for Extensive-Form Correlated Equilibrium
[6:15]
Efficient active learning of sparse halfspaces with arbitrary bounded noise
[6:30]
Learning Parities with Neural Networks
Break 6:45-7:00
Spotlights 7:00-7:40
[7:00]
The Adaptive Complexity of Maximizing a Gross Substitutes Valuation
[7:10]
Hitting the High Notes: Subset Selection for Maximizing Expected Order Statistics
[7:20]
A Bandit Learning Algorithm and Applications to Auction Design
[7:30]
An Optimal Elimination Algorithm for Learning a Best Arm
Q&A 7:40-7:50
[7:40]
Joint Q&A for Preceding Spotlights
Spotlights 7:50-8:40
[7:50]
Second Order PAC-Bayesian Bounds for the Weighted Majority Vote
[8:00]
PAC-Bayesian Bound for the Conditional Value at Risk
[8:10]
Classification Under Misspecification: Halfspaces, Generalized Linear Models, and Evolvability
[8:20]
Hedging in games: Faster convergence of external and swap regrets
[8:30]
Online Bayesian Persuasion
Q&A 8:40-8:50
[8:40]
Joint Q&A for Preceding Spotlights
Break 8:50-9:00
(ends 9:00 AM)
7 a.m.
7:30 a.m.
9 a.m.
noon
Tutorial:
(ends 12:50 PM)
1 p.m.
2 p.m.
Tutorial:
(ends 2:50 PM)
5 p.m.
Invited Talk: Shafi Goldwasser (ends 7:00 PM)
6 p.m.
Each Oral includes Q&A. Spotlights have joint Q&As.
Orals 6:00-6:45
[6:00]
Space-Time Correspondence as a Contrastive Random Walk
[6:15]
Rethinking Pre-training and Self-training
[6:30]
Do Adversarially Robust ImageNet Models Transfer Better?
Break 6:45-7:00
Spotlights 7:00-7:40
[7:00]
Self-Supervised Visual Representation Learning from Hierarchical Grouping
[7:10]
Learning Affordance Landscapes for Interaction Exploration in 3D Environments
[7:20]
Rel3D: A Minimally Contrastive Benchmark for Grounding Spatial Relations in 3D
[7:30]
Large-Scale Adversarial Training for Vision-and-Language Representation Learning
Q&A 7:40-7:50
[7:40]
Joint Q&A for Preceding Spotlights
Spotlights 7:50-8:20
[7:50]
Measuring Robustness to Natural Distribution Shifts in Image Classification
[8:00]
Curriculum By Smoothing
[8:10]
Fewer is More: A Deep Graph Metric Learning Perspective Using Fewer Proxies
Q&A 8:20-8:30
[8:20]
Joint Q&A for Preceding Spotlights
Break 8:30-9:00
(ends 9:00 PM)
Tue demos repeat on Wed
Demonstrations 6:00-9:20 (duration 3.3 hr)
Each Oral includes Q&A. Spotlights have joint Q&As.
Orals 6:00-6:45
[6:00]
Implicit Neural Representations with Periodic Activation Functions
[6:15]
Pixel-Level Cycle Association: A New Perspective for Domain Adaptive Semantic Segmentation
[6:30]
Coupling-based Invertible Neural Networks Are Universal Diffeomorphism Approximators
Break 6:45-7:00
Spotlights 7:00-7:40
[7:00]
MetaPerturb: Transferable Regularizer for Heterogeneous Tasks and Architectures
[7:10]
Robust Recovery via Implicit Bias of Discrepant Learning Rates for Double Over-parameterization
[7:20]
Compositional Visual Generation with Energy Based Models
[7:30]
Certified Monotonic Neural Networks
Q&A 7:40-7:50
[7:40]
Joint Q&A for Preceding Spotlights
Spotlights 7:50-8:30
[7:50]
Robust Sub-Gaussian Principal Component Analysis and Width-Independent Schatten Packing
[8:00]
On Correctness of Automatic Differentiation for Non-Differentiable Functions
[8:10]
The Complete Lasso Tradeoff Diagram
[8:20]
Quantifying the Empirical Wasserstein Distance to a Set of Measures: Beating the Curse of Dimensionality
Q&A 8:30-8:40
[8:30]
Joint Q&A for Preceding Spotlights
Break 8:40-9:00
(ends 9:00 PM)
Each Oral includes Q&A. Spotlights have joint Q&As.
Orals 6:00-6:45
[6:00]
Rewriting History with Inverse RL: Hindsight Inference for Policy Improvement
[6:15]
Learning Individually Inferred Communication for Multi-Agent Cooperation
[6:30]
Can Temporal-Difference and Q-Learning Learn Representation? A Mean-Field Theory
Break 6:45-7:00
Spotlights 7:00-7:40
[7:00]
Reinforcement Learning with Augmented Data
[7:10]
Sub-sampling for Efficient Non-Parametric Bandit Exploration
[7:20]
Language-Conditioned Imitation Learning for Robot Manipulation Tasks
[7:30]
High-Dimensional Contextual Policy Search with Unknown Context Rewards using Bayesian Optimization
Q&A 7:40-7:50
[7:40]
Joint Q&A for Preceding Spotlights
Spotlights 7:50-8:30
[7:50]
Policy Improvement via Imitation of Multiple Oracles
[8:00]
Generating Adjacency-Constrained Subgoals in Hierarchical Reinforcement Learning
[8:10]
Avoiding Side Effects in Complex Environments
[8:20]
Preference-based Reinforcement Learning with Finite-Time Guarantees
Q&A 8:30-8:40
[8:30]
Joint Q&A for Preceding Spotlights
Break 8:40-9:00
(ends 9:00 PM)
9 p.m.
(ends 11:00 PM)
WED 9 DEC
1 a.m.
1:40 a.m.
3 a.m.
5 a.m.
6 a.m.
Tue demos repeat on Wed
Demonstrations 6:00-9:20 (duration 3.3 hr)
Each Oral includes Q&A. Spotlights have joint Q&As.
Orals 6:00-6:45
[6:00]
High-Fidelity Generative Image Compression
[6:15]
Learning Composable Energy Surrogates for PDE Order Reduction
[6:30]
Hierarchically Organized Latent Modules for Exploratory Search in Morphogenetic Systems
Break 6:45-7:00
Spotlights 7:00-7:40
[7:00]
Compositional Generalization by Learning Analytical Expressions
[7:10]
Modern Hopfield Networks and Attention for Immune Repertoire Classification
[7:20]
ICE-BeeM: Identifiable Conditional Energy-Based Deep Models Based on Nonlinear ICA
[7:30]
A causal view of compositional zero-shot recognition
Q&A 7:40-7:50
[7:40]
Joint Q&A for Preceding Spotlights
Spotlights 7:50-8:40
[7:50]
RetroXpert: Decompose Retrosynthesis Prediction Like A Chemist
[8:00]
Barking up the right tree: an approach to search over molecule synthesis DAGs
[8:10]
Learning Object-Centric Representations of Multi-Object Scenes from Multiple Views
[8:20]
Experimental design for MRI by greedy policy search
[8:30]
How Robust are the Estimated Effects of Nonpharmaceutical Interventions against COVID-19?
Q&A 8:40-8:50
[8:40]
Joint Q&A for Preceding Spotlights
Break 8:50-9:00
(ends 9:00 AM)
Each Oral includes Q&A. Spotlights have joint Q&As.
Orals 6:00-6:45
[6:00]
Continual Deep Learning by Functional Regularisation of Memorable Past
[6:15]
Look-ahead Meta Learning for Continual Learning
[6:30]
NeuMiss networks: differentiable programming for supervised learning with missing values.
Break 6:45-7:00
Spotlights 7:00-7:40
[7:00]
Meta-trained agents implement Bayes-optimal agents
[7:10]
Linear Dynamical Systems as a Core Computational Primitive
[7:20]
Bayesian Meta-Learning for the Few-Shot Setting via Deep Kernels
[7:30]
Uncertainty-aware Self-training for Few-shot Text Classification
Q&A 7:40-7:50
[7:40]
Joint Q&A for Preceding Spotlights
Spotlights 7:50-8:40
[7:50]
HiPPO: Recurrent Memory with Optimal Polynomial Projections
[8:00]
Efficient Marginalization of Discrete and Structured Latent Variables via Sparsity
[8:10]
Leap-Of-Thought: Teaching Pre-Trained Models to Systematically Reason Over Implicit Knowledge
[8:20]
Bongard-LOGO: A New Benchmark for Human-Level Concept Learning and Reasoning
[8:30]
Instead of Rewriting Foreign Code for Machine Learning, Automatically Synthesize Fast Gradients
Q&A 8:40-8:50
[8:40]
Joint Q&A for Preceding Spotlights
Break 8:50-9:00
(ends 9:00 AM)
Each Oral includes Q&A. Spotlights have joint Q&As.
Orals 6:00-6:45
[6:00]
Learning with Operator-valued Kernels in Reproducing Kernel Krein Spaces
[6:15]
Kernel Methods Through the Roof: Handling Billions of Points Efficiently
[6:30]
A Group-Theoretic Framework for Data Augmentation
Break 6:45-7:00
Spotlights 7:00-7:40
[7:00]
A mathematical model for automatic differentiation in machine learning
[7:10]
A kernel test for quasi-independence
[7:20]
Fourier Sparse Leverage Scores and Approximate Kernel Learning
[7:30]
BOSS: Bayesian Optimization over String Spaces
Q&A 7:40-7:50
[7:40]
Joint Q&A for Preceding Spotlights
Spotlights 7:50-8:30
[7:50]
Fast geometric learning with symbolic matrices
[8:00]
Training Stronger Baselines for Learning to Optimize
[8:10]
Learning Linear Programs from Optimal Decisions
[8:20]
Automatically Learning Compact Quality-aware Surrogates for Optimization Problems
Q&A 8:30-8:40
[8:30]
Joint Q&A for Preceding Spotlights
Break 8:40-9:00
(ends 9:00 AM)
Each Oral includes Q&A. Spotlights have joint Q&As.
Orals 6:00-6:45
[6:00]
Ultra-Low Precision 4-bit Training of Deep Neural Networks
[6:15]
Reservoir Computing meets Recurrent Kernels and Structured Transforms
[6:30]
The interplay between randomness and structure during learning in RNNs
Break 6:45-7:00
Spotlights 7:00-7:40
[7:00]
What if Neural Networks had SVDs?
[7:10]
Practical Quasi-Newton Methods for Training Deep Neural Networks
[7:20]
Triple descent and the two kinds of overfitting: where & why do they appear?
[7:30]
On the linearity of large non-linear models: when and why the tangent kernel is constant
Q&A 7:40-7:50
[7:40]
Joint Q&A for Preceding Spotlights
Spotlights 7:50-8:20
[7:50]
Implicit Bias in Deep Linear Classification: Initialization Scale vs Training Accuracy
[8:00]
Proximal Mapping for Deep Regularization
[8:10]
BoxE: A Box Embedding Model for Knowledge Base Completion
Q&A 8:20-8:30
[8:20]
Joint Q&A for Preceding Spotlights
Break 8:30-9:00
(ends 9:00 AM)
Each Oral includes Q&A. Spotlights have joint Q&As.
Orals 6:00-6:45
[6:00]
Network-to-Network Translation with Conditional Invertible Neural Networks
[6:15]
Causal Imitation Learning With Unobserved Confounders
[6:30]
Gradient Estimation with Stochastic Softmax Tricks
Break 6:45-7:00
Spotlights 7:00-7:40
[7:00]
Generalized Independent Noise Condition for Estimating Latent Variable Causal Graphs
[7:10]
A Randomized Algorithm to Reduce the Support of Discrete Measures
[7:20]
A/B Testing in Dense Large-Scale Networks: Design and Inference
[7:30]
DisARM: An Antithetic Gradient Estimator for Binary Latent Variables
Q&A 7:40-7:50
[7:40]
Joint Q&A for Preceding Spotlights
Spotlights 7:50-8:40
[7:50]
Off-Policy Evaluation and Learning for External Validity under a Covariate Shift
[8:00]
Sense and Sensitivity Analysis: Simple Post-Hoc Analysis of Bias Due to Unobserved Confounding
[8:10]
Differentiable Causal Discovery from Interventional Data
[8:20]
Bayesian Causal Structural Learning with Zero-Inflated Poisson Bayesian Networks
[8:30]
Efficient semidefinite-programming-based inference for binary and multi-class MRFs
Q&A 8:40-8:50
[8:40]
Joint Q&A for Preceding Spotlights
Break 8:50-9:00
(ends 9:00 AM)
Each Oral includes Q&A. Spotlights have joint Q&As.
Orals 6:00-6:45
[6:00]
DVERGE: Diversifying Vulnerabilities for Enhanced Robust Generation of Ensembles
[6:15]
Metric-Free Individual Fairness in Online Learning
[6:30]
Fair regression via plug-in estimator and recalibration with statistical guarantees
Break 6:45-7:00
Spotlights 7:00-7:40
[7:00]
Explaining Naive Bayes and Other Linear Classifiers with Polynomial Time and Delay
[7:10]
Differentially-Private Federated Linear Bandits
[7:20]
Adversarial Training is a Form of Data-dependent Operator Norm Regularization
[7:30]
Prediction with Corrupted Expert Advice
Q&A 7:40-7:50
[7:40]
Joint Q&A for Preceding Spotlights
Spotlights 7:50-8:40
[7:50]
Guided Adversarial Attack for Evaluating and Enhancing Adversarial Defenses
[8:00]
Towards Safe Policy Improvement for Non-Stationary MDPs
[8:10]
Robust Deep Reinforcement Learning against Adversarial Perturbations on State Observations
[8:20]
Algorithmic recourse under imperfect causal knowledge: a probabilistic approach
[8:30]
Understanding Gradient Clipping in Private SGD: A Geometric Perspective
Q&A 8:40-8:50
[8:40]
Joint Q&A for Preceding Spotlights
Break 8:50-9:00
(ends 9:00 AM)
Each Oral includes Q&A. Spotlights have joint Q&As.
Orals 6:00-7:00
[6:00]
Hogwild!: A Lock-Free Approach to Parallelizing Stochastic Gradient Descent
[6:15]
Entropic Optimal Transport between Unbalanced Gaussian Measures has a Closed Form
[6:30]
Acceleration with a Ball Optimization Oracle
[6:45]
Convex optimization based on global lower second-order models
Spotlights 7:00-7:40
[7:00]
Adam with Bandit Sampling for Deep Learning
[7:10]
Explore Aggressively, Update Conservatively: Stochastic Extragradient Methods with Variable Stepsize Scaling
[7:20]
IDEAL: Inexact DEcentralized Accelerated Augmented Lagrangian Method
[7:30]
Revisiting Frank-Wolfe for Polytopes: Strict Complementarity and Sparsity
Q&A 7:40-7:50
[7:40]
Joint Q&A for Preceding Spotlights
Spotlights 7:50-8:40
[7:50]
Minibatch Stochastic Approximate Proximal Point Methods
[8:00]
Finding Second-Order Stationary Points Efficiently in Smooth Nonconvex Linearly Constrained Optimization Problems
[8:10]
Least Squares Regression with Markovian Data: Fundamental Limits and Algorithms
[8:20]
Linearly Converging Error Compensated SGD
[8:30]
Learning Augmented Energy Minimization via Speed Scaling
Q&A 8:40-8:50
[8:40]
Joint Q&A for Preceding Spotlights
Break 8:50-9:00
(ends 9:00 AM)
9 a.m.
11 a.m.
noon
Tutorial:
(ends 12:50 PM)
2 p.m.
5 p.m.
6 p.m.
Tue demos repeat on Wed
Demonstrations 6:00-9:20 (duration 3.3 hr)
Each Oral includes Q&A. Spotlights have joint Q&As.
Orals 6:00-6:45
[6:00]
Learning Implicit Functions for Topology-Varying Dense 3D Shape Correspondence
[6:15]
LoopReg: Self-supervised Learning of Implicit Surface Correspondences, Pose and Shape for 3D Human Mesh Registration
[6:30]
The Origins and Prevalence of Texture Bias in Convolutional Neural Networks
Break 6:45-7:00
Spotlights 7:00-7:40
[7:00]
Distribution Matching for Crowd Counting
[7:10]
Texture Interpolation for Probing Visual Perception
[7:20]
Consistent Structural Relation Learning for Zero-Shot Segmentation
[7:30]
CaSPR: Learning Canonical Spatiotemporal Point Cloud Representations
Q&A 7:40-7:50
[7:40]
Joint Q&A for Preceding Spotlights
Spotlights 7:50-8:30
[7:50]
ShapeFlow: Learnable Deformation Flows Among 3D Shapes
[8:00]
Neural Mesh Flow: 3D Manifold Mesh Generation via Diffeomorphic Flows
[8:10]
Counterfactual Vision-and-Language Navigation: Unravelling the Unseen
[8:20]
RelationNet++: Bridging Visual Representations for Object Detection via Transformer Decoder
Q&A 8:30-8:40
[8:30]
Joint Q&A for Preceding Spotlights
Break 8:40-9:00
(ends 9:00 PM)
Each Oral includes Q&A. Spotlights have joint Q&As.
Orals 6:00-6:45
[6:00]
FrugalML: How to use ML Prediction APIs more accurately and cheaply
[6:15]
AI Feynman 2.0: Pareto-optimal symbolic regression exploiting graph modularity
[6:30]
PyGlove: Symbolic Programming for Automated Machine Learning
Break 6:45-7:00
Spotlights 7:00-7:40
[7:00]
Improved Schemes for Episodic Memory-based Lifelong Learning
[7:10]
Spectral Temporal Graph Neural Network for Multivariate Time-series Forecasting
[7:20]
Uncertainty Aware Semi-Supervised Learning on Graph Data
[7:30]
Rethinking Importance Weighting for Deep Learning under Distribution Shift
Q&A 7:40-7:50
[7:40]
Joint Q&A for Preceding Spotlights
Spotlights 7:50-8:20
[7:50]
Modular Meta-Learning with Shrinkage
[8:00]
JAX MD: A Framework for Differentiable Physics
[8:10]
RNNPool: Efficient Non-linear Pooling for RAM Constrained Inference
Q&A 8:20-8:30
[8:20]
Joint Q&A for Preceding Spotlights
Break 8:30-9:00
(ends 9:00 PM)
Each Oral includes Q&A. Spotlights have joint Q&As.
Orals 6:00-6:45
[6:00]
Improved guarantees and a multiple-descent curve for Column Subset Selection and the Nystrom method
[6:15]
Bias no more: high-probability data-dependent regret bounds for adversarial bandits and MDPs
[6:30]
Worst-Case Analysis for Randomly Collected Data
Break 6:45-7:00
Spotlights 7:00-7:40
[7:00]
On Adaptive Distance Estimation
[7:10]
Tight First- and Second-Order Regret Bounds for Adversarial Linear Bandits
[7:20]
Delay and Cooperation in Nonstochastic Linear Bandits
[7:30]
Unreasonable Effectiveness of Greedy Algorithms in Multi-Armed Bandit with Many Arms
Q&A 7:40-7:50
[7:40]
Joint Q&A for Preceding Spotlights
Spotlights 7:50-8:40
[7:50]
Simultaneously Learning Stochastic and Adversarial Episodic MDPs with Known Transition
[8:00]
A Tight Lower Bound and Efficient Reduction for Swap Regret
[8:10]
Estimation of Skill Distribution from a Tournament
[8:20]
Optimal Prediction of the Number of Unseen Species with Multiplicity
[8:30]
Estimating Rank-One Spikes from Heavy-Tailed Noise via Self-Avoiding Walks
Q&A 8:40-8:50
[8:40]
Joint Q&A for Preceding Spotlights
Break 8:50-9:00
(ends 9:00 PM)
THU 10 DEC
midnight
2 a.m.
3 a.m.
Tutorial:
(ends 3:50 AM)
4 a.m.
5 a.m.
6 a.m.
Each Oral includes Q&A. Spotlights have joint Q&As.
Orals 6:00-6:45
[6:00]
Training Normalizing Flows with the Information Bottleneck for Competitive Generative Classification
[6:15]
Fast and Flexible Temporal Point Processes with Triangular Maps
[6:30]
Greedy inference with structure-exploiting lazy maps
Break 6:45-7:00
Spotlights 7:00-7:40
[7:00]
Sampling from a k-DPP without looking at all items
[7:10]
Non-parametric Models for Non-negative Functions
[7:20]
Distribution-free binary classification: prediction sets, confidence intervals and calibration
[7:30]
Factor Graph Grammars
Q&A 7:40-7:50
[7:40]
Joint Q&A for Preceding Spotlights
Spotlights 7:50-8:40
[7:50]
Asymptotically Optimal Exact Minibatch Metropolis-Hastings
[8:00]
Bayes Consistency vs. H-Consistency: The Interplay between Surrogate Loss Functions and the Scoring Function Class
[8:10]
Confidence sequences for sampling without replacement
[8:20]
Statistical and Topological Properties of Sliced Probability Divergences
[8:30]
Testing Determinantal Point Processes
Q&A 8:40-8:50
[8:40]
Joint Q&A for Preceding Spotlights
Break 8:50-9:00
(ends 9:00 AM)
Each Oral includes Q&A. Spotlights have joint Q&As.
Orals 6:00-6:45
[6:00]
Graph Cross Networks with Vertex Infomax Pooling
[6:15]
Erdos Goes Neural: an Unsupervised Learning Framework for Combinatorial Optimization on Graphs
[6:30]
Graph Random Neural Networks for Semi-Supervised Learning on Graphs
Break 6:45-7:00
Spotlights 7:00-7:20
[7:00]
Learning Graph Structure With A Finite-State Automaton Layer
[7:10]
Pointer Graph Networks
Q&A 7:20-7:30
[7:20]
Joint Q&A for Preceding Spotlights
Spotlights 7:30-7:40
[7:30]
Certified Robustness of Graph Convolution Networks for Graph Classification under Topological Attacks
Q&A 7:40-7:50
[7:40]
Joint Q&A for Preceding Spotlights
Spotlights 7:50-8:30
[7:50]
Convergence and Stability of Graph Convolutional Networks on Large Random Graphs
[8:00]
Fourier Features Let Networks Learn High Frequency Functions in Low Dimensional Domains
[8:10]
Most ReLU Networks Suffer from $\ell^2$ Adversarial Perturbations
[8:20]
Beyond Perturbations: Learning Guarantees with Arbitrary Adversarial Test Examples
Q&A 8:30-8:40
[8:30]
Joint Q&A for Preceding Spotlights
Break 8:40-9:00
(ends 9:00 AM)
Each Oral includes Q&A. Spotlights have joint Q&As.
Orals 6:00-6:45
[6:00]
Contrastive learning of global and local features for medical image segmentation with limited annotations
[6:15]
Bootstrap Your Own Latent - A New Approach to Self-Supervised Learning
[6:30]
SurVAE Flows: Surjections to Bridge the Gap between VAEs and Flows
Break 6:45-7:00
Spotlights 7:00-7:40
[7:00]
Self-Supervised Relational Reasoning for Representation Learning
[7:10]
Object-Centric Learning with Slot Attention
[7:20]
Telescoping Density-Ratio Estimation
[7:30]
Probabilistic Inference with Algebraic Constraints: Theoretical Limits and Practical Approximations
Q&A 7:40-7:50
[7:40]
Joint Q&A for Preceding Spotlights
Spotlights 7:50-8:30
[7:50]
Path Sample-Analytic Gradient Estimators for Stochastic Binary Networks
[8:00]
Stochastic Normalizing Flows
[8:10]
Generative Neurosymbolic Machines
[8:20]
DAGs with No Fears: A Closer Look at Continuous Optimization for Learning Bayesian Networks
Q&A 8:30-8:40
[8:30]
Joint Q&A for Preceding Spotlights
Break 8:40-9:00
(ends 9:00 AM)
Each Oral includes Q&A
Spotlights have joint Q&As
Spotlights have joint Q&As
Orals 6:00-6:45
[6:00]
A shooting formulation of deep learning
[6:15]
On the training dynamics of deep networks with $L_2$ regularization
[6:30]
Compositional Explanations of Neurons
Breaks 6:45-7:00
Spotlights 7:00-7:40
[7:00]
Simplifying Hamiltonian and Lagrangian Neural Networks via Explicit Constraints
[7:10]
On Power Laws in Deep Ensembles
[7:20]
Learning the Geometry of Wave-Based Imaging
[7:30]
The Surprising Simplicity of the Early-Time Learning Dynamics of Neural Networks
Q&As 7:40-7:50
[7:40]
Joint Q&A for Preceding Spotlights
Spotlights 7:50-8:30
[7:50]
Sparse and Continuous Attention Mechanisms
[8:00]
Temporal Spike Sequence Learning via Backpropagation for Deep Spiking Neural Networks
[8:10]
Directional convergence and alignment in deep learning
[8:20]
Neural Controlled Differential Equations for Irregular Time Series
Q&As 8:30-8:40
[8:30]
Joint Q&A for Preceding Spotlights
Breaks 8:40-9:00
(ends 9:00 AM)
Each Oral includes Q&A
Spotlights have joint Q&As
Orals 6:00-6:45
[6:00]
Learning abstract structure for drawing by efficient motor program induction
[6:15]
Non-reversible Gaussian processes for identifying latent dynamical structure in neural data
[6:30]
Gibbs Sampling with People
Breaks 6:45-7:00
Spotlights 7:00-7:40
[7:00]
Stable and expressive recurrent vision models
[7:10]
Identifying Learning Rules From Neural Network Observables
[7:20]
A new inference approach for training shallow and deep generalized linear models of noisy interacting neurons
[7:30]
Modeling Shared responses in Neuroimaging Studies through MultiView ICA
Q&As 7:40-7:50
[7:40]
Joint Q&A for Preceding Spotlights
Spotlights 7:50-8:30
[7:50]
Patch2Self: Denoising Diffusion MRI with Self-Supervised Learning
[8:00]
Uncovering the Topology of Time-Varying fMRI Data using Cubical Persistence
[8:10]
System Identification with Biophysical Constraints: A Circuit Model of the Inner Retina
[8:20]
A meta-learning approach to (re)discover plasticity rules that carve a desired function into a neural network
Q&As 8:30-8:40
[8:30]
Joint Q&A for Preceding Spotlights
Breaks 8:40-9:00
(ends 9:00 AM)
Each Oral includes Q&A
Spotlights have joint Q&As
Orals 6:00-6:45
[6:00]
Black-Box Ripper: Copying black-box models using generative evolutionary algorithms
[6:15]
Towards a Better Global Loss Landscape of GANs
[6:30]
Online Sinkhorn: Optimal Transport distances from sample streams
Breaks 6:45-7:00
Spotlights 7:00-7:40
[7:00]
Stability of Stochastic Gradient Descent on Nonsmooth Convex Losses
[7:10]
Optimal Approximation - Smoothness Tradeoffs for Soft-Max Functions
[7:20]
Conformal Symplectic and Relativistic Optimization
[7:30]
Random Reshuffling is Not Always Better
Q&As 7:40-7:50
[7:40]
Joint Q&A for Preceding Spotlights
Spotlights 7:50-8:30
[7:50]
The Statistical Complexity of Early-Stopped Mirror Descent
[8:00]
Overfitting Can Be Harmless for Basis Pursuit, But Only to a Degree
[8:10]
Towards Problem-dependent Optimal Learning Rates
[8:20]
On Uniform Convergence and Low-Norm Interpolation Learning
Q&As 8:30-8:40
[8:30]
Joint Q&A for Preceding Spotlights
Breaks 8:40-9:00
(ends 9:00 AM)
Each Oral includes Q&A
Spotlights have joint Q&As
Orals 6:00-6:45
[6:00]
Robust-Adaptive Control of Linear Systems: beyond Quadratic Costs
[6:15]
Self-Paced Deep Reinforcement Learning
[6:30]
Leverage the Average: an Analysis of KL Regularization in Reinforcement Learning
Breaks 6:45-7:00
Spotlights 7:00-7:40
[7:00]
Bandit Linear Control
[7:10]
Neural Dynamic Policies for End-to-End Sensorimotor Learning
[7:20]
Effective Diversity in Population Based Reinforcement Learning
[7:30]
Adversarial Soft Advantage Fitting: Imitation Learning without Policy Optimization
Q&As 7:40-7:50
[7:40]
Joint Q&A for Preceding Spotlights
Spotlights 7:50-8:30
[7:50]
Reward Propagation Using Graph Convolutional Networks
[8:00]
On the Convergence of Smooth Regularized Approximate Value Iteration Schemes
[8:10]
Latent World Models For Intrinsically Motivated Exploration
[8:20]
Learning to Play No-Press Diplomacy with Best Response Policy Iteration
Q&As 8:30-8:40
[8:30]
Joint Q&A for Preceding Spotlights
Breaks 8:40-9:00
(ends 9:00 AM)
9 a.m.
Analytical Probability Distributions and Exact Expectation-Maximization for Deep Generative Networks
Training Normalizing Flows with the Information Bottleneck for Competitive Generative Classification
Log-Likelihood Ratio Minimizing Flows: Towards Robust and Quantifiable Neural Distribution Alignment
Simple and Principled Uncertainty Estimation with Deterministic Deep Learning via Distance Awareness
(ends 11:00 AM)
6 p.m.
Each Oral includes Q&A
Spotlights have joint Q&As
Orals 6:00-6:45
[6:00]
Effective Dimension Adaptive Sketching Methods for Faster Regularized Least-Squares Optimization
[6:15]
The Primal-Dual method for Learning Augmented Algorithms
[6:30]
Fully Dynamic Algorithm for Constrained Submodular Optimization
Breaks 6:45-7:00
Spotlights 7:00-7:40
[7:00]
Submodular Maximization Through Barrier Functions
[7:10]
Projection Efficient Subgradient Method and Optimal Nonsmooth Frank-Wolfe Method
[7:20]
A Single Recipe for Online Submodular Maximization with Adversarial or Stochastic Constraints
[7:30]
How many samples is a good initial point worth in Low-rank Matrix Recovery?
Q&As 7:40-7:50
[7:40]
Joint Q&A for Preceding Spotlights
Spotlights 7:50-8:30
[7:50]
Projection Robust Wasserstein Distance and Riemannian Optimization
[8:00]
A Continuous-Time Mirror Descent Approach to Sparse Phase Retrieval
[8:10]
SGD with shuffling: optimal rates without component convexity and large epoch requirements
[8:20]
No-Regret Learning and Mixed Nash Equilibria: They Do Not Mix
Q&As 8:30-8:40
[8:30]
Joint Q&A for Preceding Spotlights
Breaks 8:40-9:00
(ends 9:00 PM)
Each Oral includes Q&A
Spotlights have joint Q&As
Orals 6:00-6:45
[6:00]
Theory-Inspired Path-Regularized Differential Network Architecture Search
[6:15]
Improved Variational Bayesian Phylogenetic Inference with Normalizing Flows
[6:30]
Transferable Graph Optimizers for ML Compilers
Breaks 6:45-7:00
Spotlights 7:00-7:40
[7:00]
A Study on Encodings for Neural Architecture Search
[7:10]
Interstellar: Searching Recurrent Architecture for Knowledge Graph Embedding
[7:20]
Evolving Normalization-Activation Layers
[7:30]
Open Graph Benchmark: Datasets for Machine Learning on Graphs
Q&As 7:40-7:50
[7:40]
Joint Q&A for Preceding Spotlights
Spotlights 7:50-8:20
[7:50]
Nimble: Lightweight and Parallel GPU Task Scheduling for Deep Learning
[8:00]
MCUNet: Tiny Deep Learning on IoT Devices
[8:10]
Computing Valid p-value for Optimal Changepoint by Selective Inference using Dynamic Programming
Q&As 8:20-8:30
[8:20]
Joint Q&A for Preceding Spotlights
Breaks 8:30-9:00
(ends 9:00 PM)
Each Oral includes Q&A
Spotlights have joint Q&As
Orals 6:00-6:45
[6:00]
Is normalization indispensable for training deep neural network?
[6:15]
Understanding Approximate Fisher Information for Fast Convergence of Natural Gradient Descent in Wide Neural Networks
[6:30]
Spectra of the Conjugate Kernel and Neural Tangent Kernel for linear-width neural networks
Breaks 6:45-7:00
Spotlights 7:00-7:40
[7:00]
Generalization bound of globally optimal non-convex neural network training: Transportation map estimation by infinite dimensional Langevin dynamics
[7:10]
Kernel Based Progressive Distillation for Adder Neural Networks
[7:20]
What Neural Networks Memorize and Why: Discovering the Long Tail via Influence Estimation
[7:30]
Collegial Ensembles
Q&As 7:40-7:50
[7:40]
Joint Q&A for Preceding Spotlights
Spotlights 7:50-8:30
[7:50]
Finite Versus Infinite Neural Networks: an Empirical Study
[8:00]
Estimating Training Data Influence by Tracing Gradient Descent
[8:10]
AdaBelief Optimizer: Adapting Stepsizes by the Belief in Observed Gradients
[8:20]
Part-dependent Label Noise: Towards Instance-dependent Label Noise
Q&As 8:30-8:40
[8:30]
Joint Q&A for Preceding Spotlights
Breaks 8:40-9:00
(ends 9:00 PM)
Each Oral includes Q&A
Spotlights have joint Q&As
Orals 6:00-6:45
[6:00]
Point process models for sequence detection in high-dimensional neural spike trains
[6:15]
Reconstructing Perceptive Images from Brain Activity by Shape-Semantic GAN
[6:30]
A mathematical theory of cooperative communication
Breaks 6:45-7:00
Spotlights 7:00-7:40
[7:00]
Learning Some Popular Gaussian Graphical Models without Condition Number Bounds
[7:10]
Sinkhorn Natural Gradient for Generative Models
[7:20]
NVAE: A Deep Hierarchical Variational Autoencoder
[7:30]
Reciprocal Adversarial Learning via Characteristic Functions
Q&As 7:40-7:50
[7:40]
Joint Q&A for Preceding Spotlights
Spotlights 7:50-8:20
[7:50]
Incorporating Interpretable Output Constraints in Bayesian Neural Networks
[8:00]
Baxter Permutation Process
[8:10]
Flexible mean field variational inference using mixtures of non-overlapping exponential families
Q&As 8:20-8:30
[8:20]
Joint Q&A for Preceding Spotlights
Breaks 8:30-9:00
(ends 9:00 PM)
9 p.m.
(ends 11:00 PM)
FRI 11 DEC
midnight
Workshops 11:00-4:00
(ends 2:59 PM)
4 p.m.
Workshops 1:00-6:30
(ends 3:59 PM)