Timezone: America/Los_Angeles
SUN 5 DEC
MON 6 DEC
1 a.m.  Tutorial (ends 4:30 AM)
5 a.m.  Tutorial (ends 8:00 AM)
TUE 7 DEC
midnight
Oral Session (ends 1:00 AM)
  12:00-12:15  MAUVE: Measuring the Gap Between Neural Text and Human Text using Divergence Frontiers
  12:15-12:20  Q&A
  12:20-12:35  Learning to Draw: Emergent Communication through Sketching
  12:35-12:40  Q&A
  12:40-12:55  Latent Equilibrium: A unified learning theory for arbitrarily fast computation with arbitrarily slow neurons
  12:55-1:00   Q&A

Oral Session (ends 1:00 AM)
  12:00-12:15  Framing RNN as a kernel method: A neural ODE approach
  12:15-12:20  Q&A
  12:20-12:35  A Universal Law of Robustness via Isoperimetry
  12:35-12:40  Q&A
  12:40-12:55  Causal Identification with Matrix Equations
  12:55-1:00   Q&A

Oral Session (ends 1:00 AM)
  12:00-12:15  E(n) Equivariant Normalizing Flows
  12:15-12:20  Q&A
  12:20-12:35  Online Variational Filtering and Parameter Learning
  12:35-12:40  Q&A
  12:40-12:55  Alias-Free Generative Adversarial Networks
  12:55-1:00   Q&A

Oral Session (ends 1:00 AM)
  12:00-12:15  Separation Results between Fixed-Kernel and Feature-Learning Probability Metrics
  12:15-12:20  Q&A
  12:20-12:35  Near-Optimal No-Regret Learning in General Games
  12:35-12:40  Q&A
  12:40-12:55  Lower Bounds on Metropolized Sampling Methods for Well-Conditioned Distributions
  12:55-1:00   Q&A
1 a.m.
Oral Session (ends 2:00 AM)
  1:00-1:15  Attention over Learned Object Embeddings Enables Complex Visual Reasoning
  1:15-1:20  Q&A
  1:20-1:35  Learning Frequency Domain Approximation for Binary Neural Networks
  1:35-1:40  Q&A
  1:40-1:55  Learning Debiased Representation via Disentangled Feature Augmentation
  1:55-2:00  Q&A

Oral Session (ends 2:00 AM)
  1:00-1:15  EF21: A New, Simpler, Theoretically Better, and Practically Faster Error Feedback
  1:15-1:20  Q&A
  1:20-1:35  Differentiable Quality Diversity
  1:35-1:40  Q&A
  1:40-1:55  Hessian Eigenspectra of More Realistic Nonlinear Models
  1:55-2:00  Q&A

Oral Session (ends 2:00 AM)
  1:00-1:15  An Exponential Lower Bound for Linearly Realizable MDP with Constant Suboptimality Gap
  1:15-1:20  Q&A
  1:20-1:35  On the Expressivity of Markov Reward
  1:35-1:40  Q&A
  1:40-1:55  The best of both worlds: stochastic and adversarial episodic MDPs with unknown transition
  1:55-2:00  Q&A

Oral Session (ends 2:00 AM)
  1:00-1:15  Data driven semi-supervised learning
  1:15-1:20  Q&A
  1:20-1:35  Stability and Deviation Optimal Risk Bounds with Convergence Rate $O(1/n)$
  1:35-1:40  Q&A
  1:40-1:55  The Complexity of Bayesian Network Learning: Revisiting the Superstructure
  1:55-2:00  Q&A
8:30 a.m. (ends 10:00 AM)
  An Efficient Pessimistic-Optimistic Algorithm for Stochastic Linear Bandits with General Constraints
  Rethinking Space-Time Networks with Improved Memory Coverage for Efficient Video Object Segmentation
3 p.m.  Invited Talk: Mary L. Gray (ends 4:30 PM)
4:30 p.m.  (ends 6:00 PM)
WED 8 DEC
8 a.m.
Oral Session (ends 9:00 AM)
  8:00-8:15  Unsupervised Speech Recognition
  8:15-8:20  Q&A
  8:20-8:35  Deep Reinforcement Learning at the Edge of the Statistical Precipice
  8:35-8:40  Q&A
  8:40-8:55  Provable Guarantees for Self-Supervised Deep Learning with Spectral Contrastive Loss
  8:55-9:00  Q&A

Oral Session (ends 9:00 AM)
  8:00-8:15  Continuized Accelerations of Deterministic and Stochastic Gradient Descents, and of Gossip Algorithms
  8:15-8:20  Q&A
  8:20-8:35  Oracle Complexity in Nonsmooth Nonconvex Optimization
  8:35-8:40  Q&A
  8:40-8:55  Faster Matchings via Learned Duals
  8:55-9:00  Q&A

Oral Session (ends 9:00 AM)
  8:00-8:15  Efficient First-Order Contextual Bandits: Prediction, Allocation, and Triangular Discrimination
  8:15-8:20  Q&A
  8:20-8:35  Bellman-consistent Pessimism for Offline Reinforcement Learning
  8:35-8:40  Q&A
  8:40-8:55  A Compositional Atlas of Tractable Circuit Operations for Probabilistic Inference
  8:55-9:00  Q&A

Oral Session (ends 9:00 AM)
  8:00-8:15  Partial success in closing the gap between human and machine vision
  8:15-8:20  Q&A
  8:20-8:35  Keeping Your Eye on the Ball: Trajectory Attention in Video Transformers
  8:35-8:40  Q&A
  8:40-8:55  Volume Rendering of Neural Implicit Surfaces
  8:55-9:00  Q&A
4:30 p.m.  (ends 6:00 PM)
THU 9 DEC
7 a.m.  Invited Talk (Interview): Daniel Kahneman (ends 8:30 AM)
FRI 10 DEC
midnight
Oral Session (ends 1:00 AM)
  12:00-12:15  Risk Monotonicity in Statistical Learning
  12:15-12:20  Q&A
  12:20-12:35  Uniform Convergence of Interpolators: Gaussian Width, Norm Bounds and Benign Overfitting
  12:35-12:40  Q&A
  12:40-12:55  The decomposition of the higher-order homology embedding constructed from the $k$-Laplacian
  12:55-1:00   Q&A

Oral Session (ends 1:00 AM)
  12:00-12:15  Passive attention in artificial neural networks predicts human visual selectivity
  12:15-12:20  Q&A
  12:20-12:35  Shape As Points: A Differentiable Poisson Solver
  12:35-12:40  Q&A
  12:40-12:55  Optimal Rates for Random Order Online Optimization
  12:55-1:00   Q&A
7 a.m.  Invited Talk: Radhika Nagpal (ends 8:30 AM)
4 p.m.
Oral Session (ends 5:00 PM)
  4:00-4:15  Moser Flow: Divergence-based Generative Modeling on Manifolds
  4:15-4:20  Q&A
  4:20-4:35  Drop, Swap, and Generate: A Self-Supervised Approach for Generating Neural Activity
  4:35-4:40  Q&A
  4:40-4:55  Learning Treatment Effects in Panels with General Intervention Patterns
  4:55-5:00  Q&A

Oral Session (ends 5:00 PM)
  4:00-4:15  MERLOT: Multimodal Neural Script Knowledge Models
  4:15-4:20  Q&A
  4:20-4:35  High-probability Bounds for Non-Convex Stochastic Optimization with Heavy Tails
  4:35-4:40  Q&A
  4:40-4:55  Adaptive Conformal Inference Under Distribution Shift
  4:55-5:00  Q&A

Oral Session (ends 5:00 PM)
  4:00-4:15  Interesting Object, Curious Agent: Learning Task-Agnostic Exploration
  4:15-4:20  Q&A
  4:20-4:35  Replacing Rewards with Examples: Example-Based Policy Search via Recursive Classification
  4:35-4:40  Q&A
  4:40-4:55  Sequential Causal Imitation Learning with Unobserved Confounders
  4:55-5:00  Q&A

Oral Session (ends 5:00 PM)
  4:00-4:15  Evaluating Gradient Inversion Attacks and Defenses in Federated Learning
  4:15-4:20  Q&A
  4:20-4:35  Retiring Adult: New Datasets for Fair Machine Learning
  4:35-4:40  Q&A

Oral Session (ends 5:00 PM)
  4:00-4:15  DROID-SLAM: Deep Visual SLAM for Monocular, Stereo, and RGB-D Cameras
  4:15-4:20  Q&A
  4:20-4:35  Learning with Noisy Correspondence for Cross-modal Matching
  4:35-4:40  Q&A
MON 13 DEC
4 a.m.  Workshop (ends 11:10 AM)
4:50 a.m.  Workshop (ends 2:40 PM)
5 a.m.  Workshop (ends 5:00 PM)
TUE 14 DEC
4:55 a.m.  Workshop (ends 4:30 PM)
5 a.m.  Workshop (ends 2:20 PM)
6 a.m.  Workshop (ends 12:20 PM)
7 a.m.  Workshop (ends 2:10 PM)
8:55 a.m.  Workshop (ends 6:05 PM)
10:50 a.m.  Workshop (ends 7:30 PM)