Weekly Reading Group
Upcoming
Reading group. Peter will moderate a discussion on (tba).
Reading group. Lu will moderate a discussion on (tba).
Reading group. Hugo will moderate a discussion on (tba).
Reading group. Quarterly internal group meeting.
Talk. Alexandre Pouget will speak about the Bayesian brain. Register via Doorkeeper.
Past Meetings
Reading group. Etash and Yuesong’s research overview.
Reading group. Joe will give a talk on memory.
Reading group. Happy will moderate a discussion on in-context learning.
Reading group. Negar will moderate a discussion on Diffusion models as weighted ELBOs.
Talk. Talk by Eren Mehmet Kiral on The Lie-Group Bayesian Learning Rule.
Discussion. Session on productive time management in academia.
Reading group. Gianma will moderate a discussion on implicit bias of SGD.
Reading group. Thomas will talk about variational bounds.
Internal group meeting.
Informal meeting, chatting, catch-up.
Mutual feedback on ICML papers.
Reading Group. Christmas reading group (paper to be decided, bring cookies).
Reading Group. Discussion about interesting papers from NeurIPS 2022 and ICLR 2023.
Internal group meeting.
Talk. Keigo Nishida (Osaka University) will talk about his work on AdamB: Decoupled Bayes by Backprop With Gaussian Scale Mixture Prior.
Reading Group. Simplifying Model-based RL: Learning Representations, Latent-space Models, and Policies with One Objective.
Reading Group. Unusual day (Tuesday)! Discussion of the paper Machine Theory of Mind.
Seminar. Unusual time. Talk by Cuong N. Nguyen on Generalization Bounds for Deep Transfer Learning Using Majority Predictor Accuracy.
Seminar. Unusual day and time (Tuesday). Talk by Karim Lounici on Meta-Learning Representations with Contextual Linear Bandits.
Reading Group. Discussion of the paper Formalizing the Generalization-Forgetting Trade-off in Continual Learning.
Reading Group. Discussion of the paper Fortuitous Forgetting in Connectionist Networks.
Reading Group. Discussion of recent papers on natural-gradient descent: Paper 1, Paper 2, Paper 3.
Seminar. Talk by David Tomàs Cuesta on his lab rotation project.
Reading Group. Discussion of Upside-down Reinforcement Learning and Another Paper.
Reading Group. Unusual day and time (Friday)! Discussion of notable ICML, ICLR and COLT papers.
Talk. Unusual day and time (Thursday). Presentation by Itay Evron on “How catastrophic can catastrophic forgetting be in linear regression?”
Reading Group. Discussion of the paper The Art and Science of Cause and Effect.
Internal group meeting.
Seminar. Talk by Geoffrey Wolfer on Inference in Markov Chain from a Single Finite Trajectory.
Seminar. Tojo’s term project presentation and research discussion on continual RL.
Reading Group. Discussion of the paper by Hassabis et al., Neuroscience-Inspired Artificial Intelligence.
Seminar. Talk by Vincent Fortuin. Title: On the Importance of Priors in Bayesian Deep Learning.
Reading group. Discussion of AISTATS 2022 papers.
Reading Group. Discussion of the paper by Bubeck and Sellke, Universal Law of Robustness via Isoperimetry.
Reading Group. Discussion of Ten Simple Rules for Structuring Papers.
Reading Group. Discussion of the paper Deep Learning through the Lens of Example Difficulty.
Reading Group. Discussion of the paper by Kaplan and Friston, Planning and Navigation as Active Inference.
Seminar. Talk by Lionel Riou-Durand. Title: Metropolis Adjusted Underdamped Langevin Trajectories: a robust alternative to Hamiltonian Monte-Carlo.
Reading Group. Discussion of the paper A Survey of Exploration Methods in Reinforcement Learning.
Internal group meeting.
Reading Group. Discussion of the paper [Connections and Equivalences between the Nyström Method and Sparse Variational Gaussian Processes].
Reading Group. Discussion of the paper [Information-Theoretic Generalization Bounds for Stochastic Gradient Descent].
After-holiday catch-up and a discussion on well-being in ML.
Reading Group. Discussion of the paper [The Seven Tools of Causal Inference with Reflections on Machine Learning].
Seminar. Term project talk by Ted Tinker.
Seminar. Talk by Sébastien Loustau. Title: Deep learning theory for power-efficient algorithms. Slides.
Reading Group. Discussion of the paper [Active Bias: Training More Accurate Neural Networks by Emphasizing High Variance Samples].
Reading Group. Discussion of the paper [Dual Parametrization of Sparse Variational Gaussian Processes].
Seminar. Talk by Ilsang Ohn on Adaptive variational Bayes: Optimality, computation and applications. Register via Doorkeeper.
Reading Group. Discussion of the paper [Precision and the Bayesian Brain].
Reading Group. Discussion of the paper [The Geometry of Abstract Learned Knowledge in the Hippocampus].
Reading Group. Discussion of the paper by van der Hoeven et al.: [The Many Faces of Exponential Weights in Online Learning].
Internal group meeting.
Reading Group. Discussion of the paper by Khan and Swaroop, 2021: [Knowledge-Adaptation Priors].
Reading Group. Discussion of the paper by Pearce et al., 2021: [Understanding Softmax Confidence and Uncertainty].
Reading Group. Discussion of the paper by Hara et al., 2019: [Data Cleansing for Models Trained with SGD].
Reading Group. Discussion of the paper by Khan and Rue, 2021: [The Bayesian Learning Rule].
AIP Seminar. Talk by Jiaxin Shi: “Sampling with Mirrored Stein Operators”. [Paper] [Doorkeeper]
Casual chat in gather.town.
Reading Group. Discussion of the paper by Li et al., 2019: [Hessian based analysis of SGD for Deep Nets: Dynamics and Generalization].
Reading Group. Discussion of the paper by Tsividis et al., 2021: [Human-Level Reinforcement Learning through Theory-Based Modeling, Exploration and Planning].
Reading Group. Discussion of the paper by Hobbhahn and Hennig (2021): [Laplace Matching for Fast Approximate Inference in Generalized Linear Models].
Seminar. Farewell presentation by Dharmesh Tailor.
Reading group. Discussion of the paper by Raj, Musco and Mackey (2020): [Importance Sampling via Local Sensitivity].
Reading group. Discussion of ICML 2021 papers.
Reading group. Discussion of the paper by Azoury and Warmuth (2001): [Relative Loss Bounds for On-Line Density Estimation with the Exponential Family of Distributions].
Reading group. Discussion of several knowledge distillation papers: [Paper 1], [Paper 2], [Paper 3].
Social chat in gather.town.
AIP seminar. Talk by Jooyeon Kim: “Stochastic optimal control and probabilistic modeling for detecting and reducing fake news”.
AIP seminar. Talk by Adeline Fermanian on signatures for machine learning.
Seminar. Talk by David Frazier. Title: Loss-based variational Bayes prediction.
Internal group meeting.
Seminar. Talk by Julyan Arbel. Title: Approximate Bayesian computation with surrogate posteriors [Paper]
Mutual feedback on NeurIPS papers.
Reading group. Discussion of ICLR 2021 papers.
Reading group. On the origin of implicit regularization in stochastic gradient descent. [Paper]
Reading group. Discussion of AISTATS 2021 papers.
Talk. Talk by Dhruva Tirumala: Behavior Priors for Efficient Reinforcement Learning. [Paper]
Tutorial. Tutorial by Thomas Möllenhoff on convex duality: part 2.
Talk + Tutorial. Talk by Fariz Ikhwantri on “Knowledge distillation with DNN2GP” + tutorial by Thomas Möllenhoff on convex duality.
Informal group discussion.
Seminar, unusual day and time! Talk by Daniele Calandriello: “Scalable Determinantal Point Processes for Machine Learning”.
Seminar. Talk by Eric Nalisnick: “Predictive Complexity Priors”. [Paper]
Internal group meeting.
Seminar. Talk by Andrew Foong and David Burt: “On the Expressiveness of Approximate Inference in Bayesian Neural Networks”. [Paper]
Talk by Khimya Khetarpal and Matt Riemer: “Towards Continual Reinforcement Learning: A Review and Perspectives”. [Paper]
Well-being chat.
Seminar. Talk by Blair Bilodeau and Jeffrey Negrea: “Relaxing the I.I.D. Assumption: Adaptively Minimax Optimal Regret via Root-Entropic Regularization”. [Paper]
Mutual feedback on ICML submissions.
Catch up meeting.
Seminar, unusual day! Erik.
Reading group. Qi Qian, Hao Li, Juhua Hu: “Efficient Kernel Transfer in Knowledge Distillation”. [Paper]
Seminar. Peter and Thomas: Uncertainty estimation for Bayesian Neural Networks using infinite-width nets.
Pre-ICLR paper discussion.
Reading group. Ben Recht: “A Tour of Reinforcement Learning: The View from Continuous Control”. [Paper]
Review of the team's work over the past 4 months. Siddharth's talk on functional regularization of the memorable past. [Paper]
Unusual day and time! Dimitri Meunier: “Meta Learning meets Variational Inference. Learning priors with guarantees”.
Feedback discussion on the new team webpage.
Talk by Benjamin Guedj: “A (condensed) primer on PAC-Bayesian Learning”. [Slides]
Talk by François-Xavier Briol: “Stein’s Method for Computational Statistics and Machine Learning”. [Slides]
Research chat and virtual breakfast/lunch/dinner in gather.town.
Siddharth Swaroop: A survey on federated learning.
Happy Buzaaba and Fariz Ikhwantri: A survey on transfer learning. [Slides]
Lab members' project overview and discussion.
Pierre’s grant presentation rehearsal and feedback.
Reading group. “Gradient descent for wide two-layer neural networks”. [Blog post]
Reading group. “Generalized Variational Inference: Three arguments for deriving new posteriors”. [Paper]
Reading group. “On the measure of intelligence”. [Paper]
Whole week: comments on ICML tutorials and talks.
Pre-ICML paper discussion.
Talk by Evgenii Egorov. “Involutive MCMC: one way to derive them all”. [Paper]
Talk by Giulia Denevi on Efficient Lifelong Learning Algorithms: Regret Bounds and Statistical Guarantees.
Internal kickoff meeting.
Talk by Peter Nickl on “Variational Bayes for Infinite Mixtures of Local Regressors with Robotics Applications”. [Thesis]
Talk by Gian Maria Marconi on Manifold Regression by Structured Prediction: methodology and applications. [Paper]
Feedback discussion on our NeurIPS submissions.
Talk by Thomas Möllenhoff on Flat Metric Minimization with Applications in Generative Modeling. [Paper]
Talk by Evgenii Egorov on MaxEntropy Pursuit Variational Inference. [Paper]
Reading Group. “Continual Deep Learning by Functional Regularisation of the Memorable Past”. [Paper]
Reading Group. Shalev-Shwartz: “Introduction to online learning”, Chapter 2 (end). [Paper]
Reading Group. Shalev-Shwartz: “Introduction to online learning”, Chapter 2. [Paper]
Reading Group. Shalev-Shwartz: “Introduction to online learning”, Chapter 1. [Paper]