Reading Group
We hold a biweekly reading group (meetings last 60 minutes). Every two weeks, a different group member takes the lead and presents a chosen paper for discussion. In addition to this biweekly schedule, we frequently invite external speakers to give talks or tutorials.
Upcoming
Reading Group. Emtiyaz Khan will moderate a paper discussion, topic TBA.
Reading Group. Christopher Anders will moderate a paper discussion, topic TBA.
Past Meetings
Seminar. Oliver Eberle. “Analyzing the Inner Workings of Foundation Models: Towards Insights, Transparency, and AI Safety”. doorkeeper
Reading Group. Anita Yang will moderate a paper discussion, topic TBA.
Reading Group. Yohan Jung will moderate a paper discussion. Paper 1, Paper 2.
Reading Group. Mutual feedback on ICLR submissions.
Reading Group. Kai Arulkumaran will give a talk titled “Towards Brain-Controlled Robots”. doorkeeper.
Discussion. Internal group discussion (quarterly meeting). Note: this reading group is on a Wednesday!
Reading Group. JinYeong Bak. Memoria: Resolving Fateful Forgetting Problem through Human-Inspired Memory Architecture. doorkeeper
Invited Talk. François-Xavier Briol gives a talk on Robust and Conjugate Gaussian Process Regression. doorkeeper
Invited Talk. Patrick Shafto will give a talk on Mathematical foundations for learning agents. doorkeeper
Invited Talk. Tom Burns will give a talk on Semantically-correlated memories in a dense associative model. doorkeeper
Invited Talk. Talks by Fred Kunstner (Adaptive Methods in Machine Learning and Why Adam Works so Well) and Aaron Mishkin (Optimal Sets and Solution Paths of ReLU Networks). doorkeeper1, doorkeeper2
Internal Meeting. Anita introduces her research to the group.
Internal Discussion. Mutual feedback on NeurIPS submissions.
Reading Group. Dharmesh Tailor: Practice talk on Learning to Defer.
Reading Group. Peter Nickl moderates a discussion on compositionality.
Talk. Masaki Adachi: “Probabilistic Numerics for Scientific Experts”. doorkeeper
Talk. Rafael Cabral: “A Bayesian workflow for efficient and automatic model checking and robustness analysis”. doorkeeper
Reading group. Zhedong will moderate a paper discussion on GaLore: Memory-Efficient LLM Training by Gradient Low-Rank Projection.
Reading group. Sicelukwanda Zwane will give a talk on Safe Robot Motion Generation With Gaussian Processes. doorkeeper
Reading group. Hugo will moderate a paper discussion on “A view of Estimation of Distribution Algorithms through the lens of Expectation-Maximization”.
Reading group. Pratik Chaudhari gives a talk on “A Picture of the Prediction Space of Deep Networks”. doorkeeper
Internal meeting. Mutual feedback on the group’s ICML submissions.
Internal meeting. Catching up in the new year, discussion on upcoming events and deadlines in 2024.
Reading Group. Keigo moderates a discussion on the Junk DNA Hypothesis. The reading group will be on winter break afterwards and resume in the new year.
Reading Group. Thomas moderates a discussion on minimum description length, deep learning, and Bayes.
Reading Group. Geoffrey moderates a discussion on A Geometric Interpretation of the Metropolis–Hastings Algorithm.
Reading Group. Dharmesh moderates a discussion on the paper On Masked Pre-training and the Marginal Likelihood.
Discussion. Mutual Feedback on ICLR submissions.
Quarterly Internal Group Meeting.
Talk. Samuel Kaski: Collaborative Machine Learning for Science [doorkeeper]
Talk. Martin Mundt: Self-Expanding Neural Networks [doorkeeper]
Talk. Krikamol Muandet: (Im)possibility of Collective Intelligence [doorkeeper]
Talk. Geoffrey Wolfer: From Distributions to Markov Chains: Recent Advances in Inference and Geometry. [doorkeeper]
Reading group. Peter: Studying Large Language Model Generalization with Influence Functions.
Talk. Rajesh Ranganath on out-of-distribution generalization. [doorkeeper]
Long Seminar. Etash Guha gives a tutorial on conformal prediction. Naima Borras speaks about local learning.
Talk. So Takao: Improving data assimilation for weather forecasting: A graph-based Bayesian perspective.
Talk. Molei Tao on Optimization and Sampling in non-Euclidean spaces.
Reading group. Lu: Does Label Smoothing Mitigate Label Noise?
Reading group. Hugo: Exploring Example Influence in Continual Learning
Reading group. Quarterly internal group meeting.
Talk. Graham Neubig: Is My NLP Model Working? The Answer is Harder Than You Think. doorkeeper link
Talk. Alexandre Pouget will speak about the Bayesian brain. doorkeeper link
Reading group. Etash and Yuesong’s research overview.
Reading group. Joe will give a talk on memory.
Reading group. Happy will moderate a discussion on in-context learning.
Reading group. Negar will moderate a discussion on Diffusion models as weighted ELBOs.
Talk. Eren Mehmet Kiral on The Lie-Group Bayesian Learning Rule.
Discussion. Session on productive time management in academia.
Reading group. Gianma will moderate a discussion on implicit bias of SGD.
Reading group. Thomas will talk about variational bounds.
Internal group meeting.
Informal meeting, chatting, catch-up.
Mutual feedback on ICML papers.
Reading Group. Christmas reading group (paper to be decided, bring cookies).
Reading Group. Discussion about interesting papers from NeurIPS 2022 and ICLR 2023.
Internal group meeting.
Talk. Keigo Nishida (Osaka University) will talk about his work on AdamB: Decoupled Bayes by Backprop With Gaussian Scale Mixture Prior.
Reading Group. Simplifying Model-based RL: Learning Representations, Latent-space Models, and Policies with One Objective.
Reading Group. Unusual day (Tuesday)! Discussion of the paper Machine Theory of Mind.
Seminar. Unusual time. Talk by Cuong N. Nguyen on Generalization Bounds for Deep Transfer Learning Using Majority Predictor Accuracy.
Seminar. Unusual day and time (Tuesday). Talk by Karim Lounici on Meta-Learning Representations with Contextual Linear Bandits
Reading Group. Discussion of the paper Formalizing the Generalization-Forgetting Trade-off in Continual Learning.
Reading Group. Discussion of the paper Fortuitous Forgetting in Connectionist Networks.
Reading Group. Discussion of recent papers on natural-gradient descent: Paper 1, Paper 2, Paper 3.
Seminar. Talk by David Tomàs Cuesta on his lab rotation project.
Reading Group. Discussion of Upside-down Reinforcement Learning, Another Paper.
Reading Group. Unusual day and time (Friday)! Discussion of notable ICML, ICLR and COLT papers.
Talk. Unusual day and time (Thursday). Presentation by Itay Evron on “How catastrophic can catastrophic forgetting be in linear regression?”
Reading Group. Discussion of the paper The Art and Science of Cause and Effect.
Internal group meeting.
Seminar. Talk by Geoffrey Wolfer on Inference in Markov Chain from a Single Finite Trajectory.
Seminar. Tojo’s term project presentation and research discussion on continual RL.
Reading Group. Discussion of the paper by Hassabis et al., Neuroscience-Inspired Artificial Intelligence.
Seminar. Talk by Vincent Fortuin. Title: On the Importance of Priors in Bayesian Deep Learning.
Reading group. Discussion of AISTATS 2022 papers.
Reading Group. Discussion of the paper by Bubeck and Sellke, Universal Law of Robustness via Isoperimetry.
Reading Group. Discussion of Ten Simple Rules for Structuring Papers.
Reading Group. Discussion of the paper Deep Learning through the Lens of Example Difficulty.
Reading Group. Discussion of the paper by Kaplan and Friston, Planning and Navigation as Active Inference.
Seminar. Talk by Lionel Riou-Durand. Title: Metropolis Adjusted Underdamped Langevin Trajectories: a robust alternative to Hamiltonian Monte-Carlo.
Reading Group. Discussion of the paper A Survey of Exploration Methods in Reinforcement Learning.
Internal group meeting.
Reading Group. Discussion of the paper [Connections and Equivalences between the Nyström Method and Sparse Variational Gaussian Processes].
Reading Group. Discussion of the paper [Information-Theoretic Generalization Bounds for Stochastic Gradient Descent].
Post-holiday catch-up and discussion on well-being in ML.
Reading Group. Discussion of the paper [The Seven Tools of Causal Inference with Reflections on Machine Learning].
Seminar. Term project talk by Ted Tinker.
Seminar. Talk by Sébastien Loustau. Title: Deep learning theory for power-efficient algorithms. Slides.
Reading Group. Discussion of the paper [Active Bias: Training More Accurate Neural Networks by Emphasizing High Variance Samples].
Reading Group. Discussion of the paper [Dual Parametrization of Sparse Variational Gaussian Processes].
Seminar. Talk by Ilsang Ohn on Adaptive variational Bayes: Optimality, computation and applications. Register via Doorkeeper.
Reading Group. Discussion of the paper [Precision and the Bayesian Brain].
Reading Group. Discussion of the paper [The Geometry of Abstract Learned Knowledge in the Hippocampus].
Reading Group. Discussion of the paper by van der Hoeven et al.: [The Many Faces of Exponential Weights in Online Learning].
Internal group meeting.
Reading Group. Discussion of the paper by Khan and Swaroop, 2021: [Knowledge-Adaptation Priors]
Reading Group. Discussion of the paper by Pearce et al., 2021: [Understanding Softmax Confidence and Uncertainty]
Reading Group. Discussion of the paper by Hara et al., 2019: [Data Cleansing for Models Trained with SGD]
Reading Group. Discussion of the paper by Khan and Rue, 2021: [The Bayesian Learning Rule]
AIP Seminar. Talk by Jiaxin Shi: “Sampling with Mirrored Stein Operators”. [Paper] [Doorkeeper]
Casual chat in gather.town.
Reading Group. Discussion of the paper by Li et al., 2019: [Hessian based analysis of SGD for Deep Nets: Dynamics and Generalization]
Reading Group. Discussion of the paper by Tsividis et al., 2021: [Human-Level Reinforcement Learning through Theory-Based Modeling, Exploration and Planning]
Reading Group. Discussion of the paper by Hobbhahn and Hennig (2021): [Laplace Matching for Fast Approximate Inference in Generalized Linear Models]
Seminar. Leaving presentation by Dharmesh Tailor
Reading group. Discussion of the paper by Raj, Musco and Mackey (2020): [Importance Sampling via Local Sensitivity]
Reading group. Discussion of ICML 2021 papers.
Reading group. Discussion of the paper by Azoury and Warmuth (2001): [Relative Loss Bounds for On-Line Density Estimation with the Exponential Family of Distributions]
Reading group. Discussion of several knowledge distillation papers: [Paper 1], [Paper 2], [Paper 3]
Social chat in gather.town.
AIP seminar. Talk by Jooyeon Kim: “Stochastic optimal control and probabilistic modeling for detecting and reducing fake news”
AIP seminar. Talk by Adeline Fermanian on signatures for machine learning.
Seminar. Talk by David Frazier. Title: Loss-based variational Bayes prediction.
Internal group meeting.
Seminar. Talk by Julyan Arbel. Title: Approximate Bayesian computation with surrogate posteriors [Paper]
Mutual feedback on NeurIPS papers.
Reading group. Discussion of ICLR 2021 papers.
Reading group. On the origin of implicit regularization in stochastic gradient descent. [Paper]
Reading group. Discussion of AISTATS 2021 papers.
Talk. Dhruva Tirumala: Behavior Priors for Efficient Reinforcement Learning. [Paper]
Tutorial. Tutorial by Thomas Möllenhoff on convex duality: part 2.
Talk + Tutorial. Talk by Fariz Ikhwantri on “Knowledge distillation with DNN2GP” + tutorial by Thomas Möllenhoff on convex duality.
Informal group discussion.
Seminar, unusual day and time! Talk by Daniele Calandriello: “Scalable Determinantal Point Processes for Machine Learning”.
Seminar. Talk by Eric Nalisnick: “Predictive Complexity Priors”. [Paper]
Internal group meeting.
Seminar. Talk by Andrew Foong and David Burt: “On the Expressiveness of Approximate Inference in Bayesian Neural Networks”. [Paper]
Talk by Khimya Khetarpal and Matt Riemer: “Towards Continual Reinforcement Learning: A Review and Perspectives”. [Paper]
Well-being chat.
Seminar. Talk by Blair Bilodeau and Jeffrey Negrea: “Relaxing the I.I.D. Assumption: Adaptively Minimax Optimal Regret via Root-Entropic Regularization”. [Paper]
Mutual feedback on ICML submissions.
Catch up meeting.
Seminar, unusual day! Erik.
Reading group. Qi Qian, Hao Li, Juhua Hu: “Efficient Kernel Transfer in Knowledge Distillation”. [Paper]
Seminar. Peter and Thomas: Uncertainty estimation for Bayesian neural networks using infinite-width nets.
Pre-ICLR paper discussion.
Reading group. Ben Recht: “A Tour of Reinforcement Learning: The View from Continuous Control”. [Paper]
Team work review for the past 4 months. Siddharth’s talk on functional regularization of the memorable past. [Paper]
Unusual day and time! Dimitri Meunier: “Meta-Learning meets Variational Inference: Learning priors with guarantees”.
Feedback discussion on the new team webpage.
Talk by Benjamin Guedj: “A (condensed) primer on PAC-Bayesian Learning”. [Slides]
Talk by François-Xavier Briol: “Stein’s Method for Computational Statistics and Machine Learning”. [Slides]
Research chat and virtual breakfast/lunch/dinner in gather.town.
Siddharth Swaroop: A survey on federated learning.
Happy Buzaaba and Fariz Ikhwantri: A survey on transfer learning. [Slides]
Lab members project overview and discussion.
Pierre’s grant presentation rehearsal and feedback.
Reading group. “Gradient descent for wide two-layer neural networks”. [Blog post]
Reading group. “Generalized Variational Inference: Three arguments for deriving new posteriors”. [Paper]
Reading group. “On the measure of intelligence”. [Paper]
Whole week: comments on ICML tutorials and talks.
Pre-ICML paper discussion.
Talk by Evgenii Egorov. “Involutive MCMC: one way to derive them all”. [Paper]
Talk by Giulia Denevi on Efficient Lifelong Learning Algorithms: Regret Bounds and Statistical Guarantees.
Internal kickoff meeting.
Talk by Peter Nickl on “Variational Bayes for Infinite Mixtures of Local Regressors with Robotics Applications”. [Thesis]
Talk by Gian Maria Marconi on Manifold Regression by Structured Prediction: methodology and applications. [Paper]
Feedback discussion on our NeurIPS submissions.
Talk by Thomas Möllenhoff on Flat Metric Minimization with Applications in Generative Modeling. [Paper]
Talk by Evgenii Egorov on MaxEntropy Pursuit Variational Inference. [Paper]
Reading Group. “Continual Deep Learning by Functional Regularisation of the Memorable Past”. [Paper]
Reading Group. Shalev-Shwartz: “Introduction to online learning”, Chapter 2 (end). [Paper]
Reading Group. Shalev-Shwartz: “Introduction to online learning”, Chapter 2. [Paper]
Reading Group. Shalev-Shwartz: “Introduction to online learning”, Chapter 1. [Paper]