Reading group. On the origin of implicit regularization in stochastic gradient descent. [Paper]
Seminar. Videos of the AIP open seminar of the Deep Learning Theory team.
Tutorial. Tutorial by Thomas Möllenhoff on convex duality: part 2.
Talk + Tutorial. Talk by Fariz Ikhwantri on “Knowledge distillation with DNN2GP” + tutorial by Thomas Möllenhoff on convex duality.
Informal group discussion.
Seminar, unusual day and time! Talk by Daniele Calandriello: “Scalable Determinantal Point Processes for Machine Learning”.
Internal group meeting.
Show all reading group meetings
Mutual feedback on ICML papers.
Catch up meeting.
Seminar, unusual day! Erik.
Reading group. Qi Qian, Hao Li, Juhua Hu: “Efficient Kernel Transfer in Knowledge Distillation”. [Paper]
Seminar. Peter, Thomas. Uncertainty estimation for Bayesian Neural Networks using infinite-width nets.
Pre-ICLR paper discussion.
Reading group. Ben Recht: “A Tour of Reinforcement Learning: The View from Continuous Control”. [Paper]
Team work review for the past 4 months. Siddharth’s talk on functional regularization on the memorable past. [Paper]
Unusual day and time! Talk by Dimitri Meunier: “Meta Learning meets Variational Inference: Learning priors with guarantees”.
Feedback discussion on the new team webpage.
Research chat and virtual breakfast/lunch/dinner in gather.town.
Siddharth Swaroop: A survey on federated learning.
Lab members project overview and discussion.
Pierre’s grant presentation rehearsal and feedback.
Reading group. “Gradient descent for wide two-layer neural networks”. [Blog post]
Reading group. “Generalized Variational Inference: Three arguments for deriving new posteriors”. [Paper]
Reading group. “On the measure of intelligence”. [Paper]
Whole week: comments on ICML tutorials and talks.
Pre-ICML paper discussion.
Talk by Evgenii Egorov. “Involutive MCMC: one way to derive them all”. [Paper]
Talk by Giulia Denevi on “Efficient Lifelong Learning Algorithms: Regret Bounds and Statistical Guarantees”.
Internal kickoff meeting.
Talk by Peter Nickl on “Variational Bayes for Infinite Mixtures of Local Regressors with Robotics Applications”. [Thesis]
Feedback discussion on our NeurIPS submissions.
Talk by Evgenii Egorov on MaxEntropy Pursuit Variational Inference. [Paper]
Reading group. “Continual Deep Learning by Functional Regularisation of the Memorable Past”. [Paper]
Reading group. Shalev-Shwartz: “Introduction to online learning”, Chapter 2 (end). [Paper]
Reading group. Shalev-Shwartz: “Introduction to online learning”, Chapter 2. [Paper]
Reading group. Shalev-Shwartz: “Introduction to online learning”, Chapter 1. [Paper]