Humans, animals, and other living beings have a natural ability to learn autonomously throughout their lives and to adapt quickly to their surroundings, but computers lack such abilities. Our goal is to bridge this gap between the learning of living beings and that of computers. We are machine learning researchers with expertise in areas such as approximate inference, Bayesian statistics, continuous optimization, and information geometry. We work on a variety of learning problems, especially those involving supervised, continual, active, federated, online, and reinforcement learning. Please see the research and publications pages for a more exhaustive overview.
If you are interested in joining us, see the people page and the news below for current opportunities.
Position available for a 3-year contract. See here for details.
16th AIP Open Seminar: talks by the Approximate Bayesian Inference Team:
- Emtiyaz Khan: Bayesian principles for Learning-Machines,
- Dharmesh Tailor: Memorable Experiences of Learning-Machines,
- Pierre Alquier: Meta-Strategy for Hyperparameter Tuning with Guarantees.
Two papers were accepted at AISTATS 2021:
- A Theoretical Analysis of Catastrophic Forgetting through the NTK Overlap Matrix by T. Doan, M. Abbana Bennani, B. Mazoure, G. Rabusseau, P. Alquier.
- Improving predictions of Bayesian neural networks via local linearization by A. Immer, M. Korzepa, M. Bauer.
“Approximate Bayesian Inference”, the editorial of a forthcoming special issue in Entropy, written by P. Alquier, is now published: paper. Submissions to the Special Issue on Approximate Bayesian Inference are open until the end of Feb. 2021.
The videos of two recent talks by Pierre Alquier are now online:
- “Regret bound for online variational inference” (Oct. 29) - Workshop on online decision making, Berkeley
- “Estimation with the MMD distance” (Nov. 4) - DataSig seminar series
A series of seminars on “Bayesian principles for learning machines” given by Emtiyaz Khan will take place on the following dates and at the following locations.