Humans, animals, and other living beings have a natural ability to learn autonomously throughout their lives and to adapt quickly to their surroundings, but computers lack such abilities. Our goal is to bridge this gap between the learning of living beings and that of computers. We are machine learning researchers with expertise in areas such as approximate inference, Bayesian statistics, continuous optimization, and information geometry. We work on a variety of learning problems, especially those involving supervised, continual, active, federated, online, and reinforcement learning. Please see the research and publications pages for a more exhaustive overview.
If you are interested in joining us, see the people page and the news below for current opportunities.
“Approximate Bayesian Inference”, the editorial written by P. Alquier for a forthcoming special issue of Entropy, is now published: paper. Submissions to the Special Issue on Approximate Bayesian Inference are open until the end of Feb. 2021.
The videos of two recent talks by Pierre Alquier are now online:
- “Regret bound for online variational inference” (Oct. 29) - Workshop on online decision making, Berkeley
- “Estimation with the MMD distance” (Nov. 4) - DataSig seminar series
Emtiyaz Khan will give a series of seminars on “Bayesian principles for learning machines” at the following dates and locations.
Pierre Alquier joined the editorial board of JMLR.
Our paper on continual learning by functional regularization of the memorable past has been accepted as an oral presentation at NeurIPS 2020.