Humans, animals, and other living beings have a natural ability to learn autonomously throughout their lives and to adapt quickly to their surroundings, but computers lack such abilities. Our goal is to bridge this gap between the learning of living beings and that of computers. We are machine learning researchers with expertise in areas such as approximate inference, Bayesian statistics, continuous optimization, and information geometry. We work on a variety of learning problems, especially those involving supervised, continual, active, federated, online, and reinforcement learning. Please see the research and publications pages for a more exhaustive overview.
If you are interested in joining us, see the people page and the news below for current opportunities.
Tenured research scientist position available; see the official job posting for details.
On arXiv today: User-friendly introduction to PAC-Bayes bounds by P. Alquier
Our Bayes-duality project has launched with $2.76 million in funding from the JST-ANR CREST program
Two papers accepted at NeurIPS 2021:
- Knowledge-Adaptation Priors, by M.E. Khan and S. Swaroop
- Dual parameterization of SVGP, by P. Chang, V. Adam, M.E. Khan, and A. Solin
Paper by Dimitri Meunier and Pierre Alquier accepted and published in Entropy: Meta-Strategy for Learning Tuning Parameters with Guarantees