Outline
[Research] [Blog] [Talk] [Code]
Research
Humans, animals, and other living beings have a natural ability to learn autonomously throughout their lives and to adapt quickly to their surroundings; computers lack such abilities. Our goal is to bridge this gap between the learning of living beings and that of computers. We are machine learning researchers with expertise in areas such as approximate inference, Bayesian statistics, continuous optimization, and information geometry. We work on a variety of learning problems, especially those involving supervised, continual, active, federated, online, and reinforcement learning.
Currently, we are developing algorithms that enable computers to autonomously learn to perceive, act, and reason throughout their lives. Our research often brings together ideas from a variety of theoretical and applied fields, such as mathematical optimization, Bayesian statistics, information geometry, signal processing, and control systems.
For more information, see our current publications and the following pages:
- The Bayes-Duality Project (funded by CREST)
- Summaries of our research by year: [2023] [2022] [2021] [2019] [2018] [2017]
We are also thankful for the following external funding (amounts are approximate):
- (2023-2026, USD 32,000) KAKENHI Grant-in-Aid for Early-Career Scientists, [23K13024]
- (2021-2026, USD 2.23 million) Joint JST-CREST and French ANR grant, The Bayes-Duality Project
- (2020-2023, USD 167,000) KAKENHI Grant-in-Aid for Scientific Research (B), Life-Long Deep Learning using Bayesian Principles
- (2019-2022, USD 237,000) External funding through companies for several Bayes-related projects
Blog
This blog provides a medium for our researchers to share recent research findings, insights, and updates. Posts are written with a general audience in mind and aim to give an accessible introduction to our research.
A few tips for the Area Chairs of ML conferences
What makes a good meta-review? It clearly describes the whole review process and gives clear reasons behind the decisions. Below are a few tips for the Area Chairs (or Editors) of Machine Learning conferences and...
Minimax Estimation and Identity Testing of Markov Chains
We briefly review the two classical problems of distribution estimation and identity testing (in the context of property testing), then propose to extend them to a Markovian setting. We will see that the sample complexity...
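As a toy illustration of the estimation problem this post studies, here is a minimal sketch (our own, not the post's code) that recovers a transition matrix from a single trajectory by normalizing empirical transition counts; the 3-state chain and trajectory length are arbitrary illustrative choices.

```python
# Minimal sketch (ours, not the post's) of Markov-chain estimation from a
# single trajectory: count transitions and normalize each row of counts.
import numpy as np

rng = np.random.default_rng(0)

# An illustrative 3-state transition matrix (rows sum to 1).
P = np.array([[0.8, 0.1, 0.1],
              [0.2, 0.6, 0.2],
              [0.3, 0.3, 0.4]])

# Simulate one trajectory of length n, starting from state 0.
n, state = 10_000, 0
traj = [state]
for _ in range(n - 1):
    state = rng.choice(3, p=P[state])
    traj.append(state)

# Empirical estimator: transition counts i -> j, normalized row by row.
counts = np.zeros((3, 3))
for i, j in zip(traj[:-1], traj[1:]):
    counts[i, j] += 1
P_hat = counts / counts.sum(axis=1, keepdims=True)

print("largest entrywise error:", np.abs(P_hat - P).max().round(3))
```

How long the trajectory must be for a given accuracy depends on how often each state is visited, which is exactly the kind of sample-complexity question the post makes precise.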
Natural-Gradient Variational Inference 2: ImageNet Scale
In our previous post, we derived a natural-gradient variational inference (NGVI) algorithm for neural networks, detailing all our approximations and providing intuition. We saw it converge faster than more naive variational inference algorithms on relatively...
Natural-Gradient Variational Inference 1: The Maths
Bayesian Deep Learning hopes to tackle neural networks’ poorly-calibrated uncertainties by injecting some level of Bayesian thinking. There has been mixed success: progress is slow because scaling Bayesian methods to such huge models is difficult!...
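For a concrete picture of the kind of update these two posts build on, here is a minimal sketch, in the spirit of NGVI but not the posts' actual code: a diagonal-Gaussian posterior for Bayesian logistic regression, updated with a natural-gradient (variational online Newton-style) step. The toy data and the names rho and delta are our own illustrative choices.

```python
# Minimal diagonal-Gaussian NGVI sketch (assumed setup, not the posts' code):
# fit q(w) = N(mu, diag(1/s)) to a Bayesian logistic-regression posterior.
import numpy as np

rng = np.random.default_rng(0)

# Toy binary-classification data.
N, D = 200, 3
X = rng.normal(size=(N, D))
w_true = np.array([1.5, -2.0, 0.5])
y = (rng.uniform(size=N) < 1.0 / (1.0 + np.exp(-X @ w_true))).astype(float)

delta = 1.0             # prior precision: prior is N(0, I / delta)
rho = 0.1               # step size of the natural-gradient update
mu = np.zeros(D)        # variational mean
s = np.full(D, delta)   # variational precision (diagonal)

for _ in range(2000):
    # One reparameterized sample w ~ q to estimate expectations under q.
    w = mu + rng.normal(size=D) / np.sqrt(s)
    p = 1.0 / (1.0 + np.exp(-X @ w))   # predicted probabilities
    g = X.T @ (p - y)                  # gradient of the negative log-likelihood
    h = (X**2).T @ (p * (1 - p))       # diagonal of its Gauss-Newton curvature

    # Natural-gradient step: move the precision toward curvature-plus-prior,
    # then take a mean step preconditioned by the new precision.
    s = (1 - rho) * s + rho * (h + delta)
    mu = mu - rho * (g + delta * mu) / s

print("posterior mean:", np.round(mu, 2), "| true weights:", w_true)
print("posterior std: ", np.round(1 / np.sqrt(s), 3))
```

Updating the precision s first and then preconditioning the mean step by it is what distinguishes this natural-gradient step from a plain gradient step on (mu, s).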
Universal estimation with Maximum Mean Discrepancy (MMD)
A very old and yet very exciting problem in statistics is the definition of a universal estimator $\hat{\theta}$: an estimation procedure that would work all the time. Close your eyes, push the button, it works,...
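To make the idea concrete, here is a minimal sketch (ours, not the post's code) of minimum-MMD estimation in a toy Gaussian location model: simulate from the model at candidate parameters and keep the one whose sample is closest to the data in MMD. The kernel bandwidth, grid, and contamination level are all illustrative.

```python
# Minimal sketch (ours, not the post's) of minimum-MMD estimation for a
# Gaussian location model, with 5% gross outliers in the data.
import numpy as np

rng = np.random.default_rng(0)

def mmd2(x, y, bw=1.0):
    """Squared MMD with a Gaussian kernel (biased V-statistic; fine here)."""
    k = lambda a, b: np.exp(-((a[:, None] - b[None, :]) ** 2) / (2 * bw**2))
    return k(x, x).mean() - 2 * k(x, y).mean() + k(y, y).mean()

# 95 points from N(2, 1) plus 5 outliers at 50.
data = np.concatenate([rng.normal(2.0, 1.0, 95), np.full(5, 50.0)])

# Grid search over the location theta, simulating from the model N(theta, 1)
# with common random numbers so all candidates share the same noise.
eps = rng.normal(size=200)
thetas = np.linspace(-5.0, 10.0, 151)
theta_hat = min(thetas, key=lambda th: mmd2(data, th + eps))

print("sample mean   :", data.mean().round(2))   # pulled toward the outliers
print("min-MMD  theta:", theta_hat.round(2))     # stays near the true value 2
```

Because the Gaussian kernel is bounded, a handful of gross outliers barely move the MMD objective, so the estimate stays near 2 while the sample mean is dragged toward the outliers; this robustness is part of what makes the estimator "universal" in the post's sense.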
Talk
The recorded talks given by our team members.
- [July 30th, 2024] Emtiyaz Khan: Keynote at the 3rd Conference on Lifelong Learning Agents (CoLLAs) 2024. [Slides] [Video]
Code
Here we list research code that has been open-sourced to accompany recent publications. Our team’s GitHub homepage: https://github.com/team-approx-bayes.
- (ICML 2024) Variational Learning is Effective for Large Deep Networks. [arXiv] [Code]
- (ICLR 2024) Model Merging by Uncertainty-Based Gradient Matching. [arXiv] [Code]
- (ICLR 2024) Conformal Prediction via Regression-as-Classification. [preprint] [Code]
- (NeurIPS 2023) The Memory Perturbation Equation: Understanding Models' Sensitivity to Data. [arXiv] [Code]
- (TMLR 2023) Improving Continual Learning by Accurate Gradient Reconstructions of the Past. [Published version] [Code]
- (AISTATS 2023) The Lie-Group Bayesian Learning Rule. [arXiv] [Code]
- (ICLR 2023) SAM as an Optimal Relaxation of Bayes. [arXiv] [Code]
- (NeurIPS 2021) Knowledge-Adaptation Priors. [arXiv] [Slides] [Tweet] [SlidesLive Video] [Code]
- (NeurIPS 2021) Dual Parameterization of Sparse Variational Gaussian Processes. [arXiv] [Code]
- (AISTATS 2021) Improving predictions of Bayesian neural networks via local linearization. [Published version] [arXiv] [Code]
- (ICML 2021) Scalable Marginal Likelihood Estimation for Model Selection in Deep Learning. [Published version] [arXiv] [Code]
- (UAI 2021) Subset-of-Data Variational Inference for Deep Gaussian-Process Regression. [arXiv] [Code]
- (NeurIPS 2020) Continual Deep Learning by Functional Regularisation of Memorable Past. [Published version] [arXiv] [Code] [Poster]
- (ICML 2020) Handling the Positive-Definite Constraint in the Bayesian Learning Rule. [Published version] [arXiv] [Code]
- (ICML 2020) Training Binary Neural Networks using the Bayesian Learning Rule. [Published version] [arXiv] [Code]
- (ICML 2020) VILD: Variational Imitation Learning with Diverse-quality Demonstrations. [Published version] [arXiv] [Code]
- (NeurIPS 2019) Practical Deep Learning with Bayesian Principles. [Published version] [arXiv] [Code]
- (NeurIPS 2019) Approximate Inference Turns Deep Networks into Gaussian Processes. [Published version] [arXiv] [Code]
- (ICML 2019) Fast and Simple Natural-Gradient Variational Inference with Mixture of Exponential-family Approximations. [arXiv] [Published version] [Code]
- (ICML 2019) Scalable Training of Inference Networks for Gaussian-Process Models. [arXiv] [Published version] [Code]
- (NeurIPS 2018) SLANG: Fast Structured Covariance Approximations for Bayesian Deep Learning with Natural Gradient. [Published version] [arXiv] [Poster] [3-min Video] [Code]
- (ICML 2018) Fast and Scalable Bayesian Deep Learning by Weight-Perturbation in Adam. [Published version] [arXiv] [Code] [Slides]
- (ICLR 2018) Variational Message Passing with Structured Inference Networks. [Paper] [arXiv Version] [Code]
- (AISTATS 2018) Bayesian Nonparametric Poisson-Process Allocation for Time-Sequence Modeling. [Published version] [Code]
- (AISTATS 2017) Conjugate-Computation Variational Inference: Converting Variational Inference in Non-Conjugate Models to Inferences in Conjugate Models. [Published version] [arXiv] [Code for Logistic Reg + GPs] [Code for Correlated Topic Model]
- (IEEE Symposium on Security and Privacy (S&P) 2017) SmarPer: Context-Aware and Automatic Runtime-Permissions for Mobile Devices. [Published paper] [Code] [SmarPer Homepage]
- (Building Simulation 2017) Gaussian-Process-Based Emulators for Building Performance Simulation. [Paper] [Building Simulation Data] [Code]