Humans, animals, and other living beings have a natural ability to learn autonomously throughout their lives and to adapt quickly to their surroundings, but computers lack such abilities. Our goal is to bridge this gap between the learning of living beings and that of computers. We are machine-learning researchers with expertise in areas such as approximate inference, Bayesian statistics, continuous optimization, and information geometry. We work on a variety of learning problems, especially those involving supervised, continual, active, federated, online, and reinforcement learning.
Currently, we are developing algorithms that enable computers to autonomously learn to perceive, act, and reason throughout their lives. Our research often brings together ideas from a variety of theoretical and applied fields, such as mathematical optimization, Bayesian statistics, information geometry, signal processing, and control systems.
For more information, see our current publications and the following pages:
- Approximate Bayesian Inference Unit at OIST
- The Bayes-Duality Project (funded by CREST)
- Summary of our research for the years [2021, 2020] [2019] [2018] [2017]
We are also grateful for the following external funding (amounts are approximate):
- (2023-2026, USD 32,000) KAKENHI Grant-in-Aid for Early-Career Scientists, [23K13024]
- (2021-2026, USD 2.23 million) Joint JST-CREST and French ANR grant, The Bayes-Duality Project
- (2020-2023, USD 167,000) KAKENHI Grant-in-Aid for Scientific Research (B), Life-Long Deep Learning using Bayesian Principles
- (2019-2022, USD 237,000) External funding from companies for several Bayes-related projects
This blog provides a medium for our researchers to present their recent research findings, insights, and updates. The posts are written with a general audience in mind and aim to provide an accessible introduction to our research.
Here we list research code that has been open-sourced to accompany recent publications.
- (AISTATS 2023) The Lie-Group Bayesian Learning Rule [arXiv] [Code]
- (ICLR 2023) SAM as an Optimal Relaxation of Bayes [arXiv] [Code]
- (NeurIPS 2021) Knowledge-Adaptation Priors. [arXiv] [Slides] [Tweet] [SlidesLive Video] [Code]
- (NeurIPS 2021) Dual Parameterization of Sparse Variational Gaussian Processes. [arXiv] [Code]
- (AISTATS 2021) Improving predictions of Bayesian neural networks via local linearization. [Published version] [arXiv] [Code]
- (ICML 2021) Scalable Marginal Likelihood Estimation for Model Selection in Deep Learning. [Published version] [arXiv] [Code]
- (UAI 2021) Subset-of-Data Variational Inference for Deep Gaussian-Process Regression. [arXiv] [Code]
- (NeurIPS 2020) Continual Deep Learning by Functional Regularisation of Memorable Past. [Published version] [arXiv] [Code] [Poster]
- (ICML 2020) Handling the Positive-Definite Constraint in the Bayesian Learning Rule. [Published version] [arXiv] [Code]
- (ICML 2020) Training Binary Neural Networks using the Bayesian Learning Rule. [Published version] [arXiv] [Code]
- (ICML 2020) VILD: Variational Imitation Learning with Diverse-quality Demonstrations. [Published version] [arXiv] [Code]
- (NeurIPS 2019) Practical Deep Learning with Bayesian Principles. [Published version] [arXiv] [Code]
- (NeurIPS 2019) Approximate Inference Turns Deep Networks into Gaussian Processes. [Published version] [arXiv] [Code]
- (ICML 2019) Fast and Simple Natural-Gradient Variational Inference with Mixture of Exponential-family Approximations. [arXiv] [Published version] [Code]
- (ICML 2019) Scalable Training of Inference Networks for Gaussian-Process Models. [arXiv] [Published version] [Code]
- (NeurIPS 2018) SLANG: Fast Structured Covariance Approximations for Bayesian Deep Learning with Natural Gradient. [Published version] [arXiv] [Poster] [3-min Video] [Code]
- (ICML 2018) Fast and Scalable Bayesian Deep Learning by Weight-Perturbation in Adam. [Published version] [arXiv] [Code] [Slides]
- (ICLR 2018) Variational Message Passing with Structured Inference Networks. [Paper] [arXiv Version] [Code]
- (AISTATS 2018) Bayesian Nonparametric Poisson-Process Allocation for Time-Sequence Modeling. [Published version] [Code]
- (AISTATS 2017) Conjugate-Computation Variational Inference: Converting Variational Inference in Non-Conjugate Models to Inferences in Conjugate Models. [Published version] [arXiv] [Code for Logistic Reg + GPs] [Code for Correlated Topic Model]
- (38th IEEE Symposium on Security and Privacy (S&P) 2017) SmarPer: Context-Aware and Automatic Runtime-Permissions for Mobile Devices. [Published paper] [Code] [SmarPer Homepage]
- (Building Simulation 2017) Gaussian-Process-Based Emulators for Building Performance Simulation. [Paper] [Building Simulation Data] [Code]