List of Publications
For a list of code releases, see our research page.
Early Drafts/Preprints
- Optimistic Estimation of Convergence in Markov Chains with the Average-Mixing Time, (Preprint) [arXiv]
- Dimension-free Bounds for Sum of Dependent Matrices and Operators with Heavy-Tailed Distribution, (Preprint) [arXiv]
- Variance-Aware Estimation of Kernel Mean Embedding, (Preprint) [arXiv]
- Optimal Quasi-Bayesian Reduced Rank Regression with Incomplete Response, (Preprint) [arXiv]
- Concentration and Robustness of Discrepancy-based ABC via Rademacher Complexity, (Preprint) [arXiv]
In press / to appear
- Geometric Aspects of Data-Processing of Markov Chains, (Transactions of Mathematics and Its Applications) [arXiv]
2024
- Variational Low-Rank Adaptation Using IVON, (Fine-Tuning in Modern ML (FITML) at NeurIPS 2024) [OpenReview]
- Variational Learning is Effective for Large Deep Networks, (ICML 2024) [arXiv] [Blog] [Code]
- Position Paper: Bayesian Deep Learning in the Age of Large-Scale AI, (ICML 2024) [arXiv]
- Improved Estimation of Relaxation Time in Non-reversible Markov Chains, (Annals of Applied Probability) [Published version] [arXiv]
- Model Merging by Uncertainty-Based Gradient Matching, (ICLR 2024) [arXiv] [Code]
- Conformal Prediction via Regression-as-Classification, (ICLR 2024) [OpenReview] [arXiv] [Code] [Package]
2023
- Improving Continual Learning by Accurate Gradient Reconstructions of the Past, (TMLR) [OpenReview] [Code]
- The Bayesian Learning Rule, (JMLR) [Published version] [arXiv] [Tweet]
- The Memory Perturbation Equation: Understanding Model’s Sensitivity to Data, (NeurIPS 2023) [arXiv] [SlidesLive Video] [Poster] [Code]
- Bridging the Gap Between Target Networks and Functional Regularization, (TMLR) [OpenReview]
- Variational Bayes Made Easy, (AABI 2023) [arXiv]
- Estimation of Copulas via Maximum Mean Discrepancy, (JASA) [Journal version] [arXiv]
- Empirical and Instance-Dependent Estimation of Markov Chain and Mixing Time, (Scandinavian Journal of Statistics) [arXiv] [Journal version]
- Systematic Approaches to Generate Reversiblizations of Markov Chains, (IEEE Transactions on Information Theory) [arXiv] [Early Access]
- Learning and Identity Testing of Markov Chains, (Handbook of Statistics, Volume 49) [Journal version]
- Exploiting Inferential Structure in Neural Processes, (UAI 2023) [Published version] [arXiv] [Poster]
- Information Geometry of Markov Kernels: a Survey, in "Advances in Information Geometry: Beyond the Conventional Approach", (Front. Phys. Sec. Statistical and Computational Physics) [Journal version]
- MasakhaPOS: Part-of-Speech Tagging for Typologically Diverse African Languages, (ACL 2023) [arXiv]
- Geometric Reduction for Identity Testing of Reversible Markov Chains, (GSI 2023) [Published version] [arXiv]. Oral presentation.
- Simplifying Momentum-based Riemannian Submanifold Optimization, (ICML 2023) [arXiv]
- Memory-Based Dual Gaussian Processes for Sequential Learning, (ICML 2023), P. E. Chang, P. Verma, S. T. John, A. Solin, M. E. Khan
- Dimension-Free Empirical Entropy Estimation, (IEEE Transactions on Information Theory) [Journal version] [arXiv]
- The Lie-Group Bayesian Learning Rule, (AISTATS 2023) [arXiv] [Code]
- SAM as an Optimal Relaxation of Bayes, (ICLR 2023) [arXiv] [Code]. Notable top-5% of all accepted papers.
2022
- Sequential Learning in GPs with Memory and Bayesian Leverage Score, (Continual Lifelong Workshop at ACML 2022) [OpenReview]
- MasakhaNER 2.0: Africa-centric Transfer Learning for Named Entity Recognition, (EMNLP 2022) [arXiv]
- Practical Structured Riemannian Optimization with Momentum by using Generalized Normal Coordinates, (NeurReps Workshop at NeurIPS 2022) [OpenReview]
- Can Calibration Improve Sample Prioritization?, (HITY Workshop at NeurIPS 2022) [OpenReview]
- Exploiting Inferential Structure in Neural Processes, (Workshop on Tractable Probabilistic Modeling at UAI 2022) [OpenReview] [Video] [Poster]
- Deviation Inequalities for Stochastic Approximation by Averaging, (SPA) [Published version] [arXiv]
- Understanding the Population Structure Correction Regression, (ICSTA 2022) [Published version] [arXiv]
- Approximate Bayesian Inference: Reprint of the Special Issue Published in Entropy, (MDPI Books) [Book page]
- Tight Risk Bound for High Dimensional Time Series Completion, (EJS) [Published version] [arXiv]
- Finite Sample Properties of Parametric MMD Estimation: Robustness to Misspecification and Dependence, (Bernoulli) [Published version] [arXiv]
2021
- Knowledge-Adaptation Priors, (NeurIPS 2021) [Published version] [arXiv] [Slides] [Tweet] [SlidesLive Video] [Code]
- Dual Parameterization of Sparse Variational Gaussian Processes, (NeurIPS 2021) [Published version] [arXiv] [Code]
- Meta-strategy for Learning Tuning Parameters with Guarantees, (Entropy) [Published version] [arXiv]
- Subset-of-Data Variational Inference for Deep Gaussian-Process Regression, (UAI 2021) [Published version] [arXiv] [Code]
- Scalable Marginal Likelihood Estimation for Model Selection in Deep Learning, (ICML 2021) [Published version] [arXiv] [Code]
- Tractable Structured Natural Gradient Descent Using Local Parameterizations, (ICML 2021) [Published version] [arXiv]
- Non-Exponentially Weighted Aggregation: Regret Bounds for Unbounded Loss Functions, (ICML 2021) [Published version] [arXiv]
- Improving Predictions of Bayesian Neural Networks via Local Linearization, (AISTATS 2021) [Published version] [arXiv] [Code]
- A Theoretical Analysis of Catastrophic Forgetting through the NTK Overlap Matrix, (AISTATS 2021) [Published version] [arXiv]
- Simultaneous Dimension Reduction and Clustering via the NMF-EM Algorithm, (Advances in Data Analysis and Classification) [Published version] [arXiv]
2020
- Continual Deep Learning by Functional Regularisation of Memorable Past, (NeurIPS 2020) [Published version] [arXiv] [Code] [Poster]. Oral presentation, 1% of all submissions (105 out of 9454 submissions).
- Approximate Bayesian Inference, (Entropy) [Paper]
- Concentration of tempered posteriors and of their variational approximations, (Annals of Statistics) [Published version] [arXiv]
- Fast Variational Learning in State-Space Gaussian Process Models, (MLSP 2020) [Published version] [arXiv]
- High-dimensional VAR with low-rank transition, (Statistics and Computing) [Published version] [arXiv]
- AI for Social Good: Unlocking the Opportunity for Positive Impact, (Nature Communications 2020) [Paper] [Declaration on AI2SG] [Dagstuhl AI4SG 2019] [Press Release]
- Training Binary Neural Networks using the Bayesian Learning Rule, (ICML 2020) [Published version] [arXiv] [Code]
- Handling the Positive-Definite Constraint in the Bayesian Learning Rule, (ICML 2020) [Published version] [arXiv] [Code]
- VILD: Variational Imitation Learning with Diverse-quality Demonstrations, (ICML 2020) [Published version] [arXiv] [Code]
- MMD-Bayes: Bayesian Estimation via Maximum Mean Discrepancy, (AABI 2019) [Published version] [arXiv]
- Exact Recovery of Low-rank Tensor Decomposition under Reshuffling, (AAAI 2020) [arXiv]
2019
- Practical Deep Learning with Bayesian Principles, (NeurIPS 2019) [Published version] [arXiv] [Code]
- Approximate Inference Turns Deep Networks into Gaussian Processes, (NeurIPS 2019) [Published version] [arXiv] [Code]
- A Generalization Bound for Online Variational Inference (best paper award), (ACML 2019) [Published version] [arXiv]
- Matrix factorization for multivariate time series analysis, (EJS) [Published version] [arXiv]
- Fast and Simple Natural-Gradient Variational Inference with Mixture of Exponential-family Approximations, (ICML 2019) [arXiv] [Published version] [Code]. Also appeared at the Symposium on Advances in Approximate Bayesian Inference at NeurIPS 2018 [Short Paper].
- Stein's Lemma for the Reparameterization Trick with Exponential Family Mixtures, (ICML Workshop on Stein's Method in ML and Stats, 2019) [arXiv]
- Scalable Training of Inference Networks for Gaussian-Process Models, (ICML 2019) [arXiv] [Published version] [Code]
- TD-Regularized Actor-Critic Methods, (Machine Learning, 2019; a short version appeared at EWRL 2018) [Published version] [arXiv] [Short version at EWRL 2018]
2018
- SLANG: Fast Structured Covariance Approximations for Bayesian Deep Learning with Natural Gradient, (NeurIPS 2018) [Published version] [arXiv] [Poster] [3-min Video] [Code]
- Natural Variational Continual Learning, (Continual Learning Workshop at NIPS 2018) [Paper]
- Fast yet Simple Natural-Gradient Descent for Variational Inference in Complex Models, (ISITA 2018) [arXiv] [IEEE Xplore] [Slides]
- Fast and Scalable Bayesian Deep Learning by Weight-Perturbation in Adam, (ICML 2018) [Published version] [arXiv] [Code] [Slides]
- Variational Message Passing with Structured Inference Networks, (ICLR 2018) [Paper] [arXiv version] [Code]
- Bayesian Nonparametric Poisson-Process Allocation for Time-Sequence Modeling, (AISTATS 2018) [Published version] [Code]
2017
- Vprop: Variational Inference using RMSprop, (NIPS 2017, Workshop on Bayesian Deep Learning) [Workshop version] [Poster]
- Variational Adaptive-Newton Method for Explorative-Learning, (NIPS 2017, Workshop on Advances in Approximate Bayesian Inference) [arXiv version] [Poster]
- Natural-Gradient Stochastic Variational Inference for Non-Conjugate Structured Variational Autoencoder, (ICML 2017, Workshop on Deep Structure Prediction) [Paper]
- Conjugate-Computation Variational Inference: Converting Variational Inference in Non-Conjugate Models to Inferences in Conjugate Models, (AISTATS 2017) [Published version] [arXiv] [Code for Logistic Reg + GPs] [Code for Correlated Topic Model]
- SmarPer: Context-Aware and Automatic Runtime-Permissions for Mobile Devices, (38th IEEE Symposium on Security and Privacy (S&P), San Jose, CA, USA, May 22-24, 2017) [Published paper] [Code] [SmarPer Homepage]
- Gaussian-Process-Based Emulators for Building Performance Simulation, (Building Simulation 2017) [Paper] [Building Simulation Data] [Code]