PAC-Bayesian Offline Contextual Bandits With Guarantees, (Preprint) O. Sakhi, N. Chopin, P. Alquier
[arXiv]
Dimension-free Bounds for Sums of Dependent Matrices and Operators with Heavy-Tailed Distributions, (Preprint) S. Nakakita, P. Alquier, M. Imaizumi
[arXiv]
Variance-Aware Estimation of Kernel Mean Embedding, (Preprint) G. Wolfer, P. Alquier
[arXiv]
Empirical and Instance-Dependent Estimation of Markov Chain and Mixing Time, (Preprint) G. Wolfer
[arXiv]
Geometric Aspects of Data-Processing of Markov Chains, (Preprint) G. Wolfer, S. Watanabe
[arXiv]
Optimal Quasi-Bayesian Reduced Rank Regression with Incomplete Response, (Preprint) T. T. Mai, P. Alquier
[arXiv]
Concentration and Robustness of Discrepancy-based ABC via Rademacher Complexity, (Preprint) S. Legramanti, D. Durante, P. Alquier
[arXiv]
User-Friendly Introduction to PAC-Bayes Bounds, (Preprint) P. Alquier
[arXiv]
The Bayesian Learning Rule, (Preprint) M.E. Khan, H. Rue
[arXiv]
Beyond Target Networks: Improving Deep Q-learning with Functional Regularization, (Preprint) A. Piché, J. Marino, G. M. Marconi, C. Pal, M. E. Khan
[arXiv]
Universal Robust Regression via Maximum Mean Discrepancy, (Preprint) P. Alquier, M. Gerber
[arXiv]
Estimation of Copulas via Maximum Mean Discrepancy, (JASA) P. Alquier, B.-E. Chérief-Abdellatif, A. Derumigny, J.-D. Fermanian
[Journal version]
[arXiv]
2023
Geometric Reduction for Identity Testing of Reversible Markov Chains, (GSI 2023) G. Wolfer, S. Watanabe
[arXiv] Oral presentation.
Simplifying Momentum-based Riemannian Submanifold Optimization, (ICML 2023) W. Lin, V. Duruisseaux, M. Leok, F. Nielsen, M.E. Khan, M. Schmidt
[arXiv]
Memory-Based Dual Gaussian Processes for Sequential Learning, (ICML 2023)
P. E. Chang, P. Verma, S. T. John, A. Solin, M.E. Khan
The Lie-Group Bayesian Learning Rule, (AISTATS 2023) E. M. Kiral, T. Möllenhoff, M. E. Khan
[arXiv]
SAM as an Optimal Relaxation of Bayes, (ICLR 2023) T. Möllenhoff, M. E. Khan
[arXiv] Notable top-5% of all accepted papers.
2022
Sequential Learning in GPs with Memory and Bayesian Leverage Score, (Continual Lifelong Workshop at ACML 2022) P. Verma, P. E. Chang, A. Solin, M.E. Khan
[OpenReview]
MasakhaNER 2.0: Africa-centric Transfer Learning for Named Entity Recognition, (EMNLP 2022) D. Adelani, G. Neubig, S. Ruder, S. Rijhwani, M. Beukman, C. Palen-Michel, C. Lignos,
J. Alabi, S. Muhammad, P. Nabende, B. Dione, A. Bukula, R. Mabuya, B. Dossou, B. Sibanda, H. Buzaaba, et al.
[arXiv]
Practical Structured Riemannian Optimization with Momentum by using Generalized Normal Coordinates, (NeurReps Workshop at NeurIPS 2022) W. Lin, V. Duruisseaux, M. Leok, F. Nielsen, M.E. Khan, M. Schmidt
[OpenReview]
Can Calibration Improve Sample Prioritization?, (HITY Workshop at NeurIPS 2022) G. Tata, G. K. Gudur, G. Chennupati, M.E. Khan
[OpenReview]
Approximate Bayesian Inference: Reprint of the Special Issue Published in Entropy, (MDPI Books) P. Alquier (Editor)
[Book page]
Tight Risk Bound for High Dimensional Time Series Completion, (EJS) P. Alquier, N. Marie, A. Rosier
[Published version]
[arXiv]
Finite Sample Properties of Parametric MMD Estimation: Robustness to Misspecification and Dependence, (Bernoulli) B.E. Chérief-Abdellatif, P. Alquier
[Published version]
[arXiv]
Meta-strategy for Learning Tuning Parameters with Guarantees, (Entropy) D. Meunier, P. Alquier
[Published version]
[arXiv]
2021
Subset-of-Data Variational Inference for Deep Gaussian-Process Regression, (UAI 2021) A. Jain, P.K. Srijith, M.E. Khan
[Published version]
[arXiv]
[Code]
Scalable Marginal Likelihood Estimation for Model Selection in Deep Learning, (ICML 2021) A. Immer, M. Bauer, V. Fortuin, G. Rätsch, M. E. Khan
[Published version]
[arXiv]
[Code]
Tractable Structured Natural Gradient Descent Using Local Parameterizations, (ICML 2021) W. Lin, F. Nielsen, M. E. Khan, M. Schmidt
[Published version]
[arXiv]
Non-Exponentially Weighted Aggregation: Regret Bounds for Unbounded Loss Functions, (ICML 2021) P. Alquier
[Published version]
[arXiv]
Improving Predictions of Bayesian Neural Networks via Local Linearization, (AISTATS 2021) A. Immer, M. Korzepa, M. Bauer
[Published version]
[arXiv]
[Code]
A Theoretical Analysis of Catastrophic Forgetting through the NTK Overlap Matrix, (AISTATS 2021) T. Doan, M. Abbana Bennani, B. Mazoure, G. Rabusseau, P. Alquier
[Published version]
[arXiv]
2020
Continual Deep Learning by Functional Regularisation of Memorable Past, (NeurIPS 2020) P. Pan*, S. Swaroop*, A. Immer, R. Eschenhagen, R. E. Turner, M.E. Khan
[Published version]
[arXiv]
[Code]
[Poster]
Oral presentation, top 1% of all submissions (105 out of 9454 submissions).
Fast Variational Learning in State-Space Gaussian Process Models, (MLSP 2020) P. E. Chang, W. J. Wilkinson, M.E. Khan, A. Solin
[Published version]
[arXiv]
Training Binary Neural Networks using the Bayesian Learning Rule, (ICML 2020) X. Meng, R. Bachmann, M.E. Khan
[Published version]
[arXiv]
[Code]
Handling the Positive-Definite Constraint in the Bayesian Learning Rule, (ICML 2020) W. Lin, M. Schmidt, M.E. Khan
[Published version]
[arXiv]
[Code]
VILD: Variational Imitation Learning with Diverse-quality Demonstrations, (ICML 2020) V. Tangkaratt, B. Han, M.E. Khan, M. Sugiyama
[Published version]
[arXiv]
[Code]
MMD-Bayes: Bayesian Estimation via Maximum Mean Discrepancy, (AABI 2019) B.E. Chérief-Abdellatif, P. Alquier
[Published version]
[arXiv]
Exact Recovery of Low-rank Tensor Decomposition under Reshuffling, (AAAI 2020) C. Li, M.E. Khan, Z. Sun, G. Niu, B. Han, S. Xie, Q. Zhao
[arXiv]
2019
Practical Deep Learning with Bayesian Principles, (NeurIPS 2019) K. Osawa, S. Swaroop, A. Jain, R. Eschenhagen, R.E. Turner, R. Yokota, M.E. Khan
[Published version]
[arXiv]
[Code]
Approximate Inference Turns Deep Networks into Gaussian Processes, (NeurIPS 2019) M.E. Khan, A. Immer, E. Abedi, M. Korzepa
[Published version]
[arXiv]
[Code]
2018
SLANG: Fast Structured Covariance Approximations for Bayesian Deep Learning with Natural Gradient, (NeurIPS 2018) A. Mishkin, F. Kunstner, D. Nielsen, M. Schmidt, M.E. Khan
[Published version]
[arXiv]
[Poster]
[3-min Video]
[Code]
Fast yet Simple Natural-Gradient Descent for Variational Inference in Complex Models, (ISITA 2018) M.E. Khan, D. Nielsen
[arXiv]
[IEEE Xplore]
[Slides]
Fast and Scalable Bayesian Deep Learning by Weight-Perturbation in Adam, (ICML 2018) M.E. Khan, D. Nielsen, V. Tangkaratt, W. Lin, Y. Gal, A. Srivastava
[Published version]
[arXiv]
[Code]
[Slides]
Variational Message Passing with Structured Inference Networks, (ICLR 2018) W. Lin, N. Hubacher, M.E. Khan
[Paper]
[arXiv]
[Code]
Bayesian Nonparametric Poisson-Process Allocation for Time-Sequence Modeling, (AISTATS 2018) H. Ding, M.E. Khan, I. Sato, M. Sugiyama
[Published version]
[Code]