A Theoretical Analysis of Catastrophic Forgetting through the NTK Overlap Matrix, (Preprint) T. Doan, M. Abbana Bennani, B. Mazoure, G. Rabusseau, P. Alquier
[arXiv]
Estimation of Copulas via Maximum Mean Discrepancy, (Preprint) P. Alquier, B.-E. Chérief-Abdellatif, A. Derumigny, J.-D. Fermanian
[arXiv]
Non-exponentially weighted aggregation: regret bounds for unbounded loss functions, (Preprint) P. Alquier
[arXiv]
Universal Robust Regression via Maximum Mean Discrepancy, (Preprint) P. Alquier, M. Gerber
[arXiv]
Learning Algorithms from Bayesian Principles, (Early draft, work in progress) M.E. Khan, H. Rue
[Draft version]
Finite sample properties of parametric MMD estimation: robustness to misspecification and dependence, (Preprint) B.E. Chérief-Abdellatif, P. Alquier
[arXiv]
Simultaneous dimension reduction and clustering via the NMF-EM algorithm, (To appear in Advances in Data Analysis and Classification) L. Carel, P. Alquier
[Paper]
[arXiv]
2020
Continual Deep Learning by Functional Regularisation of Memorable Past, (NeurIPS 2020) P. Pan*, S. Swaroop*, A. Immer, R. Eschenhagen, R.E. Turner, M.E. Khan
[Published version]
[arXiv]
[Code]
[Poster] Oral presentation, top 1% of all submissions (105 of 9454).
Fast Variational Learning in State-Space Gaussian Process Models, (MLSP 2020) P.E. Chang, W.J. Wilkinson, M.E. Khan, A. Solin
[Published version]
[arXiv]
Training Binary Neural Networks using the Bayesian Learning Rule, (ICML 2020) X. Meng, R. Bachmann, M.E. Khan
[Published version]
[arXiv] [Code]
Handling the Positive-Definite Constraint in the Bayesian Learning Rule, (ICML 2020) W. Lin, M. Schmidt, M.E. Khan
[Published version]
[arXiv]
VILD: Variational Imitation Learning with Diverse-quality Demonstrations, (ICML 2020) V. Tangkaratt, B. Han, M.E. Khan, M. Sugiyama
[Published version]
[arXiv]
MMD-Bayes: Bayesian Estimation via Maximum Mean Discrepancy, (AABI 2019) B.E. Chérief-Abdellatif, P. Alquier
[Published version]
[arXiv]
Exact Recovery of Low-rank Tensor Decomposition under Reshuffling, (AAAI 2020) C. Li, M.E. Khan, Z. Sun, G. Niu, B. Han, S. Xie, Q. Zhao
[arXiv]
2019
Practical Deep Learning with Bayesian Principles, (NeurIPS 2019) K. Osawa, S. Swaroop, A. Jain, R. Eschenhagen, R.E. Turner, R. Yokota, M.E. Khan
[Published version]
[arXiv] [Code]
Approximate Inference Turns Deep Networks into Gaussian Processes, (NeurIPS 2019) M.E. Khan, A. Immer, E. Abedi, M. Korzepa
[Published version]
[arXiv] [Code]
2018
SLANG: Fast Structured Covariance Approximations for Bayesian Deep Learning with Natural Gradient, (NeurIPS 2018) A. Mishkin, F. Kunstner, D. Nielsen, M. Schmidt, M.E. Khan
[Published version]
[arXiv]
[Poster]
[3-min Video]
[Code]
Fast yet Simple Natural-Gradient Descent for Variational Inference in Complex Models, (ISITA 2018) M.E. Khan, D. Nielsen
[arXiv] [IEEE Xplore] [Slides]
Fast and Scalable Bayesian Deep Learning by Weight-Perturbation in Adam, (ICML 2018) M.E. Khan, D. Nielsen, V. Tangkaratt, W. Lin, Y. Gal, A. Srivastava
[Published version]
[arXiv]
[Code]
[Slides]
Variational Message Passing with Structured Inference Networks, (ICLR 2018) W. Lin, N. Hubacher, M.E. Khan
[Paper]
[arXiv]
[Code]
Bayesian Nonparametric Poisson-Process Allocation for Time-Sequence Modeling, (AISTATS 2018) H. Ding, M.E. Khan, I. Sato, M. Sugiyama
[Published version]
[Code]