Mohammad Emtiyaz Khan

Professor Khan lecturing.
Mohammad Emtiyaz Khan
External Professor
PhD University of British Columbia, Vancouver, Canada

Profile

Emtiyaz Khan (also known as Emti) is a team leader at the RIKEN Center for Advanced Intelligence Project (AIP) in Tokyo, where he leads the Approximate Bayesian Inference Team. Previously, he was a postdoctoral fellow and then a scientist at the École Polytechnique Fédérale de Lausanne (EPFL), where he also taught two large machine learning courses and received a teaching award. He completed his PhD in machine learning at the University of British Columbia in 2012. The main goal of Emti's research is to understand the principles of learning from data and to use them to develop algorithms that can learn like living beings. For the past 10 years, his work has focused on developing Bayesian methods that could lead to such fundamental principles. The Approximate Bayesian Inference Team now continues to use these principles, as well as to derive new ones, to solve real-world problems.

Professional Experience

  • RIKEN Center for Advanced Intelligence Project (Team Leader, 2016-current)
  • Tokyo University of Agriculture and Technology (Visiting Professor, 2018-2021)
  • École polytechnique fédérale de Lausanne (Research Collaborator, 2015)
  • École polytechnique fédérale de Lausanne (Postdoctoral Fellow, 2013-2014)

Awards

  • Best paper award, Asian Conference on Machine Learning, 2019
  • Teaching award, École polytechnique fédérale de Lausanne (EPFL), 2015

Select Publications

  1. Pan, P., Swaroop, S., Immer, A., Eschenhagen, R., Turner, R., and Khan, M. E. (2020). Continual Deep Learning by Functional Regularisation of Memorable Past. Advances in Neural Information Processing Systems 33, pages 4453–4464.
  2. Tomašev, N., Cornebise, J., Hutter, F., Mohamed, S., Picciariello, A., Connelly, B., Belgrave, D., Ezer, D., Haert, F. C. v. d., Mugisha, F., Abila, G., Arai, H., …, Khan, M. E., …, De Winne, R., Schaul, T., and Clopath, C. (2020). AI for social good: unlocking the opportunity for positive impact. Nature Communications 11, 2468.
  3. Meng, X., Bachmann, R., and Khan, M. E. (2020). Training Binary Neural Networks using the Bayesian Learning Rule. Proceedings of the 37th International Conference on Machine Learning.
  4. Osawa, K., Swaroop, S., Jain, A., Eschenhagen, R., Turner, R. E., Yokota, R., and Khan, M. E. (2019). Practical Deep Learning with Bayesian Principles. Advances in Neural Information Processing Systems 32, pages 4287–4299.
  5. Khan, M. E., Immer, A., Abedi, E., and Korzepa, M. (2019). Approximate Inference Turns Deep Networks into Gaussian Processes. Advances in Neural Information Processing Systems 32, pages 3094–3104.
  6. Chérief-Abdellatif, B.-E., Alquier, P., and Khan, M. E. (2019). A Generalization Bound for Online Variational Inference. Proceedings of the 11th Asian Conference on Machine Learning, volume 101 of Proceedings of Machine Learning Research, pages 662–677.
  7. Parisi, S., Tangkaratt, V., Peters, J., and Khan, M. E. (2019). TD-regularized actor-critic methods. Machine Learning, 108(8-9), pages 1467–1501.
  8. Khan, M. E., Nielsen, D., Tangkaratt, V., Lin, W., Gal, Y., and Srivastava, A. (2018). Fast and Scalable Bayesian Deep Learning by Weight-Perturbation in Adam. Proceedings of the 35th International Conference on Machine Learning, volume 80 of Proceedings of Machine Learning Research, pages 2611–2620.
  9. Khan, M. E. and Lin, W. (2017). Conjugate-Computation Variational Inference: Converting Variational Inference in Non-Conjugate Models to Inferences in Conjugate Models. Proceedings of the 20th International Conference on Artificial Intelligence and Statistics, volume 54, pages 878–887.
  10. Khan, M. E., Aravkin, A., Friedlander, M., and Seeger, M. (2013). Fast Dual Variational Inference for Non-Conjugate Latent Gaussian Models. Proceedings of the 30th International Conference on Machine Learning, volume 28 of Proceedings of Machine Learning Research, pages 951–959.