Mike Gimelfarb
  1. Email
  2. GitHub
  3. Google Scholar
  4. ResearchGate

Hi!

I'm currently a PhD student in the Department of Mechanical and Industrial Engineering (MIE) at the University of Toronto, supervised by Prof. Scott Sanner and Prof. Chi-Guhn Lee. My thesis focuses on improving the reliability and safety of modern reinforcement learning (RL) methods by explicitly leveraging models of epistemic and aleatoric uncertainty. My work draws on the theory and practical applications of (approximate) Bayesian inference, transfer and knowledge representation, ensemble methods, and risk theory. I completed an internship at DeepMind in 2022, and I was previously a post-graduate affiliate of the Vector Institute from 2020 to 2022.

Prior to this, I completed my master's degree (MASc) in the same department under the supervision of Prof. Michael J. Kim. My thesis focused on theoretical developments in Thompson sampling applied to queueing and admission control problems with demand uncertainty. I received my Bachelor of Business Administration (BBA) from the Schulich School of Business in 2014, graduating with distinction.

I enjoy reading books on cognitive science and playing classical piano in my spare time.

Selected Publications

Conservative Bayesian Model-Based Value Expansion for Offline Policy Optimization
Jihwan Jeong, Xiaoyu Wang, Michael Gimelfarb, Hyunwoo Kim, Baher Abdulhai, Scott Sanner
arXiv, 2022

A Distributional Framework for Risk-Sensitive End-to-End Planning in Continuous MDPs
Noah Patton, Jihwan Jeong, Michael Gimelfarb, Scott Sanner
AAAI, 2022

Risk-Aware Transfer in Reinforcement Learning using Successor Features
Michael Gimelfarb, André Barreto, Scott Sanner, Chi-Guhn Lee
NeurIPS, 2021

End-to-End Risk-Aware Planning by Gradient Descent
Noah Patton, Jihwan Jeong, Michael Gimelfarb, Scott Sanner
ICAPS Workshop on Bridging the Gap Between AI Planning and Reinforcement Learning, 2021

Bayesian Experience Reuse for Learning from Multiple Demonstrators
Michael Gimelfarb, Scott Sanner, Chi-Guhn Lee
IJCAI, 2021

Contextual Policy Transfer in Reinforcement Learning Domains via Deep Mixtures-of-Experts
Michael Gimelfarb, Scott Sanner, Chi-Guhn Lee
UAI, 2021

ε-BMC: A Bayesian Ensemble Approach to Epsilon-Greedy Exploration in Model-Free Reinforcement Learning
Michael Gimelfarb, Scott Sanner, Chi-Guhn Lee
UAI, 2019

Reinforcement Learning with Multiple Experts: A Bayesian Model Combination Approach
Michael Gimelfarb, Scott Sanner, Chi-Guhn Lee
NeurIPS, 2018

Thompson Sampling for the Control of a Queue with Demand Uncertainty
Michael Gimelfarb, Michael J. Kim
Master's Thesis, 2017

Selected Projects

A reusable implementation of successor features for transfer in deep reinforcement learning, written in TensorFlow and Keras.
Michael Gimelfarb

A general framework for building and training constructive neural networks in TensorFlow and Keras.
Michael Gimelfarb
