Invited Talk

How blockers can turn into a paper: A retrospective on 'Towards The Systematic Reporting of the Energy and Carbon Footprints of Machine Learning'

Talk based on the paper 'Towards the Systematic Reporting of the Energy and Carbon Footprints of Machine Learning', written with co-authors Jieru Hu, Joshua Romoff, Emma Brunskill, Dan Jurafsky, and Joelle Pineau. I reflect on the journey …

Separating Value Functions Across Time-scales

Talk based on the paper of the same name, written with co-authors Joshua Romoff, Ahmed Touati, Emma Brunskill, Joelle Pineau, and Yann Ollivier. In many finite-horizon episodic reinforcement learning (RL) settings, it is desirable to optimize for the undiscounted return - in …

Benchmarking and Evaluation in Inverse Reinforcement Learning

Benchmarks are particularly useful for characterizing algorithms and determining their suitability in different settings. Here I highlight the need for greater standardization of performance evaluations in inverse reinforcement learning.

Reproducibility and Replicability in Deep Reinforcement Learning (and Other Deep Learning Methods)

In recent years, significant progress has been made in solving challenging problems using deep learning. Reproducing existing work and accurately judging the improvements offered by novel methods is vital to sustaining this progress. Unfortunately, …

Show Me the Data! On the Reproducibility of Policy Gradient Methods for Continuous Control

Talk based on work with co-authors Riashat Islam, Maziar Gomrokchi, and Doina Precup. A brief discussion of some difficulties that students may encounter when reproducing modern policy gradient methods in continuous control tasks, and best practices …

Practical Tutorial on Policy Gradients for Continuous Control

A practical tutorial session on implementing and running policy gradient methods for continuous control tasks.
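To give a flavor of what "implementing and running" a policy gradient method involves, the sketch below shows a minimal REINFORCE update on a hypothetical two-armed bandit (a toy problem chosen for self-containedness, not the tutorial's actual environment or code): a softmax policy over two arms is updated in the direction of the log-probability gradient scaled by the observed reward.

```python
import numpy as np

# Minimal REINFORCE sketch on a toy two-armed bandit (illustrative only).
# Arm 1 pays more in expectation, so the learned policy should come to
# prefer it.
rng = np.random.default_rng(0)
theta = np.zeros(2)   # logits of a softmax policy over the two arms
alpha = 0.1           # learning rate

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

for _ in range(2000):
    probs = softmax(theta)
    a = rng.choice(2, p=probs)                     # sample an action
    r = rng.normal(1.0 if a == 1 else 0.2, 0.1)    # arm 1 is better
    grad_logp = -probs
    grad_logp[a] += 1.0        # grad of log pi(a | theta) for a softmax policy
    theta += alpha * r * grad_logp                 # REINFORCE update

probs = softmax(theta)
```

The same update generalizes to continuous control by replacing the softmax over arms with, e.g., a Gaussian policy over actions and the single-step reward with the episode return; variance reduction (baselines) and careful hyperparameter reporting are exactly where the reproducibility difficulties discussed in the talks above arise.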