I’m a joint JD-PhD (Computer Science, AI) candidate at Stanford University, where I’m lucky enough to be advised by Dan Jurafsky for my PhD and Dan Ho for my JD at Stanford Law School. I’m also an Open Philanthropy AI Fellow and a Graduate Student Fellow at the Regulation, Evaluation, and Governance Lab. At Stanford Law School, I co-led the Domestic Violence Pro Bono Project, worked on client representation with the Three Strikes Project, and contributed to the Stanford Native Law Pro Bono Project. Previously, I was fortunate to be advised by David Meger and Joelle Pineau for my M.Sc. at McGill University and the Montréal Institute for Learning Algorithms.
I also spent time as a Software Engineer and Applied Scientist at Amazon AWS/Alexa and worked with Justice Cuéllar at the California Supreme Court. Currently, I’m a part-time researcher with the Internal Revenue Service’s Research, Applied Analytics and Statistics Division and a Technical Advisor at the Institute for Security+Technology.
My research focuses on aligning machine learning, law, and policy for responsible real-world deployments. This alignment process is twofold: (1) guided by law, policy, and ethics, develop general AI systems capable of safely tackling longstanding challenges in government and society; (2) empowered by a deep technical understanding of AI, ensure that laws and policies keep general AI systems safe and beneficial for all.
Some of my work has received coverage from TechCrunch, Science, The Wall Street Journal, Bloomberg, and other outlets. I also occasionally post a roundup of news at the intersection of AI, Law, and Policy; the latest is here. More broadly, I’m interested in a wide range of technical machine learning research, policy, and legal work, so get in touch if you’d like to collaborate!