My DPhil is about the development of formal methods for multi-agent and machine learning systems in order to help build provably safe and beneficial AI. My research falls primarily at the intersection of game theory, reinforcement learning, and control and verification. I am supervised by Michael Wooldridge, Alessandro Abate, and Julian Gutierrez, and am also a DPhil Affiliate at the Future of Humanity Institute. Before coming to Oxford I worked as an intern on Imandra and was a research assistant for Jacques Fleuriot at the University of Edinburgh, where I completed my MSc in artificial intelligence under the supervision of Vaishak Belle. Prior to this I studied for my BSc in mathematics and philosophy at the University of Warwick with Walter Dean. For information about my previous work see some of the links below, or alternatively my CV.
My research interests are broad but mostly centre around the combination of statistical and symbolic AI, and in particular how this combination can be used to improve robustness, explainability, and safety. Currently these interests are focussed on developing techniques to rigorously identify or induce particular properties of multi-agent systems under their game-theoretic equilibria, especially systems that operate in uncertain (partially known, partially observable, stochastic, etc.) environments. Related topics that I have worked on before include statistical relational learning, formal verification of learnt models, and deep symbolic reinforcement learning. Much of my work is also concerned with how agents represent and reason about preferences in the face of uncertainty; examples include projects on representing non-Markovian reward structures using automata, learning models of cognitive biases using inverse reinforcement learning, and my master's dissertation on computational frameworks for moral decision-making. Finally, I also take an interest in the governance, ethics, and societal impact of AI, though I do not research these topics full-time.
Outside of academia my main passion is music, but I also love film and art. I like to travel whenever I get the opportunity, especially to Scandinavia (where I previously lived), and in my spare time I enjoy reading, (vegan) cooking, print-making, and clubbing. Ethically and politically I consider myself an effective altruist (yes, it's a bit of a strange name), a humanist, and a Fabian (more generally, a democratic socialist).
31.03.21 - I'm excited to be joining The Future Society as part of their 2021-22 Affiliate Cohort, in order to assist with project development and implementation regarding their work on AI governance.
13.03.21 - This weekend I'll be (virtually) attending the Stanford Existential Risks Conference.
12.03.21 - The website for the Causal Incentives Working Group, a set of researchers from DeepMind, Oxford, and Toronto working to develop a causal theory of incentives, is now online. Check out our recent papers and software via the link above, and feel free to get in touch if interested in this work.
18.12.20 - I'm happy to announce that not one but two papers I co-authored earlier this year, "Multi-Agent Reinforcement Learning with Temporal Logic Specifications" and "Equilibrium Refinements for Multi-Agent Influence Diagrams: Theory and Practice", have been accepted as full papers at AAMAS-21.
15.11.20 - The work from my master's thesis in Edinburgh was recently published in Data Mining and Knowledge Discovery as Learning Tractable Probabilistic Models for Moral Responsibility and Blame.
22.09.20 - I helped to write the new Young Fabians pamphlet on AI, which launches today at the Labour Party Annual Conference 2020.
29.07.20 - I'm flattered to have been selected for a Departmental Teaching Award due to "excellent student feedback" as a class tutor for the AI and Computational Game Theory courses at Oxford this academic year.
15.02.20 - I'll be giving a short presentation on modelling agent incentives, and joining a panel discussion on AI alignment (alongside Rohin Shah and Michael Cohen), at next week's AI Ethics Meetup in London.
09.12.19 - My previous supervisor and co-author Vaishak Belle will be presenting our poster on Tractable Probabilistic Models for Moral Responsibility at the Knowledge Representation & Reasoning Meets Machine Learning (KR2ML) Workshop at NeurIPS on the 13th of December.
13.09.19 - This weekend I'll be in Switzerland attending the AI Governance Careers workshop at ETH Zurich.
01.07.19 - I'm very excited to be joining Aesthetic Integration as a research and engineering intern for the coming months in Edinburgh, where I'll be working on Imandra - a cloud-native automated reasoning engine.
17.05.19 - I have been invited to give a short presentation and join a panel discussion on 'Fair machines: Student perspective on Data Justice and Ethics' as part of the Data Justice Week taking place in Edinburgh between the 20th-24th of May.
26.04.19 - I will be attending the third AI Safety Camp in Ávila, Spain for the next week and a half, where I am working on a group project to improve preference elicitation by first learning models of biased and mistaken behaviour in agents.
09.04.19 - A short version of my first paper, Deep Tractable Probabilistic Models for Moral Responsibility, has been accepted for presentation at the Human-Like Computing Third Wave of AI Workshop (3AI-HLC 2019) taking place at Imperial College London on the 26th of April.
03.03.19 - I am delighted to have accepted an offer to study for a DPhil in computer science at the University of Oxford later this year, which will be generously funded by an EPSRC Doctoral Training Partnership studentship.