Make AI agents steerable and aligned
We engineer robust frameworks that ensure complex AI systems operate predictably and always remain under human control.
Backed by ARIA, the UK's high-risk, high-reward research agency.
Our team is featured in the ARIA TA2 Creators programme, bringing expertise from Oxford, Mila, and Anthropic.
Our Team
World-class researchers and engineers united by the mission to build safer AI systems

Director of Recursive Safeguarding Ltd
University of Oxford, ARIA SGAI TA1 & Opportunity Seed
Finishing a PhD; formerly at Mila (worked with Yoshua Bengio)
Worked on LLM hallucinations at Cohere
Technical Co-founder and CTO of RightPick

Nightingale Fellow at the University of Oxford
Co-founder and Director of Quro Medical
Moving to a faculty role at NTU Singapore
Machine Learning Researcher (computational statistics, deep generative modeling, causal inference)

Professor at the University of Oxford
ARIA SGAI TA1 grants & Opportunity Seed (SynthStats: GFlowNet-finetuning)
ERC Consolidator Grant BLAST: Better Languages for Statistics

Faculty member at the University of Bath
Expert in programming language theory, formal verification, and categorical semantics

PhD student in AI and theory at the University of Oxford
SERI MATS scholar, funded by Open Philanthropy
Published work on probabilistic programming and RL safety

Independent researcher and entrepreneur
Led Exa-scale Safeguards Research at Anthropic
Worked on Scientist AI with Yoshua Bengio at Mila
Co-founder of the Alignment team at TII (Falcon 1 LLM)
Co-founder of a previous EdTech startup (exited)

Postdoctoral researcher at the University of Oxford
Projects with ARIA SGAI TA1 through Oxford and Adjoint Labs Ltd
Leading contributor to Markov category theory

Tech and start-up background (Munich, Oxford, Cambridge)
Member of the advisory board of the Smith School of Enterprise and the Environment
Let's Connect
Whether you're an investor, researcher, or organization interested in AI safety, we'd love to hear from you.
Send us a message
Or email us directly at
