Recursive Safeguarding

Make AI agents steerable and aligned

We engineer robust frameworks that ensure complex AI systems operate predictably and always remain under human control.

Backed by ARIA, the UK's high-risk, high-reward research agency.

Our team is featured in the ARIA TA2 Creators program, bringing expertise from Oxford, Mila, and Anthropic.

The Researchers

Our Team

World-class researchers and engineers united by the mission to build safer AI systems

Younesse Kaddar

Director of Recursive Safeguarding Ltd

ARIA SGAI TA1 & Opportunity Seed grants, University of Oxford

Finishing a PhD; formerly at Mila, where he worked with Yoshua Bengio

Worked on LLM hallucinations at Cohere

Technical Co-founder and CTO of RightPick

Rob Cornish

Nightingale Fellow at the University of Oxford

Co-founder and Director of Quro Medical

Moving to a faculty role at NTU Singapore

Machine Learning Researcher (computational statistics, deep generative modeling, causal inference)

Sam Staton

Professor at the University of Oxford

ARIA SGAI TA1 & Opportunity Seed grants (SynthStats: GFlowNet fine-tuning)

ERC Consolidator Grant BLAST: Better Languages for Statistics

Pedro Amorim

Faculty at the University of Bath

Expert in programming language theory, formal verification, and categorical semantics

Nikolaj Jensen

PhD student in AI and theory at the University of Oxford

ARIA SGAI TA1 via Adjoint Labs Ltd

Jacek Karwowski

PhD student in AI and theory at the University of Oxford

SERI MATS scholar, funded by Open Philanthropy

Published work on probabilistic programming and RL safety

MM

Independent researcher and entrepreneur

Led Exa-scale Safeguards Research at Anthropic

Worked on Scientist AI with Yoshua Bengio at Mila

Co-founder of the Alignment team at TII (Falcon 1 LLM)

Previously co-founded an EdTech startup (exited)

Paolo Perrone

Postdoc at the University of Oxford

ARIA SGAI TA1 projects through Oxford and Adjoint Labs Ltd

Leading contributor to the theory of Markov categories

Ali Zein

Tech and startup background (Munich, Oxford, Cambridge)

Member of the advisory board of the Smith School of Enterprise and the Environment

Get in Touch

Let's Connect

Whether you're an investor, researcher, or organization interested in AI safety, we'd love to hear from you.

Send us a message

Or email us directly at