Recursive Safeguarding

Make AI agents steerable and aligned

We engineer robust frameworks that ensure complex AI systems operate predictably and always remain under human control.

Backed by ARIA, the UK's high-risk, high-reward research agency.

Our team is featured in the ARIA TA2 Creators program, bringing expertise from Oxford, Mila, and Anthropic.

Provable Safety Through Formal Models

Approach Diagram Visualization

We develop frameworks centered on verifiable world models to ensure the safe and trustworthy deployment of AI agents. Our approach is built on the principle of keeping "humans in the loop by design" to guarantee that AI actions remain consistent with latent human preferences.

Our methodology provides robust safety guarantees by enabling domain experts to synthesise a formal, auditable world model and an explicit safety specification. We then generate a policy for an AI agent together with a mathematical proof that the agent, operating within this framework, will never violate that specification. By leveraging formal verification and proof assistants alongside the modern large-scale machine learning stack, we reduce catastrophic risk and build systems that are robust, controllable, and faithful to human intent in complex, real-world environments.
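
For illustration only, here is a minimal sketch in Lean 4 of the kind of artefact this pipeline produces. It is a toy example rather than our actual framework, Lean 4 here stands in for whichever proof assistant is used, and every name in it (Action, step, Safe, policy) is hypothetical: a tiny world model (a counter the agent can raise or hold), an explicit safety specification (the counter never exceeds 10), a candidate policy, and a machine-checked proof that the policy can never take a safe state to an unsafe one.

-- Toy, hypothetical example; not Recursive Safeguarding's actual models or tooling.
inductive Action
  | inc   -- raise the level by one
  | hold  -- leave the level unchanged

-- World model: how each action transforms the state (a resource level).
def step (s : Nat) : Action → Nat
  | .inc  => s + 1
  | .hold => s

-- Explicit safety specification: the level must never exceed 10.
def Safe (s : Nat) : Prop := s ≤ 10

-- Candidate agent policy: only increment while strictly below the bound.
def policy (s : Nat) : Action :=
  if s < 10 then .inc else .hold

-- Proof obligation, checked by Lean: from any safe state, the action chosen
-- by the policy leads to a state that still satisfies the specification.
theorem policy_preserves_safety (s : Nat) (h : Safe s) : Safe (step s (policy s)) := by
  unfold Safe at h ⊢
  unfold policy
  by_cases hlt : s < 10
  · rw [if_pos hlt]      -- the policy picks .inc
    simp only [step]     -- step s .inc = s + 1
    omega                -- s < 10 implies s + 1 ≤ 10
  · rw [if_neg hlt]      -- the policy picks .hold
    simp only [step]     -- step s .hold = s
    exact h

In a real deployment the world model, specification, and policy are far richer, but the division of labour is the same: the specification is explicit and auditable, and the safety claim is discharged by a machine-checked proof rather than assumed.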

Our Team

Younesse Kaddar

Director of Recursive Safeguarding Ltd

ARIA SGAI TA1 & Opportunity Seed grants at the University of Oxford

Finishing a PhD; formerly at Mila (worked with Yoshua Bengio)

Worked on LLM hallucinations at Cohere

Technical Co-founder and CTO of RightPick

Rob Cornish

Nightingale Fellow at the University of Oxford

Co-founder and Director of Quro Medical

Moving to a faculty role at NTU Singapore

Machine Learning Researcher (computational statistics, deep generative modeling, causal inference)

Sam Staton

Professor at the University of Oxford

ARIA SGAI TA1 grants & Opportunity Seed (SynthStats: GFlowNet-finetuning)

ERC Consolidator Grant BLAST: Better Languages for Statistics

Pedro Amorim

Faculty member at the University of Bath

Expert in programming language theory

Expert in formal verification and categorical semantics

Nikolaj Jensen

PhD student in AI and theory at the University of Oxford

ARIA SGAI TA1 via Adjoint Labs Ltd

Jacek Karwowski

PhD student in AI and theory at the University of Oxford

SERI MATS scholar, funded by Open Philanthropy

Published work on probabilistic programming and RL safety

Mohammed Mahfoud

Independent researcher and entrepreneur

Led Exa-scale Safeguards Research at Anthropic

Worked on Scientist AI with Yoshua Bengio at Mila

Co-founder of the Alignment team at TII (Falcon 1 LLM)

Previously co-founded an EdTech startup (exited)

Paolo Perrone

Postdoctoral researcher at the University of Oxford

Projects with ARIA SGAI TA1 through Oxford and Adjoint Labs Ltd

Leading contributor to the theory of Markov categories

Ali Zein

Tech and start-up background (Munich, Oxford, Cambridge)

Member of the advisory board of the Smith School of Enterprise and the Environment

Contact Us

We are currently operating in stealth mode while developing our core technology. For investment inquiries or to learn more, please reach out.

Send us a message

Email Us Directly

contact@recursive-safeguarding.org

We welcome inquiries from investors, researchers, and organizations interested in AI safety. Our team will respond to serious inquiries within 48 hours.