The world’s first academic programme dedicated to AI Evaluation

Why this Programme?

Artificial intelligence is advancing at unprecedented speed. Yet, as frontier AI systems grow more powerful, our ability to evaluate their capabilities and risks has not kept pace.

This programme exists to change that.

  • Talent Gap: AI Safety Institutes and leading labs worldwide face a shortage of experts in evaluation.

  • Unique Approach: We combine technical depth with policy and governance, bridging a gap no other programme fills.

  • Impact Pathway: Our graduates will be equipped to join top research labs, government agencies, and industry, where their skills are urgently needed.

This is the first step toward establishing AI Evaluation & Safety as a formal academic discipline, laying the foundation for the first MSc in the field.

Programme Snapshot

40 participants. 150 hours. One goal: to make AI accountable.

Format: February to May 2026

  • 90 hours of online work (lectures + networking + activities)

  • 20 hours of hands-on courses

  • 40 hours in-person capstone week in Valencia, Spain

Students: 40 top global participants with fully funded scholarships

Credentials: 15-ECTS Expert Diploma awarded by ValgrAI.

Learn more

Who’s Involved

This programme has the support of faculty from leading universities, including the University of Cambridge, Stanford, Princeton, Beijing Normal University, Renmin University of China, William & Mary, and the Technical University of Valencia.

Confirmed faculty also come from key institutions, research organizations, and companies such as the EU AI Office, the UK AI Safety Institute, CAIS, FAR AI, RAND, Epoch AI, Apollo Research, Redwood Research, Microsoft Research, and Google DeepMind.

Learn more

Sponsored by the Valencian Graduate School and Research Network of Artificial Intelligence (ValgrAI), funded by Open Philanthropy, and backed administratively by the Berkeley Existential Risk Initiative (BERI).

Shape the future of AI evaluation.

Apply to join the next cohort.

Get the full picture. Build the missing expertise.

This programme gives you the full panorama: the tools, frameworks, and perspectives to connect the dots between machine learning, evaluation, and governance.

You’ll leave ready to:

  • Understand how AI systems are tested and validated.

  • Speak the language of both researchers and policymakers.

  • Join a network building the standards the world will rely on.

Apply Now