All courses
AIComing 2026

AI Safety for Scientists

Most AI safety content is written by CS people for CS people. This course approaches the problem from the scientist's perspective — what do frontier models actually know, where do they fail, and what does that mean for CBRN risk?

About this course

The AI safety field needs domain experts — chemists, biologists, engineers — who can evaluate model capabilities in their areas. Right now, most of that work is being done by people without that expertise. This is a gap.

This course gives working scientists the conceptual tools to contribute to AI safety evaluation — understanding red-teaming, CBRN risk frameworks, and how to think rigorously about what frontier models do and don't know.

What you'll cover

  • 1How LLMs encode and retrieve scientific knowledge
  • 2Red-teaming frameworks: what counts as meaningful uplift?
  • 3CBRN risk: chemical, biological, radiological, nuclear threat models
  • 4Adversarial evaluation design and prompt taxonomy
  • 5Regulatory landscape: AI Act, EO 14110, and biosecurity frameworks

Who this is for

Audience

Working scientists, policy researchers, and technical professionals

Prerequisites

Working knowledge of a scientific domain. No coding required.

Your instructor

Kevin Braza

PhD Candidate, UC Davis · CBRN AI Safety Scientist

Chemical engineering PhD candidate, Quantic MBA, former boarding school faculty, and CBRN AI safety scientist at Reinforce Labs. Previously UCSB B.S. Chemistry, Harvard/Amgen, and IB + AP classroom instructor. Teaches at the intersection of chemistry, AI, and systems thinking.

Frequently asked

Do I need a computer science background?

No. The course is designed for domain scientists who want to apply their expertise to AI safety — not for people trying to get into ML.

Is this relevant for policy work?

Yes. The CBRN and regulatory modules are directly applicable to policy analysis and biosecurity research.

What's your background for teaching this?

I evaluate frontier AI models for CBRN risk professionally at Reinforce Labs and have worked with xAI and OpenAI on adversarial evaluation. This course is drawn directly from that work.

$997

One-time · Lifetime access

Includes

  • 5 video modules + live Q&A sessions
  • Evaluation framework templates
  • Lifetime access
  • Certificate of completion
FormatSelf-paced video + live Q&A
LevelIntermediate–Advanced