In this masterclass, you’ll learn the methods for training and fine-tuning Large Language Models, including Supervised Fine-Tuning and Reinforcement Learning.

You’ll also learn why RL methods are essential for getting the best out of an LLM, and which of these methods have proven most successful.

 

📅  February 6, 2026 

⏰ 12:00 pm ET / 9:00 am PT

(find your local time here)

 

Register to join us 

After signing up you will receive invitations to upcoming programs and helpful resources. You can unsubscribe anytime.

 

If you can't join live, go ahead and register and we'll send you the recording. 


What will you learn?

  • Have a clear mental model of the LLM training stack
    (Pretraining → Supervised Fine-Tuning → Reinforcement Learning / Preference Optimization).

  • See concrete failure modes of “SFT only” that RL directly addresses.

  • Understand what RL and preference optimization optimize that cross-entropy loss does not.

  • Recognize when you should consider RL for your own projects.

  • Be ready to dive deeper into applying RL methods to fine-tune LLMs.

 

Who is this masterclass for?

ML/LLM engineers, practitioners, and enthusiasts who want to understand when and why RLHF-style methods are worth the effort, and what it takes to apply them.

Led by some of the most renowned AI educators in the world

(aka, "The RAG pack")

 

Jay Alammar is a machine learning researcher and writer, co-author of Hands-On Large Language Models: Language Understanding and Generation, whose illustrated articles have helped millions visually understand transformers and modern NLP.

Maarten Grootendorst is a data scientist and creator of popular NLP libraries like BERTopic and KeyBERT, and co-author of Hands-On Large Language Models: Language Understanding and Generation, bridging cutting-edge research with practical tools.

Chris McCormick is a leading AI educator and researcher whose deep-dive tutorials on BERT, transformers, and NLP have become go-to references for practitioners worldwide, combining rigorous understanding with clear, implementation-ready code.

Luis Serrano is an ex-Google, ex-Apple AI scientist, educator, and author of Grokking Machine Learning, dedicated to making complex ideas intuitive and accessible through the Serrano Academy platform.

Josh Starmer is the founder of StatQuest and author of The StatQuest Illustrated Guide to Machine Learning and The StatQuest Illustrated Guide to Neural Networks and AI, known for turning intimidating concepts into clear, joyful explanations.

Don't miss the opportunity to learn from this group, ask your questions, and kick off 2026 with the latest in reinforcement learning.

 

Register below

 


RAGPACK.ai