PyTorch Afters: Efficient World Model and LLM Training on Blackwell Infrastructure

Date: 21st October 2025

Location: San Francisco, California

Kickoff at the PyTorch Conference

Whether you’re attending the conference or simply based in the SF Bay Area, join us as we dive into the future of AI infrastructure and its role in advancing reinforcement learning, large-scale inference, and media/world models.

At this session, the DataCrunch team, alongside leading frontier AI labs (TBA), will share hard-won insights from building and scaling systems at the edge of what’s possible. You’ll also get an exclusive first look at the B300 and GB300 NVL72 systems, and a glimpse of what’s next in AI infrastructure.

Expect deep technical discussions, practical lessons from practitioners, and the chance to connect with like-minded engineers. Stick around afterwards for food, drinks, and sharp conversations.

Speakers

  • Paul Chang - ML Engineer at DataCrunch
  • Erik Schultheis - Postdoctoral Researcher at IST Austria

Agenda

  • 5:30pm – Arrival

  • 6:00pm – Talks + Q&A

  • 7:00pm – Networking, food, & drinks

  • 9:00pm – Wrap-up

Note: Pacific Time (GMT-7)

Talks

Talk 1: Training world models using B200s by Paul Chang

Talk 2: Training quantized LLMs efficiently on consumer GPUs by Erik Schultheis

Who Should Join?

  • AI researchers

  • ML engineers

  • Technical founders

  • AI product managers

This event is for those staying ahead of the curve with AI infra, optimization techniques, and production-grade systems at scale.

Other hosts and speakers (TBA)

→ Sign up here


Friendly reminder: Our on-site PyTorch kickoff event in San Francisco starts tomorrow at 5:30pm (local time).

If you haven’t managed to get a ticket yet, this is your last chance, as spots are limited. If you can’t make it, please cancel your registration so someone else can join. See you there! :raising_hands:

→ Sign up