Post-Event Q&A - PyTorch Afters: Efficient World Model and LLM Training on Blackwell Infrastructure

What did we cover?
We explored how next-generation AI infrastructure is shaping the future of reinforcement learning, large-scale inference, and world models.

Who spoke?

  • Paul Chang (DataCrunch): Training World Models Using B200s

  • Erik Schultheis (IST Austria): Training Quantized LLMs Efficiently on Consumer GPUs

:thought_balloon: Any questions, thoughts, or takeaways from the session?
Drop them below :down_arrow: We’d love to hear your perspective and keep the discussion going.