AI Oct 22, 2025

PyTorch and Democratization of AI Accelerators

Sponsored Session: Lightning Talk: PyTorch and Democratization of AI Accelerators - Hong-Seok Kim, Rebellions

PyTorch, much like other influential open-source projects such as Linux and GCC, has always prioritized broad hardware support. This flexibility is a cornerstone of its design, enabled by key features such as its dispatch engine, device runtime, extensible graph mode, torch.distributed backend, and Out-of-Tree (OOT) backend support. This architecture allows PyTorch to seamlessly integrate not only existing x86 CPUs and GPUs but also a diverse range of emerging AI accelerators.

At Rebellions, we’ve embraced this trend. In 2024, we began actively developing and distributing the torch-rlbn package to efficiently leverage our AI accelerators within PyTorch. The package optimizes the performance of Rebellions AI accelerators by using PyTorch’s graph mode, eager mode, distributed training backend, and the OOT methodology.

In this talk, we’ll share our experience with PyTorch’s hardware portability layer. We’ll dive into the technical challenges posed by efficiency-focused AI accelerators and how we overcame them. We’ll also demonstrate how we built a developer-friendly LLM serving stack on top of other components of the PyTorch ecosystem, such as Triton and vLLM. Attendees will gain a deeper understanding of PyTorch’s crucial role in democratizing access to AI accelerators, and we hope to offer valuable technical insights for other AI accelerator teams looking to enter the PyTorch ecosystem.
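To make the dispatch-engine idea concrete: PyTorch routes each operator call to a kernel registered for the tensor's device, so an out-of-tree backend only has to register its own kernels at import time rather than patch the framework. The sketch below is a deliberately simplified toy dispatcher, not PyTorch's actual implementation (which uses dispatch keys such as PrivateUse1); the device key "rbln" is a hypothetical stand-in for a Rebellions device, not the real torch-rlbn API.

```python
# Toy illustration of per-device operator dispatch, the mechanism that
# lets a vendor package plug an accelerator into a framework without
# modifying the framework's core. NOT PyTorch internals; the "rbln"
# device key is a hypothetical example.
from typing import Callable, Dict

# op name -> {device key -> kernel implementation}
_KERNELS: Dict[str, Dict[str, Callable]] = {}

def register_kernel(op: str, device: str):
    """Decorator registering a kernel for a given (op, device) pair."""
    def wrap(fn: Callable) -> Callable:
        _KERNELS.setdefault(op, {})[device] = fn
        return fn
    return wrap

def dispatch(op: str, device: str, *args):
    """Look up and invoke the kernel registered for this device."""
    try:
        kernel = _KERNELS[op][device]
    except KeyError:
        raise NotImplementedError(f"{op!r} has no kernel for device {device!r}")
    return kernel(*args)

# An "in-tree" CPU kernel...
@register_kernel("add", "cpu")
def add_cpu(a, b):
    return a + b

# ...and an out-of-tree kernel that a vendor package could register at
# import time, leaving the dispatcher code above untouched.
@register_kernel("add", "rbln")
def add_rbln(a, b):
    # A real backend would launch this work on the accelerator.
    return a + b

print(dispatch("add", "cpu", 2, 3))   # -> 5
print(dispatch("add", "rbln", 2, 3))  # -> 5
```

The same shape appears in PyTorch itself: a backend package registers implementations under a reserved dispatch key, and user code keeps calling the ordinary operator API with tensors on the new device.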