Enabling PyTorch’s Thousand Ops for Software First Silicon Design
Keynote: Enabling PyTorch’s Thousand Ops for Software First Silicon Design - Andrew Ling, Head of ML Compilers, Groq

PyTorch has become the lingua franca of machine learning; its flexibility and expressiveness have fueled a decade of innovation across academia and industry. That flexibility is precisely why PyTorch's popularity has grown and why it has enabled some of the best model development in the industry. While most users interact with only a few hundred core operations, the total number of supported ops is much larger and is sustained by layers of lowerings, vendor rewrites, and kernel-specific fusions. In an era where access to FLOPs and compute has become the primary bottleneck to scaling machine learning, we must rethink the relationship between software and hardware. This talk makes the case that PyTorch sits at a critical inflection point: with its ecosystem maturity and broad adoption, PyTorch can not only simplify itself but also drive the next generation of silicon design, one that preserves the flexibility and simplicity of machine learning and its stack.