GPT in Vanilla JavaScript
We’ll demystify LLM internals for web developers using a real model, GPT-2, implemented entirely in vanilla JavaScript and running right in the browser. No Ph.D. needed!
00:00 - Introduction & Background
00:39 - The Power of “View Source” for Learning
01:39 - Bringing “View Source” to AI: GPT-2 in the Browser
02:29 - Live Demo: Running GPT-2 Locally in JavaScript
03:53 - Why GPT-2? Model Lineage & Simplicity
04:55 - Large Language Models as Autocomplete Engines
05:43 - Tokenization Explained
07:05 - Subword Tokenization & Vocabulary Compression
07:35 - Embeddings: Mapping Words to Math
09:07 - Semantic Relationships in Embedding Space
10:43 - How Embeddings Are Learned
11:39 - Using Embeddings in the Model
12:46 - Attention Mechanism Overview
13:38 - Contextualizing Words with Attention
15:33 - The Perceptron: Making Predictions
17:20 - From Embeddings to Predicted Tokens
18:28 - Calculating Token Probabilities & Softmax
20:21 - Full Process Recap
21:12 - Learning AI: Mastery is Within Reach
22:03 - “View Source” Principles for AI Workflows
22:50 - Demo: Spreadsheets Are All You Need Notebooks
24:48 - Resources & Getting Started
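For a taste of what the “Calculating Token Probabilities & Softmax” chapter covers, here is a minimal sketch in vanilla JavaScript. It is not the talk’s actual code: the tiny vocabulary and the logit values are invented for illustration. It shows the last step of the pipeline, where the model’s raw scores for each candidate token are turned into probabilities and the next token is picked.

```js
// Minimal sketch: turning a model's raw scores (logits) into token
// probabilities with softmax, then picking the most likely next token.
// The vocabulary and logits below are invented for illustration.

function softmax(logits) {
  // Subtract the max logit for numerical stability before exponentiating.
  const max = Math.max(...logits);
  const exps = logits.map((x) => Math.exp(x - max));
  const sum = exps.reduce((a, b) => a + b, 0);
  return exps.map((e) => e / sum);
}

// Pretend the model scored four candidate next tokens:
const vocab = ["dog", "cat", "car", "the"];
const logits = [2.1, 1.3, -0.5, 0.2];

const probs = softmax(logits);
probs.forEach((p, i) => console.log(vocab[i], p.toFixed(3)));

// Greedy decoding: choose the highest-probability token.
const next = vocab[probs.indexOf(Math.max(...probs))];
console.log("next token:", next); // "dog"
```

You can paste this straight into the browser console; a real model does the same thing, just over a vocabulary of tens of thousands of tokens.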
