Adversarial Threats Across the ML Lifecycle: A Red Team Perspective
🎥 From the MLOps World | GenAI Summit 2025, Virtual Session (October 7, 2025)

Session Title: Adversarial Threats Across the ML Lifecycle: A Red Team Perspective
Speaker: Sanket Badhe, Senior Machine Learning Engineer, TikTok
Talk Track: ML Lifecycle Security

Abstract:

As machine learning systems become deeply integrated into critical domains, from finance and healthcare to content moderation and national security, their attack surface expands across the entire ML lifecycle. In this talk, Sanket Badhe presents a red team perspective on adversarial threats targeting every stage of the ML pipeline:

- Data poisoning during collection and labeling
- Model theft and evasion attacks in deployment
- Manipulation of feedback loops post-launch

Drawing on real-world case studies and research, Sanket demonstrates how adversaries exploit blind spots in MLOps workflows and provides a structured threat model with actionable defense strategies.

What you'll learn:

- Why ML systems are vulnerable at every stage of the lifecycle (data, training, deployment, feedback)
- How adversarial threats differ, from data poisoning to prompt injection
- Why ML red teaming requires tools and methods beyond traditional security testing
- How feedback loops and data pipelines become high-risk targets
- Why proactive, continuous security is essential, not an afterthought
- Key monitoring, validation, and isolation mechanisms for hardening your ML systems
- How cross-functional collaboration between ML, security, and DevOps teams mitigates systemic risk
