Your Go-To List for AI Red Teaming and ML Security Resources

AI security is moving fast, and staying ahead of the curve means sharpening both your theoretical knowledge and your hands-on skills. Whether you’re getting started with AI red teaming or looking to push your skills into adversarial ML research, this post has you covered.
I’ve organized the resources into three categories: Training Courses, CTF-Style Platforms, and General ML/AI Materials.
Let’s dive in.
🔥 AI Red Teaming Training
These are the places to start if you want a structured approach. Some are video-based, others are full-blown hands-on labs, all focused on understanding and exploiting AI and LLM systems.
| Title | Description | Source | Cost |
| The Ultimate AILLM Penetration Testing Training Course | A mix of theory and labs focused on finding and exploiting vulnerabilities in AI and LLM applications. | Udemy | Paid (Available with Udemy-Enterprise account) |
| A Deep Dive into LLM Red Teaming | Teaches how to attack and defend LLMs using real-world offensive techniques. | Udemy | Paid (Available with Udemy-Enterprise account) |
| AI Red Teaming 101 | Beginner-friendly YouTube series introducing AI red teaming fundamentals and risk assessment. | YouTube | Free |
| AI Red Teaming (Microsoft) | Microsoft’s official guide to AI red teaming and security testing methodologies. | Microsoft | Free |
| HTB Academy – AI Red Teamer Path | Guided learning path for practical AI red teaming techniques on HTB Academy. | Hack the Box | Paid |
If you’re just getting started, AI Red Teaming 101 and Microsoft’s guide are great free options. If you already live and breathe pentesting, the HTB and Udemy tracks are well-structured affordable resources.
🕹️ AI Red Teaming CTFs & Hands-On Platforms
This is where the real fun begins, breaking chatbots, evading filters, and leaking sensitive data in a controlled environment.
| Title | Description | Cost |
| Gandalf (Lakera) | Interactive LLM security game focused on prompt injection challenges. | Free |
| Prompt Airlines | Fun, gamified prompt injection challenges to test your creativity against LLMs. | Free |
| PortSwigger LLM Attacks | Part of PortSwigger’s Web Security Academy — focuses on attacking LLMs in web apps. | Free |
| AI and ML Exploitation Track (HTB) | Hack The Box track on exploiting AI and ML systems in real-world scenarios. | Paid/Free |
| Crucible Dreadnode | A CTF platform dedicated to machine learning challenges, with tutorials for attacks. | Free |
| PromptMe | Community-maintained collection of prompt injections and jailbreaks for testing. | Free |
| Microsoft AI Red Teaming Playground Labs | Hands-on labs to practice the techniques from Microsoft’s AI Red Teaming guide. | Free |
| GPT Prompt Attack (GPA) | Lightweight web game for learning prompt injection attacks. | Free |
| Breach the Perimeter via Prompt Injection | In this fun lab, students will learn how prompt‐injection attacks can extract secrets from AI assistants and the dangers of leaking SAS tokens and service-principal credentials. | Paid |
If you only pick one, start with Gandalf, it’s addictive and great for sharpening your creativity with prompts. For something closer to real-world attack chains, HTB’s AI track and Crucible Dreadnode are solid bets.
📚 ML/AI General Resources
If you’re serious about AI security, you need to understand how models work under the hood. These resources are not security-specific, but they build the foundation you’ll need for adversarial ML research.
| Title | Description | Source | Cost |
| Andrew Ng's Machine Learning Specialization | A great starting point for machine learning fundamentals. | Coursera | Free (without Certficate of completion) |
| Ollama Course – Build AI Apps Locally | Learn how to set up and use Ollama to build powerful AI applications locally. This hands-on course covers pulling and customizing models, REST APIs, Python integrations. | Freecodecamp.org - Youtube Channel | Free |
| The AI Chatbot Handbook – How to Build an AI Chatbot with Redis, Python, and GPT | This tutorial will take you through the process of building an AI chatbot | Freecodecamp.org Blog | Free |
✅ Wrapping Up
AI security is still a new frontier, which makes it fun and unpredictable. Whether you’re starting with free guides or jumping straight into CTFs, there’s no shortage of ways to level up.
If you’re in it for the long run, don’t just learn how to break models, learn how they work. The better you understand them, the better you’ll be at finding those edge cases where they fail.
Happy hacking!
~ Ragab0t


