Skip to main content

Command Palette

Search for a command to run...

Your Go-To List for AI Red Teaming and ML Security Resources

Updated
4 min read
Your Go-To List for AI Red Teaming and ML Security Resources

AI security is moving fast, and staying ahead of the curve means sharpening both your theoretical knowledge and your hands-on skills. Whether you’re getting started with AI red teaming or looking to push your skills into adversarial ML research, this post has you covered.

I’ve organized the resources into three categories: Training Courses, CTF-Style Platforms, and General ML/AI Materials.

Let’s dive in.


🔥 AI Red Teaming Training

These are the places to start if you want a structured approach. Some are video-based, others are full-blown hands-on labs, all focused on understanding and exploiting AI and LLM systems.

TitleDescriptionSourceCost
The Ultimate AILLM Penetration Testing Training CourseA mix of theory and labs focused on finding and exploiting vulnerabilities in AI and LLM applications.UdemyPaid (Available with Udemy-Enterprise account)
A Deep Dive into LLM Red TeamingTeaches how to attack and defend LLMs using real-world offensive techniques.UdemyPaid (Available with Udemy-Enterprise account)
AI Red Teaming 101Beginner-friendly YouTube series introducing AI red teaming fundamentals and risk assessment.YouTubeFree
AI Red Teaming (Microsoft)Microsoft’s official guide to AI red teaming and security testing methodologies.MicrosoftFree
HTB Academy – AI Red Teamer PathGuided learning path for practical AI red teaming techniques on HTB Academy.Hack the BoxPaid

If you’re just getting started, AI Red Teaming 101 and Microsoft’s guide are great free options. If you already live and breathe pentesting, the HTB and Udemy tracks are well-structured affordable resources.


🕹️ AI Red Teaming CTFs & Hands-On Platforms

This is where the real fun begins, breaking chatbots, evading filters, and leaking sensitive data in a controlled environment.

TitleDescriptionCost
Gandalf (Lakera)Interactive LLM security game focused on prompt injection challenges.Free
Prompt AirlinesFun, gamified prompt injection challenges to test your creativity against LLMs.Free
PortSwigger LLM AttacksPart of PortSwigger’s Web Security Academy — focuses on attacking LLMs in web apps.Free
AI and ML Exploitation Track (HTB)Hack The Box track on exploiting AI and ML systems in real-world scenarios.Paid/Free
Crucible DreadnodeA CTF platform dedicated to machine learning challenges, with tutorials for attacks.Free
PromptMeCommunity-maintained collection of prompt injections and jailbreaks for testing.Free
Microsoft AI Red Teaming Playground LabsHands-on labs to practice the techniques from Microsoft’s AI Red Teaming guide.Free
GPT Prompt Attack (GPA)Lightweight web game for learning prompt injection attacks.Free
Breach the Perimeter via Prompt InjectionIn this fun lab, students will learn how prompt‐injection attacks can extract secrets from AI assistants and the dangers of leaking SAS tokens and service-principal credentials.Paid

If you only pick one, start with Gandalf, it’s addictive and great for sharpening your creativity with prompts. For something closer to real-world attack chains, HTB’s AI track and Crucible Dreadnode are solid bets.


📚 ML/AI General Resources

If you’re serious about AI security, you need to understand how models work under the hood. These resources are not security-specific, but they build the foundation you’ll need for adversarial ML research.

TitleDescriptionSourceCost
Andrew Ng's Machine Learning SpecializationA great starting point for machine learning fundamentals.CourseraFree (without Certficate of completion)
Ollama Course – Build AI Apps LocallyLearn how to set up and use Ollama to build powerful AI applications locally. This hands-on course covers pulling and customizing models, REST APIs, Python integrations.Freecodecamp.org - Youtube ChannelFree
The AI Chatbot Handbook – How to Build an AI Chatbot with Redis, Python, and GPTThis tutorial will take you through the process of building an AI chatbotFreecodecamp.org BlogFree

✅ Wrapping Up

AI security is still a new frontier, which makes it fun and unpredictable. Whether you’re starting with free guides or jumping straight into CTFs, there’s no shortage of ways to level up.

If you’re in it for the long run, don’t just learn how to break models, learn how they work. The better you understand them, the better you’ll be at finding those edge cases where they fail.

Happy hacking!

~ Ragab0t