Back to Pre-Conference Training Page
AI SecureOps: Attacking & Defending GenAI Applications and Services
Two-Day Interactive (Online) Training - OWASP New Zealand Day 2025
Abstract
Master GenAI security in this immersive CTF-styled workshop. Learn to attack and defend AI systems, exploit vulnerabilities, mitigate LLM threats, and build robust defenses. Gain hands-on skills using real-world scenarios, OWASP LLM Top 10, and MITRE ATLAS. Elevate your AI security expertise today!
Target Audience
- Security professionals seeking to update their skills for the AI era.
- Red & Blue team members.
- AI Developers & Engineers interested in the security aspects of AI and LLM models.
- AI Safety professionals and analysts working on regulations, controls and policies related to AI.
- Product Managers & Founders looking to strengthen their PoVs and models with security best practices.
Course Details
Dates: Tuesday and Wednesday, 2-3 September 2025
Time: 8:45 a.m. to 5:30 p.m. (NZST) each day
Instructor: Abhinav Singh
Course Fee: NZ $900.00 (plus GST and ticketing fees)
Registration Site: https://events.humanitix.com/owaspnz2025-training
Prerequisites - What Students Should Bring and Do Before Class
- A laptop with Internet access
- API key for OpenAI.
- Google Colab account.
- Complete the pre-training setup before the first day. (To be emailed to you before the class starts.)
Student Requirements
- Familiarity with AI and machine learning concepts is beneficial but not required.
- Ability to run python codes and notebooks.
- Familiarity with common GenAI applications like OpenAI.
What Will Students Be Provided With
- One year access to a live interactive playground with various exercises to practice different attack and defense scenarios for GenAI and LLM applications.
- “AI SecureOps” Metal coin for CTF players.
- Complete course guide containing 200+ pages in PDF format. It will contain step-by-step guidelines for all the exercises, labs, and a detailed explanation of concepts discussed during the training.
- PDF versions of slides that will be used during the training.
- Access to Slack channel for continued engagement, support and development.
- Access to Github account for accessing custom-built source codes and tools.
- Access to HuggingFace models, datasets and transformers.
Course Description
Can prompt injections lead to complete infrastructure takeovers? Could AI applications be exploited to compromise backend services? Can data poisoning in AI copilots impact a company’s stock? Can jailbreaks create false crisis alerts in security systems? This immersive, CTF-styled training in GenAI and LLM security dives into these pressing questions. Engage in realistic attack and defense scenarios focused on real-world threats, from prompt injection and remote code execution to backend compromise. Tackle hands-on challenges with actual AI applications to understand vulnerabilities and develop robust defenses. You’ll learn how to create a comprehensive security pipeline, mastering AI red and blue team strategies, building resilient defenses for LLMs, and handling incident response for AI-based threats. Additionally, implement a Responsible AI (RAI) program to enforce ethical AI standards across enterprise services, fortifying your organization’s AI security foundation.
By 2026, Gartner, Inc. predicts that over 80% of enterprises will engage with GenAI models, up from less than 5% in 2023. This rapid adoption presents a new challenge for security professionals. To bring you up to speed from intermediate to advanced level, this training provides essential GenAI and LLM security skills through an immersive CTF-styled framework. Delve into sophisticated techniques for mitigating LLM threats, engineering robust defense mechanisms, and operationalizing LLM agents, preparing them to address the complex security challenges posed by the rapid expansion of GenAI technologies. You will be provided with access to a live playground with custom built AI applications replicating real-world attack scenarios covering use-cases defined under the OWASP LLM top 10 framework and mapped with stages defined in MITRE ATLAS. This dense training will navigate you through areas like the red and blue team strategies, create robust LLM defenses, incident response in LLM attacks, implement a Responsible AI(RAI) program and enforce ethical AI standards across enterprise services, with the focus on improving the entire GenAI supply chain. This training will also cover the completely new segment of Responsible AI(RAI), ethics and trustworthiness in GenAI services. Unlike traditional cybersecurity verticals, these unique challenges such as bias detection, managing risky behaviors, and implementing mechanisms for tracking information are going to be the key challenges for enterprise security teams.
By the end of this training, you will be able to:
- Exploit vulnerabilities in AI applications to achieve code and command execution, uncovering scenarios such as cross-site scripting, SQL injection, insecure agent designs, and remote code execution for infrastructure takeover.
- Conduct GenAI red-teaming using adversary simulation, OWASP LLM Top 10, and MITRE ATLAS frameworks, while applying AI security and ethical principles in real-world scenarios.
- Execute and defend against adversarial attacks, including prompt injection, data poisoning, model inversion, and agentic attacks.
- Perform advanced AI red and blue teaming through multi-agent auto-prompting attacks, implementing a 3-way autonomous system consisting of attack, defend and judge models.
- Develop LLM security scanners to detect and protect against injections, jailbreaks, manipulations, and risky behaviors, as well as defending LLMs with LLMs.
- Build and deploy enterprise-grade LLM defenses, including custom guardrails for input/output protection, security benchmarking, and penetration testing of LLM agents.
- Utilize open-source tools like HuggingFace, OpenAI, NeMo, Streamlit, and Garak to build custom GenAI tools and enhance your GenAI development skills.
- Establish a comprehensive LLM SecOps process to secure the supply chain from adversarial attacks and create a robust threat model for enterprise applications.
- Implement an incident response and risk management plan for enterprises developing or using GenAI services.
Why should people attend your course?
- Practical, hands-on labs, simulating real attacks on AI Applications and Implementing Defense controls on applications to measure the effectiveness of controls.
- Focus on technical discussion, attendee engagement through open ended questions, brainstorming and security policy/controls related discussions.
- Continued learning experience since the shared labs are always online with a shared channel of discussion over a dedicated slack channel.
The CTF labs utilizes GenAI in various ways and attendees will get a feel of how to build their own test cases, automations and LLM validators. For example, the CTFs utilize auto evaluation, where the results of jailbreaks and prompt injections are automatically evaluated using a judge LLM. The CTF uses slack to respond to an LLM that controls the workload on the CTF platform.
Your Instructor
Abhinav Singh is an esteemed cybersecurity leader & researcher with over a decade of experience across technology leaders, financial institutions, and as an independent trainer and consultant. Author of “Metasploit Penetration Testing Cookbook” and “Instant Wireshark Starter,” his contributions span patents, open-source tools, and numerous publications. Recognized in security portals and digital platforms, Abhinav is a sought-after speaker & trainer at international conferences like Black Hat, RSA, DEFCON, BruCon and many more, where he shares his deep industry insights and innovative approaches in cybersecurity. He also leads multiple AI security groups at CSA, responsible for coming up with cutting-edge whitepapers and industry reports around safety and security of AI.