
What we’re about
Are you curious (and maybe a little worried) about what happens when AI systems move from “chatting” to doing? Whether you’re a security engineer, CISO, developer, researcher, auditor, or an AI builder who wants to ship safely, AI Security Community is your place.
We’re a practitioner-led meetup focused on the rapidly evolving world of LLM, agent, and GenAI security—from prompt injection and data leakage to tool misuse, supply-chain risks, model governance, and runtime guardrails. Our sessions are designed to be high-signal, hands-on, and grounded in real incidents and real defenses.
Join us for regular talks, workshops, and community demos where we share what’s working (and what isn’t) when securing AI in production. Come to learn, collaborate, and meet others building the next generation of safe AI systems.
Meetup Features:
- Expert Talks: Hear from security leaders and AI practitioners on emerging threats, defenses, and best practices for GenAI and agentic systems.
- Hands-On Workshops: Learn practical techniques—threat modeling AI apps, red teaming agents, securing RAG pipelines, and implementing guardrails.
- Case Studies & War Stories: Break down real-world failures and incident patterns (and how teams actually fixed them).
- Tooling & Research Demos: See new security tools, open-source projects, and research—tested by practitioners.
- Networking: Connect with builders and defenders working on AI security, governance, and compliance across industries.
Whether you’re securing enterprise AI deployments or building AI-native products, you’ll find a welcoming community that’s focused on learning fast, sharing openly, and raising the bar for AI security together.
Past events: 84