Securing the AI Frontier: Unmasking LLM and RAG Vulnerabilities
Join us to understand the security risks in LLMs and RAG architectures, learn about current attack methods, and discover how red teaming helps build more robust AI systems.
🎧 Listen to this Episode
Show Notes
Large language models present new security challenges, especially when they leverage external data sources through Retrieval Augmented Generation (RAG) architectures . This podcast explores the unique attack techniques that exploit these systems, including indirect prompt injection and RAG poisoning. We delve into how offensive testing methods like AI red teaming are crucial for identifying and addressing these critical vulnerabilities in the evolving AI landscape.
www.securitycareers.help/navigating-the-ai-frontier-a-cisos-perspective-on-securing-generative-ai/
www.hackernoob.tips/the-new-frontier-how-were-bending-generative-ai-to-our-will
Share this episode
Enjoying CISO Insights?
Subscribe to get new episodes delivered directly to your podcast app.
Related Episodes
The 2026 Compliance Countdown: Navigating the New Era of Global Privacy and Cyber Regulations
This episode breaks down the unprecedented wave of global privacy and cybersecurity mandates hitting in 2026, guiding organizations through the critical shift from drafting written policies to providi...
▶️ Listen Now
Beyond the Perimeter: Inside the Cloud Threat Landscape
This episode provides a comprehensive overview of evolving cloud threats, highlighting how adversaries weaponize legitimate cloud tools, identities, and artificial intelligence services to compromise ...
▶️ Listen Now
Gloves Off: Operation Epic Fury and the Trump Administration 2026 Cyber Strategy
This podcast explores how the United States is redefining modern warfare and digital defense through kinetic military campaigns in the Middle East and a bold new cyber doctrine that empowers the priva...
▶️ Listen Now