EP217 Red Teaming AI: Uncovering Surprises, Facing New Threats, and the Same Old Mistakes?
Guest:
• Alex Polyakov (https://www.linkedin.com/in/alex-polyakov-cyber/), CEO at Adversa AI (https://adversa.ai/)
Topics:
• Adversa AI is known for its focus on AI red teaming and adversarial attacks. Can you share a particularly memorable red teaming exercise that exposed a surprising vulnerability in an AI system? What was the key takeaway for your team and the client?
• Beyond traditional adversarial attacks, what emerging threats in the AI security landscape are you most concerned about right now?
• What trips up most clients: classic security mistakes in AI systems, or AI-specific mistakes?
• Are there truly new mistakes in AI systems, or are they old mistakes in new clothing?
• I know it is not your job to fix it, but much of this is unfixable, right?
• Is it a good idea to use AI to secure AI?
Resources:
• EP84 How to Secure Artificial Intelligence (AI): Threats, Approaches, Lessons So Far (https://cloud.withgoogle.com/cloudsecurity/podcast/ep84-how-to-secure-artificial-intelligence-ai-threats-approaches-lessons-so-far/)
• AI Red Teaming Reasoning LLM US vs China: Jailbreak Deepseek, Qwen, O1, O3, Claude, Kimi (https://adversa.ai/blog/ai-red-teaming-reasoning-llm-jailbreak-china-deepseek-qwen-kimi/)
• Adversa AI blog (https://adversa.ai/topic/trusted-ai-blog/)
• Oops! 5 serious gen AI security mistakes to avoid (https://cloud.google.com/transform/oops-5-serious-gen-AI-security-mistakes-to-avoid/)
• Generative AI Fast Followership: Avoid These First Adopter Security Missteps (https://www.googlecloudcommunity.com/gc/Community-Blog/Generative-AI-Fast-Followership-Avoid-These-First-Adopter/ba-p/849659)
Video: EP217 Red Teaming AI: Uncovering Surprises, Facing New Threats, and the Same Old Mistakes? from the Anton Chuvakin channel
Cloud Security Podcast by Google
Video information
Published: March 31, 2025, 21:23:43
Duration: 00:23:11