Загрузка...

Prompt Injection | Episode 2

🔗 Register for FREE Infosec Webcasts, Anti-casts & Summits –
https://poweredbybhis.com

In this episode of the AI SecOps podcast, the hosts dive into prompt injection attacks—malicious inputs designed to manipulate large language models (LLMs) and bypass intended behavior.

They explain how these attacks work, the risks they pose, and why defending against them is critical for AI security. The discussion highlights Prompt Guard, a specialized tool trained to detect and block such attacks by analyzing user input before it reaches the model.

Emphasizing the importance of strong guardrails, the episode underscores the need for proactive measures and collaboration to ensure AI systems remain secure and trustworthy.

----------------------------------------------------------------------------------------------
About Joff Thyer - blackhillsinfosec.com/team/joff-thyer/
About Derek Banks - https://www.blackhillsinfosec.com/team/derek-banks/
About Brian Fehrman - https://www.blackhillsinfosec.com/team/brian-fehrman/
About Bronwen Aker - http://blackhillsinfosec.com/team/bronwen-aker/
About Ben Bowman - https://www.blackhillsinfosec.com/team/ben-bowman/

Видео Prompt Injection | Episode 2 канала AI Security Ops
Яндекс.Метрика

На информационно-развлекательном портале SALDA.WS применяются cookie-файлы. Нажимая кнопку Принять, вы подтверждаете свое согласие на их использование.

Об использовании CookiesПринять