Загрузка страницы

Scott and Mark learn responsible AI | BRK329

Join Mark Russinovich & Scott Hanselman to explore the landscape of generative AI security, focusing on large language models. They cover the three primary risks in LLMs: Hallucination, indirect prompt injection and jailbreaks (or direct prompt injection). We'll explore each of these three key risks in depth, examining their origins, potential impacts, and strategies for mitigation and how to work towards harnessing the immense potential of LLMs while responsibly managing their inherent risks.

To learn more, please check out these resources:
* https://aka.ms/TCL/MicroftSecurity
* https://learn.microsoft.com/en-us/security/
* https://aka.ms/IgniteAITools
* https://aka.ms/Ignite24Plan-SecureDatawithAI
𝗦𝗽𝗲𝗮𝗸𝗲𝗿𝘀:
* Mark Russinovich
* Scott Hanselman
𝗦𝗲𝘀𝘀𝗶𝗼𝗻 𝗜𝗻𝗳𝗼𝗿𝗺𝗮𝘁𝗶𝗼𝗻:
This is one of many sessions from the Microsoft Ignite 2024 event. View even more sessions on-demand and learn about Microsoft Ignite at https://ignite.microsoft.com

BRK329 | English (US) | Security
#MSIgnite

Видео Scott and Mark learn responsible AI | BRK329 канала Microsoft Events
Advanced (300), BRK329, Breakout, English (US), Mark Russinovich, Scott Hanselman, Scott and Mark learn responsible AI | BRK329, Security, Security-Curated, Software Company, Technical, Version v0, ignite, ignite 2024, m6x0, microsoft, microsoft ignite, microsoft ignite 2024, ms ignite, ms ignite 2024, msft ignite, msft ignite 2024
Показать
Страницу в закладки Мои закладки
Все заметки Новая заметка Страницу в заметки