
KAIST XAI Tutorial 2024 | Concept-based Explanations for LLMs | Reduan Achtibat (Fraunhofer HHI)

Large Language Models (LLMs) present a significant challenge for Explainable AI (XAI) due to their immense size and complexity. Their sheer scale not only makes them expensive to run and explain but also complicates our ability to fully understand how their components interact. In this talk, we introduce a highly efficient attribution method based on Layer-wise Relevance Propagation that allows us to trace the most important components in these models. Additionally, we can identify which concepts dominate in the residual stream and use this knowledge to influence the generation process. While this is a promising first step, there is still much work ahead to make LLMs more transparent and controllable.
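The attribution method mentioned above builds on Layer-wise Relevance Propagation (LRP). As a rough illustration only, and not the speaker's actual implementation, the sketch below shows the standard LRP-epsilon rule applied to a single linear layer in PyTorch; the function name, the epsilon value, and the tensor shapes are assumptions made for this example.

```python
import torch

def lrp_epsilon_linear(layer: torch.nn.Linear,
                       a: torch.Tensor,
                       relevance_out: torch.Tensor,
                       eps: float = 1e-6) -> torch.Tensor:
    """Redistribute output relevance onto the inputs of one linear layer
    using the LRP-epsilon rule (illustrative sketch).

    a             : input activations, shape (batch, in_features)
    relevance_out : relevance assigned to the outputs, shape (batch, out_features)
    returns       : relevance assigned to the inputs, same shape as `a`
    """
    z = layer(a)  # pre-activations z_j = sum_i w_ji * a_i + b_j
    # Stabilize the denominator so small outputs do not blow up the ratio.
    stab = eps * torch.where(z >= 0, torch.ones_like(z), -torch.ones_like(z))
    s = relevance_out / (z + stab)        # s_j = R_j / (z_j + eps)
    c = s @ layer.weight                   # c_i = sum_j w_ji * s_j
    return a * c                           # R_i = a_i * c_i
```

In a full LLM pipeline one such propagation step would be applied layer by layer from the output token back to the inputs, so that relevance can be attributed to individual components such as attention heads or residual-stream directions.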

Video from the XAI Open channel.