Do Androids Know They’re Only Dreaming of Electric Sheep?
Date Presented: 3/18/2024
Speaker: Sky Wang, Columbia University
Abstract: We design probes trained on the internal representations of a transformer language model that are predictive of its hallucinatory behavior on in-context generation tasks. To facilitate this detection, we create a span-annotated dataset of organic and synthetic hallucinations over several tasks. We find that probes trained on the force-decoded states of synthetic hallucinations are generally ecologically invalid in organic hallucination detection. Furthermore, hidden state information about hallucination appears to be task and distribution-dependent. Intrinsic and extrinsic hallucination saliency varies across layers, hidden state types, and tasks; notably, extrinsic hallucinations tend to be more salient in a transformer’s internal representations. Outperforming multiple contemporary baselines, we show that probing is a feasible and efficient alternative to language model hallucination evaluation when model states are available.
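The core idea in the abstract, training a lightweight classifier on a language model's hidden states to flag hallucinated generations, can be sketched in a few lines. The sketch below is an illustrative assumption, not the talk's actual method: the model (gpt2), probe layer (6), mean pooling, and toy labeled sentences all stand in for the span-annotated dataset and the probe designs described above.

```python
# A minimal sketch of hidden-state probing for hallucination detection.
# The model choice, layer index, pooling, and toy examples here are
# illustrative assumptions, not the speaker's actual setup or data.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from sklearn.linear_model import LogisticRegression

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2", output_hidden_states=True)
model.eval()

def pooled_state(text: str, layer: int = 6) -> torch.Tensor:
    """Force-decode `text` and mean-pool its hidden states at one layer."""
    ids = tok(text, return_tensors="pt")
    with torch.no_grad():
        out = model(**ids)
    return out.hidden_states[layer][0].mean(dim=0)  # shape: [hidden_dim]

# Toy stand-ins for a span-annotated dataset: label 1 = hallucinated.
examples = [
    ("The Eiffel Tower is located in Paris, France.", 0),
    ("The Eiffel Tower is located in Berlin, Germany.", 1),
    ("Water boils at 100 degrees Celsius at sea level.", 0),
    ("Water boils at 50 degrees Celsius at sea level.", 1),
]
X = torch.stack([pooled_state(t) for t, _ in examples]).numpy()
y = [label for _, label in examples]

probe = LogisticRegression(max_iter=1000).fit(X, y)  # the linear probe
print(probe.predict_proba(X)[:, 1])  # P(hallucination) per example
```

In the setting the abstract describes, probes are trained per layer and per hidden-state type over span-level annotations; the sequence-level logistic probe above is only the simplest instance of that family.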
Speaker's Bio: Sky is a Ph.D. candidate in Computer Science at Columbia University, advised by Zhou Yu and Smaranda Muresan. His research centers on Natural Language Processing (NLP), with broad interests at the intersection of NLP and Computational Social Science (CSS). Within this space, his work spans three major areas: (1) revealing and designing for social difference and inequality, (2) cross-cultural NLP, and (3) mechanistic interpretability. His research is supported by an NSF Graduate Research Fellowship and has received two Outstanding Paper Awards at EMNLP. He has previously interned at Microsoft Semantic Machines, Google Research, and Amazon AWS AI.
Channel: USC Information Sciences Institute
Video uploaded: March 19, 2024, 3:16:43
Duration: 00:58:34