Surfacing Semantic Orthogonality Across Model Safety Benchmarks — Jonathan Bennion
Various AI safety datasets have been developed to measure LLMs against evolving interpretations of harm. Our evaluation of five recently published open-source safety benchmarks reveals distinct semantic clusters using UMAP dimensionality reduction and k-means clustering (silhouette score: 0.470). We identify six primary harm categories with varying representation across benchmarks: GretelAI, for example, focuses heavily on privacy concerns, while WildGuardMix emphasizes self-harm scenarios. Significant differences in prompt length distributions suggest confounds in data collection and in interpretations of harm, while also offering useful context. Our analysis quantifies orthogonality among AI safety benchmarks, making coverage gaps transparent despite topical similarities. This quantitative framework for analyzing semantic orthogonality across safety benchmarks enables more targeted development of datasets that comprehensively address the evolving landscape of harms in AI use, however harm comes to be defined in the future.
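A minimal sketch of the clustering pipeline described above, assuming sentence-transformer embeddings plus the umap-learn and scikit-learn libraries; the embedding model, benchmark loading, and hyperparameters below are illustrative assumptions, not necessarily the choices made in the talk:

```python
# Sketch: embed safety-benchmark prompts, project with UMAP, cluster with k-means,
# and score cluster separation with the silhouette coefficient.
# Assumption: prompts have already been collected per benchmark as lists of strings;
# the embedding model ("all-MiniLM-L6-v2") and hyperparameters are illustrative.
from collections import Counter

from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score
import umap

# Hypothetical input: benchmark name -> list of prompt strings (placeholders here).
benchmark_prompts = {
    "GretelAI": ["..."],
    "WildGuardMix": ["..."],
    # ... remaining benchmarks
}

prompts = [p for ps in benchmark_prompts.values() for p in ps]
sources = [name for name, ps in benchmark_prompts.items() for _ in ps]

# 1. Embed prompts into a dense semantic space.
embedder = SentenceTransformer("all-MiniLM-L6-v2")
embeddings = embedder.encode(prompts, show_progress_bar=True)

# 2. Reduce dimensionality with UMAP before clustering.
reducer = umap.UMAP(n_components=2, random_state=42)
coords = reducer.fit_transform(embeddings)

# 3. Cluster the projected points; six clusters mirrors the six harm categories.
kmeans = KMeans(n_clusters=6, n_init=10, random_state=42)
labels = kmeans.fit_predict(coords)

# 4. The silhouette score quantifies how well-separated the clusters are
#    (the talk reports 0.470 on its data).
print("silhouette:", silhouette_score(coords, labels))

# 5. Cross-tabulate clusters against source benchmarks to surface coverage gaps.
for cluster_id in range(6):
    members = [s for s, label in zip(sources, labels) if label == cluster_id]
    print(cluster_id, Counter(members))
```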