World’s first large multimodal model (LMM) with audio reasoning on a Windows PC
Generative AI and large language models (LLMs) have taken the world by storm, but until recently LLMs have been mostly limited to text inputs. In this MWC 2024 technology demo, we showcase the world's first large multimodal model (LMM) with audio reasoning on a Windows PC. LLMs can now hear: they can take in audio and reason about what they hear.
On a Windows PC, Qualcomm AI Research is showcasing an on-device demonstration of a 7+ billion parameter LMM that can accept text and audio inputs (e.g., music, sound of traffic, etc.) and then generate multi-turn conversations about the audio at a responsive token rate. With our full-stack AI optimization, we achieve high performance at low power. By processing the LMM on device, we achieve enhanced privacy, reliability, personalization, and cost.
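The pipeline described above, an audio encoder whose embeddings condition an LLM that then holds a multi-turn dialogue about the clip, can be sketched at a high level. Everything below is illustrative: the class and function names are hypothetical, not Qualcomm's actual on-device API, which is not public.

```python
# Illustrative sketch of an audio-reasoning LMM chat loop.
# All names are hypothetical stand-ins; the real on-device stack differs.
from dataclasses import dataclass, field


@dataclass
class Turn:
    role: str   # "user" or "assistant"
    text: str


@dataclass
class AudioChat:
    """One conversation grounded in a single audio clip."""
    audio_embedding: list               # produced once by the audio encoder
    history: list = field(default_factory=list)

    def ask(self, question: str) -> str:
        self.history.append(Turn("user", question))
        # In a real system the decoder prompt would be: the audio
        # embedding tokens + all prior turns + the new question.
        ctx = len(self.audio_embedding) + sum(len(t.text) for t in self.history)
        reply = f"[reply conditioned on audio + {len(self.history)} turn(s), ctx={ctx}]"
        self.history.append(Turn("assistant", reply))
        return reply


def encode_audio(samples: list) -> list:
    """Stand-in for an audio encoder (e.g., a spectrogram transformer):
    reduces raw samples to a small fixed-size 'embedding'."""
    step = max(1, len(samples) // 8)
    return samples[::step][:8]


# Multi-turn conversation about one clip, as in the demo.
chat = AudioChat(audio_embedding=encode_audio([0.1] * 16000))
chat.ask("What sound is this?")
chat.ask("Is it getting louder over time?")
```

The key structural point the sketch captures is that the audio is encoded once, while the growing conversation history is re-fed to the decoder on every turn.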
Visit the Qualcomm AI Research website
https://www.qualcomm.com/research/artificial-intelligence/ai-research
Develop with the Qualcomm AI Stack
https://www.qualcomm.com/products/technology/artificial-intelligence/ai-stack
Sign up for our newsletter
https://assets.qualcomm.com/mobile-computing-newsletter-sign-up.html
Video "World's first large multimodal model (LMM) with audio reasoning on a Windows PC" from the Qualcomm Research channel
Tags: ai, qualcomm ai research, artificial intelligence, machine learning, qualcomm, snapdragon, aimet, ai model efficiency toolkit, quantization, full-stack optimization, generative ai, edge ai, on-device ai, gen ai, LLM, ai assistant, int4, lmm, large multimodal model, multimodal llm, multimodal ai, mwc2024, windows on arm, LMM with audio reasoning
Video information
Published: February 26, 2024, 10:03:10
Duration: 00:01:15