MIT Discovers AI’s “Secret to Smartness”: Recursive Thinking Mimics Human Revision
Have you ever asked an AI to summarize a long document, only to find it loses track of earlier content? Or received an answer that completely misses the point? This phenomenon, known as “context corruption,” highlights a key weakness in large language models: the longer the input, the poorer their performance.
On the last day of 2025, researchers at MIT released a groundbreaking paper titled “Recursive Language Models,” proposing a novel solution. Instead of scaling model size or computing power, they teach AI to “think recursively”—allowing it to revisit and refine its own work, leading to dramatic improvements.
Key Insight: Revision Drives Accuracy
The study finds that most AI errors stem not from a lack of knowledge, but from hasty first drafts. When models are allowed to recursively process complex tasks 2–4 times, accuracy rises by 10–25%. In tests involving documents exceeding 10 million tokens, the Recursive Language Model (RLM) built on GPT-5 maintained stable performance, while conventional models failed entirely.
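The 2–4 revision passes described above can be sketched as a simple draft-then-revise loop. This is an illustrative sketch, not the paper's actual implementation: `call_model` is a deterministic stub standing in for any LLM API, so the control flow can be shown without network access.

```python
# Hypothetical sketch of recursive self-revision: draft once, then feed
# each draft back to the model for refinement on subsequent passes.

def call_model(prompt: str) -> str:
    # Stub: a real system would call an LLM here.
    return f"answer to: {prompt[:40]}"

def recursive_refine(task: str, passes: int = 3) -> list[str]:
    """Run the task once, then ask the model to revise its own draft
    on each later pass (the 2-4 passes cited in the study)."""
    drafts = [call_model(task)]
    for _ in range(passes - 1):
        revision_prompt = (
            f"Task: {task}\n"
            f"Previous draft: {drafts[-1]}\n"
            "Revise the draft, fixing any errors."
        )
        drafts.append(call_model(revision_prompt))
    return drafts

drafts = recursive_refine("Summarize the quarterly report", passes=3)
print(len(drafts))  # one initial draft plus two revisions
```

In a real deployment the final draft would be returned; keeping the full draft history, as here, makes it easy to inspect how each pass changed the answer.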
How It Works: External Memory and Self-Calling
RLM reimagines how AI handles information. Traditional models attempt to load entire documents into a limited “workspace” (context window), leading to overload. RLM stores long texts externally, enabling the model to write code that retrieves only relevant segments—searching keywords, summarizing sections, comparing content, and even invoking “copies” of itself to handle subtasks in parallel. This equips AI with an intelligent search engine and expandable external memory, allowing focused reading instead of futile memorization.
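The external-memory pattern described above can be sketched as follows. The names here (`ExternalStore`, `search`, `answer`) are illustrative assumptions, not the paper's actual API: the long document lives in chunks outside the model's context window, and only the chunks matching a keyword query are pulled in.

```python
# Hedged sketch of selective retrieval over an external store.

class ExternalStore:
    """Holds a long document as fixed-size chunks outside the
    model's limited context window."""
    def __init__(self, text: str, chunk_size: int = 200):
        self.chunks = [text[i:i + chunk_size]
                       for i in range(0, len(text), chunk_size)]

    def search(self, keyword: str) -> list[str]:
        # Retrieve only the chunks relevant to the query.
        return [c for c in self.chunks if keyword.lower() in c.lower()]

def answer(query: str, store: ExternalStore, keyword: str) -> str:
    relevant = store.search(keyword)
    # A real RLM would recursively invoke sub-model "copies" on each
    # retrieved chunk; here we simply join the retrieved evidence.
    return " | ".join(relevant) if relevant else "no evidence found"

doc = "Intro. " * 50 + "The budget rose 12% in Q3. " + "Filler. " * 50
store = ExternalStore(doc)
print(answer("What happened to the budget?", store, "budget"))
```

The key design point is that the full document is never loaded into the model's workspace: only the matching chunks are, which is what lets the approach scale to inputs far beyond the context window.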
Striking Experimental Results
On the OOLONG benchmark, which requires deep comprehension of long documents, RLM boosted GPT-5’s accuracy from 44% to 56.5%. In code-related QA, accuracy jumped from 24% to 62%. Even with inputs exceeding 10 million tokens, RLM remained robust, whereas standard models broke down. Thanks to selective reading, RLM also reduced processing costs compared to direct large-model inference.
Implications: From Scaling to Smarter Thinking
The research suggests that advancing AI may rely less on increasing parameters and more on improving reasoning strategies. Just as humans refine drafts and debug code through iteration, AI can dramatically enhance output via recursive self-review. The team notes that future optimizations—such as asynchronous calls and deeper recursion—could open a new path for AI development, shifting the focus from sheer scale to efficient cognition.
https://arxiv.org/pdf/2512.24601
Video "MIT Discovers AI’s “Secret to Smartness”: Recursive Thinking Mimics Human Revision" from the channel AI Application (paper summaries or stories)
Video information
Published: January 4, 2026, 22:57:43
Duration: 00:03:26