Exploring Human and Neural Attention on Source Code: Insights and Applications
In the rapidly evolving landscape of AI-driven software engineering, understanding how neural models perceive code has become paramount. This talk delves into the fascinating commonalities and differences between human and neural attention in code-related tasks. In particular, we compare the reasoning of skilled developers with the attention mechanisms of neural models, including recent Large Language Models (LLMs), on tasks such as code summarization, bug fixing, and sense-making.
The results uncover correlations and divergences, shedding light on the potential and challenges of leveraging neural attention. We conclude by introducing the novel concept of follow-up attention that leverages the attention signal of LLMs to harness their knowledge for supporting developers in code exploration tasks.
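To make "neural attention on code" concrete, the sketch below shows one common way to extract and aggregate a transformer's attention weights over a small code snippet using the Hugging Face transformers API. The model choice (microsoft/codebert-base) and the layer/head averaging are illustrative assumptions for this description, not the method presented in the talk.

```python
# Illustrative sketch only: inspecting a transformer's attention over source
# code with Hugging Face transformers. The model (microsoft/codebert-base)
# and the aggregation scheme are assumptions for this example.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/codebert-base")
model = AutoModel.from_pretrained("microsoft/codebert-base")

code = "def add(a, b):\n    return a + b"
inputs = tokenizer(code, return_tensors="pt")

# output_attentions=True yields one tensor per layer,
# each of shape (batch, heads, seq_len, seq_len).
outputs = model(**inputs, output_attentions=True)

# Average over layers and heads, then sum over the query axis to rank
# which tokens receive the most attention overall.
att = torch.stack(outputs.attentions).mean(dim=(0, 2))[0]  # (seq, seq)
received = att.sum(dim=0)
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
for tok, score in sorted(zip(tokens, received.tolist()),
                         key=lambda p: -p[1])[:5]:
    print(f"{tok:>12s}  {score:.3f}")
```

Token-level scores like these are what can then be compared against human attention, e.g. eye-tracking fixations of developers reading the same snippet.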
Speaker – Matteo Paltenghi
Matteo Paltenghi is a doctoral researcher at the University of Stuttgart with expertise at the intersection of artificial intelligence and software engineering. In a recent collaboration with GitHub Next, he harnessed Large Language Models for code exploration. His Ph.D. work, advised by Prof. Dr. sc. Michael Pradel, spans software engineering, AI, and quantum computing, with presentations at top conferences such as ASE '21, OOPSLA '22, and ICSE '23. Prior to this, he spent 9 months at CERN working on his Master's thesis on anomaly detection in data centers.
Matteo holds a double-degree M.Sc. in Computer Science and Engineering from Politecnico di Milano and TU Berlin, preceded by a B.Sc. from Politecnico di Milano. He recently began serving as a reviewer (TOSEM, JSSoftware) and as a session chair at MSR '23, and was among the few young researchers selected to participate in the Heidelberg Laureate Forum (HLF '23).
Meetup group – https://www.meetup.com/machine-learning-methods-in-software-engineering/
Video "Exploring Human and Neural Attention on Source Code: Insights and Applications" from the JetBrains Research channel