
PEFT vs Full Fine-Tuning PEFT - explained by Dir of Research at NVIDIA 🚀 #finetuning #interview

Full fine-tuning can wreck your model when data is limited, leading to catastrophic forgetting and loss of general knowledge. Enter PEFT (Parameter-Efficient Fine-Tuning). 🔥 Featuring Pavlo Molchanov from @NVIDIA – exploring efficient architectures & the future of AI reasoning
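LoRA (tagged below) is one popular PEFT method: instead of updating every pretrained weight, it freezes them and trains a small low-rank update. A minimal numpy sketch of that idea, with illustrative dimensions and names (not the Hugging Face `peft` API):

```python
import numpy as np

# Sketch of the LoRA idea behind PEFT: freeze the pretrained d x k weight W
# and train only two small factors A (d x r) and B (r x k), with r << d, k.
d, k, r = 1024, 1024, 8           # layer dims and low-rank bottleneck (illustrative)
rng = np.random.default_rng(0)

W = rng.standard_normal((d, k))   # pretrained weight: frozen during fine-tuning
A = rng.standard_normal((d, r)) * 0.01  # trainable down-projection
B = np.zeros((r, k))              # trainable up-projection, zero-init so the model starts unchanged
alpha = 16                        # scaling hyperparameter

def lora_forward(x):
    # Frozen path plus scaled low-rank update: x @ (W + (alpha / r) * A @ B)
    return x @ W + (alpha / r) * (x @ A) @ B

full_params = W.size              # what full fine-tuning would train
lora_params = A.size + B.size     # what LoRA trains
print(f"full fine-tuning: {full_params:,} params; LoRA: {lora_params:,} params")
```

With these dimensions LoRA trains under 2% of the layer's parameters, and since the pretrained `W` is never touched, the original capabilities are preserved, which is why PEFT resists catastrophic forgetting on small datasets.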

🔗 Full interview 👉 https://youtu.be/3ugqpxhycPk
Watch the full interview with:
Diego Guerra Orozco from @meta – Discussing why open-source AI is the future
Loubna Ben Allal from @HuggingFace – Breaking down the rise of small models & on-device AI
Pavlo Molchanov from @NVIDIA – Exploring efficient architectures & the future of AI reasoning

#AI #LoRA #DORA #finetuning #machinelearning #llms #lorax

🔔 Don’t miss this deep dive into the next frontier of AI 👉 https://www.youtube.com/@DevIntheDetails

👉 Subscribe to my newsletter for AI insights and building an AI startup: https://devinthedetail.substack.com/

👥 Follow me on LinkedIn: https://www.linkedin.com/in/devvret-rishi-b0857684/
🚀 Check out what I’m building at Predibase: https://predibase.com/

#opensourceai #finetuning #llm #aiarchitecture #AIResearch #metaai #huggingface #nvidia #llama3 #aifordevelopers #aicommunity #techtalk #aiinnovation #futureofai

Video "PEFT vs Full Fine-Tuning PEFT - explained by Dir of Research at NVIDIA 🚀 #finetuning #interview" from the channel Dev In the Details