Install 'ChatGPT' (LLaMA & Alpaca) Locally. Better than ChatGPT? Tutorial in Urdu / Hindi

#jawadfarooq #chatgpt #llama
Facebook: https://www.facebook.com/AICanary/
Links:
Self instruct research paper - https://arxiv.org/pdf/2212.10560.pdf
Alpaca announcement - https://crfm.stanford.edu/2023/03/13/...
Dalai GitHub repo - https://github.com/cocktailpeanut/dalai
Node download link - https://nodejs.org/en/download

Weird character issue? Try this: https://github.com/cocktailpeanut/dal...
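
For quick reference, the install flow shown in the video follows the Dalai README: with Node.js installed, the models are fetched and served via npx. The commands below are as documented in the repo linked above; sizes and flags may have changed since, so double-check there:

    npx dalai llama install 7B    # download and set up LLaMA 7B
    npx dalai alpaca install 7B   # download and set up Alpaca 7B
    npx dalai serve               # web UI at http://localhost:3000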

Folks, the idea is this: there are models on a computer that we give instructions to, and they listen and act according to those instructions. But such models have problems, like producing false information, propagating social stereotypes, and using toxic language. Addressing these problems needs engagement from academia, yet apart from OpenAI's models there is no model in academia that is anywhere near as capable. That is why we shared our findings and built a model called Alpaca. We fine-tuned Alpaca from Meta's LLaMA 7B model, training it on 52K instruction-following demonstrations generated with text-davinci-003 in the style of self-instruct. Alpaca behaves much like OpenAI's text-davinci-003, but it is smaller and can be reproduced easily and cheaply. We released our training recipe and data, and will release the model weights in the future. We also provide an interactive demo so the research community can better understand Alpaca's behavior. We ask users to report any unforeseen behavior they discover in Alpaca, so we can better understand its safety. Since this release carries some risks, we have shared our thought process behind the open release. We intend Alpaca strictly for academic research and prohibit its commercial use.
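
To make the self-instruct idea above concrete, here is a minimal Python sketch of the bootstrapping step: seed tasks are placed in the prompt as in-context examples and text-davinci-003 is asked to propose new instructions. The seed tasks and prompt wording here are invented for illustration; the authors' actual templates and filtering live in the self-instruct paper and the Alpaca release, not here. (Uses the 2023-era openai v0.x completions API.)

    import openai  # pip install openai==0.27; reads OPENAI_API_KEY from the environment

    # Hypothetical seed tasks; the real pipeline starts from the
    # 175-task self-instruct seed set released by the authors.
    seed_tasks = [
        "Give three tips for staying healthy.",
        "Explain why the sky is blue.",
        "Translate 'good morning' into French.",
    ]

    # Seed instructions go into the prompt as in-context examples,
    # and the model is asked to continue the numbered list.
    prompt = "Come up with diverse task instructions:\n" + "\n".join(
        f"{i + 1}. {t}" for i, t in enumerate(seed_tasks)
    ) + f"\n{len(seed_tasks) + 1}."

    resp = openai.Completion.create(
        model="text-davinci-003",
        prompt=prompt,
        max_tokens=256,
        temperature=0.7,
    )
    print(resp["choices"][0]["text"])  # candidate new instructions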

Alpaca is a language model meant to help study instruction following. We trained Alpaca with the help of the text-davinci-003 model and the LLaMA 7B model, which gives it considerable capability. We trained our Alpaca model on 52K instruction-following examples. In our research we tested Alpaca's behavior, and its results were strikingly similar to text-davinci-003's. Alpaca is intended for academia only, and we ask users to report any unforeseen behavior they find so we can better understand its safety.

Training a high-quality instruction-following model on an academic budget faces two important challenges: the first is a strong pretrained language model, and the second is high-quality instruction-following data. Alpaca is a language model fine-tuned from the LLaMA 7B model with supervised learning on 52K instruction-following demonstrations generated with OpenAI's text-davinci-003. We generated the instruction-following demonstrations building on the self-instruct method: we started from the self-instruct seed set and prompted text-davinci-003 to generate further instructions, using the seed set as in-context examples. We simplified the generation pipeline and substantially reduced its cost. Our data generation process produced 52K unique instructions and their corresponding outputs, at a cost of less than $500 using the OpenAI API. We fine-tuned the LLaMA models using Hugging Face's training framework, with techniques such as Fully Sharded Data Parallel and mixed-precision training. For our first run, fine-tuning a 7B LLaMA model took 3 hours on 8 80GB A100s, at a cost of less than $100. We note that training efficiency can still be improved to reduce the cost further.
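
As a rough sketch of that fine-tuning setup, the snippet below uses Hugging Face's Trainer with bf16 mixed precision and FSDP, training on the 52K released Alpaca demonstrations. The checkpoint name (huggyllama/llama-7b), the simplified prompt template, and the hyperparameters are assumptions for illustration, not the authors' exact recipe:

    import torch
    from datasets import load_dataset
    from transformers import (AutoModelForCausalLM, AutoTokenizer,
                              DataCollatorForLanguageModeling, Trainer,
                              TrainingArguments)

    BASE = "huggyllama/llama-7b"  # assumed community-converted LLaMA checkpoint

    tokenizer = AutoTokenizer.from_pretrained(BASE)
    tokenizer.pad_token = tokenizer.eos_token
    model = AutoModelForCausalLM.from_pretrained(BASE, torch_dtype=torch.bfloat16)

    # The 52K instruction-following demonstrations released by the Alpaca authors
    data = load_dataset("tatsu-lab/alpaca", split="train")

    def tokenize(ex):
        # Simplified prompt template (the real Alpaca template differs)
        text = (f"Instruction: {ex['instruction']}\n"
                f"Input: {ex['input']}\nResponse: {ex['output']}")
        return tokenizer(text, truncation=True, max_length=512)

    data = data.map(tokenize, remove_columns=data.column_names)

    args = TrainingArguments(
        output_dir="alpaca-7b-sft",
        num_train_epochs=3,
        per_device_train_batch_size=4,
        learning_rate=2e-5,
        bf16=True,                    # mixed-precision training
        fsdp="full_shard auto_wrap",  # Fully Sharded Data Parallel
    )

    trainer = Trainer(
        model=model,
        args=args,
        train_dataset=data,
        data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
    )
    trainer.train()

To shard across multiple GPUs as described above, launch the script with torchrun (e.g. torchrun --nproc_per_node=8 train.py) rather than plain python.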

Channel: Jawad Farooq
Published: April 1, 2023, 22:10:05
Duration: 00:05:49