2 - Listen Not | NextJs | OpenAI | WhisperV3 | Modal GPU | Typescript | Python | FastAPI | TailwindCSS | Shadcn | Containerisation

Host Your Own Large Language Models

This video shows you how to host and use powerful large language models (LLMs) such as Whisper v3 for tasks like audio transcription, without relying on third-party APIs. You'll build a web application that leverages the scalability and power of Modal, a serverless platform for deploying LLMs.

Key takeaways to learn about:
1. Deploy Whisper v3 with Flash Attention v2 for fast and accurate transcription.
2. Use Python and FastAPI to deploy serverless LLM functions on Modal (see the sketch after this list).
3. Talk to your own hosted models from your client application.
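
To make the flow concrete, here is a minimal sketch of the server side, assuming the Modal Python SDK's App / asgi_app decorators and the Hugging Face transformers pipeline. The app name, GPU type, and endpoint path below are illustrative, not taken from the repository, and the flash-attn install step can vary by environment:

import modal

image = (
    modal.Image.debian_slim()
    .apt_install("ffmpeg")  # needed by transformers to decode raw audio bytes
    .pip_install("torch", "transformers", "accelerate", "fastapi", "requests")
    .pip_install("flash-attn")  # Flash Attention v2 kernels; build requirements may differ
)

app = modal.App("listennot-whisper", image=image)

@app.function(gpu="A10G", timeout=600)
@modal.asgi_app()
def fastapi_app():
    import requests
    import torch
    from fastapi import FastAPI
    from transformers import pipeline

    web_app = FastAPI()

    # Load Whisper v3 once per container, with Flash Attention 2 enabled.
    pipe = pipeline(
        "automatic-speech-recognition",
        model="openai/whisper-large-v3",
        torch_dtype=torch.float16,
        device="cuda",
        model_kwargs={"attn_implementation": "flash_attention_2"},
    )

    @web_app.post("/transcribe")
    def transcribe(payload: dict):
        # Expects {"audio_url": "..."}: download the file and transcribe it.
        audio_bytes = requests.get(payload["audio_url"]).content
        return {"text": pipe(audio_bytes)["text"]}

    return web_app

After running modal deploy, any HTTP client can call the generated URL. The project's client is a Next.js/TypeScript app, but the same request looks like this in Python (the URL below is a placeholder; modal deploy prints the real one for your workspace):

import requests

resp = requests.post(
    "https://<workspace>--listennot-whisper-fastapi-app.modal.run/transcribe",
    json={"audio_url": "https://example.com/sample.wav"},
)
print(resp.json()["text"])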

#LLM #whisper #audio-transcription #fastapi #python #AI #nextjs #tailwindcss #typescript #modal

P.S.: There was an audio glitch during the last hour of the video that I didn't notice initially. Please bear with it, or use the generated captions instead.

GitHub Code Link: https://github.com/kuluruvineeth/listennot
