4. Stochastic Gradient Descent
A recurring theme in machine learning is to formulate a learning problem as an optimization problem. Empirical risk minimization was our first example of this. Thus, to do learning, we need to do optimization. In this lecture we present stochastic gradient descent (SGD), today's standard optimization method for large-scale machine learning problems.
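To make the idea concrete, here is a minimal sketch of SGD applied to empirical risk minimization, using least-squares linear regression on synthetic data. The setup, step size, and epoch count are illustrative assumptions, not values taken from the lecture:

```python
# A minimal SGD sketch for empirical risk minimization on
# least-squares linear regression with synthetic data.
# Hyperparameters here are illustrative, not from the lecture.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: y = x . w_true + noise
n, d = 1000, 5
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.1 * rng.normal(size=n)

w = np.zeros(d)      # parameter estimate
step_size = 0.01     # fixed step size (often decayed in practice)

for epoch in range(20):
    for i in rng.permutation(n):   # one randomly chosen example per update
        # Gradient of the single-example squared loss (x_i . w - y_i)^2 / 2
        grad = (X[i] @ w - y[i]) * X[i]
        w -= step_size * grad

print("estimation error:", np.linalg.norm(w - w_true))
```

The key contrast with full (batch) gradient descent is that each update touches only one training example, so the per-step cost is independent of the dataset size; this is what makes the method attractive at large scale.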
Access the full course at https://bloom.bg/2ui2T4q
Video 4. Stochastic Gradient Descent, from the Inside Bloomberg channel.