12. Feature Extraction
When using linear hypothesis spaces, any nonlinear dependence on the input must be encoded explicitly as features. In this lecture we discuss various strategies for creating features. Much of this material is taken, with permission, from Percy Liang's CS221 course at Stanford.
Access the full course at https://bloom.bg/2ui2T4q
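The core idea of the lecture can be sketched in a few lines: a model that is linear in its parameters can still capture a nonlinear input-output relationship, provided we encode the nonlinearity as an explicit feature. The feature map below (adding an x² column) and the synthetic data are illustrative assumptions, not taken from the lecture itself.

```python
import numpy as np

# Illustrative data: y depends on x quadratically, so a model that is
# linear in x alone cannot fit it well.
rng = np.random.default_rng(0)
x = rng.uniform(-3, 3, size=100)
y = x**2 + 0.1 * rng.standard_normal(100)

def phi(x):
    """Feature map: encode the nonlinear dependence as explicit features."""
    return np.column_stack([np.ones_like(x), x, x**2])

# Ordinary least squares in the *feature* space -- the hypothesis is
# still linear in the parameters w.
w, *_ = np.linalg.lstsq(phi(x), y, rcond=None)

# The learned weight on the x^2 feature should be close to 1.
print(np.round(w, 2))
```

The same recipe extends to any hand-designed features (interactions, indicator functions, log transforms): the learning algorithm stays the same, and only the feature map changes.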
Video 12. Feature Extraction, from the Inside Bloomberg channel