
I Analyze Data - Best Practices for Implementing a Data Lake in Amazon S3 (Level 200)

Flexibility is key when building and scaling data lakes, and by choosing the right storage architecture, you gain the agility to quickly experiment with and migrate to the latest analytics solutions. In this session, we explore best practices for building a data lake on Amazon S3, which allows you to leverage an entire array of AWS, open-source, and third-party analytics tools and helps you remain at the cutting edge. We explore use cases for analytics tools, including Amazon EMR and AWS Glue, and query-in-place tools like Amazon Athena, Amazon Redshift Spectrum, Amazon S3 Select, and Amazon S3 Glacier Select.
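To illustrate what "query-in-place" means in practice, here is a minimal sketch of an S3 Select request using boto3. The bucket and object key are hypothetical placeholders, and since the actual call requires AWS credentials, the sketch only assembles the request parameters; the live call is shown in comments.

```python
# Hypothetical sketch: querying a CSV object in place with S3 Select.
# "my-data-lake" and the object key are placeholder names, not real resources.
select_params = {
    "Bucket": "my-data-lake",            # hypothetical bucket
    "Key": "sales/2020/09/orders.csv",   # hypothetical object
    # SQL runs server-side against the object; only matching rows are returned.
    "Expression": "SELECT s.order_id, s.total FROM s3object s "
                  "WHERE CAST(s.total AS FLOAT) > 100",
    "ExpressionType": "SQL",
    "InputSerialization": {"CSV": {"FileHeaderInfo": "USE"}},
    "OutputSerialization": {"JSON": {}},
}

# With boto3 installed and credentials configured, the query would run as:
#
#   import boto3
#   s3 = boto3.client("s3")
#   resp = s3.select_object_content(**select_params)
#   for event in resp["Payload"]:
#       if "Records" in event:
#           print(event["Records"]["Payload"].decode())

print(select_params["Expression"])
```

The point of tools like S3 Select and Athena is that filtering happens inside the storage layer, so only the result set crosses the network instead of the whole object.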

Kumar Nachiketa is a Storage Partner Solutions Architect at AWS for APJ, based in Singapore. With over 13 years in the data storage industry, he loves helping partners and customers make better decisions about their data storage strategy and build robust solutions. He has recently been working with partners across APJ to create innovative data storage practices.

Learn more about AWS at - https://amzn.to/2QhI1pa

Subscribe:
More AWS videos http://bit.ly/2O3zS75
More AWS events videos http://bit.ly/316g9t4

#AWS #AWSSummit #AWSEvents

Video "I Analyze Data - Best Practices for Implementing a Data Lake in Amazon S3 (Level 200)" from the AWS Events channel
Video information
September 1, 2020, 23:39:37
00:33:55