
Building a hybrid data pipeline using Kafka and Confluent

Building data pipelines is one of the most common workloads customers run on Google Cloud, and increasingly they want those pipelines to span hybrid infrastructures. Streaming data with Apache Kafka across hybrid and multi-region infrastructures is a common pattern for creating a consistent data fabric. Kafka and Confluent let customers integrate legacy systems with Google services such as BigQuery and Dataflow in real time.

Learn how to build a robust, extensible data pipeline starting on-premises by streaming data from legacy systems into Kafka using the Kafka Connect framework.
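
As a rough sketch of that first step (the session itself doesn't publish code), the snippet below registers a Confluent JDBC source connector through the Kafka Connect REST API so that rows from a legacy relational database flow into a Kafka topic. The Connect endpoint, database, table, column, and topic names are hypothetical placeholders.

import json
import requests

# Hypothetical Kafka Connect worker endpoint on the on-premises cluster.
CONNECT_URL = "http://onprem-connect:8083/connectors"

# Source connector that streams rows from a legacy database into Kafka
# using Confluent's JDBC source connector.
connector = {
    "name": "legacy-orders-source",  # hypothetical connector name
    "config": {
        "connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector",
        "connection.url": "jdbc:postgresql://legacy-db:5432/orders",  # hypothetical database
        "connection.user": "connect",
        "connection.password": "secret",
        "table.whitelist": "orders",
        "mode": "incrementing",                 # pick up new rows by a growing key
        "incrementing.column.name": "order_id",
        "topic.prefix": "legacy.",              # rows land in topic "legacy.orders"
        "tasks.max": "1",
    },
}

resp = requests.post(CONNECT_URL, json=connector)
resp.raise_for_status()
print(json.dumps(resp.json(), indent=2))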

This session highlights how to easily replicate streaming data from an on-premises Kafka cluster to a Kafka cluster running on Google Cloud. Doing so integrates legacy applications with analytics in the cloud, using Google services such as AI Platform, AutoML, and BigQuery.
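
A minimal sketch of that replication step, assuming Confluent Replicator runs as a connector on a Connect worker alongside the Google Cloud cluster; the broker addresses, connector name, and topic whitelist below are hypothetical. Once the replicated topics land in the cloud cluster, a sink connector (for example a BigQuery sink) can feed them into the analytics services mentioned above.

import requests

# Hypothetical Connect worker running next to the Google Cloud Kafka cluster.
CONNECT_URL = "http://gcp-connect:8083/connectors"

# Confluent Replicator connector that copies the on-premises topic into the
# cloud cluster, passing records through as raw bytes.
replicator = {
    "name": "onprem-to-gcp-replicator",  # hypothetical connector name
    "config": {
        "connector.class": "io.confluent.connect.replicator.ReplicatorSourceConnector",
        "src.kafka.bootstrap.servers": "onprem-broker:9092",  # hypothetical on-prem brokers
        "dest.kafka.bootstrap.servers": "gcp-broker:9092",    # hypothetical cloud brokers
        "topic.whitelist": "legacy.orders",
        "key.converter": "io.confluent.connect.replicator.util.ByteArrayConverter",
        "value.converter": "io.confluent.connect.replicator.util.ByteArrayConverter",
        "tasks.max": "1",
    },
}

requests.post(CONNECT_URL, json=replicator).raise_for_status()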

Speakers: Sarthak Gangopadhyay, Josh Treichel

Watch more:
Google Cloud Next ’20: OnAir → https://goo.gle/next2020

Subscribe to the GCP Channel → https://goo.gle/GCP

#GoogleCloudNext

DA211
event: Google Cloud Next 2020; re_ty: Publish; product: Cloud - Data Analytics - Dataflow, Cloud - Data Analytics - BigQuery; fullname: Sarthak Gangopadhyay, Josh Treichel;

Video "Building a hybrid data pipeline using Kafka and Confluent" from the Google Cloud Tech channel
Video information
Published: September 15, 2020, 21:09:46
Duration: 00:26:25