Part 1 - A sample data pipeline using Python - Setup Project and Develop Program to read table list
Let us understand how to build an end-to-end data pipeline using Python. Go through these videos to learn about PyCharm and Git, as well as setting up and validating a project that copies data from a source MySQL database to a target Postgres database.
Here is the list of videos.
Video 1 (this one) - Setup Project, GitHub Repository and Develop Code to read list of tables from a file: https://www.youtube.com/watch?v=BxLTTuLlvH0&list=PLf0swTFhTI8qhMM6uka63ASG1RrRHws4o&index=25
Video 2 - Develop required functionality to read the data from table and generate insert statement using metadata: https://www.youtube.com/watch?v=czJ0j-9FK08&list=PLf0swTFhTI8qhMM6uka63ASG1RrRHws4o&index=26
Video 3 - Develop required functionality to write data to tables in target database: https://www.youtube.com/watch?v=V1nbPEhjLow&list=PLf0swTFhTI8qhMM6uka63ASG1RrRHws4o&index=28
▶️Link for complete playlist - https://www.youtube.com/playlist?list=PLf0swTFhTI8pRV9DDzae2o1m-cqe5PtJ2
=============================================
More top rated #DataEngineering Courses from #itversity 👇🏻
_____________________________________________________
🔵Click below to get access to the course with one-month lab access for "Data Engineering Essentials Hands-on - SQL, Python and Spark" -
👉🏼🔗https://www.udemy.com/course/data-engineering-essentials-sql-python-and-spark/?referralCode=EEF55B4668DA42F6154D
🟢Data Engineering using AWS Analytics Services (Bestseller)
👉🏼🔗https://www.udemy.com/course/data-engineering-using-aws-analytics-services/?referralCode=99ADF846582E1D7DAEA7
🟢Data Engineering using Databricks features on AWS and Azure (Highly Rated)
👉🏼🔗https://www.udemy.com/course/data-engineering-using-databricks-on-aws-and-azure/?referralCode=EEA8219E6538F56E3B5B
🟢Data Engineering using Kafka and Spark Structured Streaming (NEW)
🔗https://www.udemy.com/course/data-engineering-using-kafka-and-spark-structured-streaming/?referralCode=30F204DEF4644FE9F112
If you are relatively new to Python, feel free to follow our Mastering Python course at https://python.itversity.com
* 00:00:00 Launching Session
* 00:00:33 Prerequisites - Source and Target Databases using Docker. You can use https://github.com/dgadiraju/retail_db.git
* 00:02:20 Setup Project using PyCharm
* 00:15:26 Externalize Database Connectivity Information
* 00:33:09 Validate Python Programs with arguments using PyCharm Wizard
* 00:34:42 Add Environment Variables using PyCharm Wizard
* 00:39:56 Versioning the code using GitHub - Initialize, add and commit using local repository
* 00:46:30 Adding .gitignore to ignore non-source-code files
* 00:51:22 Creating Repository using https://www.github.com and pushing from local repository to GitHub
* 00:53:53 Creating a file with list of tables to be loaded
* 00:59:34 Reading table list using Pandas
* 01:11:25 Committing and pushing changes to GitHub
* 01:20:55 Q&A
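The steps covered above (externalizing connectivity details via environment variables and reading the table list with Pandas) can be sketched roughly as follows. The file name, environment variable names, and column name here are illustrative assumptions; the actual names used are shown in the video.

```python
import os

import pandas as pd


def get_connection_info():
    """Read source database connectivity details from environment variables.

    The variable names and defaults below are hypothetical examples of
    externalizing connection details instead of hard-coding them.
    """
    return {
        "host": os.environ.get("SOURCE_DB_HOST", "localhost"),
        "port": os.environ.get("SOURCE_DB_PORT", "3306"),
        "database": os.environ.get("SOURCE_DB_NAME", "retail_db"),
        "user": os.environ.get("SOURCE_DB_USER", "retail_user"),
    }


def get_tables(path):
    """Load the list of tables to be processed using Pandas.

    Assumes a plain text file with one table name per line, e.g.:
        orders
        order_items
    """
    tables = pd.read_csv(path, names=["table_name"])
    return tables["table_name"].tolist()
```

A driver program could then loop over `get_tables("table_list")` and copy each table from source to target, which is what the later videos in the playlist build out.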
For quick itversity updates, subscribe to our newsletter or follow us on social platforms.
* Newsletter: http://notifyme.itversity.com
* LinkedIn: https://www.linkedin.com/company/itversity/
* Facebook: https://www.facebook.com/itversity
* Twitter: https://twitter.com/itversity
* Instagram: https://www.instagram.com/itversity/
* YouTube: https://www.youtube.com/itversityin
#Python #pipeline #PythonProgramming #Data #itversity
Join this channel to get access to perks:
https://www.youtube.com/channel/UCakdSIPsJqiOLqylgoYmwQg/join
Video "Part 1 - A sample data pipeline using Python - Setup Project and Develop Program to read table list" from the itversity channel.