Python Tutorial : Components of a data platform

  • DataCamp
  • 2020-04-15
  • 326

Video description for Python Tutorial : Components of a data platform

Want to learn more? Take the full course at https://learn.datacamp.com/courses/bu... at your own pace. More than a video, you'll learn hands-on coding & quickly apply skills to your daily work.

---

Hi! I’m Oliver Willekens, a data engineer and instructor in this field at Data Minded. In companies today, people are trying to extract value from the tons of data they’re gathering. They’re doing this in an environment called “the data platform”, which is the start of our journey to create robust data pipelines.

While working through this course, you will learn:
  • how to ingest data into the data platform using the very modular Singer specification,
  • the common data cleaning operations,
  • simple transformations using PySpark,
  • how and why to test your code,
  • and how to get your Spark code automatically deployed on a cluster.
These are the skills you will be able to apply in a wide variety of situations. And because of that, it’s important that you standardize the approach. You’ll see how we do this.

Note that there is a lot to be said about each of these topics, too much to fit into one DataCamp course. This course is only an introduction to data engineering pipelines.

Many modern organizations are becoming aware of just how valuable the data they collect is. Internally, the data is becoming more and more “democratized”:

It is being made accessible to almost anyone within the company, so that new insights can be generated. On the public-facing side too, companies are making more and more data available to people, for example in the form of public APIs.

The genesis of the data is with the operational systems, such as streaming data collected from various Internet of Things devices, or web session data from Google Analytics or some sales platform. This data has to be stored somewhere, so that it can be processed at later times. Nowadays, the scale of the data and the velocity at which it flows have led to the rise of what we call “the data lake”.

The data lake comprises several systems, and is typically organized in several zones. The data that comes from the operational systems, for example, ends up in what we call the “landing zone”. This zone forms the basis of truth: it is always there and holds the unaltered version of the data as it was received. The process of getting data into the data lake is called “ingestion”.
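
To make that concrete, here is a minimal sketch of a Singer-style tap emitting records for ingestion, using the singer-python package; the stream name, schema and records are invented for illustration, not taken from the course:

```python
# Hypothetical sketch of ingestion following the Singer specification,
# using the singer-python package. Stream name, schema and records are
# illustrative only.
import singer

schema = {
    "properties": {
        "id": {"type": "integer"},
        "name": {"type": "string"},
        "amount": {"type": "number"},
    }
}

# A Singer "tap" emits a SCHEMA message followed by RECORD messages on
# stdout; a "target" reads them and writes the data into the landing zone.
singer.write_schema(stream_name="sales", schema=schema, key_properties=["id"])
singer.write_records(stream_name="sales", records=[
    {"id": 1, "name": "espresso", "amount": 2.10},
    {"id": 2, "name": "latte", "amount": 3.20},
])
```

A tap like this would typically be piped into a target (for example `tap-sales | target-csv`, names hypothetical), so the raw records arrive unaltered in the landing zone.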

People build various kinds of services on top of this data lake, like predictive algorithms, and dashboards for A/B tests of marketing teams. Many of these services apply similar transformations to the data. To prevent duplication of common transformations, data from the landing zone gets “cleaned” and stored in the clean zone. We’ll see in the next chapter what is typically meant by clean data.
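
As a rough sketch of such a cleaning step in PySpark (the paths, column names and cleaning rules here are assumptions, not prescribed by the course):

```python
# Illustrative PySpark cleaning step: landing zone -> clean zone.
# Paths, column names and rules are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("clean_sales").getOrCreate()

raw = spark.read.json("s3://data-lake/landing/sales/")

clean = (
    raw.dropDuplicates(["id"])                                 # remove duplicate records
       .withColumn("amount", F.col("amount").cast("double"))   # enforce data types
       .withColumn("name", F.lower(F.trim(F.col("name"))))     # normalize text fields
       .na.drop(subset=["id"])                                 # drop rows missing the key
)

clean.write.mode("overwrite").parquet("s3://data-lake/clean/sales/")
```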

Finally, for each use case, some special transformations are applied to this clean data. For example, predicting which customers are likely to churn is a common business use case. You would apply a machine-learning algorithm to a dataset composed of several cleaned datasets. This domain-specific data is stored in the business layer.
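
A business-layer step could then look roughly like this in PySpark; the dataset names and columns below are hypothetical:

```python
# Hypothetical business-layer step: combine cleaned datasets into a
# churn-modeling table. Dataset names and columns are illustrative.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("churn_features").getOrCreate()

customers = spark.read.parquet("s3://data-lake/clean/customers/")
orders = spark.read.parquet("s3://data-lake/clean/orders/")

# Aggregate order history per customer and join it with the customer profile.
order_stats = orders.groupBy("customer_id").agg(
    F.count("*").alias("n_orders"),
    F.max("order_date").alias("last_order_date"),
)

churn_features = customers.join(order_stats, on="customer_id", how="left")

churn_features.write.mode("overwrite").parquet(
    "s3://data-lake/business/churn_features/"
)
```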

To move data from one zone to another, and transform it along the way, people build data pipelines. The word comes from the similarity to how liquids and gases flow through pipelines; in this case, it’s just data that flows.

The pipelines can be triggered by external events, like files being stored in a certain location, by a time schedule, or even manually.

Usually, the pipelines that handle data in large batches are triggered on a schedule, for example overnight. We call these Extract, Transform and Load pipelines, or ETL pipelines for short.
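
Stripped to its essence, such a batch pipeline is just three steps chained together. A bare-bones sketch, with placeholder paths and a made-up transformation rule:

```python
# Minimal skeleton showing the Extract-Transform-Load structure.
# Function bodies and paths are placeholders, not from the course.
import json

def extract(path: str) -> list[dict]:
    # Extract: pull raw records from the landing zone.
    with open(path) as f:
        return [json.loads(line) for line in f]

def transform(records: list[dict]) -> list[dict]:
    # Transform: clean and reshape the records.
    return [{**r, "amount": float(r["amount"])} for r in records]

def load(records: list[dict], path: str) -> None:
    # Load: write the result to the next zone.
    with open(path, "w") as f:
        for r in records:
            f.write(json.dumps(r) + "\n")

def run_etl() -> None:
    load(transform(extract("landing/sales.jsonl")), "clean/sales.jsonl")
```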

There are typically many pipelines in place. To keep good oversight, these are triggered by tools that provide many benefits to the operators. We’ll be inspecting one such tool, the popular Apache Airflow, in the last chapter.
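
As a small preview, a nightly trigger in Apache Airflow might look roughly like this (Airflow 2.x style; the DAG id, schedule and callable are illustrative):

```python
# Rough sketch of a nightly Airflow DAG. The DAG id, schedule and the
# run_etl callable are placeholders for an actual ETL job.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def run_etl():
    # Placeholder for the actual extract-transform-load job.
    print("running the ETL job")

with DAG(
    dag_id="nightly_sales_etl",
    start_date=datetime(2023, 1, 1),
    schedule_interval="@daily",   # run once per day, e.g. overnight
    catchup=False,
) as dag:
    etl_task = PythonOperator(task_id="run_etl", python_callable=run_etl)
```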

Good, now that you have a high-level overview of the data platform, let’s see how we can use it.

#DataCamp #PythonTutorial #BuildingDataEngineeringPipelinesinPython
