Data Driven Python:
Basic Training

Master your data foundation with Python and prepare yourself for Big Data.

Advance with our expert-led training program, designed to equip you with cutting-edge skills in data-driven development with Python. Benefit from our wealth of experience from countless customer projects:

Experience a balanced mix of theory, live demonstrations and practical exercises.

Design and implement data-driven projects, mastering Python for data handling, governance, and architecture.

Learn to work with diverse databases, from relational to NoSQL, and orchestrate efficient ETL workflows.

Build scalable, event-driven data services with secure APIs and handle real-world Big Data challenges.

Data Driven Python Training – upcoming dates

11.11. – 12.11.2024

Data Driven Python in 2 Days

13.01. – 14.01.2025

Data Driven Python in 2 Days

26.02. – 27.02.2025

Data Driven Python in 2 Days

This course is tailored for developers and data engineers looking to strengthen their Python skills for data-driven projects. If you have a solid grasp of Python and want to dive deeper into designing data architectures, working with databases, and managing data pipelines, this course is perfect for you. Ideal for professionals looking to expand their expertise in data engineering and Big Data.

Practical Applications That We Will Cover in the Training:

  1. Hands-on experience with Python basics for data handling, including lists, dictionaries, tuples, sets, and file interactions.
  2. Understanding of data architecture concepts, such as monolith vs. microservices and Data Lakes vs. Databases.
  3. Knowledge of various database types, their characteristics, and how to select the right one for your needs.
  4. Practical experience with workflow orchestration tools like Apache Airflow and data pipeline monitoring.
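As a small taste of the first point, here is a minimal sketch of Python data handling with built-in datatypes and file interaction. The records and file name are invented purely for illustration:

```python
import json
import tempfile
from pathlib import Path

# A few records using built-in datatypes: a list of dictionaries.
records = [
    {"id": 1, "city": "Berlin", "temp_c": 21.5},
    {"id": 2, "city": "Magdeburg", "temp_c": 19.0},
]

# A set comprehension gives quick de-duplication of attribute values.
cities = {r["city"] for r in records}

# Persist the records to a JSON file and read them back.
path = Path(tempfile.gettempdir()) / "records.json"
path.write_text(json.dumps(records))
loaded = json.loads(path.read_text())

print(sorted(cities))  # ['Berlin', 'Magdeburg']
print(len(loaded))     # 2
```

The training goes well beyond this, but the pattern of combining lists, dictionaries, and sets with file I/O is the foundation everything else builds on.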

After The Course, You Will Be Able To:

  1. Design and develop robust data-driven projects using Python.
  2. Understand the importance of data governance, lineage, and architecture in modern software development.
  3. Work with various databases, including relational, document-based, time-series, and blob storage.
  4. Implement ETL processes and workflow orchestration tools for efficient data processing.
  5. Design and implement event-driven data processing pipelines using Kafka as a message bus.
  6. Develop secure APIs for data access and design scalable data services and semantic layers.

The Data Driven Python training is perfect for you if…

  • You have basic Python knowledge
  • You are motivated to work with Kafka and event-based systems
  • You are open to feedback and want to learn collaboratively

The Data Driven Python training is not suitable for you if…

  • You are a beginner in programming and do not have basic Python knowledge.
  • You don’t want to learn the basics of data-driven programming with Python or gain an overview of the different ways to handle data
  • You prefer a focus on big data applications without the recommended foundational knowledge

Agenda

Training

For small companies and teams that are new to the topic.

  • The importance of data in modern software development
  • Why Python is a preferred language for data-driven projects
  • Basic Python syntax and control structures
  • Introduction to Python datatypes focusing on data handling (lists, dictionaries, tuples, sets)
  • Using Python to interact with files and directories
  • Advanced features of Python’s built-in datatypes
  • When and why to use specific datatypes for data tasks
  • Definitions and differences: Database, Data Lake, Data Warehouse, Lakehouse
  • Understanding Data Lineage and Data Governance
  • The principles of data architecture
  • Comparing traditional and modern data architectures: Monolith vs. Microservices, Data Lakes vs. Databases
  • Characteristics of relational, document-based, time-series, and blob storage
  • ACID properties and their relevance across database types
  • Selecting the right database for your data needs
  • Detailed exploration of ACID properties
  • Data ingestion, transformation, and loading (ETL) fundamentals
  • Introduction to workflow orchestration tools (e.g., Apache Airflow)
  • Introduction to data transformation and data contracts with dbt (Data Build Tool)
  • Best practices for data pipeline monitoring, testing, and maintenance
  • Principles of event-driven data processing, how to use it and when to avoid it
  • Introduction to Kafka as a message bus
  • Streaming batch data with Python, Faust, and how to monitor the process
  • The basics of data service design
  • Review of communication protocols from TCP to REST and gRPC
  • Implementing APIs for data access
  • Security and authorization for data services
  • Introduction to Big Data and its challenges
  • Techniques for working with large datasets that don’t fit into memory
  • Utilizing Dask for parallel computing and large-scale data processing
  • Event-driven Architecture for scaling data processing pipelines
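The ETL fundamentals in the agenda can be previewed with nothing but the standard library. In the training, tools such as Apache Airflow schedule and monitor steps like these; the data and names below are invented for illustration only:

```python
import csv
import io

# Extract: read raw CSV data (here from an in-memory string for illustration).
RAW = "city,temp_c\nBerlin,21.5\nMagdeburg,19.0\nBerlin,23.1\n"

def extract(raw: str) -> list[dict]:
    return list(csv.DictReader(io.StringIO(raw)))

# Transform: convert types and aggregate average temperature per city.
def transform(rows: list[dict]) -> dict[str, float]:
    totals: dict[str, list[float]] = {}
    for row in rows:
        totals.setdefault(row["city"], []).append(float(row["temp_c"]))
    return {city: sum(v) / len(v) for city, v in totals.items()}

# Load: write results into a target store (a plain dict stands in for a database).
warehouse: dict[str, float] = {}

def load(averages: dict[str, float]) -> None:
    warehouse.update(averages)

load(transform(extract(RAW)))
print(warehouse)  # average temperature per city
```

An orchestrator like Airflow would run each of these three functions as a separate, monitored task with retries and scheduling, which is exactly the step up the course covers.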

Customized

For large companies and teams that want to master special challenges.

  • Your ecosystem
  • Your best practices
  • Your problems and topics
  • The importance of data in modern software development
  • Why Python is a preferred language for data-driven projects
  • Basic Python syntax and control structures
  • Introduction to Python datatypes focusing on data handling (lists, dictionaries, tuples, sets)
  • Using Python to interact with files and directories
  • Advanced features of Python’s built-in datatypes
  • When and why to use specific datatypes for data tasks
  • Definitions and differences: Database, Data Lake, Data Warehouse, Lakehouse
  • Understanding Data Lineage and Data Governance
  • The principles of data architecture
  • Comparing traditional and modern data architectures: Monolith vs. Microservices, Data Lakes vs. Databases
  • Characteristics of relational, document-based, time-series, and blob storage
  • ACID properties and their relevance across database types
  • Selecting the right database for your data needs
  • Detailed exploration of ACID properties
  • Data ingestion, transformation, and loading (ETL) fundamentals
  • Introduction to workflow orchestration tools (e.g., Apache Airflow)
  • Introduction to data transformation and data contracts with dbt (Data Build Tool)
  • Best practices for data pipeline monitoring, testing, and maintenance
  • Principles of event-driven data processing, how to use it and when to avoid it
  • Introduction to Kafka as a message bus
  • Streaming batch data with Python, Faust, and how to monitor the process
  • The basics of data service design
  • Review of communication protocols from TCP to REST and gRPC
  • Implementing APIs for data access
  • Security and authorization for data services
  • Introduction to Big Data and its challenges
  • Techniques for working with large datasets that don’t fit into memory
  • Utilizing Dask for parallel computing and large-scale data processing
  • Event-driven Architecture for scaling data processing pipelines
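The event-driven items above can be previewed with a small producer/consumer sketch. A standard-library `queue.Queue` stands in for a message bus here purely to illustrate the pattern; the training itself uses Kafka and Faust:

```python
import queue
import threading

bus: "queue.Queue[dict | None]" = queue.Queue()  # stand-in for a Kafka topic
results: list[str] = []

def producer() -> None:
    # Publish a few events, then a sentinel to signal the end of the stream.
    for city in ["Berlin", "Magdeburg"]:
        bus.put({"event": "reading", "city": city})
    bus.put(None)

def consumer() -> None:
    # Consume events until the sentinel arrives.
    while (event := bus.get()) is not None:
        results.append(event["city"])

t_prod = threading.Thread(target=producer)
t_cons = threading.Thread(target=consumer)
t_prod.start(); t_cons.start()
t_prod.join(); t_cons.join()

print(results)  # ['Berlin', 'Magdeburg']
```

With Kafka, the queue becomes a durable, partitioned topic and producers and consumers become independent services, which is what makes the pattern scale; when that durability and decoupling are not needed, the course also covers when to avoid the added complexity.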

Hear from our satisfied training attendees

A1 Telekom Austria AG

Reinhard Burgmann
Head of Data Ecosystem

„UTA coached my team along the development process of the migration plan of our on premises data lake to the public cloud.

The outstanding level of expertise, both on a technical and organizational level, ensured a well-structured and realistic migration plan including timeline, milestones, and efforts.

The enablement of my team was at the center of a very smooth collaboration. Through UTA, we achieved our goal faster and reduced risks of the migration project significantly.

I highly recommend UTA’s services!“

Vattenfall

Bernard Benning
BA Heat

„I recently attended Vattenfall IT’s online Kafka training day hosted by Ultra Tendency, and it was an enriching experience.

The trainer, Ahmed, did a fantastic job explaining the theory behind Kafka, and the emphasis on practical application was great. The hands-on programming exercises were particularly helpful, and I’ve never experienced training with so many interactive examples!

Overall, I highly recommend this training to anyone who wants to improve their Kafka knowledge interactively and gain valuable skills.“

VP Bank

Eisele Peer
Lead Architect & Head of IT Integration & Development

„The MLOps training exceeded our expectations!

It offered a perfect blend of an overview, hands-on coding examples, and real-world use cases. The trainer answered all questions competently and adapted the content to fit our company’s infrastructure.

This training not only provided us with knowledge but also practical skills that we can apply immediately.“

Your investment

1949 € plus VAT.
  • Gain a deep understanding of Python for data tasks, including working with key data structures and file systems.
  • Learn about modern data architectures and navigate the complexities of data governance, lineage, and database selection.
  • Master practical data workflows with tools like Apache Airflow and Kafka to build robust data pipelines.
  • Explore advanced Big Data processing techniques with Python, including parallel computing with Dask and event-driven architecture for large datasets.

Get to know your trainers

Marvin Taschenberger

Professional Software Architect, Ultra Tendency

Hudhaifa Ahmed

Senior Lead Big Data Developer Berlin Territory Manager, Ultra Tendency

Matthias Baumann

Chief Technology Officer & Principal Big Data Solutions Architect Lead, Ultra Tendency

Required hardware & infrastructure for your Data Driven Python training

  • You will need a PC or Mac with a web browser and MS Teams.
  • During the training, we will provide you with a virtual machine with the required local dependencies, services and root access.
  • This VM has a running Kubernetes cluster on which you can test and execute the training instructions.
  • You can access the machine via a browser or SSH if you wish and the network restrictions allow it.