Level: For IT professionals
Provider: Other
Length (days): 6
Hours/day: 4
Delivery method: Online
Price: $1,380 + VAT
As the leading containerization platform, Docker simplifies building, shipping, and running applications at any scale by providing a lightweight, portable, and consistent runtime environment.
Docker is widely used by developers for application packaging, IT professionals for infrastructure management, and DevOps teams for automating deployments in cloud environments and microservices architectures.
For the past several years, the renowned Stack Overflow Developer Survey has consistently ranked Docker among the most loved and widely used technologies.
Using Docker effectively is no longer optional—it's a critical skill for staying competitive in today's software industry.
This training is designed for anyone who wants to confidently use Docker for real-world applications—whether you're new to containers or looking to deepen your understanding.
To grasp the key concepts that are particularly valuable in development settings, you'll gain hands-on experience with containers, images, Dockerfiles, volumes, networking, and Docker Compose.
Beyond development, you'll also learn how to integrate Docker into business environments from scratch: setting up Docker for teams, managing private registries and implementing access controls for secure collaboration.
On the production side, you'll explore automation for CI/CD pipelines, container monitoring, and strategies for maintaining reliable deployments.
By the end, you'll be ready to integrate Docker seamlessly into your workflow and use it productively on your own.
Finally, you'll also receive a brief introduction to Kubernetes—the industry-standard orchestration tool—so you understand when and why to use it for managing containerized workloads, even at the highest levels of scale and complexity.
Here, you will learn what containers are and how to manage them using Docker CLI commands. You'll work with starting, stopping, restarting, pausing, and deleting containers, as well as inspecting their states. We'll also cover container logs, process management, and restart policies, ensuring you can effectively control and troubleshoot running containers.
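The lifecycle operations described above map directly onto a handful of CLI commands. A minimal sketch (the image and container names are illustrative; running it requires a local Docker daemon):

```shell
# Start an nginx container in the background with an automatic restart policy
docker run -d --name web --restart unless-stopped -p 8080:80 nginx:alpine

# Inspect its state, logs, and processes
docker ps              # list running containers
docker logs -f web     # follow the container's log output (Ctrl+C to stop)
docker top web         # show processes running inside the container

# Control the lifecycle
docker stop web        # graceful stop (SIGTERM, then SIGKILL after a timeout)
docker restart web
docker pause web && docker unpause web

# Clean up
docker rm -f web       # force-remove the container, even if it is running
```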
Docker images are the foundation of containers. In this module, you'll learn how to pull, list, inspect, and remove images using the Docker CLI. We'll cover image layers, caching, and tagging strategies to help you manage versions efficiently. You'll also explore optimizing image sizes and understanding the difference between official, custom, and third-party images.
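The image-management commands from this module can be sketched as follows (the tag names are illustrative):

```shell
# Pull and list images
docker pull nginx:1.27-alpine
docker images

# Inspect layers and metadata
docker history nginx:1.27-alpine   # layers and their sizes (shows caching granularity)
docker inspect nginx:1.27-alpine   # full JSON metadata

# Re-tag for your own versioning scheme, then remove the extra tag
docker tag nginx:1.27-alpine myteam/nginx:stable
docker rmi myteam/nginx:stable
```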
You'll learn how to create your own custom images using Dockerfiles while containerizing a real-world full-stack application step by step. We'll cover Dockerfile syntax and structure, best practices for efficient builds, and techniques like layer caching and multi-stage builds to optimize image size and speed. By the end, you will have successfully containerized both a backend and frontend.
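As a taste of the multi-stage pattern covered here, a minimal Dockerfile for a Node.js frontend might look like this (the project layout and base-image tags are assumptions, not part of the course material):

```dockerfile
# Stage 1: build the static assets
FROM node:20-alpine AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci            # this layer stays cached as long as package*.json is unchanged
COPY . .
RUN npm run build

# Stage 2: serve only the build output; the Node toolchain never reaches the final image
FROM nginx:alpine
COPY --from=build /app/dist /usr/share/nginx/html
```

Separating build and runtime stages like this is the main lever for keeping image sizes small.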
By default, containers don't store data permanently. This module covers how to keep data persistent using volumes and bind mounts. You'll explore the differences between stateful and stateless applications and learn to set up database containers with volumes for reliable storage. We'll also look at how bind mounts streamline development by syncing files between the host and containers.
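The two persistence mechanisms look like this on the command line (container names, password, and paths are illustrative):

```shell
# Named volume: managed by Docker, survives container removal
docker volume create pgdata
docker run -d --name db \
  -e POSTGRES_PASSWORD=secret \
  -v pgdata:/var/lib/postgresql/data \
  postgres:16

# Bind mount: sync a host directory into the container for live development
docker run -d --name devweb \
  -v "$(pwd)/site:/usr/share/nginx/html:ro" \
  nginx:alpine
```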
Containers need to communicate—with each other, with services, and with the outside world. This module focuses on bridge networks, Docker's default networking mode. You'll learn how to connect containers, enable inter-container communication, and manage network access. We'll also cover best practices for securing container traffic.
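A short sketch of user-defined bridge networking, which adds DNS-based discovery between containers (the `myapi` image is hypothetical):

```shell
docker network create appnet
docker run -d --name api --network appnet myapi:latest
docker run -d --name db  --network appnet postgres:16

# From "api", the database is now reachable simply by the hostname "db"
docker exec api ping -c1 db
```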
Managing multi-container applications manually can be tedious. With Docker Compose, you'll learn how to define and run multi-container setups using a simple compose.yaml file. This module covers service dependencies, environment variables, networking between containers, and techniques to simplify local development and testing. By the end, you will have fully containerized a complex full-stack application and will be able to launch it effortlessly with a single command.
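As an illustration of the kind of file you'll write, a minimal `compose.yaml` for a three-service stack might look like this (service names, images, and credentials are placeholders):

```yaml
services:
  web:
    build: ./frontend
    ports:
      - "8080:80"
    depends_on:
      - api
  api:
    build: ./backend
    environment:
      DATABASE_URL: postgres://app:secret@db:5432/app
    depends_on:
      - db
  db:
    image: postgres:16
    environment:
      POSTGRES_USER: app
      POSTGRES_PASSWORD: secret
      POSTGRES_DB: app
    volumes:
      - pgdata:/var/lib/postgresql/data

volumes:
  pgdata:
```

With this in place, `docker compose up -d` starts the whole stack, and `docker compose down` tears it down again.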
Once your application is containerized, you need a secure and efficient way to store and share images within your team or organization. In this module, you'll learn how to set up a private container registry and manage images effectively. We'll then dive into Harbor, an open-source registry, and explore how to configure user access with Role-Based Access Control (RBAC), enforce security policies, and integrate Trivy for automated vulnerability scanning to keep your containerized workflow secure and compliant.
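The push workflow against a private registry is the same as against Docker Hub, only with the registry host in the tag. A sketch (the Harbor hostname and project are hypothetical):

```shell
# Run a throwaway local registry for experimentation
docker run -d --name registry -p 5000:5000 registry:2

# Tag and push an image to it
docker tag myapp:latest localhost:5000/myapp:latest
docker push localhost:5000/myapp:latest

# Against Harbor, the flow is identical, targeting your Harbor host and project
docker login harbor.example.com
docker tag myapp:latest harbor.example.com/myteam/myapp:1.0
docker push harbor.example.com/myteam/myapp:1.0
```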
Managing your own container registry gives you full control, but it also comes with administrative overhead. In this module, we'll explore fully managed cloud registries like Docker Hub, AWS Elastic Container Registry (ECR), GitHub Container Registry, and Google Artifact Registry. You'll learn how these services handle image storage, access control, and automated security scanning, as well as their pricing models.
Building and managing container images manually can be time-consuming and error-prone. In this module, you'll learn how to automate image creation using GitHub Actions, ensuring that every code change triggers a Docker build, tag, and push to a container registry.
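A minimal GitHub Actions workflow of the kind built in this module might look like this, pushing to GitHub Container Registry on every commit to `main` (the trigger branch and tagging scheme are assumptions):

```yaml
# .github/workflows/docker.yml
name: Build and push image
on:
  push:
    branches: [main]

jobs:
  build:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      packages: write
    steps:
      - uses: actions/checkout@v4
      - uses: docker/login-action@v3
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      - uses: docker/build-push-action@v6
        with:
          push: true
          tags: ghcr.io/${{ github.repository }}:${{ github.sha }}
```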
In this module, we'll start with a simple deployment, demonstrating how to run containers in a fully managed cloud environment with minimal setup, including Google Cloud Run. From there, we'll explore best practices for container deployments, covering zero-downtime updates, rollback strategies, environment variables, resource limits, and strategies for horizontal scaling using multiple containers and load balancing to ensure smooth and scalable application delivery.
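To give a flavor of how little setup a managed platform needs, a Cloud Run deployment can be a single command (project, region, image path, and limits below are illustrative):

```shell
gcloud run deploy myapp \
  --image europe-west1-docker.pkg.dev/my-project/apps/myapp:1.0 \
  --region europe-west1 \
  --allow-unauthenticated \
  --memory 512Mi \
  --max-instances 5
```

Cloud Run then handles TLS, scaling to zero, and rolling out new revisions without downtime.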
Keeping track of container performance and resource usage is essential in production environments. In this module, you'll learn how to monitor Docker containers using cAdvisor for real-time container metrics, Prometheus for data collection and alerting, and Grafana for visualizing key performance indicators. We'll cover CPU, memory, network, and disk usage monitoring, setting up custom dashboards, and implementing alerting strategies to detect and respond to issues before they impact your application.
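The monitoring stack assembled in this module can be brought up locally with a Compose file along these lines (image tags and ports are illustrative; `prometheus.yml` would be configured to scrape cAdvisor):

```yaml
services:
  cadvisor:
    image: gcr.io/cadvisor/cadvisor:latest
    ports:
      - "8081:8080"
    volumes:
      - /:/rootfs:ro
      - /var/run:/var/run:ro
      - /sys:/sys:ro
      - /var/lib/docker/:/var/lib/docker:ro
  prometheus:
    image: prom/prometheus:latest
    ports:
      - "9090:9090"
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml:ro
  grafana:
    image: grafana/grafana:latest
    ports:
      - "3000:3000"
```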
Keeping containers up to date is crucial for security and stability, but manually redeploying them can be tedious. In this module, you'll learn how to use Watchtower to automate container updates, ensuring your running services always use the latest images. We'll cover how Watchtower detects new image versions, updates containers seamlessly, and handles restarts with minimal downtime. Additionally, we'll discuss when to use automated updates in production and when manual control is preferable to avoid unintended disruptions.
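Running Watchtower itself is a one-liner; it needs access to the Docker socket to manage the other containers on the host (the polling interval below is an example value):

```shell
docker run -d --name watchtower \
  -v /var/run/docker.sock:/var/run/docker.sock \
  containrrr/watchtower \
  --interval 300     # check for new image versions every 5 minutes
```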
As applications grow, managing containers manually becomes impractical. This module introduces Kubernetes as the industry-standard orchestrator and explains when to use it over Docker alone. You'll explore Kubernetes architecture (Pods, Nodes, Control Plane) and key concepts like declarative configuration and desired state management to understand how Kubernetes automates deployment and scaling.
Running applications in Kubernetes requires more than just deploying containers. Here, you'll learn how to define and manage workloads using Deployments, StatefulSets, and DaemonSets—each designed for different types of applications. We'll cover scaling strategies, rolling updates, and rollback mechanisms to ensure reliability and high availability in production environments.
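A minimal Deployment manifest of the kind written in this module, as a sketch (image, labels, and replica count are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api
spec:
  replicas: 3
  selector:
    matchLabels:
      app: api
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
        - name: api
          image: ghcr.io/myteam/api:1.0
          ports:
            - containerPort: 8000
```

Applying it with `kubectl apply -f deployment.yaml` triggers a rolling update when the image tag changes, and `kubectl rollout undo deployment/api` rolls back to the previous revision.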
Containers inside Kubernetes need to communicate efficiently, both with each other and the outside world. In this module, you'll learn how Kubernetes Services enable reliable networking between Pods. We'll explore different service types—ClusterIP, NodePort, LoadBalancer, and Ingress—and show how they facilitate internal networking, external exposure, and traffic routing for your applications.
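A Service that exposes the Pods of a Deployment inside the cluster might look like this (the names and ports assume the illustrative `api` workload from the previous module):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: api
spec:
  type: ClusterIP        # switch to NodePort or LoadBalancer for external exposure
  selector:
    app: api
  ports:
    - port: 80           # port other Pods connect to
      targetPort: 8000   # port the container listens on
```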
Unlike traditional containers, applications running in Kubernetes often require persistent storage. This module covers Persistent Volumes (PVs), Persistent Volume Claims (PVCs), StorageClasses, and dynamic provisioning. You'll learn how Kubernetes handles stateful applications, data persistence, and volume management across clusters.
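A PersistentVolumeClaim, the building block you'll use most often, can be as small as this (the name and size are illustrative; with dynamic provisioning the cluster's default StorageClass creates the backing volume automatically):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: db-data
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
```

A Pod then mounts the claim by name via a `persistentVolumeClaim` volume entry.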
In this final hands-on project, you'll apply everything you've learned to deploy a real-world full-stack application on Kubernetes. You'll set up a frontend, backend, and database, configure networking with Services and Ingress, and implement scaling, storage, and monitoring to make it production-ready.
We'll also discuss real Kubernetes clusters (e.g., GKE, EKS, AKS) and guide you on next steps for running Kubernetes in a cloud environment.
The course is technical, so participants are expected to be comfortable typing and to have a general knowledge of computers and software.
To participate in this course, it's helpful to have a foundational understanding of certain concepts and technologies. The following general prerequisites are recommended but not mandatory:
For more information please call +386 1 568 40 40 or send an e-mail to trzenje@housing.si