Docker Crash Course for Absolute Beginners

Introduction

  • Welcome to our Docker Crash Course for Absolute Beginners! In this blog post, we'll delve into the main concepts of Docker and give you hands-on experience. Whether you're new to Docker or need to level up your engineering skills quickly, this post is for you.

  • First, we'll explain what Docker is, why it was created, and the problems it solves in engineering and software development.

Understanding Docker

  • Docker is virtualization software that simplifies application development and deployment by packaging applications into containers. These containers include everything the application needs to run, making them easy to share and distribute.

  • Before Docker, developers had to install and configure all the services an application depended on directly on their operating system, leading to complexity and potential errors.

  • With Docker, developers can run services as containers, eliminating the need for manual installation and configuration. This standardizes the development environment and accelerates the development process.

Docker vs. Virtual Machines

  • Docker virtualizes only the application layer of the operating system and reuses the host's kernel, while a virtual machine virtualizes an entire operating system, including its own kernel. As a result, Docker images are much smaller and containers start in seconds rather than minutes. The trade-off is compatibility: a Linux-based Docker image needs a Linux kernel to run on.

  • Docker Desktop for Windows and Mac runs a lightweight Linux virtual machine under the hood, enabling Linux-based containers on non-Linux operating systems and expanding Docker's capabilities for local development.

Installing Docker

To install Docker, follow these steps:

  1. Go to the official Docker website and navigate to the installation guide.
  2. Choose the appropriate installation guide for your operating system (e.g., Windows, Mac, Linux).
  3. Follow the instructions provided in the installation guide. Ensure to check system requirements and choose the guide that matches your computer specifications.
  4. Once downloaded, run the installer and follow the installation prompts.

Understanding Docker Desktop

Docker Desktop is a tool that facilitates running Linux-based images on different operating systems. It includes:

  • Docker Engine: The main component that enables virtualization.
  • Command Line Interface (CLI) Client: Allows executing Docker commands via the terminal.
  • Graphical User Interface (GUI) Client: Provides a user-friendly interface for Docker operations.
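Once Docker Desktop is installed, you can check that both the CLI client and the Docker Engine are working from the terminal:

```shell
# Show client and server (engine) versions; if the server section
# is missing, the Docker Engine is not running
docker version

# Display system-wide information: number of containers, images,
# storage driver, and so on
docker info
```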

Docker Images

A Docker image is a packaged application artifact that includes:

  • Application code.
  • Environment configuration, such as the operating system and necessary tools like Node.js or Java runtime.
  • Environment variables and configurations required by the application.
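You can inspect what an image contains with a couple of CLI commands (using the public nginx image as an example):

```shell
# Download the image so the commands below have something to inspect
docker pull nginx

# Show the layers the image was built from
docker history nginx

# Show the image's configuration: environment variables,
# exposed ports, default command, and more
docker image inspect nginx
```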

Docker Containers

A Docker container is a running instance of a Docker image. When an image is run on an operating system, it starts the application in a pre-configured environment, creating a container. Multiple containers can be created from the same image.
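For example, the same image can back several independent containers (the --name values here are arbitrary):

```shell
# Start two separate containers from the same nginx image
docker run -d --name web1 nginx
docker run -d --name web2 nginx

# Both appear as separate running instances
docker ps
```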

Docker CLI

Docker CLI is a command-line interface client that interacts with the Docker Engine. It allows executing various Docker commands, such as:

  • docker images: Lists locally available Docker images.
  • docker ps: Lists running Docker containers.
  • docker run: Runs a command in a new container.
  • docker build: Builds a Docker image from a Dockerfile.
  • docker pull: Pulls an image or a repository from a registry.
  • docker push: Pushes an image or a repository to a registry.
  • docker stop: Stops one or more running containers.
  • docker start: Starts one or more stopped containers.
  • docker restart: Restarts one or more containers.
  • docker pause: Pauses all processes within one or more containers.
  • docker unpause: Unpauses all processes within one or more containers.
  • docker rm: Removes one or more containers.
  • docker rmi: Removes one or more images.
  • docker exec: Executes a command in a running container.
  • docker inspect: Displays detailed information about one or more containers or images.
  • docker logs: Fetches the logs of a container.
  • docker diff: Inspects changes to files or directories on a container's filesystem.
  • docker top: Displays the running processes of a container.
  • docker stats: Displays a live stream of resource usage statistics for one or more containers.
  • docker network: Manages Docker networks.
  • docker volume: Manages Docker volumes.
  • docker-compose (docker compose in newer Docker versions): Manages multi-container Docker applications defined in a YAML file.
  • docker swarm: Manages Docker Swarm clusters.
  • docker service: Manages Docker services within a Swarm cluster.
  • docker node: Manages Swarm nodes.
  • docker stack: Manages Docker stacks within a Swarm cluster.
  • docker config: Manages Docker configs.
  • docker secret: Manages Docker secrets.
  • docker history: Displays the history of an image.
  • docker tag: Creates a new tag that refers to an existing image.
  • docker login: Logs in to a Docker registry.
  • docker logout: Logs out from a Docker registry.
  • docker version: Shows the Docker version information.
  • docker info: Displays system-wide information about Docker.
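A few of these commands in a typical session might look like this (the image and container name are chosen for illustration):

```shell
docker pull redis                 # download the image
docker run -d --name cache redis  # start a container from it
docker ps                         # confirm it is running
docker logs cache                 # view its output
docker stop cache                 # stop the container
docker rm cache                   # remove the stopped container
```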

How to Obtain Docker Images

Docker images can be obtained from Docker Registries. Docker Hub is the largest registry, offering a vast collection of official and community-contributed images. These images are stored in repositories, organized by namespaces and tags.

Docker Registries

Docker Registries act as storage for Docker images. They provide access to ready-made Docker images online, allowing users to download and use them to run containers. Docker Hub is the most prominent registry, hosting official images and those contributed by the community.

Official Docker Images

Official Docker images are maintained and verified by Docker or the technology creators themselves. These images undergo review and publication by dedicated teams to ensure adherence to security and production best practices.

Versioning and Tags

Docker images are versioned, and each version is identified by a tag. Tags allow users to specify which version of an image they want to use. The latest tag is the default tag; by convention it usually points to the most recent stable release, though this is not guaranteed.

Obtaining Docker Images Locally

To download a Docker image locally, you can use the docker pull command followed by the image name and tag. If the tag is omitted, Docker defaults to the latest tag.
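For example, pulling a specific version versus the default tag (the version number here is just an example):

```shell
# Pull a specific version of the nginx image
docker pull nginx:1.25

# Pull without a tag; this is equivalent to nginx:latest
docker pull nginx
```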

Running Containers from Docker Images

Containers are instantiated from Docker images. To start a container from a local image, you use the docker run command followed by the image name and tag. Optionally, you can add the -d flag to run the container in detached mode, freeing up the terminal.
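For instance, running a container in the foreground versus detached mode:

```shell
# Foreground: the terminal stays attached to the container's output
docker run nginx:1.25

# Detached: the container runs in the background and the command
# prints its ID, freeing the terminal
docker run -d nginx:1.25
```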

Running Containers with Remote Images

Docker allows you to run containers from images that are not stored locally. When you specify an image in the docker run command that is not available locally, Docker automatically pulls it from the default registry, such as Docker Hub.

Accessing Docker Containers

Now that we've created and run a Docker container, the next step is to access it. Since Docker containers typically run in isolated networks, accessing them from our local machine requires a bit of configuration.

Port Binding

  1. Understanding Ports:

    • Each application running inside a container operates on a specific port, such as 80 for nginx or 6379 for Redis.
    • By default, containers run on their own internal network and aren't directly accessible from our local machine.
  2. Port Binding:

    • To access a container, we perform what's called port binding.
    • Port binding involves mapping a port on our local machine to a port inside the container.
    • For example, we can bind port 80 of the nginx container to port 9000 on our local machine.

Exposing Containers

  1. Stopping and Creating Containers:

    • Before binding ports, we stop the running container and create a new one.
    • We use docker stop to halt the container, then docker run to create a new one with the desired port mapping.
  2. Port Binding Flag:

    • When starting the container with docker run, we include the -p flag to specify the port binding.
    • Example: docker run -d -p 9000:80 nginx maps port 9000 on the local machine to port 80 inside the container (-p host_port:container_port).
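Putting those steps together (the container ID comes from docker ps, and the host port 9000 is an arbitrary choice):

```shell
# Stop and remove the original container
docker stop <container_id>
docker rm <container_id>

# Start a new nginx container, mapping host port 9000
# to the container's port 80 (-p host_port:container_port)
docker run -d -p 9000:80 nginx

# The site is now reachable at http://localhost:9000
```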

Accessing the Container

  1. Verification:

    • After port binding, we can access the container's application by navigating to localhost:<local_port> in a web browser.
  2. Example:

    • If we bound port 80 of the nginx container to port 9000 locally, we'd access it via localhost:9000.

Managing Containers

  1. Container IDs:

    • Docker assigns unique IDs to each container.
    • Commands like docker stop, docker start, and docker logs identify the target container by this ID.
  2. Naming Containers:

    • Instead of using IDs, containers can be assigned meaningful names using the --name flag during creation.
    • Example: docker run --name web_app -d -p 9000:80 nginx.
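Once a container has a name, it can be used anywhere an ID is accepted (web_app is an arbitrary name):

```shell
# Create a named container
docker run --name web_app -d -p 9000:80 nginx

# Manage it by name instead of by ID
docker logs web_app
docker stop web_app
docker start web_app
```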

Registry and Repository

  1. Private Docker Registries:

    • Companies often use private registries to store their Docker images securely.
    • AWS ECR, Google Container Registry, and Azure Container Registry are examples.
  2. Docker Hub:

    • Docker Hub serves as a public image registry, but it also offers private repositories for secure image storage.
    • Users can create accounts to manage their images.
  3. Repository vs. Registry:

    • A registry is the overarching service for storing Docker images.
    • Within a registry, repositories are used to organize images by application or project.
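The distinction shows up in a fully qualified image reference, which combines the registry address, the repository, and a tag (the private registry host and repository below are hypothetical):

```shell
# Anatomy of a fully qualified image name:
#   registry.example.com/my-team/my-app:1.0
#   └─ registry host ─┘ └─ repository ┘ └ tag
docker pull registry.example.com/my-team/my-app:1.0

# For Docker Hub the registry part is implied, so
# "nginx" really means docker.io/library/nginx:latest
docker pull nginx
```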

Creating Custom Docker Images for Applications

Now, let's delve into how you can create your own Docker image for your application.

Understanding the Process

Once your application development is complete and ready for release, packaging it into a Docker image is a common practice. This image contains everything your application needs to run, including dependencies, libraries, and configurations. By encapsulating your application and its environment into a Docker image, you ensure consistency and portability across different environments.

Creating a Dockerfile

To build a Docker image, you need to define how the image should be constructed. This is done through a file called a Dockerfile. A Dockerfile contains a set of instructions that Docker follows to build the image. Let's break down the key components of a Dockerfile:

  1. Base Image: Every Docker image starts with a base image, which serves as the foundation. This base image typically includes a minimal operating system along with necessary tools and dependencies. In our case, we're using a Node.js application, so our base image includes Node.js and npm.

  2. Defining Dependencies: Once we have the base image, we specify any additional dependencies required by our application. This could include libraries, packages, or frameworks necessary for the application to function correctly.

  3. Copying Application Code: We then copy our application code into the Docker image. This includes the source code files and any configuration files needed by the application.

  4. Setting Working Directory: It's essential to set the working directory inside the Docker image to ensure that subsequent commands are executed in the correct context.

  5. Installing Dependencies: With the application code in place, we install any dependencies using package managers like npm or pip. This ensures that all required libraries are available within the Docker image.

  6. Running the Application: Finally, we specify the command to run our application. This could be starting a server, running a script, or executing any other entry point for the application.
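The steps above map directly onto Dockerfile instructions. Here is a minimal sketch for a Node.js application (the file names, port, and entry point are assumptions):

```dockerfile
# Base image with Node.js and npm preinstalled
FROM node:20

# Set the working directory for the instructions below
WORKDIR /app

# Copy the dependency manifests and install dependencies first,
# so this layer is cached when only source code changes
COPY package*.json ./
RUN npm install

# Copy the application source code into the image
COPY . .

# Document the port the app listens on (assumed to be 3000)
EXPOSE 3000

# Command to start the application
CMD ["node", "server.js"]
```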

Building the Docker Image

Once we have defined our Dockerfile, we use the docker build command to build the Docker image. This command reads the instructions from the Dockerfile and creates an image based on those instructions. We specify the name and tag for the image along with the location of the Dockerfile.
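Assuming a Dockerfile sits in the current directory, building the image looks like this (the image name and tag are arbitrary):

```shell
# -t names and tags the image; the final "." is the build
# context (the directory containing the Dockerfile)
docker build -t my-app:1.0 .

# Confirm the image now exists locally
docker images
```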

Running the Docker Container

Once the Docker image is built, we can run it as a Docker container using the docker run command. This command starts a container based on the specified image, allowing us to interact with our application. We can expose ports, mount volumes, and configure other settings as needed to suit our application requirements.
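For example, running a custom image with a port mapping, a container name, and an environment variable (all values here are illustrative):

```shell
# Map host port 3000 to the container's port 3000,
# name the container, and pass an environment variable
docker run -d --name my-app \
  -p 3000:3000 \
  -e NODE_ENV=production \
  my-app:1.0
```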

Understanding Docker in the Software Development Lifecycle

In this final part of the blog post, let's explore how Docker fits into the broader software development and deployment process. We'll consider a simplified scenario to illustrate Docker's role in each step of the development lifecycle.

Development Environment

Imagine you're developing a JavaScript application on your local machine. Instead of installing a MongoDB database directly on your laptop, you opt to use a Docker container. You download a MongoDB container from Docker Hub and integrate it with your JavaScript application.
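In that scenario, getting a database for local development is a single command (27017 is MongoDB's default port, and the container name is arbitrary):

```shell
# Start MongoDB in a container and expose its default port
docker run -d --name dev-mongo -p 27017:27017 mongo

# The JavaScript app can now connect to mongodb://localhost:27017
```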

Version Control and Continuous Integration

Once you've developed your application locally, you commit your code to a version control system like Git. This triggers a continuous integration (CI) process, such as a Jenkins build. Jenkins compiles your JavaScript application and creates artifacts. Additionally, it builds a Docker image from your application artifact.

Private Docker Repository

The Docker image produced by the CI process is then pushed to a private Docker repository. In a corporate environment, it's common to have a private repository to manage and store proprietary images securely.

Deployment to Development Server

Next, the Docker image needs to be deployed to a development server. The development server pulls the image from the private repository and also fetches the MongoDB container from Docker Hub. Now, both your custom application container and the MongoDB container are running on the development server, communicating with each other as a cohesive application.

Testing and Development

With the application deployed on the development server, testers or other developers can access it for testing or further development. This setup allows for easy collaboration and testing within the development team.

Did you find this article valuable?

Support Mojtaba Maleki by becoming a sponsor. Any amount is appreciated!