Have you ever wondered why Docker has become so popular among developers? In this article, I'll explore the benefits of using Docker and how it lets you run any application from a Docker image instead of running it directly on your local machine. I'll also walk through a real-life situation where Docker proves invaluable.
Consider a scenario where you are working on an image processing project that requires installing the 'pillow' package on your Windows machine. But what if your work laptop's administrative restrictions block new installations or changes to the Program Files directory? Docker comes to the rescue in exactly these situations.
Docker is especially useful for developers working on AI/ML projects, as it allows local testing of server applications even when specific Python packages are not compatible with Windows. Docker Desktop provides a consistent environment for creating and managing containers, reducing discrepancies between development, testing, and production. It also simplifies the management of application dependencies and configurations inside Docker containers.
To better understand the difference between running an application locally and running it with Docker, let's look at Docker images and Docker containers.
A Docker image is essentially a filesystem snapshot encapsulating the application code, runtime, system tools, libraries, and other necessary elements for running the application. These images are crafted using instructions outlined in a Dockerfile, which serves as the blueprint or template for generating Docker containers.
In contrast, a Docker container is an operational instance of a Docker image. Containers provide lightweight, portable, and isolated environments for running applications along with their dependencies. Operating on the Docker Engine, containers maintain isolation from each other and the host system. Multiple containers can be spawned from a single Docker image, each boasting its own filesystem and secluded runtime environment. Containers can be initiated, halted, or modified without influencing the underlying image or other containers.
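For instance, assuming you have already built an image named my-app (a hypothetical name used here for illustration), you could create two fully independent containers from it:

docker create --name app-one my-app
docker create --name app-two my-app

Each container gets its own writable filesystem layer, so changes made inside app-one never affect app-two or the my-app image itself.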
To embark on your Docker journey, adhere to these steps:
Create a project folder containing a Python file (e.g., app.py), a Dockerfile (no file extension), and a requirements.txt file. The Python file holds your code, the Dockerfile contains the instructions for building the Docker image, and requirements.txt lists the Python packages your application needs.
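As a minimal sketch, sticking with the Pillow example from earlier (the filenames and code here are illustrative, not prescriptive), app.py might contain:

# app.py - a tiny script that exercises the Pillow package
from PIL import Image  # Pillow is imported under the name PIL

# Create a 100x100 red square and save it inside the container's filesystem
img = Image.new("RGB", (100, 100), color="red")
img.save("output.png")
print("Image saved as output.png")

and requirements.txt would simply list the package:

pillow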
The Dockerfile is a script composed of a sequence of instructions such as:
FROM: Designates the base image to commence with, like FROM ubuntu:18.04.
RUN: Executes commands within the image during the build phase, for instance, RUN apt-get update && apt-get install -y python3.
COPY: Transfers files and directories from the build context (typically the local filesystem) into the image, such as COPY . /app.
WORKDIR: Designates the working directory for succeeding instructions, like WORKDIR /app.
EXPOSE: Notifies Docker that the container will listen on specified network ports at runtime, for example, EXPOSE 8080.
CMD: Specifies the default command to execute upon container startup, like CMD ["python3", "app.py"].
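Putting these instructions together, a minimal Dockerfile for the Pillow example above might look like this (the base image and filenames are illustrative assumptions):

# Start from a slim official Python base image
FROM python:3.9-slim

# All subsequent instructions run relative to /app
WORKDIR /app

# Copy the dependency list first so the install layer can be cached
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the rest of the project into the image
COPY . .

# Run the script when a container starts
CMD ["python3", "app.py"]

Copying requirements.txt before the rest of the code is a common pattern: as long as the dependency list doesn't change, Docker reuses the cached install layer on subsequent builds.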
To build a Docker image from the Dockerfile, right-click inside the project folder, choose Open in Terminal, and use the command below:
docker build -t <your-image-tag> .
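Here, -t assigns a name (tag) to the image, and the trailing dot tells Docker to use the current directory as the build context, which is where it looks for the Dockerfile and the files referenced by COPY. For the example project, the command might look like this (pillow-demo is just an illustrative tag):

docker build -t pillow-demo .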
Once the image is built, create a Docker container from it with the command below:
docker create --name <container_name> <image_name>
Now start the container to run the program:
docker start <container_name>
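As a shorthand, docker run creates and starts a container in a single step; omit the -d (detached) flag if you want to see the program's output directly in your terminal:

docker run -d --name <container_name> <image_name>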
Head over to the Docker Desktop app, click on the container, and open the Logs and Files sections to verify that the program ran successfully.
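If you prefer staying in the terminal, the same output is also available through the Docker CLI:

docker logs <container_name>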
This workflow runs your program without requiring any installations or modifications on your local machine.