23rd Dec 2022

Fundamentals of Docker

Before learning the fundamentals of Docker and how to use it, it is important to understand what Docker is and why you should use it.

What is Docker?

Docker is a containerization platform that allows for the creation, deployment, and management of lightweight, isolated environments called containers. Docker allows developers to package their applications and dependencies into a single container that can be easily shared and run on any system that has Docker installed, regardless of the underlying operating system or hardware.

Why use Docker?

You don’t need to worry about what host system the user is running: as long as they have Docker installed, your code will run.

The following are the reasons why you should consider using Docker:

Portability: Docker allows you to package your application and its dependencies into a single container that can be easily moved between different environments

Isolation: It provides an isolated environment for your application to run in, which helps to prevent conflicts with other applications or services running on the same system

Efficiency: Docker containers are lightweight, which means you can run more containers on a single machine with minimal overhead.

Reproducibility: Docker makes it easy to reproduce your development, testing, and production environments, ensuring that your application runs consistently and reliably in different settings

Security: Docker provides several built-in security features, such as user namespaces, which help to protect your application and the host system from potential security threats.

Now that we know what Docker is and why you should use it, let us learn the fundamentals of Docker.

List of docker commands

The following is a list of Docker commands that one should be familiar with when using Docker; we will be using these commands throughout this tutorial. They are grouped as follows.

Docker images:

• docker images - gives the list of all images
• docker rmi <image_name> - deletes an image
• docker build --tag <image_name> <path> - builds an image from a Dockerfile

Docker containers:

• docker ps - returns a list of running containers
• docker ps -a - returns a list of both running and stopped containers
• docker run <image_name> - runs a container from an image
• docker run -p <host_port>:<container_port> - maps a host port to a container port
• docker run -v <host_directory>:<container_directory> - mounts a host directory into the container
• docker run --env <key>=<value> - passes an environment variable
• docker inspect <container_id> - gives details of a container
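The flags above can be combined in a single run. A minimal sketch, assuming Docker is installed and the mongo image has been pulled (the container name, volume name, and credentials here are illustrative):

```shell
# Run mongo in the background (-d), map the port, mount a named volume
# and pass credentials as environment variables
docker run -d \
  -p 27017:27017 \
  -v mongo-data:/data/db \
  --env MONGO_INITDB_ROOT_USERNAME=admin \
  --env MONGO_INITDB_ROOT_PASSWORD=password \
  --name my-mongo \
  mongo

docker ps                # the new container should appear here
docker inspect my-mongo  # full configuration details of the container
```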

Docker-compose:

• docker-compose build - builds images
• docker-compose up - starts containers
• docker-compose stop - stops running containers
• docker-compose rm - removes stopped containers

Docker-specific keywords in the docker-compose.yml file

• version: which version of docker-compose to use
• services: names of containers we want to run
• build: steps describing the build process
• context: where the Dockerfile is located at to build the image
• ports: assign the host computer's ports to the container.
• volumes: map the host machine or a docker volume to the container.
• environment: give the container access to the environment variables.
• depends_on: start the listed services before starting the current service

And finally, the keywords used in a Dockerfile:

• FROM image_name - starts the build by layering onto an existing image
• COPY host_path container_path - copies a file or directory from the host into the image
• RUN shell_command - executes a shell command while building the image
• WORKDIR path - sets the current path in the image
• ENV variable value - sets the environment variable equal to the value
• EXPOSE port - exposes a container port
• ENTRYPOINT ["shell", "command"] - prefixes to CMD
• CMD ["shell", "command"] - shell command executed at runtime
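As a sketch of how these keywords fit together (the base image, file names, and environment variable here are illustrative, not a prescribed setup):

```dockerfile
# Start from an existing Python base image
FROM python:3.6
# Create /app if needed and cd into it
WORKDIR /app
# Copy the host's current directory into /app
COPY . ./
# Execute a shell command while building the image
RUN pip install Flask
# Set an environment variable inside the image
ENV APP_ENV production
# Expose the port the app listens on
EXPOSE 5000
# ENTRYPOINT is prefixed to CMD, so the container runs: python server.py
ENTRYPOINT ["python"]
CMD ["server.py"]
```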

What is a Docker image?

Docker images are self-contained packages that include all the components necessary to run an application as a container. These include the application’s source code, as well as any dependencies, libraries, and tools needed to run it. When a Docker image is run, it becomes an instance (or multiple instances) of that image; this is when a Docker container is created.

Let’s look at an example of downloading an image from Docker Hub (which is like GitHub, but for Docker images) using the following format:

docker pull <IMAGE_NAME>

D:\Docker Tutorial>docker pull mongo

You can also run the following command to get the list of Docker images on your system:

docker images

While it is possible to create a docker image from scratch, most developers use pre-existing images from public or private repositories.

What are Docker containers?

Containers are the running instances of Docker images. Let’s run our first container. The command for doing so has the following format:

docker run <image_name>

D:\Docker Tutorial>docker run mongo

You can use the following command to get the list of running containers

docker ps

Docker Running Containers

It should be noted that every time you run the “docker run” command, it creates a new container (an instance of the image).

While Docker images are read-only files, users can interact with containers, adjust their settings, and change any data they want using Docker commands. You can enter the terminal of a Docker container by running a command of the following format (on Windows):

docker exec -it <container_id> sh

D:\Docker Tutorial>docker exec -it a596d656de7b sh

You can exit the container with the command “exit”

You can also stop and remove the container using commands of the following format:

docker stop <container_id> (takes time to stop)

docker kill <container_id> (stops immediately)

docker rm <container_id>

You can also look for containers that have been stopped by using the command:

docker ps -a

What is a Dockerfile?

By writing a Dockerfile, you can specify the exact configuration and dependencies needed for your application, which can then be used by the Docker Engine to build a Docker Image

Let’s create a simple web server with the Python Flask framework

First, create a new directory to store all the relevant files. Within the directory, create a server.py file and paste in the code below:

  from flask import Flask

  app = Flask(__name__)

  @app.route('/')
  def hello_world():
      return 'Hello, World!'

  if __name__ == '__main__':
      app.run(debug=True, host='0.0.0.0')

This is a basic web server that displays a page saying “Hello, World!”
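Before containerizing the server, you can sanity-check the route with Flask's built-in test client. This is a quick local sketch, assuming Flask is installed on the host (pip install Flask):

```python
from flask import Flask

app = Flask(__name__)

@app.route('/')
def hello_world():
    return 'Hello, World!'

# The test client calls the route directly, without starting a real server
with app.test_client() as client:
    response = client.get('/')
    print(response.status_code)    # 200
    print(response.data.decode())  # Hello, World!
```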

Create a file called “Dockerfile” and paste in the code below:

# 3-server/Dockerfile

  FROM  python:3.6   
  
  # Create a directory
  WORKDIR /app                  
  
  # Copy the app code
  COPY . ./                                  
  
  # Install Requirements
  RUN pip install Flask
  
  # Expose flask's port
  EXPOSE 5000
  
  # Run the server
  CMD python server.py
  

Let’s see what each command does.

FROM is used to build an image by layering onto an existing one; here, by specifying “FROM python:3.6”, we are declaring that the image should be built on top of Python 3.6.

WORKDIR: creates the directory if it doesn’t exist and cd into it, similar to running “mkdir /app && cd /app”

Using RUN you can execute a Linux command; here we are running a command to install Flask in the container.

With the COPY command, we are instructing Docker to copy files and folders from the current directory into /app/.

The EXPOSE command allows connections to port 5000 on the container. We also need to map the container port to a host port, which we will do when running the image, so the app can be accessed from outside the container, e.g. from Chrome or Postman.

Finally, CMD is an entry point command (you can have only one in a Dockerfile); here we are instructing Docker to start the app by running the command “python server.py”.

Now, let’s build the docker image, go to the command line and run the following command,

D:\Docker Tutorial\Python DemoServer> docker build -t python-webapp .

Let’s run the image and map the container’s port 5000 to port 8000 on our host, using the command:

D:\Docker Tutorial\Python DemoServer> docker run -p 8000:5000 python-webapp

Then go to localhost:8000 in your browser to see our Flask app running successfully:

DockerFile localhost img

What is docker compose?

Docker Compose is a tool that allows you to define and run multi-container Docker applications. It uses YAML files (similar to JSON, XML, etc., and a bit like Python in that whitespace matters; items are separated with colons) to define the services, networks, and volumes needed for your application to run. By defining all the components of your application in a single file, Docker Compose makes it easy to manage the lifecycle of your application and its dependencies.

With Docker Compose, you can specify the relationships between containers and the configuration details of each container. This allows you to launch all the required containers with a single command, making it easy to deploy your application across different environments.

Overall, Docker Compose simplifies the process of deploying and managing multi-container applications, making it a valuable tool for developers. Two or more containers in the same Docker network can communicate using only the container name; there is no need to use the port, IP, etc.

In this demo project, we are going to create a MongoDB GUI, which requires two Docker containers to run simultaneously.

You first have to download the Docker images for mongodb and mongo-express from Docker Hub by running the commands below one by one:

D:\Docker Tutorial\DemoApp> docker pull mongo

D:\Docker Tutorial\DemoApp> docker pull mongo-express

Let’s create a simple docker-compose.yaml file that creates two containers: one from the mongo-express image (the UI to connect to MongoDB) and the other from the mongo image (the image name of MongoDB). Docker Compose also takes care of creating a common network.

version: '3'
services:
  mongodb:
    image: mongo
    ports:
      - 27017:27017
    environment:
      - MONGO_INITDB_ROOT_USERNAME=admin
      - MONGO_INITDB_ROOT_PASSWORD=password
  mongo-express:
    image: mongo-express
    restart: always # fixes MongoNetworkError when mongodb is not ready when mongo-express starts
    ports:
      - 8080:8081
    environment:
      - ME_CONFIG_MONGODB_ADMINUSERNAME=admin
      - ME_CONFIG_MONGODB_ADMINPASSWORD=password
      - ME_CONFIG_MONGODB_SERVER=mongodb

Let us look at what each keyword in this file means:

version: specifies which version of docker-compose to use(here we specify the latest version)

services: lists the containers we want to run from the specified images

image: the name of the Docker image

ports: maps ports from the host machine to the container

environment: sets the environment variables for the respective images

restart: as mentioned, restarts mongo-express when mongodb is not ready and a MongoNetworkError arises

Also, we are connecting to mongodb from mongo-express using the username and password we set for the mongodb container via the environment keyword in the YAML file. You can check for the different environment variables in the documentation of the relevant Docker image on Docker Hub.
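For example, mongo-express reaches the database using the service name “mongodb” as the hostname; inside the Compose network, the equivalent connection string would look like this (credentials as set in the YAML file, for illustration only):

```
mongodb://admin:password@mongodb:27017/
```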

Let’s run the docker-compose file using the following command:

D:\Docker Tutorial\DemoApp> docker compose -f docker-compose.yaml up

You can see the network Docker Compose created to communicate between containers by typing the command “docker network ls”:

Docker Compose

Here “demoapp_default” is the network created by Docker Compose. We can also look at the containers created, using “docker ps”:

Docker Compose demoapp_default

You can see the two containers, “demoapp-mongo-express-1” and “demoapp-mongodb-1”; Docker Compose added its own prefix and suffix to the names we specified in our YAML file.

You can look at the mongo-express UI created from our container by typing “localhost:8080” in the browser:

Docker Compose mongo-express

From here we can connect to the mongodb and create databases and so on.

Now, to stop and remove the containers and network created from the docker-compose file, run the following command:

docker compose -f docker-compose.yaml down

Docker Compose remove

Now you can check whether the containers have been stopped using “docker ps -a”:

Docker Compose commands

What are Docker Volumes?

Let’s say we have a database container. It has a virtual file system where the data is usually stored, so the data is lost whenever we restart or remove the container; there is no data persistence. This is where Docker volumes come in: a folder in the physical host file system is mounted into the virtual file system of Docker, so the data gets replicated on the host file system. The data is also repopulated from the host file system whenever you restart the container.

There are three different ways to define a volume. Note that the path inside the container differs depending on the database image you use, so check the documentation for the appropriate path for the particular image.

Host Volume: We can decide where on the host file system the reference is made

Ex: during the run command (-v /home/mount/data:/var/lib/mysql/data)

Anonymous Volumes: for each container, a folder is generated by Docker itself and gets mounted

Ex: during the run command(-v /var/lib/mysql/data)

Named Volumes: you can reference the volume by name; the folder is generated by Docker, and this approach is actually preferred by many
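A named volume can be created explicitly, or Docker creates it on first use. A minimal sketch, assuming Docker is installed (the volume name, image, and password are illustrative):

```shell
# Create a named volume managed by Docker
docker volume create mysql-data

# Mount it by name instead of a host path
docker run -d \
  --env MYSQL_ROOT_PASSWORD=password \
  -v mysql-data:/var/lib/mysql/data \
  mysql

# List volumes; mysql-data should appear here
docker volume ls
```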

Docker volume locations:

The path of the Docker volumes differs with the OS:

• Windows - C:\ProgramData\docker\volumes
• Linux and macOS - /var/lib/docker/volumes

What is Docker Swarm?

There are several software options for container orchestration. Popular ones include Kubernetes, Mesos, and Docker Swarm.

Docker Swarm is a native container orchestration tool made by Docker Inc. It lets us coordinate how our containers run, similar to Docker Compose, but targeted at production. This lets us run numerous container instances of our application in parallel, meaning our application can sustain high levels of traffic. It can also autoscale in response to changes in traffic.

Container Best Practices

The following are some of the best practices to follow when working with containers

• Containers should not hold permanent data
• Keep data off of the container.
• Containers should be disposable
• Containers should communicate internally whenever possible; only export ports if necessary
• Minimise image layers where possible when writing Dockerfiles

Conclusion

In conclusion, Docker is a powerful tool for creating, deploying, and managing containerized applications and has become an essential tool for modern software development and deployment

About Us

VS Online Services: Custom Software Development

VS Online Services has been providing custom software development for clients across the globe for many years - especially custom ERP, custom CRM, innovative real estate solutions, trading solutions, integration systems, business analytics, and our own hyperlocal e-commerce platforms vBuy.in and vsEcom.

We've worked with multiple clients to offer customized solutions for both technical and non-technical companies. We work closely with the stakeholders and provide the best possible solution, with 100% successful completion. To learn more about VS Online Services' custom software development solutions or our product vsEcom, please visit our SaaS page, Web App page, or Mobile App page to learn about the solutions provided. Have an idea or requirement for digital transformation? Please write to us at siva@vsonlineservices.com

Let's develop your ideas into reality