How To Run Multi-Container Applications With Docker Compose

Docker revolutionized how we run applications by packaging an application together with its dependencies into containers.

While Docker is great for individual containers, real-world applications often require multiple containers working together, such as a web server, a database, and an application server.

Docker Compose simplifies this process by allowing us to define and manage multiple containers as a single unit.

Docker Compose running a web application with Flask and Postgres containers.

If you are new to Docker, read this article to familiarise yourself with Docker before starting with Docker Compose.

What is this blog post all about?

In this blog post, we’ll learn to use Docker Compose by building a practical example of a web application with multiple, interconnected containers.

This application will demonstrate how to create data from a front-end form, store it in a database, and display it on the web page. By the end of this blog post, you’ll have a solid understanding of how to use Docker Compose to orchestrate complex container-based applications.


What is Docker Compose?

Docker Compose is a tool for defining and running multi-container applications.

Docker Compose lets you define a multi-container Docker application in a single file, docker-compose.yml, and run all of its services with just one command: docker-compose up.

Why Docker Compose?

  • You define application services and their corresponding build options, such as networks, volumes, and environment variables, in docker-compose.yml.
  • All Docker services share the same network and can talk to each other internally. Think of services as part of an application such as front-end services, API services, DB services, etc.
  • We can build and run all of our services with a single command: docker-compose up.
  • Since the whole application is described by one config file, it is easy to share: anyone can use the file to run the application. It can be stored in a version control system like GitHub, and it is easy to set up a CI/CD pipeline for the application.

Running multi-container Docker applications with Docker Compose

In this example, we will run a multi-container web application using Flask and Postgres.

  • The first container will run a Postgres database.
  • The second container will run the Flask-based web application that talks to the database container.

Note: You can find the code used in this blog in my public GitHub repo.

This is the tree structure of our application.
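Reconstructed from the commands below, the layout looks roughly like this:

```
docker-compose/
├── Dockerfile
├── app.py
├── docker-compose.yml
├── requirements.txt
├── static/
│   └── css/
│       └── style.css
└── templates/
    └── index.html
```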

Paste the below commands in your terminal (Linux, Mac) to create the folder structure above, or create the files by hand in VS Code or any other editor.

mkdir docker-compose
cd docker-compose
# create files/directory to store the code
touch docker-compose.yml requirements.txt app.py Dockerfile
mkdir -p static/css
touch static/css/style.css
mkdir templates
touch templates/index.html
  • Dockerfile — This will be used to build the web application image.
  • app.py — This will contain the Flask code for the web application we will build in this blog.
  • requirements.txt — This will contain the application dependencies.
  • docker-compose.yml — This is the docker-compose config file.
  • templates/index.html — This will contain the HTML code for our Flask application.
  • static/css/style.css — This will contain the CSS style code.

Writing the code for this example

Dockerfile
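A minimal Dockerfile for the Flask image might look like this. This is a sketch: the Python base image tag is an assumption, and the actual file in the repo may differ.

```dockerfile
# Lightweight Python base image (tag is an assumption)
FROM python:3.9-slim

WORKDIR /app

# Install dependencies first so this layer is cached between builds
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the application code
COPY . .

EXPOSE 5000

# Start the Flask development server
CMD ["python", "app.py"]
```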

docker-compose.yml
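A docker-compose.yml matching the description below could look like this. The service names (app, db) follow the container names shown later in the post; the database name, user, and password values are illustrative assumptions.

```yaml
services:
  app:
    build: .
    ports:
      - "5000:5000"          # map app container port 5000 to host port 5000
    depends_on:
      db:
        condition: service_healthy   # wait until the db passes its health check
    environment:
      - DB_HOST=db
      - DB_NAME=flaskdb
      - DB_USER=postgres
      - DB_PASSWORD=postgres

  db:
    image: postgres:latest
    ports:
      - "5432:5432"          # map db container port 5432 to host port 5432
    environment:
      - POSTGRES_DB=flaskdb
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 5s
      timeout: 5s
      retries: 5
    volumes:
      - db-data:/var/lib/postgresql/data   # named volume so data survives restarts

volumes:
  db-data:
```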

  • Here we have defined two services, app and db, both sharing the same network.
  • We have mapped app container port 5000 to host port 5000. Similarly, we have mapped db container port 5432 to host port 5432.
  • The app container depends on the availability of the db container.
  • We have added a health check on the db container to ensure the database is ready to accept connection requests from the app container.
  • We have used a named volume for the db container.
  • We have used environment variables to store the Postgres credentials and database name.

Normally we shouldn’t hardcode credentials in the config file; there are several options for handling sensitive values securely, which we will discuss later in the post.



app.py, requirements.txt, index.html, style.css

  • The Flask app initializes SQLAlchemy to connect to the database.
  • The database URI is set for the PostgreSQL connection.
  • The data model is defined with id and name columns.
  • A route is defined for the root URL that handles both GET and POST requests, storing and rendering data.
  • Some HTML and CSS provide the form and styling the application uses.
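Putting the points above together, app.py can be sketched as follows. The environment variable names (DB_HOST, DB_USER, etc.) and the Entry model are illustrative assumptions; match them to your compose file and the repo code.

```python
import os

from flask import Flask, redirect, render_template, request
from flask_sqlalchemy import SQLAlchemy

app = Flask(__name__)

# Build the Postgres connection URI from environment variables
# (names are illustrative; they must match docker-compose.yml)
db_user = os.environ.get("DB_USER", "postgres")
db_password = os.environ.get("DB_PASSWORD", "postgres")
db_host = os.environ.get("DB_HOST", "db")
db_name = os.environ.get("DB_NAME", "flaskdb")
app.config["SQLALCHEMY_DATABASE_URI"] = (
    f"postgresql://{db_user}:{db_password}@{db_host}:5432/{db_name}"
)

db = SQLAlchemy(app)


class Entry(db.Model):
    # Simple data model with id and name columns
    id = db.Column(db.Integer, primary_key=True)
    name = db.Column(db.String(80), nullable=False)


@app.route("/", methods=["GET", "POST"])
def index():
    if request.method == "POST":
        # Store the submitted form value, then redirect back to GET
        db.session.add(Entry(name=request.form["name"]))
        db.session.commit()
        return redirect("/")
    return render_template("index.html", entries=Entry.query.all())


if __name__ == "__main__":
    with app.app_context():
        db.create_all()  # create tables on first run
    app.run(host="0.0.0.0", port=5000)
```

requirements.txt would then list the dependencies (for example flask, flask-sqlalchemy, and psycopg2-binary), and templates/index.html would render a small form that POSTs a name back to the root URL.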

If you hate writing docker-compose files, use the docker init command to do it for you.

Read this blog to learn how to auto-generate the docker and docker-compose config files with docker init command.


Run the multi-container application with Docker Compose.

You can copy the code in the file/folder structure I explained above.

If you are lazy like me, clone the code from the Public Github Repo.

git clone https://github.com/akhileshmishrabiz/Devops-zero-to-hero

cd Devops-zero-to-hero/AWS-Projects/multi-container-app-docker-compose

docker-compose up --build

IMPORTANT:

If you are following along with this blog post, make sure you have docker-compose installed on your local machine. The easiest way is to install the Docker Desktop application, which installs all dependencies for Docker and Docker Compose.

If you are running it on a cloud Linux VM, such as an Amazon EC2 instance running Amazon Linux 2, use the steps below to install Docker and Docker Compose.

Install Docker:

# Install docker
sudo yum update -y
sudo yum install docker -y
sudo systemctl enable docker
sudo systemctl start docker
sudo usermod -a -G docker ec2-user
# -> make sure to log out and log in again to run docker commands without sudo <-

Install Docker Compose:

# Install docker compose on amazon linux
sudo curl -L https://github.com/docker/compose/releases/latest/download/docker-compose-$(uname -s)-$(uname -m) -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose
# Check if it is installed
docker-compose version

Now that your code and installation are sorted, run the multi-container application with just one command.

docker-compose up --build
docker ps

CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
7c99c9539298 flask-app-docker-compose-app "python app.py" About a minute ago Up About a minute 0.0.0.0:5000->5000/tcp flask-app-docker-compose-app-1
78f6a230ca24 postgres:latest "docker-entrypoint.s…" About a minute ago Up About a minute (healthy) 0.0.0.0:5432->5432/tcp flask-app-docker-compose-db-1

Since we are running this application on the local machine, we can use localhost (127.0.0.1) to access the application on port 5000. If you are running this on an EC2 machine with a public IP, use that IP to access the application.

Try adding some data to the database by filling out the form.

Now you have a working application with a web container and a database container. You can connect to the Postgres DB from pgAdmin or another database viewer application.


You can stop and start the application with one simple command each. Since we are using a Docker volume, your data will persist across application restarts.

docker-compose down
docker-compose up

Using environment variables

As I said earlier, we should not hardcode credentials in the config file; instead, we can use a .env file to pass the DB credentials.

Create a .env file in the same location as your docker-compose.yml file. Remove the hardcoded credential values from docker-compose.yml and move them to the .env file.
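For example, a .env file might look like this (the variable names and values are illustrative; they must match what your compose file references):

```
# .env — keep this file out of version control
POSTGRES_USER=postgres
POSTGRES_PASSWORD=changeme
POSTGRES_DB=flaskdb
```

docker-compose automatically loads a .env file from the directory it runs in, so inside docker-compose.yml you can reference these values with variable interpolation, for example ${POSTGRES_USER} and ${POSTGRES_PASSWORD}.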

Now run docker-compose up and you have your application up and running.

However, one problem remains: if you store your application code on GitHub, committing the .env file would expose your credentials. We need a way to keep the credentials separate from our code, local to the machine where we run docker-compose.

I will create a db-variables.env file to store the DB credentials and pass it while running docker-compose:

docker-compose --env-file db-variables.env up

That is all for this blog. See you on the next one.

Connect with me on Linkedin: https://www.linkedin.com/in/akhilesh-mishra-0ab886124/

Akhilesh Mishra


I am Akhilesh Mishra, a self-taught DevOps engineer with 11+ years of experience working on private and public cloud (GCP & AWS) technologies.

I also mentor DevOps aspirants on their journey by providing guided learning and mentorship.

Topmate: https://topmate.io/akhilesh_mishra/