Nickolai Tchesnokov

FastAPI + Docker Setup (Dev & Production)

< fastapi, docker, poetry, python />

Setting up a basic Docker container for a FastAPI project is relatively straightforward, and examples are easy to find. In practice, however, this simple setup is often insufficient and not ideal, requiring a lot of additional customisation to suit specific use cases.

In this post, I will guide you through an example setup, detailing the steps I took to debug and refine my configuration so that everything works as intended. The final version of the code is at the bottom of this post.

Note: Everything here is performed in a macOS environment (M1). Specific versions used: Docker Desktop 4.36.0, Poetry 1.8.4, Python 3.12.6.

Basic Implementation

The following is a barebones implementation for containerising an application using a basic Python virtual environment.

.venv
app
└── main.py 
Dockerfile
.dockerignore
requirements.txt
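Here, app/main.py can be any FastAPI application; as a minimal sketch (the exact contents are not important for this post):

from fastapi import FastAPI

app = FastAPI()

@app.get("/")
def read_root():
    # Simple endpoint to confirm the container is serving requests
    return {"status": "ok"}

And the Dockerfile: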
FROM python:3.12.6

WORKDIR /code 

COPY ./requirements.txt ./requirements.txt

RUN pip install --no-cache-dir --upgrade -r ./requirements.txt

COPY ./app ./app

CMD ["uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "8000"]
# or CMD ["fastapi", "run", "app/main.py", "--port", "8000"]

This is covered extensively in the FastAPI documentation.

Now we can build the image by running docker build -t my-image ., giving us our first working image.

To create and run a container we can do something like docker run -d --name my-container -p 8000:8000 my-image and voilà, we are finished. To view the details of this running container simply use docker ps to list running containers (to stop the container, use docker stop my-container).
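With the container running, we can sanity-check it from the host (assuming the minimal app sketched above):

# Should return the JSON response from main.py
curl http://localhost:8000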

It's important to note here that we are only copying over the FastAPI application, i.e. ./app, rather than doing something like COPY . ., which would copy the entire project root into the image, including useless things like the Dockerfile and the local .venv that do not need to be there and would unnecessarily bloat the image.
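This is also what the .dockerignore file in the project tree is for. If you ever do reach for a broader COPY, entries like the following (a typical sketch, adjust to your project) keep the junk out of the build context:

# .dockerignore
.venv
.git
__pycache__/
*.pyc
Dockerfile
.dockerignore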

To check the structure of our container, we can access the container’s shell:

docker exec -it my-container sh

This will place you in the default working directory of the container, in our case /code. Now we can navigate through our container and check we have what we need. In our example this is what it looks like:

/code
├── app 
│   └── main.py
└── requirements.txt

Selecting the Base Image

Taking a look at our current image using docker images we will see something like this:

REPOSITORY   TAG       IMAGE ID       CREATED         SIZE
my-image     latest    8c52308f520a   9 minutes ago   1.04GB

The image seems to be unnecessarily large. This is due to the base image being used, python:3.12.6.

The size of an image is largely determined by the base image chosen and the layers added during the build process. Docker provides several Python base image types. Here is a brief overview of the ones available to us:

Base       OS             Descriptor                Package Manager   Shell
Bookworm   Debian 12      Full-featured, latest     apt               /bin/bash
Bullseye   Debian 11      Full-featured, previous   apt               /bin/bash
Alpine     Alpine Linux   Lightweight & minimal     apk               /bin/sh
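If you want to verify what a given base actually is, /etc/os-release is a quick probe (Alpine shown here; swap in any tag from the table):

docker run --rm python:3.12.6-alpine cat /etc/os-release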

Using these to build our image we get the following:

Base image                    my-image build size
python:3.12.6-bookworm        1.04GB
python:3.12.6-bullseye        896MB
python:3.12.6-slim            171MB
python:3.12.6-slim-bookworm   171MB
python:3.12.6-slim-bullseye   136MB
python:3.12.6-alpine          74.7MB
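These numbers come from simply swapping the FROM line and rebuilding. To compare bases yourself without editing the Dockerfile each time, one option (a sketch) is to parameterise the base image with a build argument:

# Dockerfile
ARG BASE_IMAGE=python:3.12.6-slim-bookworm
FROM ${BASE_IMAGE}
# ...rest of the Dockerfile unchanged

# Build with a different base and check the result
docker build --build-arg BASE_IMAGE=python:3.12.6-alpine -t my-image .
docker images my-image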

The slim images here are essentially just stripped versions of the Debian-based (bookworm/bullseye) variants that aim to reduce size by eliminating unnecessary components. Therefore, by using one of these lighter base images, we can reduce our image size dramatically.

However, it's good to know what these “slim” images are actually stripping out. In general, they remove dev tools, compilers and build tools, header files, debugging utilities, dev libraries, etc. Some examples would be gcc, make, curl, git and wget. We can see the difference by counting the commands available to us inside the container using compgen -c | wc -l:

  • Bookworm: 1711 commands
  • Slim Bookworm: 509 commands
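To reproduce these counts yourself, run them in throwaway containers (bash, and therefore compgen, is available in both variants):

docker run --rm python:3.12.6-bookworm bash -c 'compgen -c | wc -l'
docker run --rm python:3.12.6-slim-bookworm bash -c 'compgen -c | wc -l'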

It's useful to know this because there may be instances where you, or your app's dependencies, require some of these tools for certain tasks. For example, some Python dependencies rely on gcc or g++ (C and C++ compilers) to compile source code natively during installation. These are often libraries for machine learning or scientific computing.

Note: Common libraries like numpy, pandas and tensorflow that contain C or C++ code often provide precompiled wheels for common platforms, so tools like gcc are not required. However, if precompiled wheels aren't available, or you want to install directly from source, native tools like gcc are of course required.
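One way to surface this early is to forbid source builds at install time, which makes pip fail loudly if a wheel is missing for your platform:

# Errors out instead of silently compiling from source
pip install --only-binary=:all: -r requirements.txt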

In the majority of cases, though, it is better to use a slim image anyway and explicitly install anything extra you need, such as gcc.

So, in this case, let's update our original Dockerfile to use the latest slim Debian Python image available, bookworm:

FROM python:3.12.6-slim-bookworm

# Install gcc, g++
# --no-install-recommends : ignore optional extras
# rm -rf /var/lib/apt/lists/* : clean up package metadata
RUN apt-get update && apt-get install -y --no-install-recommends \
    gcc g++ \
    && rm -rf /var/lib/apt/lists/*

WORKDIR /code 

COPY ./requirements.txt ./requirements.txt

RUN pip install --no-cache-dir --upgrade -r ./requirements.txt

COPY ./app ./app

CMD ["uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "8000"]

Of course, if no extra installations are needed, you can skip the apt-get step entirely. Instead of building a 1GB image, it is now reduced to:

  • 171MB (or 418MB with the extra installations)

Multistage Build Implementation

Using a slim image as a base is already a great way to reduce image size. However, even with a slim image, if dependencies are compiled directly within the final image, or if you have other build steps such as running tests, you will end up with unnecessary artifacts (e.g., build tools, tests, temporary files) in your runtime container.

A multistage build allows us to define separate stages in the same Dockerfile, each with its own purpose. In our case, we can split our current Dockerfile into two stages:

  1. A builder stage where dependencies are installed.
  2. A runtime stage where we copy over the installed dependencies needed to run the application.

This lets us create a final image containing only what is necessary for the application to run, with all intermediary build steps happening separately.

An example of this:

# Stage 1: Builder 
FROM python:3.12.6-slim-bookworm AS builder

# Install gcc, g++
# --no-install-recommends : ignore optional extras
# rm -rf /var/lib/apt/lists/* : clean up package metadata
RUN apt-get update && apt-get install -y --no-install-recommends \
    gcc g++ \
    && rm -rf /var/lib/apt/lists/*

WORKDIR /code 

COPY ./requirements.txt ./requirements.txt

# Install dependencies in a virtual environment 
RUN python -m venv ./venv && \
    ./venv/bin/pip install --no-cache-dir --upgrade pip && \
    ./venv/bin/pip install --no-cache-dir -r ./requirements.txt

COPY ./app ./app

# Additional build steps here e.g.:
# - static code analysis or linting
# - tests
# - static file bundling/optimisation

# Stage 2: Final runtime
FROM python:3.12.6-slim-bookworm

WORKDIR /code

# Copy only the virtual environment and application code from the builder stage
COPY --from=builder /code/venv ./venv
COPY --from=builder /code/app ./app

# Update PATH to use the virtual environment
ENV PATH="/code/venv/bin:$PATH"

# Run the application
CMD ["uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "8000"]

Compared directly with our previous Dockerfile, the key optimisation is that the extra build dependencies (i.e. gcc and g++) are installed only in the builder stage, so they are not included in the final runtime image. This reduces the final image from 418MB to 174MB.
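A handy side effect of named stages is that you can build and poke around in just the builder stage when debugging:

# Stop after the builder stage, tag it separately, and inspect it
docker build --target builder -t my-image-builder .
docker run --rm -it my-image-builder sh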

Global vs. Virtual Dependencies

Unlike the previous Dockerfiles, the new one above uses a virtual environment rather than installing the application dependencies globally. Why?

We could have installed the dependencies globally in the builder stage, and then copied them over to runtime using this:

COPY --from=builder /usr/local/lib/python3.12/site-packages /usr/local/lib/python3.12/site-packages 
COPY --from=builder /usr/local/bin /usr/local/bin

This approach would also work fine for many scenarios, but I find it slightly annoying for multiple reasons:

  • Installed dependencies are spread across those system-level directories, which makes it harder to keep track of exactly what is being copied and is more error-prone.
  • Depending on your configuration, global paths such as /usr/local/bin might already contain other binaries or tools you want to utilise, and copying over them can cause conflicts.

In general, using a virtual environment within our image provides that extra bit of isolation, predictability and reproducibility for copying between builds, running the application, and debugging, even if the image's sole purpose is to run the FastAPI application in isolation already.
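As a quick sanity check that the runtime container really is resolving the virtual environment's interpreter (assuming the multistage image is running as my-container):

docker exec my-container sh -c 'command -v python && python -c "import sys; print(sys.prefix)"'
# Expected: /code/venv/bin/python and /code/venv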