13 Deploying Flask to Production
Hi people! Has this message ever bothered you while running Flask apps?
WARNING: Do not use the development server in a production environment.
Use a production WSGI server instead.
In this chapter we will learn how to:
Use a production WSGI server
Use Nginx as the proxy server
Dockerize the app
By default, when you run the app using the out-of-the-box server supplied by Flask, only one process is launched. This process can handle only one connection at a time, which means that whenever a second person tries to access your website, they will have to wait until the server has finished responding to the first. This greatly limits the number of concurrent requests the server can cater to. The server is packaged with Flask only so that users can start web app development as quickly as possible.
We will be using Docker because it helps to create a reproducible environment which can be replicated on any system. This makes sure that there are no issues with mismatched Python versions. This also allows you to use different Python versions for different projects much more easily.
13.1 Basic Flask App
In other chapters of this book, you will be creating full-blown web apps using Flask. But for the sake of demonstration, I will be using the most basic Flask app example which is available on the Flask website:
from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello():
    return "Hello World!"
Save this file as hello.py. Now you can run this app by typing the following command in the terminal:
FLASK_APP=hello.py flask run
Note
Make sure you are in the folder which holds the hello.py file before you run the above command.
13.2 Container vs Virtual Environment
I would also like to talk a little bit about containers vs virtual environments. Containers and virtual environments cater to different problems altogether, but this question is still a common one among people who are new to containers or have never used them before. So far, each chapter has started with the creation of a virtual environment.
Virtual environments allow you to use different versions of dependencies for different Python projects. This is well and good but you are still stuck with the Python version installed in your Operating System. You can create as many virtual environments as you want on your system and install different versions of dependencies but what happens when you want to run four Python programs, each requiring a different version of Python? There is the pyenv project as well as some other similar projects to do this but I like the containerization approach a lot more.
Docker makes use of operating-system-level virtualization and lets you install whatever you want. You can have a different version of Python in each container. This also allows you to maintain legacy programs.
Another important thing to note here is that if you are using Docker containers to develop and test your apps, you can completely get rid of virtual environments if you want! This is because each container is isolated, so you don't have to worry about polluting the system-wide Python packages folder. However, because starting up a container takes a couple of seconds, I prefer using virtual environments for development and containers for production.
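As a quick recap of the virtual-environment workflow used elsewhere in this book, here is a minimal sketch (the folder name venv is just a convention):

```shell
# Create an isolated environment in ./venv (the name is just a convention)
python3 -m venv venv

# Activate it (on Windows: venv\Scripts\activate instead)
. venv/bin/activate

# pip now points inside ./venv, so installs no longer touch the system Python
pip --version
# From here you would run e.g. `pip install Flask` for this project only
```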
Now that we know the difference between these two technologies, let's investigate a production-level WSGI server.
13.3 uWSGI
uWSGI is a production-grade WSGI server. We will be using it to serve our app. Let's install uWSGI first:
$ pip3 install uwsgi
We can run uWSGI using the following command:
$ uwsgi --http :8000 --module hello:app
This tells uWSGI to serve on port 8000. You can open up http://localhost:8000 in your browser and you should be greeted with the familiar "Hello World!".
You can pass a lot of different options to uWSGI via the command line, but to make the execution reproducible it is always preferable to put your configuration into a config file. Let's create a uwsgi.ini file and add the configuration to it:
[uwsgi]
module = hello:app
uid = www-data
gid = www-data
master = true
processes = 4
socket = /tmp/uwsgi.socket
chmod-sock = 664
vacuum = true

die-on-term = true
There are a bunch of things happening here. Let me break them down:
Line 2: we tell uWSGI to run the app module from the hello file
Line 3-4: uid means user id and gid means group id. Normally on servers, the app is run by a low-privilege user so that even if the app gets hacked, the impact can be contained
Line 5: According to the official uWSGI docs:
uWSGI's built-in prefork+threading multi-worker management mode, activated by flicking the master switch on. For all practical serving deployments, it is generally a good idea to use master mode.
Line 6: 4 processes will be launched to serve requests (you can also add a threads option which will launch multiple threads within each process)
Line 7: this creates a socket which will be referred to later in the NGINX config. We could have served the app over a TCP port instead, but the socket approach has lower overhead
Line 8: sets the permissions for this socket
Line 9: makes sure uWSGI tries to remove all of the generated files/sockets upon exit
Line 11: makes sure the server dies on receiving a SIGTERM signal
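If you want to sanity-check the file before handing it to uWSGI, Python's built-in configparser module can parse this INI format. This is just a quick sketch (the config string is embedded here so the snippet is self-contained); uWSGI itself does not need any of this:

```python
import configparser

# The same configuration as our uwsgi.ini, embedded for this sketch
INI = """
[uwsgi]
module = hello:app
uid = www-data
gid = www-data
master = true
processes = 4
socket = /tmp/uwsgi.socket
chmod-sock = 664
vacuum = true
die-on-term = true
"""

config = configparser.ConfigParser()
config.read_string(INI)
uwsgi = config["uwsgi"]

# Quick sanity checks on the options explained above
assert uwsgi["module"] == "hello:app"
assert uwsgi.getint("processes") == 4
assert uwsgi.getboolean("master") is True
print("uwsgi.ini parses cleanly")
```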
You can take a look at a host of other configuration options on the uWSGI documentation website.
Now you can run uWSGI using the configuration file like this:
$ uwsgi --ini uwsgi.ini
This will output a whole bunch of text in the terminal:
[uWSGI] getting INI configuration from uwsgi.ini
*** Starting uWSGI 2.0.17.1 (64bit) on [Fri Jan 4 20:13:46 2019] ***
compiled with version: 4.2.1 Compatible Apple LLVM 8.0.0 (clang-800.0.42.1) on 04 January 2019 23:43:39
os: Darwin-16.7.0 Darwin Kernel Version 16.7.0: Sun Oct 28 22:30:19 PDT 2018; root:xnu-3789.73.27~1/RELEASE_X86_64
nodename: Yasoob-3.local
machine: x86_64
clock source: unix
pcre jit disabled
...
...
*** uWSGI is running in multiple interpreter mode ***
spawned uWSGI master process (pid: 26298)
spawned uWSGI worker 1 (pid: 26299, cores: 1)
spawned uWSGI worker 2 (pid: 26300, cores: 1)
spawned uWSGI worker 3 (pid: 26301, cores: 1)
spawned uWSGI worker 4 (pid: 26302, cores: 1)
We are currently using 4 processes to serve our app, whereas the default server shipped with Flask uses only one (as does uWSGI when no processes option is set). This allows our app to serve more requests concurrently.
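To get a feel for why the worker count matters, here is a small simulation (not uWSGI itself; threads stand in for uWSGI's worker processes) where each "request" just sleeps briefly. Four workers finish the same batch of requests much faster than one:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def handle_request(i):
    time.sleep(0.2)  # pretend this is a slow request handler
    return i

def serve(batch, workers):
    """Serve a batch of fake requests with the given worker count, return elapsed seconds."""
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=workers) as pool:
        list(pool.map(handle_request, batch))
    return time.perf_counter() - start

one_worker = serve(range(4), workers=1)    # like the single-process dev server
four_workers = serve(range(4), workers=4)  # like four uWSGI workers
print(f"1 worker: {one_worker:.2f}s, 4 workers: {four_workers:.2f}s")
```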
When you run uWSGI, it runs with the privileges of the user who launched it. In Docker, however, it will run as root, which is why we told uWSGI to downgrade its processes to www-data (a user commonly used by web servers).
Currently, we are telling uWSGI to respond to requests it receives via /tmp/uwsgi.socket. Now we need to set up a proxy server to route incoming requests to that socket.
13.4 NGINX
You might be wondering why we need NGINX. After all, uWSGI itself is a very capable production-quality web server. The short answer is that you don’t necessarily need to use NGINX. We can configure uWSGI to serve incoming requests on port 80/443 directly.
The long answer is that NGINX and Apache have been out there for a lot longer than uWSGI and are used in production a lot more. This means that they are more mature and are capable of some things which are not possible in uWSGI as of right now. You can use NGINX on a different server and reverse proxy requests for dynamic content to a load-balanced cluster and serve static files using NGINX. You can cache your dynamic endpoints more efficiently and reduce the overall load even further. This becomes a big consideration if the app you are working on needs to scale.
Now that you know why you might want to use NGINX in production, here’s an NGINX config file:
user www-data;
worker_processes auto;
pid /run/nginx.pid;

events {
    worker_connections 1024;
    multi_accept on;
}

http {
    access_log /dev/stdout;
    error_log /dev/stdout;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    index index.html index.htm;

    server {
        listen 80 default_server;
        listen [::]:80 default_server;
        server_name localhost;
        root /var/www/html;

        location / {
            include uwsgi_params;
            uwsgi_pass unix:/tmp/uwsgi.socket;
        }
    }
}
We tell NGINX to drop privileges to the www-data user. With worker_processes auto, NGINX will automatically figure out how many worker processes to run. worker_connections controls how many connections each worker can handle simultaneously. Instead of guessing this number, you can check your system's open file limit by running this command:
$ ulimit -n
1024 is a safe limit. We pipe the access and error logs to stdout so that we can access them using the docker logs command.
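The same limit that ulimit -n reports can also be read from Python's standard resource module (Unix only), in case you ever want to script this check:

```python
import resource

# RLIMIT_NOFILE is the maximum number of open file descriptors for this
# process, which caps how many sockets it can hold open at once
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(f"soft limit: {soft}, hard limit: {hard}")
```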
We tell NGINX to route all requests to / to the uWSGI socket we created. Save this file with the name nginx.conf.
13.5 Startup Script
We need a start-up script which will be run when our container starts. This script will start NGINX and our uWSGI server. The contents of this file will be simple:
#!/usr/bin/env bash
service nginx start
uwsgi --ini uwsgi.ini
Save this as start-script.sh.
Note
Windows users may wonder if this start-script.sh will work for them. It will when they run it in Docker on Windows! That's the beauty of containerization: it allows me to run Linux, readers to run Windows or Mac, and all of us to run the same Docker configuration.
We also need a requirements.txt file which will contain the Python packages needed to run our app:
Flask==1.0.2
uWSGI==2.0.17.1
13.6 Dockerfile
The final major step left is to create our Dockerfile. This will dictate how our container will be made. Instead of starting from scratch, we can use an existing image as a base. In our case, we will be using python:3.8-slim as our base image. This means that we don't have to care about installing Python in the container. It will already be installed. We can just install all the extra stuff we need other than Python.
Create a file named Dockerfile in your project folder and start editing it:
FROM python:3.8-slim
The base image only comes with Python 3.8 installed. We need to install NGINX and some other useful Python packages ourselves:
RUN apt-get clean \
    && apt-get -y update

RUN apt-get -y install nginx \
    && apt-get -y install python3-dev \
    && apt-get -y install build-essential
Now we need to copy the contents of the current directory into the new container and cd into that directory:
COPY . /flask_app
WORKDIR /flask_app
Now we need to install the packages from the requirements.txt file:
RUN pip install -r requirements.txt --src /usr/local/src
NGINX requires its config file to be present in a specific directory, so we need to copy our NGINX config into that directory. We also need to give start-script.sh execution rights and set it up to run as soon as the container starts:
COPY nginx.conf /etc/nginx
RUN chmod +x ./start-script.sh
CMD ["./start-script.sh"]
The final Dockerfile is:
FROM python:3.8-slim

RUN apt-get clean \
    && apt-get -y update

RUN apt-get -y install nginx \
    && apt-get -y install python3-dev \
    && apt-get -y install build-essential

COPY . /flask_app
WORKDIR /flask_app

RUN pip install -r requirements.txt --src /usr/local/src

COPY nginx.conf /etc/nginx
RUN chmod +x ./start-script.sh
CMD ["./start-script.sh"]
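One small, optional addition: because the Dockerfile uses COPY . /flask_app, everything in your project folder ends up inside the image. A .dockerignore file keeps things like the virtual environment and caches out of it. The entries below are just common suggestions, not something this chapter's setup strictly requires:

```
venv/
__pycache__/
*.pyc
.git/
```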
Now our Dockerfile is complete and we can build an image using it. We can build the image by running the following command:
$ docker build . -t flask_image
This creates an image using the Dockerfile in our current directory and tags it with the name flask_image.
Now we can run a container using that generated image by running the following command:
$ docker run -p 80:80 -t flask_image
Options explanation:
-p 80:80 tells Docker to route all incoming requests on port 80 on the host to this container
-t flask_image tells Docker to run this container using the image tagged with the name flask_image
Note
The initial image build will take some time, but subsequent builds should be much faster because Docker caches the intermediate layers and downloaded resources.
If everything works as expected, you can open up localhost in your browser and you will be greeted with "Hello World!"
13.7 Persistence
We want our Docker container to keep running even after we have closed the terminal, and to come back up if the container crashes or the system restarts. In order to do that, we can modify our docker run command like this:
$ docker run -d --restart=always -p 80:80 -t flask_image
-d tells Docker to run this container in detached mode so that the container keeps running even after we close the terminal. We can view its logs using the docker logs command
--restart=always tells Docker to restart the container if it shuts down, crashes, or the system restarts
Now that you know how to manually install NGINX and uWSGI and use them with Docker, you can base your future builds on those Docker images which have NGINX and uWSGI preinstalled. An example is tiangolo/uwsgi-nginx-flask:flask.
13.8 Docker Compose
We can improve our container architecture by decoupling NGINX and uWSGI into separate containers and using Docker Compose to run them. This is what most companies do in production environments.
I am not going to cover Docker Compose in this chapter. A major reason is that most of the applications you will develop in this book do not require it, and the current configuration should suffice for most tasks. If you want to explore Docker Compose on your own, the official documentation is a good place to begin.
13.9 Troubleshooting
The most common error you can get while trying to run your Docker container is that the port is already in use:
docker: Error response from daemon: driver failed programming external connectivity on endpoint adoring_archimedes (70128ed39b1451babbe50db7e436ab28a966576ed4a9637a2314568ff4e6a74c): Bind for 0.0.0.0:80 failed: port is already allocated.
ERRO[0000] error waiting for container: context canceled
Make sure that no other Docker container is bound to that port by running the docker ps command. This command lists all running containers. The output will be something like this:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
ba9934828900 flask_image "./start-script.sh" 11 minutes ago Up 11 minutes 0.0.0.0:80->80/tcp tender_panini
You can kill the container using this command:
$ docker kill tender_panini
Here tender_panini is the auto-generated name of the container. You can also supply a custom name using the --name option when running the docker run command.
If this doesn't resolve your issue, make sure no uWSGI or NGINX instance is running on the host system and listening on port 80. On macOS you can find this information easily by running:
$ lsof -i:80
This will tell you the PID of the process using port 80. Let's say the PID is 1337. You can then go ahead and kill it:
$ kill 1337
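If you prefer checking from Python (which works the same on Linux and macOS), the standard socket module can tell you whether anything is accepting connections on a port. Note this is just a connectivity probe; unlike lsof, it will not tell you which process owns the port:

```python
import socket

def port_in_use(port, host="127.0.0.1"):
    """Return True if something is accepting connections on host:port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(1)
        # connect_ex returns 0 on success instead of raising an exception
        return s.connect_ex((host, port)) == 0

print(port_in_use(80))
```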
If nothing else works, try flexing your Googling muscles and search for the issue online. Most of the time you will find the solution online.
13.10 Next Steps
In this chapter, we learned how to deploy Flask apps using Docker. Docker has a huge ecosystem, so the next best step would be to explore what else you can do with it. You should explore Kubernetes and see how you can set up a basic deployment with it. I personally just use vanilla Docker to deploy my apps, but a lot of people like using Kubernetes. There are also various ways hackers can use Docker, directly or indirectly, to exploit an operating system, so it is beneficial to spend some time exploring that side of Docker as well. This way you can learn about the limits of Docker and keep your data and operating system safe.
Even though we mainly focused on the benefits of Docker, there are various reasons for staying away from it as well. For our use cases, we don't need to bother with what these reasons are, but it is good to know that Docker is not a silver bullet for all of your deployment issues.