A lot of the tutorials out there make it look trivial to set up a development environment with Django, Postgres and Docker. That never quite matched my experience: you always end up needing to know more about Docker than the tutorials let on, and there are a few gotchas that most of them fail to mention. The following are a few specifically related to Postgres and Django, written down here because I tend to forget them every time I start a new project...
A running container is not a running database
docker-compose will happily report a container as "up" even while it is still busy with init work. With Postgres, this means a container can look "up" while the entrypoint is still creating the actual database instance, so connections from the app may well fail.
A good workaround is to use pg_isready, like this:
#!/usr/bin/env sh
# Start the containers in the background; without -d, docker-compose up
# blocks in the foreground and the loop below would never run.
docker-compose up -d

until pg_isready -d your_pg_db -h your_pg_host \
                 -p your_pg_port -U your_pg_superuser
do
    echo "Waiting for db to be available..."
    sleep 2
done

# now we can do actual work, like db migrations
...
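Once the loop exits, the database accepts connections and the script can proceed. As a sketch of that last step, assuming the Django app runs in a service called web (the name is a placeholder):

# run the migrations inside the now-ready app container
docker-compose exec web python manage.py migrate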
Don't run; exec
A lot of howtos state, more or less: "if you want to run something in an instance, use docker-compose run some_machine some_command". This is misleading: run creates a new ancillary container, which runs in parallel with any other container of the same type that might already be up. If you want to execute a process inside an already-running container, use docker-compose exec some_machine some_command instead. This ensures you are "logged in" to the already-running container.
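To make the difference concrete (the service name web and the command are just placeholders):

# Spins up a brand new, additional "web" container just for this command:
docker-compose run --rm web python manage.py showmigrations

# Runs the command inside the "web" container that is already up,
# which is almost always what you want during development:
docker-compose exec web python manage.py showmigrations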
While coding, don't copy; mount
Many will tell you that you need to ensure reproducibility, and that your code should therefore be copied or checked out into the image in the Dockerfile, i.e. at build stage. That is a huge drag on development, since you need to rebuild the whole image on every minor change; it is annoying and slow even with multi-stage builds.
Instead, you can mount your actual source directory as a volume, and exploit all the goodies that make development tolerable, like Django's autoreload features. Make your docker-compose.yml look like this:
services:
  your_app_machine:
    volumes:
      - type: bind
        source: /host/location/of/src
        target: /container/location/of/app
...
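For autoreload to actually kick in, the container should run Django's development server against the mounted source; something like this, assuming the usual runserver setup (port and paths are placeholders):

services:
  your_app_machine:
    # the autoreloader watches the bind-mounted source, so edits
    # on the host restart the server inside the container
    command: python manage.py runserver 0.0.0.0:8000
    ...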
When you want to run tests or go to production, use a second Dockerfile that inherits from the first (with FROM) and actually copies the code in (or, more likely, checks it out via git), without the volume definition.
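As a sketch, that second Dockerfile could be as simple as this (the image name and paths are placeholders):

# Dockerfile.prod
# Inherit the whole toolchain from the development image...
FROM your_app_image:latest
# ...but bake the source into the image instead of bind-mounting it.
COPY . /container/location/of/app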
Know your tools
This is not really specific to Docker! Mastering your tools in depth will help. I honestly didn't know that JetBrains PyCharm can now use an interpreter running in a Docker container as the main one for the project, which makes a lot of things easier (debugging, the REPL, etc.). Extremely helpful!
I would add another alternative for checking that the database is ready:
version: '3.4'
services:
  app:
    (...)
    depends_on:
      - db
  db:
    (...)
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U dbuser -d dbname"]
      interval: 5s
      timeout: 2s
      retries: 30
Usually, depends_on only verifies that the container is running; to actually wait for the healthcheck to pass, you need its long form with condition: service_healthy, which gives you what you want without extra dependencies or complexities.
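For reference, the long form looks like this (note that condition was dropped from the 3.x file format and later brought back by the Compose Specification, so support depends on your Compose version):

services:
  app:
    depends_on:
      db:
        # wait until the db healthcheck reports healthy, not just started
        condition: service_healthy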
But nice post :)