Getting Up and Running Locally With Docker¶
The steps below will get you up and running with a local development environment. All of these commands assume you are in the root of your generated project.
Attention, Windows Users¶
Currently PostgreSQL (the psycopg2 Python package) is not installed inside Docker containers for Windows users, while it is required by the generated Django project. To fix this, add psycopg2 to the list of requirements inside base.txt:
# Python-PostgreSQL Database Adapter
psycopg2==2.6.2
Doing this will prevent the project from being installed in a Windows-only environment (that is, without Docker). If you want to use this project without Docker, make sure to remove psycopg2 from the requirements again.
Build the Stack¶
This can take a while, especially the first time you run this particular command on your development system:
$ docker-compose -f local.yml build
Generally, if you want to emulate the production environment, use production.yml instead. And this is true for any other actions you might need to perform: whenever a switch is required, just do it!
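For example, under that rule, building the stack against the production configuration is just a matter of swapping the file name:
$ docker-compose -f production.yml build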
Run the Stack¶
This brings up both Django and PostgreSQL. The first time it is run it might take a while to get started, but subsequent runs will occur quickly.
Open a terminal at the project root and run the following for local development:
$ docker-compose -f local.yml up
You can also set the environment variable
COMPOSE_FILE pointing to
local.yml like this:
$ export COMPOSE_FILE=local.yml
And then run:
$ docker-compose up
To run in detached (background) mode, just:
$ docker-compose up -d
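While the stack is running in the background, the usual docker-compose subcommands still apply; for instance, to view the logs or to stop everything (standard docker-compose usage, nothing project-specific):
$ docker-compose -f local.yml logs
$ docker-compose -f local.yml down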
Execute Management Commands¶
As with any shell command that we wish to run in our container, this is done using the
docker-compose -f local.yml run --rm command:
$ docker-compose -f local.yml run --rm django python manage.py migrate
$ docker-compose -f local.yml run --rm django python manage.py createsuperuser
Here, django is the target service we are executing the commands against.
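The same pattern applies to any other manage.py subcommand; for instance (purely as an illustration), dropping into a Django shell inside the container:
$ docker-compose -f local.yml run --rm django python manage.py shell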
(Optionally) Designate your Docker Development Server IP¶
When DEBUG is set to True, the host is validated against ['localhost', '127.0.0.1', '[::1]']. This is adequate when running a virtualenv. For Docker, in config.settings.local, add your host development server IP to ALLOWED_HOSTS if the variable exists.
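If you are unsure which IP that is, one blunt but workable sketch is to list the host's addresses (Linux-only; the -I flag of hostname is not available on macOS):
$ hostname -I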
Configuring the Environment¶
This is an excerpt from your project’s local.yml:
# ...
postgres:
  build:
    context: .
    dockerfile: ./compose/production/postgres/Dockerfile
  volumes:
    - local_postgres_data:/var/lib/postgresql/data
    - local_postgres_data_backups:/backups
  env_file:
    - ./.envs/.local/.postgres
# ...
The most important thing for us here and now is the env_file section listing ./.envs/.local/.postgres. Generally, the stack’s behavior is governed by a number of environment variables (env(s), for short) residing in .envs/. For instance, this is what we generate for you:
.envs
├── .local
│   ├── .django
│   └── .postgres
└── .production
    ├── .caddy
    ├── .django
    └── .postgres
By convention, for any service sI in environment e (you know someenv is an environment when there is a someenv.yml file in the project root), given sI requires configuration, a .envs/.e/.sI service configuration file exists.
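To make the convention concrete (a quick check against the tree above), listing the local environment's directory should show one file per configured service:
$ ls -A .envs/.local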
Consider the aforementioned .envs/.local/.postgres:
# PostgreSQL
# ------------------------------------------------------------------------------
POSTGRES_HOST=postgres
POSTGRES_DB=<your project slug>
POSTGRES_USER=XgOWtQtJecsAbaIyslwGvFvPawftNaqO
POSTGRES_PASSWORD=jSljDz4whHuwO3aJIgVBrqEml5Ycbghorep4uVJ4xjDYQu0LfuTZdctj7y0YcCLu
The three envs we are presented with here are POSTGRES_DB, POSTGRES_USER, and POSTGRES_PASSWORD (by the way, their values have also been generated for you). You might have figured out already where these definitions will end up; it’s all the same with the django and caddy service container envs.
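Should you want to verify this, one quick check (a sketch, assuming the stack has already been built) is to print the environment the postgres container actually receives:
$ docker-compose -f local.yml run --rm postgres env | grep POSTGRES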
One final touch: should you ever need to merge .envs/.production/* in a single .env, run the merge_production_dotenvs_in_dotenv.py script:
$ python merge_production_dotenvs_in_dotenv.py
The .env file will then be created, with all your production envs residing beside each other.
Tips & Tricks¶
Activate a Docker Machine¶
This tells our computer that all future commands are specifically for the dev1 machine. Using the eval command we can switch machines as needed:
$ eval "$(docker-machine env dev1)"
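If you are juggling several machines, the standard docker-machine listing shows which one is currently active before you switch:
$ docker-machine ls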
Debugging¶
If you are using the following within your code to debug:
import ipdb; ipdb.set_trace()
Then you may need to run the following for it to work as desired:
$ docker-compose -f local.yml run --rm --service-ports django
In order for django-debug-toolbar to work, designate your Docker Machine IP with INTERNAL_IPS in local.py.
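If you do not know the machine's IP offhand, docker-machine can report it (dev1 being the example machine name used above):
$ docker-machine ip dev1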
Mailhog¶
When developing locally you can go with MailHog for email testing provided use_mailhog was set to y on setup. To proceed,
- make sure the mailhog container is up and running (a quick check is shown below);
- open up http://127.0.0.1:8025.
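To confirm the first point, list the stack's services and their states (plain docker-compose, nothing project-specific):
$ docker-compose -f local.yml ps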
Celery Flower¶
Flower is a “real-time monitor and web admin for Celery distributed task queue”.
Prerequisites:
- use_docker was set to y on project initialization;
- use_celery was set to y on project initialization.
By default, it’s enabled both in local and production environments (local.yml and production.yml Docker Compose configs, respectively) through a flower service. For added security, flower requires its clients to provide authentication credentials specified as the corresponding environments’ CELERY_FLOWER_USER and CELERY_FLOWER_PASSWORD environment variables. Check out localhost:5555 and see for yourself.
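Those credentials live alongside the other Django envs, so (assuming the generated layout shown earlier) you can look them up before logging in:
$ grep CELERY_FLOWER .envs/.local/.django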