This is the simplest configuration for developers to start with.
- Run `docker-compose run --rm django ./manage.py migrate`
- Run `docker-compose run --rm django ./manage.py createsuperuser` and follow the prompts to create your own user
- Create a virtual environment and run `pip install --find-links https://girder.github.io/large_image_wheels -e .[worker]` inside of it (see the sketch below)
Note: Even though most of the application will be run with `docker-compose`, the Celery worker must still be run natively, as it itself makes use of Docker.
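As a concrete sketch of the "create a virtual environment" step above, assuming the environment lives at `.venv` (any virtualenv tool and location works):

```sh
# The .venv path is an arbitrary choice, not a project requirement.
python3 -m venv .venv
source .venv/bin/activate
pip install --find-links https://girder.github.io/large_image_wheels -e .[worker]
```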
To run the application, do the following:
- Run `docker-compose up`
- In a separate window, activate the virtual environment created previously in "Initial Setup" and run the following commands:
  ```sh
  source ./dev/export-env.sh
  celery --app danesfield.celery worker --loglevel INFO --heartbeat-interval 60
  ```
- Access the site, starting at http://localhost:8000/admin/
- When finished, use `Ctrl+C`
Occasionally, new package dependencies or schema changes will necessitate maintenance. To non-destructively update your development stack at any time:
- Run `docker-compose pull`
- Run `docker-compose build --pull --no-cache`
- Run `docker-compose run --rm django ./manage.py migrate`
This configuration still uses Docker to run attached services in the background, but allows developers to run Python code on their native system.
- Run `docker-compose -f ./docker-compose.yml up -d`
- Install Python 3.8
- Install the `psycopg2` build prerequisites (see the sketch after this list)
- Create and activate a new Python virtualenv
- Run `pip install -e .[dev]`
- Run `source ./dev/export-env.sh`
- Run `./manage.py migrate`
- Run `./manage.py createsuperuser` and follow the prompts to create your own user
- Ensure `docker-compose -f ./docker-compose.yml up -d` is still active
- Run:
  ```sh
  source ./dev/export-env.sh
  ./manage.py runserver
  ```
- Run in a separate terminal:
  ```sh
  source ./dev/export-env.sh
  celery --app danesfield.celery worker --loglevel INFO --heartbeat-interval 60
  ```
- Run in a separate terminal:
  ```sh
  cd client/
  yarn
  yarn run serve
  ```
- When finished, run `docker-compose stop`
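A minimal sketch of the `psycopg2` build-prerequisites step, assuming a Debian/Ubuntu host (package names differ on other distributions):

```sh
# Debian/Ubuntu package names are an assumption; adjust for your platform.
sudo apt-get install libpq-dev python3-dev build-essential
```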
Attached services may be exposed to the host system via alternative ports. Developers who work on multiple software projects concurrently may find this helpful to avoid port conflicts.
To do so, before running any `docker-compose` commands, set any of the environment variables:
- `DOCKER_POSTGRES_PORT`
- `DOCKER_RABBITMQ_PORT`
- `DOCKER_MINIO_PORT`
The Django server must be informed about the changes:
- When running the "Develop with Docker" configuration, override the environment variable:
  - `DJANGO_MINIO_STORAGE_MEDIA_URL`, using the port from `DOCKER_MINIO_PORT`
- When running the "Develop Natively" configuration, override the environment variables:
  - `DJANGO_DATABASE_URL`, using the port from `DOCKER_POSTGRES_PORT`
  - `DJANGO_CELERY_BROKER_URL`, using the port from `DOCKER_RABBITMQ_PORT`
  - `DJANGO_MINIO_STORAGE_ENDPOINT`, using the port from `DOCKER_MINIO_PORT`
Since most of Django's environment variables contain additional content, use the values from the appropriate `dev/.env.docker-compose*` file as a baseline for overrides.
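For example, a hypothetical remap of PostgreSQL to host port 5433 under the "Develop Natively" configuration might look like the sketch below; the credentials and database name here are placeholders, so copy the real `DJANGO_DATABASE_URL` from the appropriate `dev/.env.docker-compose*` file and change only the port:

```sh
# Placeholder URL: copy the real value from dev/.env.docker-compose* and change only the port.
export DOCKER_POSTGRES_PORT=5433
export DJANGO_DATABASE_URL="postgres://postgres:postgres@localhost:5433/django"
docker-compose -f ./docker-compose.yml up -d
```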
`tox` is used to execute all tests. `tox` is installed automatically with the `dev` package extra.
When running the "Develop with Docker" configuration, all `tox` commands must be run as `docker-compose run --rm django tox`; extra arguments may also be appended to this form.
Run `tox` to launch the full test suite.
Individual test environments may be selectively run. This also allows additional options to be added. Useful sub-commands include:
- `tox -e lint`: Run only the style checks
- `tox -e type`: Run only the type checks
- `tox -e test`: Run only the pytest-driven tests
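For example, assuming the `test` environment forwards positional arguments to pytest (as pytest-based tox environments typically do), extra options can be passed after `--`; the test name below is a placeholder:

```sh
# "test_my_feature" is a hypothetical test name used only for illustration.
tox -e test -- -k test_my_feature
```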
To automatically reformat all code to comply with some (but not all) of the style checks, run `tox -e format`.
The following Django management scripts can be executed using `./manage.py <name_of_script>`. To view help text for any of them, run with the `--help` flag.
Populates the database with a sample dataset as well as 3D tile output from a successful run on that dataset.
Creates a `django-oauth-toolkit` `Application` so the Vue SPA can log in.
Takes as input a path to a directory. The directory is ingested as a `Dataset`, with every file in the directory added recursively as a `ChecksumFile`. Any "special" file types, such as rasters, meshes, 3D Tiles, or FMVs, are automatically ingested as their corresponding RGD models.
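As an illustrative usage sketch only, since both the command name and the path below are placeholders (check `./manage.py --help` for the actual command name):

```sh
# Both the command name and the directory are placeholders.
./manage.py <name_of_script> /path/to/data_directory
```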