Update README.md (#192)
denis-yuen authored Apr 26, 2021
1 parent 36c8ddc commit 80a6669
57 changes: 7 additions & 50 deletions README.md
Port 80 is exposed over HTTP. This port should not be exposed to the public. A separately [configured load balancer](https://github.com/dockstore/dockstore-deploy) is responsible for SSL termination and forwarding traffic to this instance. Previously, this repo handled SSL termination itself with nginx and LetsEncrypt.

If you are looking for how to run Dockstore locally as a developer, you are probably in the wrong place and should take a look at https://github.com/dockstore/dockstore/blob/develop/docker-compose.yml

## Prerequisites

1. Tested on Ubuntu 20.04
1. At least 20GB of disk space, 16GB of RAM, and 4 CPUs
1. Docker setup following [https://docs.docker.com/engine/installation/linux/docker-ce/ubuntu/](https://docs.docker.com/engine/installation/linux/docker-ce/ubuntu/), including the post-installation steps for running without sudo (see the sketch after this list)
1. The running Dockstore website will require ports 80 and 443 by default
1. A client id and client secret for each of the integrations you wish to set up, at a minimum probably GitHub and Quay.io. You will need client ids and secrets for each integration as documented on the internal [wiki](https://wiki.oicr.on.ca/display/DOC/OAuth+Apps+and+Other+3rd+Party+Registration).
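
A minimal sketch of the Docker post-installation steps mentioned above, following the linked Docker documentation; verify the exact commands against the current docs for your Docker version:

```
# allow the current user to run docker without sudo
sudo groupadd docker            # the group may already exist
sudo usermod -aG docker $USER   # add your user to the docker group
newgrp docker                   # pick up the new group without logging out again
docker run hello-world          # confirm that docker works without sudo
```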

## Usage


4. After following the instructions in the bootstrap script and starting up the site with `docker-compose`, you can browse to the Dockstore site, hosted at port 443 by default: `https://<domain-name>` if you specified https, or `http://<domain-name>:443` if you did not.

5. Note that the following volumes are created: `composesetup_esdata1` for ephemeral Elasticsearch data, `composesetup_log_volume` for logging, and `composesetup_ui2_content` for storing the built UIs before they are handed off to nginx for serving.
The current setup relies upon an externally hosted database (currently AWS RDS) and externally hosted search (currently AWS Elasticsearch).

6. For database backups, you can use a script set up in cron on the host:

```
@daily (echo '['`date`'] Nightly Back-up' && /home/ubuntu/compose_setup/scripts/postgres_backup.sh) 2>&1 | tee -a /home/ubuntu/compose_setup/scripts/ds_backup.log
```

This relies upon an IAM role with access to the appropriate S3 bucket. You will also need the AWS CLI installed via `sudo apt-get install awscli`. Note that a missing CLI may not be readily apparent, since cron runs with a limited `$PATH` and it is easy to accidentally install the AWS CLI for only a specific user.
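
The backup script itself is not shown in this README; the following is a rough sketch of what a `postgres_backup.sh` along these lines might do, assuming a `pg_dump` of the database followed by an upload to S3 (the host, database, user, and bucket names are placeholders, not the actual values):

```
#!/bin/bash
# rough sketch only -- the real postgres_backup.sh may differ
set -euo pipefail

BACKUP_FILE="/tmp/dockstore-$(date +%Y%m%d).sql"

# dump the database; credentials would come from ~/.pgpass or the environment
pg_dump -h <database-host> -U dockstore -d postgres > "${BACKUP_FILE}"

# upload via the instance's IAM role; use the full path because cron has a limited $PATH
/usr/bin/aws s3 cp "${BACKUP_FILE}" s3://<backup-bucket>/

rm -f "${BACKUP_FILE}"
```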

### Loading Up a Database ###

The docker-compose setup uses a mount from the host to keep the Postgres database persistent (unlike the Elasticsearch data, which is not persistent).

However, this does require a somewhat convoluted procedure to load content into the DB:

```
# on the host: stop the stack and wipe the old database files
docker-compose down
# needed since dropping the schema can still leave some user information behind
sudo rm -Rf postgres-data/
nohup docker-compose up --force-recreate --remove-orphans &

# copy the dump into the postgres container and open a shell in it
docker cp /tmp/backup.sql <container>:/tmp
docker exec -ti <container> /bin/bash

# inside the container: switch to the postgres user and open psql
su - postgres
psql

-- inside psql: recreate the schema and the dockstore role
DROP SCHEMA public CASCADE;
CREATE SCHEMA public;
CREATE USER dockstore WITH PASSWORD 'dockstore';
ALTER DATABASE postgres OWNER TO dockstore;
ALTER SCHEMA public OWNER TO dockstore;
\quit

# back in the container shell: load the dump as the dockstore user
psql postgres -U dockstore -f /tmp/backup.sql

# exit the container (ctrl+d) and then, on the host, restart to run the migration against the newly loaded DB
docker-compose down
nohup docker-compose up --force-recreate --remove-orphans &
```

Loading up a database is usually not necessary since AWS RDS is persistent. Refer to https://github.com/dockstore/dockstore-deploy#database-setup

Note that database migration is run once during the startup process and is controlled via the `DATABASE_GENERATED` variable. Answer `yes` if you are working as a developer and want to start from scratch with an empty database. Answer `no` if you are working as an administrator and/or wish to start Dockstore from a production or staging copy of the database.

### Modifying the Database ###

If direct modification of the database is required, e.g., a curator needs to modify the value of some row/column that is not accessible via the API, you can use the same steps as above, except without dropping the schema.

This should be exercised with extreme caution, and with someone looking over your shoulder, as you have the potential to unintentionally overwrite or delete data. If you wish to proceed:

```
# Assuming you copied a file `fix.sql` to the /tmp directory:
docker cp /tmp/fix.sql <container>:/tmp
docker exec -ti <container> /bin/bash
su - postgres
psql -f /tmp/fix.sql
```

## Logging Usage

