Dev system at dev.uniresolver.io is using https by now.
peacekeeper committed Jun 16, 2021
1 parent b0b4ea7 commit a51b1cb
Showing 2 changed files with 4 additions and 9 deletions.
1 change: 0 additions & 1 deletion docs/continuous-integration-and-delivery.md
@@ -37,7 +37,6 @@ The second step takes the image and deploys it (create or update) to the configu

## Open Issues regarding CI/CD

* Use httpS for the dev-environment at http://dev.uniresolver.io
* Make all drivers accessible via sub-domains, e.g. elem.dev.uniresolver.io
* Bundle new release for resolver.identity.foundation (only working drivers)
* Render Smoke Test results as HTML-page and host it via gh-pages
12 changes: 4 additions & 8 deletions docs/dev-system.md
@@ -2,29 +2,27 @@

The dev system, a sandbox installation that runs the latest code base of the Universal Resolver project, is hosted at:

http://dev.uniresolver.io
https://dev.uniresolver.io

The drivers are exposed via subdomains named after their DID methods. For example: btcr.dev.uniresolver.io
> Note for driver developers: the subdomains are generated automatically from the Docker image tag, e.g. driver-did-btcr, which therefore must contain the DID method name (pattern: driver-did-`<DID method name>`).
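For illustration only (not part of the changed files), the mapping from image tag to driver subdomain described in the note could be expressed as a few lines of Python; the function name and the base-domain default are assumptions:

```python
# Illustrative sketch (assumption, not code from the repo): derive a driver's
# dev subdomain from its Docker image tag, following the pattern above.
def driver_subdomain(image_tag: str, base_domain: str = "dev.uniresolver.io") -> str:
    prefix = "driver-did-"
    if not image_tag.startswith(prefix):
        raise ValueError(f"unexpected image tag: {image_tag!r}")
    method = image_tag[len(prefix):]   # e.g. "btcr"
    return f"{method}.{base_domain}"   # e.g. "btcr.dev.uniresolver.io"

assert driver_subdomain("driver-did-btcr") == "btcr.dev.uniresolver.io"
```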
DIDs can be resolved by calling the resolver:

http://dev.uniresolver.io/1.0/identifiers/did:btcr:xz35-jznz-q6mr-7q6
https://dev.uniresolver.io/1.0/identifiers/did:btcr:xz35-jznz-q6mr-7q6

or by directly accessing the driver’s endpoint:

http://btcr.dev.uniresolver.io/1.0/identifiers/did:btcr:xz35-jznz-q6mr-7q6

https://btcr.dev.uniresolver.io/1.0/identifiers/did:btcr:xz35-jznz-q6mr-7q6
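As an illustration (not part of the changed files), a DID can be resolved programmatically with the Python standard library; the endpoint path and example DID are taken from above, and the exact shape of the response may vary:

```python
# Minimal sketch: resolve a DID against the dev system over HTTPS.
import json
import urllib.request

RESOLVER = "https://dev.uniresolver.io/1.0/identifiers/"
did = "did:btcr:xz35-jznz-q6mr-7q6"

with urllib.request.urlopen(RESOLVER + did) as response:
    result = json.load(response)

# The result typically contains the DID document plus resolution metadata.
print(json.dumps(result.get("didDocument", result), indent=2))
```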

The software is automatically updated on every commit and PR on the master branch. See [CI-CD](/docs/continuous-integration-and-delivery.md) for more details.


Currently the system is deployed in the AWS cloud using the Elastic Kubernetes Service (EKS). Note that AWS is not mandatory for hosting the resolver; any environment that supports Docker Compose or Kubernetes can run an instance of the Universal Resolver.

We are using two `m5.large` instances (2 vCPU / 8 GB RAM) because AWS EKS limits this instance type to 29 pods per instance. This is meant as information only, not as a recommendation.

## AWS Architecture

This picture illustrates the AWS architecture for hosting the Universal Resolver as well as the traffic flow through the system.

<p align="center"><img src="figures/aws-architecture.png" width="75%"></p>
@@ -35,5 +33,3 @@ The Kubernetes cluster spans multiple Availability Zones (AZ), which

If containers, such as DID drivers, are added or removed, the ALB ingress controller https://kubernetes-sigs.github.io/aws-alb-ingress-controller/ takes care of notifying the ALB. This keeps the ALB aware of the system state and able to keep traffic routes healthy.
The DNS service Route 53 is updated via https://github.com/kubernetes-sigs/external-dns. Further details regarding the automated system updates are described in [CI-CD](/docs/continuous-integration-and-delivery.md).

