This repository has been archived by the owner on Apr 13, 2023. It is now read-only.

feat!: Update to Solutions version v4.0.0
zheyanyu committed Nov 22, 2021
1 parent 8e4dc37 commit c9f2bc6
Showing 364 changed files with 94,708 additions and 17,442 deletions.
60 changes: 59 additions & 1 deletion CHANGELOG.md
@@ -4,7 +4,65 @@ All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).

## [2.1.3] - 2021-05-03
## [4.0.0] - 2021-09-27
### ⚠ BREAKING CHANGES

* The Cognito `IdToken` is now used instead of the `AccessToken` to authorize requests.

* Multi-tenancy itself is not a breaking change; you can continue to use FHIR Works in single-tenant mode
by setting `enableMultiTenancy` to `false`.

* However, note that updating an existing (single-tenant) stack to enable multi-tenancy is a breaking change. Multi-tenant
deployments use a different data partitioning strategy that renders the old, single-tenant data inaccessible.

* FWoA now reads/writes Elasticsearch documents through aliases instead of indexes. This change makes it simpler to perform re-indexing operations without downtime.
Aliases are created automatically when resources are written to Elasticsearch, but read operations may fail on existing deployments if the aliases do not exist yet.
* Please send one update/create request for each resource type that already exists so that the aliases are created; a sketch follows this list.
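
As a rough sketch of that one-off request (`<API_URL>`, `<ID_TOKEN>`, and the resource ID/payload are placeholders, not part of the release notes):

```sh
# Hypothetical: re-send one existing resource per resource type (Patient shown here)
# so that its alias gets created. Note that v4.0.0 authorizes requests with the
# Cognito IdToken rather than the AccessToken.
curl -X PUT "<API_URL>/Patient/<existing-patient-id>" \
  -H "Authorization: Bearer <ID_TOKEN>" \
  -H "Content-Type: application/json" \
  -d @existing-patient.json
```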

### Features

* Implement multi-tenancy and group export

* Multi-tenancy allows a single `fhir-works-on-aws` stack to serve as multiple FHIR servers for different tenants.
Check out our [multi-tenancy documentation](source/USING_MULTI_TENANCY.md) for more details.

* Use aliases for all ES operations
* **interface:** add logging framework
* **routing:** Support POST-based search (see the sketch after this list)
* **search:** Support number and quantity search syntax
* **search:** Allow repeated search parameters, a.k.a. AND search parameters
* **search:** Allow sorting by date type parameters
* **search:** Support searching on Period type fields with date type params
* Add a DLQ (dead-letter queue) for ddbToEs sync failures
* Search now supports `|` as part of token parameters. e.g. `GET [base]/Patient?identifier=http://acme.org/patient|2345`
* Search now supports using range prefixes for date parameters. e.g. `GET [base]/Patient?birthdate=ge2013-03-14`
* The capability statement returned by `/metadata` now includes details of all supported search parameters
* Add support for the standard FHIR search parameters. Each FHIR resource type defines its own set of search parameters; e.g., the search parameters for Patient can be found [here](https://www.hl7.org/fhir/patient.html#search)
* Search requests using invalid search parameters now return an error instead of an empty result set
* Added a `/metadata` route in API Gateway so that requests to that route don't need to be authenticated/authorized
* Support for `fhir-works-on-aws-interface` version `4.0.0`
* Change `config` to support the new interface: `auth.strategy.oauth` changed to `auth.strategy.oauthPolicy`
  * `authorizationUrl` changed to `authorizationEndpoint`
  * `tokenUrl` changed to `tokenEndpoint`
* Support for `fhir-works-on-aws-authz-rbac` version `4.0.0`
* Support for `fhir-works-on-aws-routing` version `3.0.0`
* Change non-inclusive terminology in serverless.yaml description
* Support ["System Level"](https://hl7.org/fhir/uv/bulkdata/export/index.html#endpoint---system-level-export) export of DB data

### Bug Fixes

* change output file type
* dependency vulnerability
* pin IG download
* Allow running sls offline with Hapi Validator
* typo for passing in custom log level
* **persistence:** `meta` field was missing from update response even though it was persisted properly
* **persistence:** Improve error logging when sync from ddb to ElasticSearch fails
* **search:** Token search params were matching additional documents
* Suppress deprecation warning when writing to Info_Output.yml during installation
* Fixed a bug where the `meta` field was being overwritten. This allows meta fields such as `meta.security`, `meta.profile`, etc. to be stored properly.

## [2.1.3] - 2021-04-22
### Added
- fix: Use yarn as package manager and lock down serverless version

11,201 changes: 6,837 additions & 4,364 deletions NOTICE.txt

Large diffs are not rendered by default.

2 changes: 1 addition & 1 deletion README.md
@@ -10,7 +10,7 @@ Many integration use cases require the ability to connect to a FHIR interface on

The solution is deployed using a CloudFormation template which installs all necessary resources. For details on deploying the solution please see the details on the solution home page: [FHIR Works on AWS](https://aws.amazon.com/solutions/implementations/fhir-works-on-aws/).

To create a custom build of Service Workbench on AWS, see the [developer instructions](https://github.com/awslabs/fhir-works-on-aws-deployment).
To create a custom build of FHIR Works on AWS, see the [developer instructions](https://github.com/awslabs/fhir-works-on-aws-deployment).

***

110 changes: 107 additions & 3 deletions deployment/build-s3-dist.sh
@@ -75,26 +75,114 @@ MAPPINGS_SECTION=$(printf $MAPPINGS_SECTION_FORMAT $BUCKET_NAME)
cat $TEMPLATE_PATH | jq --argjson mappings $MAPPINGS_SECTION '. + $mappings' > $TEMPLATE_PATH.tmp
mv $TEMPLATE_PATH.tmp $TEMPLATE_PATH
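# Note: every transformation below follows the same edit-in-place pattern: read the
# template, apply one jq filter to a temp file, then replace the original. Illustrative
# shape only (not an actual step of this script):
#   cat "$TEMPLATE_PATH" | jq '<filter>' > "$TEMPLATE_PATH.tmp" && mv "$TEMPLATE_PATH.tmp" "$TEMPLATE_PATH"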

# Update template description
cat $TEMPLATE_PATH | jq ".Description = \"(SO0128) - $VERSION_CODE - Solution - Primary Template - This template creates all the necessary resources to deploy FHIR Works on AWS; a framework to deploy a FHIR server on AWS.
\"" > $TEMPLATE_PATH.tmp
mv $TEMPLATE_PATH.tmp $TEMPLATE_PATH
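# Illustrative result (assuming VERSION_CODE=v4.0.0): the template Description becomes
# "(SO0128) - v4.0.0 - Solution - Primary Template - This template creates all the
# necessary resources to deploy FHIR Works on AWS; a framework to deploy a FHIR server
# on AWS." followed by the newline embedded in the multi-line string above.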

# Add API Gateway log settings
DEPLOYMENT_RESOURCE_NAME=$(cat $TEMPLATE_PATH | jq '.Resources | keys[] | select( . | startswith("ApiGatewayDeployment"))') # Name of the deployment resource is dynamic
DEV_STAGE='{
"Type": "AWS::ApiGateway::Stage",
"Properties": {
"StageName": "dev",
"Description": "dev Stage",
"RestApiId": {
"Ref": "ApiGatewayRestApi"
},
"DeploymentId": { "Ref":'$DEPLOYMENT_RESOURCE_NAME'},
"AccessLogSetting": {
"DestinationArn" : {
"Fn::GetAtt": [
"ApiGatewayLogGroup",
"Arn"
]
},
"Format" : "{\"authorizer.claims.sub\":\"$context.authorizer.claims.sub\",\"error.message\":\"$context.error.message\",\"extendedRequestId\":\"$context.extendedRequestId\",\"httpMethod\":\"$context.httpMethod\",\"identity.sourceIp\":\"$context.identity.sourceIp\",\"integration.error\":\"$context.integration.error\",\"integration.integrationStatus\":\"$context.integration.integrationStatus\",\"integration.latency\":\"$context.integration.latency\",\"integration.requestId\":\"$context.integration.requestId\",\"integration.status\":\"$context.integration.status\",\"path\":\"$context.path\",\"requestId\":\"$context.requestId\",\"responseLatency\":\"$context.responseLatency\",\"responseLength\":\"$context.responseLength\",\"stage\":\"$context.stage\",\"status\":\"$context.status\"}"
}
}
}'
cat $TEMPLATE_PATH | jq --argjson devstage "$DEV_STAGE" '.Resources = .Resources + {Dev: $devstage}' > $TEMPLATE_PATH.tmp
mv $TEMPLATE_PATH.tmp $TEMPLATE_PATH
cat $TEMPLATE_PATH | jq '.Resources.ApiGatewayApiKey1.DependsOn = [.Resources.ApiGatewayApiKey1.DependsOn, "Dev"]' > $TEMPLATE_PATH.tmp
mv $TEMPLATE_PATH.tmp $TEMPLATE_PATH
cat $TEMPLATE_PATH | jq '.Resources.ApiGatewayUsagePlan.DependsOn = [.Resources.ApiGatewayUsagePlan.DependsOn, "Dev"]' > $TEMPLATE_PATH.tmp
mv $TEMPLATE_PATH.tmp $TEMPLATE_PATH
cat $TEMPLATE_PATH | jq --argjson deployment "$DEPLOYMENT_RESOURCE_NAME" 'del(.Resources[$deployment].Properties.StageName)' > $TEMPLATE_PATH.tmp
mv $TEMPLATE_PATH.tmp $TEMPLATE_PATH
cat $TEMPLATE_PATH | jq --argjson deployment "$DEPLOYMENT_RESOURCE_NAME" --argjson metadata '{"cfn_nag":{"rules_to_suppress":[{"id": "W68","reason":"Usage plan is associated with stage name dev"},{"id":"W45", "reason":"Updated via custom resource after resource creation"}]}}' '.Resources[$deployment] = .Resources[$deployment] + {Metadata: $metadata}' > $TEMPLATE_PATH.tmp
mv $TEMPLATE_PATH.tmp $TEMPLATE_PATH
cat $TEMPLATE_PATH | jq --argjson metadata '{"cfn_nag":{"rules_to_suppress":[{"id": "W64","reason":"Usage plan is associated with stage name dev"}]}}' '.Resources.Dev = .Resources.Dev + {Metadata: $metadata}' > $TEMPLATE_PATH.tmp
mv $TEMPLATE_PATH.tmp $TEMPLATE_PATH
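# Sanity check (illustrative only, not part of the build):
#   jq '.Resources.Dev.Properties.StageName' "$TEMPLATE_PATH"      # expect "dev"
#   jq '.Resources.ApiGatewayApiKey1.DependsOn' "$TEMPLATE_PATH"   # expect an array ending in "Dev"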

# Update bucket names
cat $TEMPLATE_PATH | jq --argjson mapping $S3_BUCKET_FIND_IN_MAP '.Resources.FhirServerLambdaFunction.Properties.Code.S3Bucket = $mapping' > $TEMPLATE_PATH.tmp
mv $TEMPLATE_PATH.tmp $TEMPLATE_PATH
cat $TEMPLATE_PATH | jq --argjson mapping $S3_BUCKET_FIND_IN_MAP '.Resources.DdbToEsLambdaFunction.Properties.Code.S3Bucket = $mapping' > $TEMPLATE_PATH.tmp
mv $TEMPLATE_PATH.tmp $TEMPLATE_PATH
cat $TEMPLATE_PATH | jq --argjson mapping $S3_BUCKET_FIND_IN_MAP '.Resources.CustomDashresourceDashapigwDashcwDashroleLambdaFunction.Properties.Code.S3Bucket = $mapping' > $TEMPLATE_PATH.tmp
mv $TEMPLATE_PATH.tmp $TEMPLATE_PATH


cat $TEMPLATE_PATH | jq --argjson mapping $S3_BUCKET_FIND_IN_MAP '.Resources.StartExportJobLambdaFunction.Properties.Code.S3Bucket = $mapping' > $TEMPLATE_PATH.tmp
mv $TEMPLATE_PATH.tmp $TEMPLATE_PATH
cat $TEMPLATE_PATH | jq --argjson mapping $S3_BUCKET_FIND_IN_MAP '.Resources.StopExportJobLambdaFunction.Properties.Code.S3Bucket = $mapping' > $TEMPLATE_PATH.tmp
mv $TEMPLATE_PATH.tmp $TEMPLATE_PATH
cat $TEMPLATE_PATH | jq --argjson mapping $S3_BUCKET_FIND_IN_MAP '.Resources.GetJobStatusLambdaFunction.Properties.Code.S3Bucket = $mapping' > $TEMPLATE_PATH.tmp
mv $TEMPLATE_PATH.tmp $TEMPLATE_PATH
cat $TEMPLATE_PATH | jq --argjson mapping $S3_BUCKET_FIND_IN_MAP '.Resources.UpdateStatusLambdaFunction.Properties.Code.S3Bucket = $mapping' > $TEMPLATE_PATH.tmp
mv $TEMPLATE_PATH.tmp $TEMPLATE_PATH
cat $TEMPLATE_PATH | jq --argjson mapping $S3_BUCKET_FIND_IN_MAP '.Resources.UploadGlueScriptsLambdaFunction.Properties.Code.S3Bucket = $mapping' > $TEMPLATE_PATH.tmp
mv $TEMPLATE_PATH.tmp $TEMPLATE_PATH

cat $TEMPLATE_PATH | jq --argjson mapping $S3_BUCKET_FIND_IN_MAP '.Outputs.ServerlessDeploymentBucketName.Value = $mapping' > $TEMPLATE_PATH.tmp
mv $TEMPLATE_PATH.tmp $TEMPLATE_PATH
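# Note: S3_BUCKET_FIND_IN_MAP, used throughout the block above, is defined earlier in the
# script (outside this hunk); its name and the --argjson usage suggest it holds a
# CloudFormation Fn::FindInMap expression that resolves the regional bucket name.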

# Update code keys
# Update code keys and custom_user_agent
cat $TEMPLATE_PATH | jq ".Resources.FhirServerLambdaFunction.Properties.Code.S3Key = \"$FHIR_SERVICE_LAMBDA_CODE_PATH\"" > $TEMPLATE_PATH.tmp
mv $TEMPLATE_PATH.tmp $TEMPLATE_PATH
cat $TEMPLATE_PATH | jq ".Resources.FhirServerLambdaFunction.Properties.Environment.Variables.CUSTOM_USER_AGENT = \"AwsSolution/SO0128/$VERSION_CODE\"" > $TEMPLATE_PATH.tmp
mv $TEMPLATE_PATH.tmp $TEMPLATE_PATH

cat $TEMPLATE_PATH | jq ".Resources.DdbToEsLambdaFunction.Properties.Code.S3Key = \"$FHIR_SERVICE_LAMBDA_CODE_PATH\"" > $TEMPLATE_PATH.tmp
mv $TEMPLATE_PATH.tmp $TEMPLATE_PATH
cat $TEMPLATE_PATH | jq ".Resources.DdbToEsLambdaFunction.Properties.Environment.Variables.CUSTOM_USER_AGENT = \"AwsSolution/SO0128/$VERSION_CODE\"" > $TEMPLATE_PATH.tmp
mv $TEMPLATE_PATH.tmp $TEMPLATE_PATH

cat $TEMPLATE_PATH | jq ".Resources.StartExportJobLambdaFunction.Properties.Code.S3Key = \"$FHIR_SERVICE_LAMBDA_CODE_PATH\"" > $TEMPLATE_PATH.tmp
mv $TEMPLATE_PATH.tmp $TEMPLATE_PATH
cat $TEMPLATE_PATH | jq ".Resources.StartExportJobLambdaFunction.Properties.Environment.Variables.CUSTOM_USER_AGENT = \"AwsSolution/SO0128/$VERSION_CODE\"" > $TEMPLATE_PATH.tmp
mv $TEMPLATE_PATH.tmp $TEMPLATE_PATH

cat $TEMPLATE_PATH | jq ".Resources.StopExportJobLambdaFunction.Properties.Code.S3Key = \"$FHIR_SERVICE_LAMBDA_CODE_PATH\"" > $TEMPLATE_PATH.tmp
mv $TEMPLATE_PATH.tmp $TEMPLATE_PATH
cat $TEMPLATE_PATH | jq ".Resources.StopExportJobLambdaFunction.Properties.Environment.Variables.CUSTOM_USER_AGENT = \"AwsSolution/SO0128/$VERSION_CODE\"" > $TEMPLATE_PATH.tmp
mv $TEMPLATE_PATH.tmp $TEMPLATE_PATH

cat $TEMPLATE_PATH | jq ".Resources.GetJobStatusLambdaFunction.Properties.Code.S3Key = \"$FHIR_SERVICE_LAMBDA_CODE_PATH\"" > $TEMPLATE_PATH.tmp
mv $TEMPLATE_PATH.tmp $TEMPLATE_PATH
cat $TEMPLATE_PATH | jq ".Resources.GetJobStatusLambdaFunction.Properties.Environment.Variables.CUSTOM_USER_AGENT = \"AwsSolution/SO0128/$VERSION_CODE\"" > $TEMPLATE_PATH.tmp
mv $TEMPLATE_PATH.tmp $TEMPLATE_PATH

cat $TEMPLATE_PATH | jq ".Resources.UpdateStatusLambdaFunction.Properties.Code.S3Key = \"$FHIR_SERVICE_LAMBDA_CODE_PATH\"" > $TEMPLATE_PATH.tmp
mv $TEMPLATE_PATH.tmp $TEMPLATE_PATH
cat $TEMPLATE_PATH | jq ".Resources.UpdateStatusLambdaFunction.Properties.Environment.Variables.CUSTOM_USER_AGENT = \"AwsSolution/SO0128/$VERSION_CODE\"" > $TEMPLATE_PATH.tmp
mv $TEMPLATE_PATH.tmp $TEMPLATE_PATH

cat $TEMPLATE_PATH | jq ".Resources.UploadGlueScriptsLambdaFunction.Properties.Code.S3Key = \"$FHIR_SERVICE_LAMBDA_CODE_PATH\"" > $TEMPLATE_PATH.tmp
mv $TEMPLATE_PATH.tmp $TEMPLATE_PATH
cat $TEMPLATE_PATH | jq ".Resources.UploadGlueScriptsLambdaFunction.Properties.Environment.Variables.CUSTOM_USER_AGENT = \"AwsSolution/SO0128/$VERSION_CODE\"" > $TEMPLATE_PATH.tmp
mv $TEMPLATE_PATH.tmp $TEMPLATE_PATH

cat $TEMPLATE_PATH | jq ".Resources.CustomDashresourceDashapigwDashcwDashroleLambdaFunction.Properties.Code.S3Key = \"$CUSTOM_RESOURCE_LAMBDA_CODE_PATH\"" > $TEMPLATE_PATH.tmp
mv $TEMPLATE_PATH.tmp $TEMPLATE_PATH
cat $TEMPLATE_PATH | jq ".Resources.CustomDashresourceDashapigwDashcwDashroleLambdaFunction.Properties.Environment.Variables.CUSTOM_USER_AGENT = \"AwsSolution/SO0128/$VERSION_CODE\"" > $TEMPLATE_PATH.tmp
mv $TEMPLATE_PATH.tmp $TEMPLATE_PATH

cat $TEMPLATE_PATH | jq --argjson metadata '{"cfn_nag":{"rules_to_suppress":[{"id":"W59","reason":"FHIR specification allows for no auth on /metadata"}]}}' '.Resources.ApiGatewayMethodMetadataGet = .Resources.ApiGatewayMethodMetadataGet + {Metadata: $metadata}' > $TEMPLATE_PATH.tmp
mv $TEMPLATE_PATH.tmp $TEMPLATE_PATH
cat $TEMPLATE_PATH | jq --argjson metadata '{"cfn_nag":{"rules_to_suppress":[{"id":"W59","reason":"FHIR specification allows for no auth on /tenant/{tenantId}/metadata"}]}}' '.Resources.ApiGatewayMethodTenantTenantidVarMetadataGet = .Resources.ApiGatewayMethodTenantTenantidVarMetadataGet + {Metadata: $metadata}' > $TEMPLATE_PATH.tmp
mv $TEMPLATE_PATH.tmp $TEMPLATE_PATH
cat $TEMPLATE_PATH | jq --argjson metadata '{"cfn_nag":{"rules_to_suppress":[{"id": "W28","reason":"API key name must be known before sls package is run"}]}}' '.Resources.ApiGatewayApiKey1 = .Resources.ApiGatewayApiKey1 + {Metadata: $metadata}' > $TEMPLATE_PATH.tmp
mv $TEMPLATE_PATH.tmp $TEMPLATE_PATH

@@ -110,8 +198,24 @@ mv $TEMPLATE_PATH.tmp $TEMPLATE_PATH
cat $TEMPLATE_PATH | jq --argjson metadata '{"cfn_nag":{"rules_to_suppress":[{"id":"W89","reason":"We do not want a VPC for DdbToEsLambdaFunction. We are controlling access to the lambda using IAM roles"}, {"id":"W92","reason":"We do not want to define ReservedConcurrentExecutions since we want to allow this function to scale up"}]}}' '.Resources.DdbToEsLambdaFunction = .Resources.DdbToEsLambdaFunction + {Metadata: $metadata}' > $TEMPLATE_PATH.tmp
mv $TEMPLATE_PATH.tmp $TEMPLATE_PATH

API_GATEWAY_DEPLOYMENT_RESOURCE=$(cat $TEMPLATE_PATH | jq '.Resources | keys[] | select( . | startswith("ApiGatewayDeployment"))')
cat $TEMPLATE_PATH | jq -r --argjson resource "$API_GATEWAY_DEPLOYMENT_RESOURCE" --argjson metadata '{"cfn_nag":{"rules_to_suppress":[{"id":"W45", "reason":"Updated via custom resource after resource creation"}]}}' '.Resources[$resource] = .Resources[$resource] + {Metadata: $metadata}' > $TEMPLATE_PATH.tmp
# StartExportJobLambdaFunction Nag exceptions
cat $TEMPLATE_PATH | jq --argjson metadata '{"cfn_nag":{"rules_to_suppress":[{"id":"W89","reason":"We do not want a VPC for StartExportJobLambdaFunction. We are controlling access to the lambda using IAM roles"}, {"id":"W92","reason":"We do not want to define ReservedConcurrentExecutions since we want to allow this function to scale up"}]}}' '.Resources.StartExportJobLambdaFunction = .Resources.StartExportJobLambdaFunction + {Metadata: $metadata}' > $TEMPLATE_PATH.tmp
mv $TEMPLATE_PATH.tmp $TEMPLATE_PATH

# StopExportJobLambdaFunction Nag exceptions
cat $TEMPLATE_PATH | jq --argjson metadata '{"cfn_nag":{"rules_to_suppress":[{"id":"W89","reason":"We do not want a VPC for StopExportJobLambdaFunction. We are controlling access to the lambda using IAM roles"}, {"id":"W92","reason":"We do not want to define ReservedConcurrentExecutions since we want to allow this function to scale up"}]}}' '.Resources.StopExportJobLambdaFunction = .Resources.StopExportJobLambdaFunction + {Metadata: $metadata}' > $TEMPLATE_PATH.tmp
mv $TEMPLATE_PATH.tmp $TEMPLATE_PATH

# GetJobStatusLambdaFunction Nag exceptions
cat $TEMPLATE_PATH | jq --argjson metadata '{"cfn_nag":{"rules_to_suppress":[{"id":"W89","reason":"We do not want a VPC for GetJobStatusLambdaFunction. We are controlling access to the lambda using IAM roles"}, {"id":"W92","reason":"We do not want to define ReservedConcurrentExecutions since we want to allow this function to scale up"}]}}' '.Resources.GetJobStatusLambdaFunction = .Resources.GetJobStatusLambdaFunction + {Metadata: $metadata}' > $TEMPLATE_PATH.tmp
mv $TEMPLATE_PATH.tmp $TEMPLATE_PATH

# UpdateStatusLambdaFunction Nag exceptions
cat $TEMPLATE_PATH | jq --argjson metadata '{"cfn_nag":{"rules_to_suppress":[{"id":"W89","reason":"We do not want a VPC for UpdateStatusLambdaFunction. We are controlling access to the lambda using IAM roles"}, {"id":"W92","reason":"We do not want to define ReservedConcurrentExecutions since we want to allow this function to scale up"}]}}' '.Resources.UpdateStatusLambdaFunction = .Resources.UpdateStatusLambdaFunction + {Metadata: $metadata}' > $TEMPLATE_PATH.tmp
mv $TEMPLATE_PATH.tmp $TEMPLATE_PATH

# UploadGlueScriptsLambdaFunction Nag exceptions
cat $TEMPLATE_PATH | jq --argjson metadata '{"cfn_nag":{"rules_to_suppress":[{"id":"W89","reason":"We do not want a VPC for UploadGlueScriptsLambdaFunction. We are controlling access to the lambda using IAM roles"}, {"id":"W92","reason":"We do not want to define ReservedConcurrentExecutions since we want to allow this function to scale up"}]}}' '.Resources.UploadGlueScriptsLambdaFunction = .Resources.UploadGlueScriptsLambdaFunction + {Metadata: $metadata}' > $TEMPLATE_PATH.tmp
mv $TEMPLATE_PATH.tmp $TEMPLATE_PATH

# CustomDashresourceDashapigwDashcwDashroleLambdaFunction requires permission to write CloudWatch Logs
(The remaining changed files are not rendered.)
