doc: fix doc build #517

Merged 2 commits on Nov 17, 2023
19 changes: 10 additions & 9 deletions docs/api-types/target-group-policy.md
@@ -8,23 +8,24 @@ health check configurations of those backend resources.

When attaching a policy to a resource, the following restrictions apply:

* A policy can be only attached to `Service` resources.
* The attached resource can only be `backendRef` of `HTTPRoute` and `GRPCRoute`.
* The attached resource should exist in the same namespace as the policy resource.
- A policy can be only attached to `Service` resources.
- The attached resource can only be `backendRef` of `HTTPRoute` and `GRPCRoute`.
- The attached resource should exist in the same namespace as the policy resource.

The policy will not take effect if:

* The resource does not exist
* The resource is not referenced by any route
* The resource is referenced by a route of unsupported type
- The resource does not exist
- The resource is not referenced by any route
- The resource is referenced by a route of unsupported type

These restrictions are not enforced; for example, users may create a policy that targets a service that has not been created yet.
However, the policy will not take effect unless the target is valid.

**Limitations and Considerations**
* Attaching TargetGroupPolicy to a resource that is already referenced by a route will result in a replacement
of VPC Lattice TargetGroup resource, except for health check updates.
* Removing TargetGroupPolicy of a resource will roll back protocol configuration to default setting. (HTTP1/HTTP plaintext)

- Attaching TargetGroupPolicy to a resource that is already referenced by a route will result in a replacement
of VPC Lattice TargetGroup resource, except for health check updates.
- Removing TargetGroupPolicy of a resource will roll back protocol configuration to default setting. (HTTP1/HTTP plaintext)
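
For orientation, the following is a minimal sketch of a TargetGroupPolicy that targets a `Service`. The field names and API version here are assumptions based on the controller's CRD; see the Example Configuration section below for the authoritative example.

```yaml
apiVersion: application-networking.k8s.aws/v1alpha1   # assumed API version
kind: TargetGroupPolicy
metadata:
  name: my-policy        # must be in the same namespace as the target Service
spec:
  targetRef:
    group: ""
    kind: Service        # only Service targets are supported
    name: my-service     # hypothetical Service name
  protocol: HTTPS        # overrides the default HTTP1/HTTP plaintext setting
  protocolVersion: HTTP1
  healthCheck:
    enabled: true
    path: /health        # hypothetical health check path
```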

## Example Configuration

3 changes: 2 additions & 1 deletion docs/concepts/index.md
@@ -1,3 +1,4 @@
# Configure AWS Gateway API Controller

Refer to this section to further configure your use of the AWS Gateway API Controller.
The features here build on the examples shown in [Get Started Using the AWS Gateway API Controller](../getstarted.md).
The features here build on the examples shown in [Get Started Using the AWS Gateway API Controller](../guides/getstarted.md).
36 changes: 18 additions & 18 deletions docs/concepts/overview.md
Original file line number Diff line number Diff line change
@@ -4,10 +4,10 @@ For medium and large-scale customers, applications can often spread across multi
For example, information pertaining to a company’s authentication, billing, and inventory may each be served by services running on different VPCs in AWS.
Someone wanting to run an application that is spread out in this way might find themselves having to work with multiple ways to configure:

* Authentication and authorization
* Observability
* Service discovery
* Network connectivity and traffic routing
- Authentication and authorization
- Observability
- Service discovery
- Network connectivity and traffic routing

This is not a new problem.
A common approach to interconnecting services that span multiple VPCs is to use service meshes. But these require sidecars, which can introduce scaling problems and present their own management challenges, such as dealing with control plane and data plane at scale.
@@ -20,38 +20,38 @@ The goal of VPC Lattice is to provide a way to have a single, over-arching servi
You should also have consistent ways of working with assets across your VPCs, even if those assets include different combinations of instances, clusters, containers, and serverless.
The components making up that view include:

* Service Directory: This is an account-level directory for gathering your services in one place.
This can provide a view from the VPC Lattice section of the AWS console into all the services you own, as well as services that are shared with you.
A service might direct traffic to a particular service type (such as HTTP) and port (such as port 80).
However, using different rules, a request for the service could be sent to different targets such as a Kubernetes pod or a Lambda function, based on path or query string parameter.
- Service Directory: This is an account-level directory for gathering your services in one place.
This can provide a view from the VPC Lattice section of the AWS console into all the services you own, as well as services that are shared with you.
A service might direct traffic to a particular service type (such as HTTP) and port (such as port 80).
However, using different rules, a request for the service could be sent to different targets such as a Kubernetes pod or a Lambda function, based on path or query string parameter.

* Service Network: Because applications might span multiple VPCs and accounts, there is a need to create networks that span those items.
- Service Network: Because applications might span multiple VPCs and accounts, there is a need to create networks that span those items.
These networks let you register services to run across accounts and VPCs.
You can create common authorization rules to simplify connectivity.

* Service Policies: You can build service policies to configure observability, access, and traffic management across any service network or gateway.
- Service Policies: You can build service policies to configure observability, access, and traffic management across any service network or gateway.
You configure rules for handling traffic and for authorizing access.
For now, you can assign IAM roles to allow certain requests.
These are similar to S3 or IAM resource policies.
Overall, this provides a common way to apply access rules at the service or service network levels.

* Service Gateway: This feature is not yet implemented.
- Service Gateway: This feature is not yet implemented.
It is meant to centralize management of ingress and egress gateways.
The Service Gateway will also let you manage access to external dependencies and clients using a centrally managed VPC.

If all goes well, you should be able to achieve some of the following goals:

* Kubernetes multi-cluster connectivity: Say that you have multiple clusters across multiple VPCs.
- Kubernetes multi-cluster connectivity: Say that you have multiple clusters across multiple VPCs.
After configuring your services with the Kubernetes Gateway API, you can facilitate communications between services on those clusters without dealing with the underlying infrastructure.
VPC Lattice handles a lot of the details for you without needing things like sidecars.
* Serverless access: VPC Lattice allows access to serverless features, as well as Kubernetes cluster features.
- Serverless access: VPC Lattice allows access to serverless features, as well as Kubernetes cluster features.
This gives you a way to have a consistent interface to multiple types of platforms.

With VPC Lattice you can also avoid some of these common problems:

* Overlapping IP addresses: Even with well-managed IP addresses, overlapping address use can occur by mistake or when organizations or companies merge together.
- Overlapping IP addresses: Even with well-managed IP addresses, overlapping address use can occur by mistake or when organizations or companies merge together.
IP address conflicts can also occur if you wanted to manage resources across multiple Kubernetes clusters.
* Sidecar management: Changes to sidecars might require those sidecars to be reconfigured or rebooted.
- Sidecar management: Changes to sidecars might require those sidecars to be reconfigured or rebooted.
While this might not be a big issue for a handful of sidecars, it can be disruptive if you have thousands of pods, each with its own sidecar.

## Relationship between VPC Lattice and Kubernetes
@@ -64,8 +64,8 @@ The following figure illustrates how VPC Lattice objects connect to [Kubernetes
As shown in the figure, there are different personas associated with different levels of control in VPC Lattice.
Notice that the Kubernetes Gateway API syntax is used to create the gateway, HTTPRoute and services, but Kubernetes gets the details of those items from VPC Lattice:

* Infrastructure provider: Creates the Kubernetes GatewayClass to identify VPC Lattice as the GatewayClass.
* Cluster operator: Creates the Kubernetes Gateway, which gets information from VPC Lattice related to the Service Gateway and Service Networks, as well as their related Service Policies.
* Application developer: Creates HTTPRoute objects that point to Kubernetes services, which in turn are directed to particular pods, in this case.
- Infrastructure provider: Creates the Kubernetes GatewayClass to identify VPC Lattice as the GatewayClass.
- Cluster operator: Creates the Kubernetes Gateway, which gets information from VPC Lattice related to the Service Gateway and Service Networks, as well as their related Service Policies.
- Application developer: Creates HTTPRoute objects that point to Kubernetes services, which in turn are directed to particular pods, in this case.
This is all done by checking the related VPC Lattice Services (and related policies), Target Groups, and Targets.
Keep in mind that Target Groups v1 and v2 can be on different clusters in different VPCs.
10 changes: 9 additions & 1 deletion docs/guides/deploy.md
@@ -20,6 +20,7 @@ EKS is a simple, recommended way of preparing a cluster for running services wit
```bash
eksctl create cluster --name $CLUSTER_NAME --region $AWS_REGION
```

1. Configure the security group to receive traffic from the VPC Lattice network. You must set up security groups so that all Pods communicating with VPC Lattice allow traffic from the VPC Lattice managed prefix lists. See [Control traffic to resources using security groups](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_SecurityGroups.html) for details. Lattice has both IPv4 and IPv6 prefix lists available.
```bash
CLUSTER_SG=$(aws eks describe-cluster --name $CLUSTER_NAME --output json| jq -r '.cluster.resourcesVpcConfig.clusterSecurityGroupId')
@@ -33,6 +34,7 @@ EKS is a simple, recommended way of preparing a cluster for running services wit
eksctl utils associate-iam-oidc-provider --cluster $CLUSTER_NAME --approve --region $AWS_REGION
```
1. Create a policy (`recommended-inline-policy.json`) in IAM with the following content that can invoke the gateway API and copy the policy arn for later use:

```bash
{
"Version": "2012-10-17",
@@ -62,6 +64,7 @@ EKS is a simple, recommended way of preparing a cluster for running services wit
--policy-name VPCLatticeControllerIAMPolicy \
--policy-document file://examples/recommended-inline-policy.json
```

1. Create the `aws-application-networking-system` namespace:
```bash
kubectl apply -f examples/deploy-namesystem.yaml
@@ -71,6 +74,7 @@ EKS is a simple, recommended way of preparing a cluster for running services wit
export VPCLatticeControllerIAMPolicyArn=$(aws iam list-policies --query 'Policies[?PolicyName==`VPCLatticeControllerIAMPolicy`].Arn' --output text)
```
1. Create an iamserviceaccount for pod level permission:

```bash
eksctl create iamserviceaccount \
--cluster=$CLUSTER_NAME \
@@ -128,10 +132,13 @@ Alternatively, you can manually provide configuration variables when installing
## Controller Installation

1. Run either `kubectl` or `helm` to deploy the controller. Check [Environment Variables](../concepts/environment.md) for detailed explanation of each configuration option.

```bash
kubectl apply -f examples/deploy-v0.0.18.yaml
```

or

```bash
# login to ECR
aws ecr-public get-login-password --region us-east-1 | helm registry login --username AWS --password-stdin public.ecr.aws
@@ -153,4 +160,5 @@ Alternatively, you can manually provide configuration variables when installin
```bash
kubectl apply -f examples/gatewayclass.yaml
```
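
The contents of `examples/gatewayclass.yaml` are not shown in this diff. As a rough sketch, a GatewayClass that binds the cluster to this controller is expected to look like the following; the `controllerName` value is an assumption, so verify it against the file in the repository:

```yaml
apiVersion: gateway.networking.k8s.io/v1beta1
kind: GatewayClass
metadata:
  name: amazon-vpc-lattice
spec:
  # Assumed controller name; check examples/gatewayclass.yaml
  controllerName: application-networking.k8s.aws/gateway-api-controller
```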
1. You are all set! Check our [Getting Started Guide](getstarted.md) to try setting up service-to-service communication.

88 changes: 51 additions & 37 deletions docs/guides/getstarted.md
@@ -8,7 +8,6 @@ Both clusters are created using `eksctl`, with both clusters created from the sa

Using these examples as a foundation, see the [Configuration](../concepts/index.md) section for ways to further configure service-to-service communications.


**NOTE**: You can get the yaml files used on this page by cloning the [AWS Gateway API Controller](https://github.com/aws/aws-application-networking-k8s) repository.

## Set up single-cluster/VPC service-to-service communications
@@ -25,12 +24,14 @@ This example creates a single cluster in a single VPC, then configures two route
When `DEFAULT_SERVICE_NETWORK` environment variable is specified, the controller will automatically configure a service network for you.
For example:
```bash

helm upgrade gateway-api-controller \
oci://281979210680.dkr.ecr.us-west-2.amazonaws.com/aws-gateway-controller-chart \
--reuse-values \
--set=defaultServiceNetwork=my-hotel
```
Alternatively, you can use AWS CLI to manually create a VPC Lattice service network, with the name `my-hotel`:

```bash
aws vpc-lattice create-service-network --name my-hotel # grab service network ID
aws vpc-lattice create-service-network-vpc-association --service-network-identifier <service-network-id> --vpc-identifier <k8s-cluster-vpc-id>
@@ -48,14 +49,18 @@ This example creates a single cluster in a single VPC, then configures two route
]
}
```

1. Create the Kubernetes Gateway `my-hotel`:

```bash
kubectl apply -f examples/my-hotel-gateway.yaml
```
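
   The manifest applied above is not included in this diff. As a hedged sketch, a Gateway bound to the VPC Lattice GatewayClass is expected to look roughly like this (the listener name and port are assumptions):

   ```yaml
   apiVersion: gateway.networking.k8s.io/v1beta1
   kind: Gateway
   metadata:
     name: my-hotel
   spec:
     gatewayClassName: amazon-vpc-lattice   # GatewayClass created during controller installation
     listeners:
       - name: http       # assumed listener name
         protocol: HTTP
         port: 80         # assumed port
   ```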

   Verify that the `my-hotel` Gateway is created with `PROGRAMMED` status equal to `True`:

```bash
   kubectl get gateway

NAME CLASS ADDRESS PROGRAMMED AGE
my-hotel amazon-vpc-lattice True 7d12h
```
@@ -72,37 +77,40 @@ This example creates a single cluster in a single VPC, then configures two route
kubectl apply -f examples/inventory-route.yaml
```
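
   The route manifest itself is not shown in the diff. As a minimal sketch, `examples/inventory-route.yaml` is expected to contain something like the following, with the backend service name taken from the output later in this guide and the port assumed:

   ```yaml
   apiVersion: gateway.networking.k8s.io/v1beta1
   kind: HTTPRoute
   metadata:
     name: inventory
   spec:
     parentRefs:
       - name: my-hotel        # attaches the route to the my-hotel Gateway
     rules:
       - backendRefs:
           - name: inventory-ver1
             kind: Service
             port: 80          # assumed port
   ```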
1. Find out the HTTPRoute's DNS name from the HTTPRoute status:

```bash
kubectl get httproute

NAME HOSTNAMES AGE
inventory 51s
rates 6m11s
```

1. Check the VPC Lattice generated DNS addresses for the HTTPRoutes `inventory` and `rates`:
```bash
kubectl get httproute inventory -o yaml

apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
annotations:
application-networking.k8s.aws/lattice-assigned-domain-name: inventory-default-02fb06f1acdeb5b55.7d67968.vpc-lattice-svcs.us-west-2.on.aws
...
```

```bash
kubectl get httproute rates -o yaml

apiVersion: v1
items:
- apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
annotations:
application-networking.k8s.aws/lattice-assigned-domain-name: rates-default-0d38139624f20d213.7d67968.vpc-lattice-svcs.us-west-2.on.aws
...
```

```bash
kubectl get httproute inventory -o yaml

apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
annotations:
application-networking.k8s.aws/lattice-assigned-domain-name: inventory-default-02fb06f1acdeb5b55.7d67968.vpc-lattice-svcs.us-west-2.on.aws
...
```

```bash
kubectl get httproute rates -o yaml

apiVersion: v1
items:
- apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
annotations:
application-networking.k8s.aws/lattice-assigned-domain-name: rates-default-0d38139624f20d213.7d67968.vpc-lattice-svcs.us-west-2.on.aws
...
```

1. If the previous step returns the expected response, store the VPC Lattice assigned DNS names in variables.

@@ -112,18 +120,20 @@ This example creates a single cluster in a single VPC, then configures two route
```

Confirm that the URLs are stored correctly:

```bash
echo $ratesFQDN $inventoryFQDN
rates-default-034e0056410499722.7d67968.vpc-lattice-svcs.us-west-2.on.aws inventory-default-0c54a5e5a426f92c2.7d67968.vpc-lattice-svcs.us-west-2.on.aws
```

#### Verify service-to-service communications

1. Check connectivity from the `inventory-ver1` service to `parking` and `review` services:

```bash
kubectl exec deploy/inventory-ver1 -- curl $ratesFQDN/parking $ratesFQDN/review
```

```
Requsting to Pod(parking-8548d7f98d-57whb): parking handler pod
Requsting to Pod(review-6df847686d-dhzwc): review handler pod
@@ -136,29 +146,29 @@ This example creates a single cluster in a single VPC, then configures two route
```
Requsting to Pod(inventory-ver1-99d48958c-whr2q): Inventory-ver1 handler pod
```
   You can now confirm that service-to-service communication within one cluster is working as expected.

## Set up multi-cluster/multi-VPC service-to-service communications

This section builds on the previous section by migrating a Kubernetes service (HTTPRoute inventory) from one Kubernetes cluster to a different Kubernetes cluster.
For example, it will:

* Migrate the Kubernetes inventory service from a Kubernetes v1.21 cluster to a Kubernetes v1.23 cluster in a different VPC.
* Scale up the Kubernetes inventory service to run it in another cluster (and another VPC) in addition to the current cluster.
- Migrate the Kubernetes inventory service from a Kubernetes v1.21 cluster to a Kubernetes v1.23 cluster in a different VPC.
- Scale up the Kubernetes inventory service to run it in another cluster (and another VPC) in addition to the current cluster.

The following figure illustrates this:

![Multiple clusters/VPCs service-to-service communications](../images/example2.png)

### Steps

**Set up `inventory-ver2` service and serviceExport in the second cluster**

1. Create a second Kubernetes cluster `cluster2` (using the same instructions used to create the first).

1. Ensure you're using the second cluster's `kubectl` context.
```bash
   kubectl config get-contexts
```
   If your context is set to the first cluster, switch it to use the second cluster:
```bash
@@ -169,10 +179,11 @@ The following figure illustrates this:
kubectl apply -f examples/inventory-ver2.yaml
```
1. Export this Kubernetes inventory-ver2 from the second cluster, so that it can be referenced by HTTPRoute in the first cluster:

```bash
kubectl apply -f examples/inventory-ver2-export.yaml
```
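
   The export manifest is not shown here. A minimal sketch of a ServiceExport for `inventory-ver2` might look like the following; the API version and the federation annotation are assumptions based on the controller's multi-cluster support:

   ```yaml
   apiVersion: application-networking.k8s.aws/v1alpha1   # assumed API version
   kind: ServiceExport
   metadata:
     name: inventory-ver2      # must match the name of the Service being exported
     annotations:
       # Assumed annotation marking the export for VPC Lattice federation
       application-networking.k8s.aws/federation: "amazon-vpc-lattice"
   ```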

**Switch back to the first cluster**

1. Switch context back to the first cluster
@@ -188,9 +199,10 @@ The following figure illustrates this:
kubectl apply -f examples/inventory-route-bluegreen.yaml
```
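
   The blue/green route manifest is not included in the diff. A hedged sketch of a weighted HTTPRoute that splits traffic between the local `inventory-ver1` Service and the exported `inventory-ver2` might look like this; the ServiceImport kind and the 10/90 weights are assumptions:

   ```yaml
   apiVersion: gateway.networking.k8s.io/v1beta1
   kind: HTTPRoute
   metadata:
     name: inventory
   spec:
     parentRefs:
       - name: my-hotel
     rules:
       - backendRefs:
           - name: inventory-ver1
             kind: Service         # backend running in the first cluster
             port: 80              # assumed port
             weight: 10            # assumed weight
           - name: inventory-ver2
             kind: ServiceImport   # assumed kind for the service exported from cluster2
             weight: 90            # assumed weight
   ```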
1. Check the service-to-service connectivity from `parking`(in cluster1) to `inventory-ver1`(in cluster1) and `inventory-ver2`(in cluster2):

```bash
kubectl exec deploy/parking -- sh -c 'for ((i=1; i<=30; i++)); do curl "$0"; done' "$inventoryFQDN"

Requsting to Pod(inventory-ver2-6dc74b45d8-rlnlt): Inventory-ver2 handler pod <----> in 2nd cluster
Requsting to Pod(inventory-ver2-6dc74b45d8-rlnlt): Inventory-ver2 handler pod
Requsting to Pod(inventory-ver2-6dc74b45d8-rlnlt): Inventory-ver2 handler pod
Expand All @@ -201,4 +213,6 @@ The following figure illustrates this:
Requsting to Pod(inventory-ver2-6dc74b45d8-95rsr): Inventory-ver2 handler pod
Requsting to Pod(inventory-ver1-74fc59977-wg8br): Inventory-ver1 handler pod....
```

You can see that the traffic is distributed between *inventory-ver1* and *inventory-ver2* as expected.
