Update README.md #1

Open
wants to merge 26 commits into base: final
26 commits
514f590
Update README.md
dpstart Dec 13, 2019
1ff1094
Update custom_hpa.yaml
dpstart Dec 13, 2019
e4f5060
Delete cmd.txt
dpstart Dec 13, 2019
24a84fd
Update README.md
dpstart Dec 13, 2019
6f27b0d
Delete .DS_Store
dpstart Jan 16, 2020
fd5da23
Update README.md
dpstart Jan 16, 2020
9d1cbb9
Update servicemonitor.yaml
dpstart Jan 16, 2020
5629537
Update README.md
dpstart Jan 16, 2020
06fd463
Improve README
dpstart Jan 22, 2020
f8d9cae
Comment YAML files
dpstart Jan 22, 2020
73f5ce9
Minor improvements to docs
dpstart Jan 22, 2020
d423cc7
Clean up repo + update gitignore
dpstart Feb 10, 2020
f70602c
Merge branch 'master' of https://github.com/netgroup-polito/VPNaaS
dpstart Feb 10, 2020
6f92c27
Add initContainer to set ip forwarding in pod
dpstart Feb 10, 2020
ab532bb
Change default ports, avoid default tls/https port
dpstart Feb 10, 2020
f44a8d1
Improve documentation
dpstart Feb 11, 2020
ce96207
Improve documentation + add architectural scheme
dpstart Feb 11, 2020
0b48b4f
Added more detailed info aout certificate
dpstart Feb 11, 2020
f7e50ee
Add loadbalancer info
dpstart Feb 11, 2020
36ffb80
Improve documentation + add architectural scheme
dpstart Feb 11, 2020
8a08616
Added more subsection + general view on installation
dpstart Feb 11, 2020
533c221
Add compatibility with helm v3
dpstart Feb 14, 2020
1700b6a
Update Notes to add -c option when running commands in container.
dpstart Feb 14, 2020
cff6d93
Add note about setting ns in service monitor
dpstart Feb 14, 2020
e10ba30
Add Adapter paragraph
dpstart Feb 14, 2020
6b706bd
VPNaaS final presentation
frisso Mar 27, 2020
4 changes: 3 additions & 1 deletion .gitignore
@@ -1,3 +1,5 @@
.DS_Store
openvpn-chart/.DS_Store
*.ovpn
cmd.txt
servicemonitor_dcota.yaml
openvpn-chart/.DS_Store/
169 changes: 166 additions & 3 deletions README.md
@@ -1,5 +1,168 @@
# VPNaaS
# HPA with Custom Metrics: the VPN-as-a-service use case.

## HPA with Custom Metrics: the VPN-as-a-service use case.
Provision an OpenVPN installation on k8s that can autoscale against custom metrics.

Provision an OpenVPN installation on k8s that can autoscale against custom metric.
## Architecture

This project contains a full OpenVPN deployment for k8s, which is coupled with an OpenVPN metrics exporter and exposed through a LoadBalancer service.

The exporter harvests metrics from the OpenVPN instance and exposes them for Prometheus to scrape (note that an instance of the [Prometheus Operator](https://github.com/coreos/prometheus-operator) needs to be running on the cluster).

These metrics are then fed to the [Prometheus Adapter](https://github.com/helm/charts/tree/master/stable/prometheus-adapter), which implements the k8s [Custom Metrics API](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#support-for-metrics-apis). The Adapter is responsible for exposing the metrics through the k8s API, so that they can be queried by an HPA instance for autoscaling.

A high-level view of the components and their interactions is shown in the picture below.

![](img/scheme.png)


## Prerequisites

Everything was tested with:

* [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/) version v1.12.0+.
* Kubernetes v1.6+ cluster.
* [helm](https://helm.sh/docs/intro/install/) v2.16+ and v3.

## Installation

We will first focus on provisioning the OpenVPN installation on top of Kubernetes. Once this is done, we will add the components that allow us to expose the metrics through Prometheus.
As we've seen, these metrics are then processed by the adapter and exposed through the k8s metrics API.

After that, we can deploy HPA instances that autoscale against these new metrics.

### OpenVPN

The Helm OpenVPN chart is derived from the [official one](https://github.com/helm/charts/tree/master/stable/openvpn). This fork includes new shared volumes that are used to share OpenVPN metrics, and a sidecar container that exports these metrics for Prometheus.

To install from the chart directory with Helm v2, run:

```bash
helm install --name <release_name> --tiller-namespace <tiller_namespace> .
```

As an example, to install the chart in the `johndoe` namespace, you might do:

```bash
helm install --name openvpn-v01 --tiller-namespace johndoe .
```
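
With Helm v3 there is no Tiller, so the flags change slightly; a rough equivalent (assuming the `johndoe` namespace already exists) is:

```bash
helm install openvpn-v01 . --namespace johndoe
```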


The metrics exporter, taken from [this project](https://github.com/kumina/openvpn_exporter), is deployed as a sidecar container in the OpenVPN pod and exposes metrics on port 9176. The following snippet shows the sidecar definition: the exporter image is used, and its arguments point it at the shared OpenVPN status log.

```YAML
...

containers:
  - name: exporter
    image: kumina/openvpn-exporter
    command: ["/bin/openvpn_exporter"]
    args: ["-openvpn.status_paths", "/etc/openvpn-exporter/openvpn/openvpn-status.log"]
    volumeMounts:
      - name: openvpn-status
        mountPath: /etc/openvpn-exporter/openvpn

...
```

Docs for the exporter are available [here](https://github.com/kumina/openvpn_exporter).
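
The `openvpn-status` volume mounted above is shared with the OpenVPN container, which writes its status log there. A minimal sketch of how such a shared volume can be declared in the pod spec is shown below (the `emptyDir` type is an assumption for illustration; the chart may define the volume differently):

```YAML
volumes:
  - name: openvpn-status
    emptyDir: {}
```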

This chart also contains some additional modifications:
* `status-version 2` option is added in the OpenVPN configuration file for compatibility with the exporter.
* An option to enable IP forwarding in the container is added. If set, the deployment spawns a privileged initContainer that runs the required `sysctl` command at startup.
* The default ports are changed to avoid port 443, which is often already in use. All of these options can easily be changed in the [values.yaml](https://github.com/netgroup-polito/VPNaaS/blob/master/openvpn-chart/values.yaml) configuration file (an excerpt is shown below).
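
For reference, the relevant settings in [values.yaml](https://github.com/netgroup-polito/VPNaaS/blob/master/openvpn-chart/values.yaml) look roughly like this (excerpt):

```YAML
service:
  type: LoadBalancer
  externalPort: 9914
  internalPort: 9914

# Add privileged init container to enable IPv4 forwarding
ipForwardInitContainer: true
```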



After the chart is deployed and the pod is ready, an OpenVPN certificate for a new user can be generated.
The certificate allows a user to connect to the VPN with any available OpenVPN client. It bundles the client configuration options, the IP of the VPN gateway, and the client's private key and X.509 certificate.

Certificates can be generated using the following commands:

```bash
POD_NAME=$(kubectl get pods --namespace <namespace> -l "app=openvpn,release=<your_release>" -o jsonpath='{ .items[0].metadata.name }')
SERVICE_NAME=$(kubectl get svc --namespace <namespace> -l "app=openvpn,release=<your_release>" -o jsonpath='{ .items[0].metadata.name }')
SERVICE_IP=$(kubectl get svc --namespace <namespace> "$SERVICE_NAME" -o go-template='{{ range $k, $v := (index .status.loadBalancer.ingress 0)}}{{ $v }}{{end}}')
KEY_NAME=<key_name>
kubectl --namespace <namespace> exec -it "$POD_NAME" -c openvpn /etc/openvpn/setup/newClientCert.sh "$KEY_NAME" "$SERVICE_IP"
kubectl --namespace <namespace> exec -it "$POD_NAME" -c openvpn cat "/etc/openvpn/certs/pki/$KEY_NAME.ovpn" > "$KEY_NAME.ovpn"
```
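
The resulting `$KEY_NAME.ovpn` profile can be imported into any OpenVPN client (for example Tunnelblick). From a Linux shell, a quick connection test might look like this:

```bash
sudo openvpn --config "$KEY_NAME.ovpn"
```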

Here, `KEY_NAME` should be a unique identifier of the VPN user, such as an email address or university ID number. This value appears in the *Subject* field of the client certificate and can be used to revoke it.


Client certificates can be revoked as follows:

```bash
KEY_NAME=<key_name>
POD_NAME=$(kubectl get pods -n <namespace> -l "app=openvpn,release=<your_release>" -o jsonpath='{.items[0].metadata.name}')
kubectl -n <namespace> exec -it "$POD_NAME" -c openvpn /etc/openvpn/setup/revokeClientCert.sh $KEY_NAME
```

To take a look at the metrics, you can use port-forwarding.

Run `kubectl port-forward <pod_name> 9176:9176` and then connect to [http://localhost:9176/metrics](http://localhost:9176/metrics).

You should now be able to see some Prometheus metrics of your OpenVPN instance:

```
# HELP openvpn_openvpn_server_connected_clients Number Of Connected Clients
# TYPE openvpn_openvpn_server_connected_clients gauge
openvpn_openvpn_server_connected_clients{status_path="/etc/openvpn-exporter/openvpn/openvpn-status.log"} 1
# HELP openvpn_server_client_received_bytes_total Amount of data received over a connection on the VPN server, in bytes.
# TYPE openvpn_server_client_received_bytes_total counter
openvpn_server_client_received_bytes_total{common_name="CC2",connection_time="1576248156",real_address="10.244.0.0:25878",status_path="/etc/openvpn-exporter/openvpn/openvpn-status.log",username="UNDEF",virtual_address="10.240.0.6"} 17762
# HELP openvpn_server_client_sent_bytes_total Amount of data sent over a connection on the VPN server, in bytes.
# TYPE openvpn_server_client_sent_bytes_total counter
openvpn_server_client_sent_bytes_total{common_name="CC2",connection_time="1576248156",real_address="10.244.0.0:25878",status_path="/etc/openvpn-exporter/openvpn/openvpn-status.log",username="UNDEF",virtual_address="10.240.0.6"} 19047
```

At this point, you should have a working OpenVPN installation that runs on Kubernetes. The following steps expose its metrics through the [Custom Metrics API](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#support-for-metrics-apis), so that an HPA can autoscale against them.

### Prometheus Adapter

The Prometheus Adapter has to be installed in the cluster in order to implement the Custom Metrics API using Prometheus data. The adapter, along with installation instructions and walkthroughs, can be found [here](https://github.com/DirectXMan12/k8s-prometheus-adapter).
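
Once the adapter is running and picking up the exporter's series, the new metrics become visible through the aggregated API. A quick sanity check might look like this (the exact path depends on your adapter configuration and namespace):

```bash
kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1/namespaces/<namespace>/pods/*/openvpn_openvpn_server_connected_clients"
```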

### Prometheus Service Monitor

We first need to expose the exporter through a service so that the Prometheus Operator can reach it: run `kubectl apply -f exporter_service.yaml`. This is a very simple service that sits in front of the OpenVPN pods and defines the port through which the metrics are exposed.
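
A rough sketch of that service is shown below; the names, labels, and port name are illustrative, and the actual definition lives in `exporter_service.yaml`:

```YAML
apiVersion: v1
kind: Service
metadata:
  name: openvpn-exporter
  labels:
    app: openvpn
spec:
  selector:
    app: openvpn
  ports:
    - name: metrics
      port: 9176
      targetPort: 9176
```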

Running `kubectl apply -f servicemonitor.yaml` will now deploy the service monitor that is used by Prometheus to harvest our metrics. Remember to set the appropriate namespace in [servicemonitor.yaml](https://github.com/netgroup-polito/VPNaaS/blob/master/servicemonitor.yaml).
A service monitor is a Prometheus Operator custom resource which declaratively specifies how groups of services should be monitored.
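A minimal sketch of such a resource is shown below (the endpoint port name and selector labels are assumptions; see [servicemonitor.yaml](https://github.com/netgroup-polito/VPNaaS/blob/master/servicemonitor.yaml) for the actual manifest):

```YAML
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: openvpn
  namespace: default
spec:
  endpoints:
    - interval: 15s
      port: metrics
  selector:
    matchLabels:
      app: openvpn
```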

### HPA

Once everything is up and running, we are now ready to autoscale against our custom metrics.
The following YAML snippet shows an HPA that scales against the number of users currently connected to the VPN:

```YAML
kind: HorizontalPodAutoscaler
apiVersion: autoscaling/v2beta1
metadata:
  name: openvpn
spec:
  scaleTargetRef:
    # point the HPA at the OpenVPN deployment created above
    apiVersion: apps/v1
    kind: Deployment
    name: <your_openvpn_deployment>
  # autoscale between 1 and 10 replicas
  minReplicas: 1
  maxReplicas: 10
  metrics:
    # use a "Pods" metric, which takes the average of the
    # given metric across all pods controlled by the autoscaling target
    - type: Pods
      pods:
        metricName: openvpn_openvpn_server_connected_clients
        targetAverageValue: 3
```

Here, `<your_openvpn_deployment>` should be replaced with the name of your OpenVPN deployment.
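
The repository ships this manifest as `custom_hpa.yaml`; after setting the deployment name, you can apply it and watch the autoscaler react as clients connect (for example):

```bash
kubectl apply -f custom_hpa.yaml -n <namespace>
kubectl get hpa openvpn -n <namespace> -w
```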

## Troubleshooting


### Internet traffic through VPN

You can avoid routing all traffic through the VPN by setting `redirectGateway: false`. The `redirect-gateway` option changes the client routing table so that all traffic is directed through the server.
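
For instance, the value can be changed in `values.yaml` or at upgrade time (a sketch, assuming a Helm v3 release named `openvpn-v01` in the `johndoe` namespace):

```bash
helm upgrade openvpn-v01 . --namespace johndoe --set redirectGateway=false
```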

For a detailed discussion on OpenVPN routing, you can look at [this guide](https://community.openvpn.net/openvpn/wiki/BridgingAndRouting).

## TODO

* Manage certificate persistence across replicas.
Binary file added VPNaaS-final-presentation.pptx
Binary file not shown.
39 changes: 0 additions & 39 deletions cmd.txt

This file was deleted.

7 changes: 3 additions & 4 deletions custom_hpa.yaml
@@ -1,3 +1,4 @@
# An HPA instance that works on the openvpn deployment and scales against custom OpenVPN metrics.
kind: HorizontalPodAutoscaler
apiVersion: autoscaling/v2beta1
metadata:
@@ -8,7 +9,7 @@ spec:
# you created above
apiVersion: apps/v1
kind: Deployment
name: erstwhile-panda-openvpn
name: <deployment_name>
# autoscale between 1 and 10 replicas
minReplicas: 1
maxReplicas: 10
@@ -18,6 +19,4 @@ spec:
- type: Pods
pods:
metricName: openvpn_openvpn_server_connected_clients
# target 500 milli-requests per second,
# which is 1 request every two seconds
targetAverageValue: 3
targetAverageValue: 3
2 changes: 2 additions & 0 deletions exporter_service.yaml
@@ -1,3 +1,5 @@
# A simple service that targets the openvpn pod, which is selected by
# the service monitor for harvesting the metrics.
apiVersion: v1
kind: Service
metadata:
Binary file added img/scheme.png
Binary file removed openvpn-chart/.DS_Store
Binary file not shown.
6 changes: 3 additions & 3 deletions openvpn-chart/templates/NOTES.txt
@@ -18,12 +18,12 @@ Once the external IP is available and all the server certificates are generated
SERVICE_NAME=$(kubectl get svc --namespace "{{ .Release.Namespace }}" -l "app={{ template "openvpn.name" . }},release={{ .Release.Name }}" -o jsonpath='{ .items[0].metadata.name }')
SERVICE_IP=$(kubectl get svc --namespace "{{ .Release.Namespace }}" "$SERVICE_NAME" {{"-o go-template='{{ range $k, $v := (index .status.loadBalancer.ingress 0)}}{{ $v }}{{end}}'"}})
KEY_NAME=kubeVPN
kubectl --namespace "{{ .Release.Namespace }}" exec -it "$POD_NAME" /etc/openvpn/setup/newClientCert.sh "$KEY_NAME" "$SERVICE_IP"
kubectl --namespace "{{ .Release.Namespace }}" exec -it "$POD_NAME" cat "/etc/openvpn/certs/pki/$KEY_NAME.ovpn" > "$KEY_NAME.ovpn"
kubectl --namespace "{{ .Release.Namespace }}" exec -it "$POD_NAME" -c openvpn /etc/openvpn/setup/newClientCert.sh "$KEY_NAME" "$SERVICE_IP"
kubectl --namespace "{{ .Release.Namespace }}" exec -it "$POD_NAME" -c openvpn cat "/etc/openvpn/certs/pki/$KEY_NAME.ovpn" > "$KEY_NAME.ovpn"

Revoking certificates works just as easy:
KEY_NAME=<name>
POD_NAME=$(kubectl get pods -n "{{ .Release.Namespace }}" -l "app=openvpn,release={{ .Release.Name }}" -o jsonpath='{.items[0].metadata.name}')
kubectl -n "{{ .Release.Namespace }}" exec -it "$POD_NAME" /etc/openvpn/setup/revokeClientCert.sh $KEY_NAME
kubectl -n "{{ .Release.Namespace }}" exec -it "$POD_NAME" -c openvpn /etc/openvpn/setup/revokeClientCert.sh $KEY_NAME

Copy the resulting $KEY_NAME.ovpn file to your open vpn client (ex: in tunnelblick, just double click on the file). Do this for each user that needs to connect to the VPN. Change KEY_NAME for each additional user.
2 changes: 2 additions & 0 deletions openvpn-chart/templates/config-openvpn.yaml
@@ -1,3 +1,5 @@
# ConfigMap for the OpenVPN deployment.
# It contains the certificate scripts and the openvpn configuration scripts and files.
apiVersion: v1
kind: ConfigMap
metadata:
18 changes: 18 additions & 0 deletions openvpn-chart/templates/openvpn-deployment.yaml
@@ -1,3 +1,4 @@
# Main OpenVPN Deployment with a sidecar container for exporting the metrics.
apiVersion: apps/v1
kind: Deployment
metadata:
@@ -28,6 +29,23 @@ spec:
{{ toYaml .Values.podAnnotations | indent 8 }}
{{- end }}
spec:
{{- if .Values.ipForwardInitContainer }}
initContainers:
- args:
- -c
- sysctl -w net.ipv4.ip_forward=1
command:
- /bin/sh
image: busybox:1.29
imagePullPolicy: IfNotPresent
name: sysctl
resources:
requests:
cpu: 5m
memory: 1Mi
securityContext:
privileged: true
{{- end }}
containers:
- name: exporter
image: kumina/openvpn-exporter
1 change: 1 addition & 0 deletions openvpn-chart/templates/openvpn-service.yaml
@@ -1,3 +1,4 @@
# Main service that sits in front of the OpenVPN instance.
apiVersion: v1
kind: Service
metadata:
8 changes: 6 additions & 2 deletions openvpn-chart/values.yaml
@@ -15,8 +15,8 @@ image:
pullPolicy: IfNotPresent
service:
type: LoadBalancer
externalPort: 443
internalPort: 443
externalPort: 9914
internalPort: 9914
# hostPort: 443
externalIPs: []
nodePort: 32085
@@ -32,6 +32,10 @@ service:
# podAnnotations:
# backup.ark.heptio.com/backup-volumes: certs
podAnnotations: {}

# Add privileged init container to enable IPv4 forwarding
ipForwardInitContainer: true

resources:
limits:
cpu: 300m
3 changes: 2 additions & 1 deletion servicemonitor.yaml
@@ -1,8 +1,9 @@
# A Prometheus operator service monitor, which describes the set of targets to be monitored by Prometheus.
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
name: openvpn
namespace: dpallotta-ns1
namespace: default
spec:
endpoints:
- interval: 15s