> [!NOTE]
> The following documentation outlines how to deploy a new ArgoCD environment onto a management cluster.
> See Deploying a Child Cluster if you are deploying a cluster using an existing environment.
- Create an application credential for your new cluster on your project
- Ensure enough quota for the cluster (RAM, CPU, instances, etc.)
- Provision a floating IP on your project for Kubernetes API server access
- (Optional) Provision a second floating IP for the nginx ingress controller
- Create a self-managed cluster called `*-management-cluster` - replace `*` with the environment, either `prod`, `dev`, or `staging` - see https://stfc.atlassian.net/wiki/spaces/CLOUDKB/pages/211878034/Cluster+API+Setup
- Ensure the name matches `*-management-cluster` (with `*` as either `prod`, `dev`, or `staging`), as CAPI cannot rename a cluster
> ❗ Make sure you already have a Kubernetes cluster deployed before proceeding with the rest of this documentation.
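The OpenStack prerequisites above can be provisioned from the CLI. A minimal sketch, assuming the OpenStack CLI is configured against your project and that `External` is your floating IP network (the credential name and network name are placeholders):

```bash
# Application credential the new cluster will use (name is illustrative)
openstack application credential create my-env-management-cluster

# Floating IP for Kubernetes API server access
openstack floating ip create External

# (Optional) second floating IP for the nginx ingress controller
openstack floating ip create External
```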
You may want to make your own environment so you can test GitOps config or charts independently of running clusters on test/feature branches.
We don't want too many environments on `main` - especially those that aren't being used - so there's less clutter.

To deploy another environment, follow these steps:
1. Create a new branch for your work.

2. Create a new folder `charts/<your-environment>`.

3. Copy the charts from another environment - from either `prod`, `dev`, or `staging`.

4. Create a new folder `clusters/<your-environment>`.

5. Copy the `management` subfolder from another environment into your newly created folder in `clusters`. You can optionally copy or create any other cluster subfolders you want to be in this environment here as well.
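   A minimal sketch of steps 2-5, assuming your new environment is called `my-env` and you are copying from `dev` (both names are placeholders):

   ```bash
   # Steps 2-3: create the charts folder and copy an existing environment's charts
   mkdir -p charts/my-env
   cp -r charts/dev/. charts/my-env/

   # Steps 4-5: create the clusters folder and copy the management subfolder
   mkdir -p clusters/my-env
   cp -r clusters/dev/management clusters/my-env/management
   ```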
6. Edit `apps.yaml`:

   - Any entries that begin with `path` should use charts from the new environment
   - Any entries that begin with `valuesFiles` should use paths from the new environment
   - Remember to change `spec.template.metadata.name` so the prefix matches your environment name
   - (Optional) Any entries with `targetRevision` or `revision` should point to your new branch if you are using this branch for testing or as a feature branch and are not intending to keep it long-term
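   As a rough sketch (the file location and the `dev`-based paths are assumptions following the layout above), you can locate and repoint these entries like so:

   ```bash
   # Locate the fields that need updating
   grep -nE 'path:|valuesFiles:|targetRevision:|revision:' clusters/my-env/management/apps.yaml

   # Repoint chart and values-file paths from the old environment to the new one
   sed -i 's#charts/dev/#charts/my-env/#g' clusters/my-env/management/apps.yaml
   sed -i 's#clusters/dev/#clusters/my-env/#g' clusters/my-env/management/apps.yaml
   ```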
7. Modify `infra-values.yaml` and any other cluster-specific values files as required - see infra-setup "Pre-deployment" steps.

8. Modify/add any cluster-specific values for any apps you want to manage - see app-setup "Pre-deployment" steps.
9. Create a new folder:

   ```bash
   mkdir -p ./secrets/<your-environment>/management/infra
   ```

   You will also need to create another subfolder for each extra cluster subfolder you've copied/added.

10. Add secret files `.sops.yaml`, `api-server-fip.yaml`, and `app-creds.yaml` as above (see Deploying a Child Cluster on an existing environment, steps 5-7). Repeat for all other clusters you want to add.
> [!CAUTION]
> Make sure these files have been encrypted using SOPS in the steps above before committing and pushing changes to your branch.
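A minimal sketch of the encryption step, assuming your `.sops.yaml` creation rules already cover these paths and `my-env` is a placeholder:

```bash
# Encrypt each secret file in place with SOPS before committing
sops --encrypt --in-place ./secrets/my-env/management/infra/app-creds.yaml
sops --encrypt --in-place ./secrets/my-env/management/infra/api-server-fip.yaml
```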
11. (Optional) If this is not a temporary environment and you want to keep it around long-term in `main`, make a PR for it and get it merged.

12. Deploy the age private key secret to the management cluster. New to generating age keys and secrets? See secrets for more information.

    ```bash
    cd scripts; ./deploy-helm-secret.sh <path-to-age-private-key>
    ```
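    If you still need an age key pair, a minimal sketch (assuming the `age` CLI is installed):

    ```bash
    # Generates a key pair; the public key is printed and stored as a comment in key.txt
    age-keygen -o key.txt
    ```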
13. Run `deploy.sh` on your self-managed cluster `*-management-cluster` like so:

    ```bash
    cd scripts; ./deploy.sh management <your-environment>
    ```

    When deploying Argo onto child clusters, replace `management` with the cluster's folder name, e.g. to deploy worker cluster apps, run:

    ```bash
    cd scripts; ./deploy.sh worker <your-environment>
    ```
14. Wait for ArgoCD to deploy - it should spring to life and spin up any other clusters you've defined.
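    One way to watch progress (assuming ArgoCD is installed in the `argocd` namespace):

    ```bash
    # List ArgoCD Application resources along with their sync/health status
    kubectl get applications -n argocd --watch
    ```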
15. Perform any post-deployment steps - see infra-setup and app-setup for any apps you want to manage.
16. Repeat steps 12-14 for each extra cluster that you have running and want to manage apps on. You can get the kubeconfig to access these clusters by accessing the `management` cluster and running:

    ```bash
    clusterctl get kubeconfig $CLUSTER_NAME -n clusters > $CLUSTER_NAME.kubeconfig
    ```
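    For example, to fetch and use the kubeconfig for a hypothetical `worker` cluster:

    ```bash
    clusterctl get kubeconfig worker -n clusters > worker.kubeconfig
    kubectl --kubeconfig worker.kubeconfig get nodes
    ```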