
feat(chart): initial helm chart for the operator (without the collector) #11

Merged (6 commits) on Jun 6, 2024
5 changes: 5 additions & 0 deletions .github/workflows/verify.yaml
@@ -50,6 +50,11 @@ jobs:
run: |
make lint

- name: install Helm unittest plugin
shell: bash
run: |
helm plugin install https://github.com/helm-unittest/helm-unittest.git

- name: test
run: |
make test
28 changes: 20 additions & 8 deletions CONTRIBUTING.md
@@ -6,6 +6,8 @@ Contributing
- Docker
- kubectl version v1.11.3+.
- Access to a Kubernetes v1.11.3+ cluster.
- [helm](https://helm.sh/docs/intro/install/)
- [helm unittest plug-in](https://github.com/helm-unittest/helm-unittest/tree/main)

## Deploying to a Local Cluster for Testing Purposes

@@ -54,31 +56,42 @@ In this case your test cluster needs to be configured to have pull access for th

After that, you can deploy the operator to your cluster:

* Install the CRDs into the cluster: `make install`
* Deploy the locally built image `dash0-operator-controller:latest` to the cluster: `make deploy`
* Alternatively, deploy the image pushed to the remote registry with the image specified by `IMG`: `make deploy IMG=<some-registry>/dash0-operator:tag`
* Deploy the locally built image `dash0-operator-controller:latest` to the cluster: `make deploy-via-helm`
(or `make deploy-via-kustomize`)
* Alternatively, deploy the image pushed to the remote registry with the image specified by `IMG`:
`make deploy-via-helm IMG=<some-registry>/dash0-operator:tag`
(or `make deploy-via-kustomize IMG=<some-registry>/dash0-operator:tag`)
* Whether you deploy via helm or kustomize, the custom resource definition is installed automatically along with
  the operator. However, you can also install it separately via kustomize with `make install` if required.
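The Makefile in this PR composes the image reference `IMG` from `IMG_REPOSITORY` and `IMG_TAG`, so you can override either the full reference or its parts. A minimal shell sketch of the same composition (variable names match the Makefile; the shell expansion here is only illustrative, since make performs the actual substitution):

```sh
# Compose the operator image reference the same way the Makefile does.
# Each variable can be overridden from the environment.
IMG_REPOSITORY="${IMG_REPOSITORY:-dash0-operator-controller}"
IMG_TAG="${IMG_TAG:-latest}"
IMG="${IMG:-${IMG_REPOSITORY}:${IMG_TAG}}"
echo "${IMG}"
```

With no overrides set, this prints the default `dash0-operator-controller:latest`.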

**NOTE**: If you encounter RBAC errors, you may need to grant yourself cluster-admin privileges or be logged in as
admin.

**Delete the APIs(CRDs) from the cluster:**
**Undeploy the controller from the cluster:**

```sh
make uninstall
make undeploy-via-helm
```

**UnDeploy the controller from the cluster:**
or

```sh
make undeploy
make undeploy-via-kustomize
```

When undeploying the controller, use the same tool (helm vs. kustomize) that was used to deploy it.

This will also remove the custom resource definition. However, the custom resource definition can also be removed
separately via `make uninstall` without removing the operator.
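For reference, the helm-based targets wrap plain helm commands. A sketch of the equivalent manual invocations against a live cluster (release name, namespace, and chart path match the Makefile targets in this PR; the image value is the local default):

```sh
# equivalent of `make deploy-via-helm`
helm install dash0-operator helm-chart/dash0-operator \
  --namespace dash0-operator-system \
  --create-namespace \
  --set operator.image=dash0-operator-controller:latest \
  --set operator.imagePullPolicy=Never

# equivalent of `make undeploy-via-helm`
helm uninstall dash0-operator --namespace dash0-operator-system
kubectl delete namespace dash0-operator-system
```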

## Run Tests

```sh
make test
```

This will run the go unit tests as well as the helm chart tests.
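To run only the helm chart tests, invoke the plugin directly (this mirrors the `test` target in this PR's Makefile and requires the helm-unittest plugin listed in the prerequisites):

```sh
cd helm-chart/dash0-operator
helm unittest -f 'tests/**/*.yaml' .
```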

### End-to-End Tests

The end-to-end tests currently only support Kubernetes via Docker Desktop on Mac.
@@ -96,4 +109,3 @@ More information can be found via the [Kubebuilder Documentation](https://book.k
## Contributing

No contribution guidelines are available at this point.

25 changes: 19 additions & 6 deletions Makefile
@@ -50,8 +50,11 @@ endif
# This is useful for CI or a project to utilize a specific version of the operator-sdk toolkit.
OPERATOR_SDK_VERSION ?= v1.34.1

# Image URL to use all building/pushing image targets
IMG ?= dash0-operator-controller:latest
# image repository and tag to use for building/pushing the operator image
IMG_REPOSITORY ?= dash0-operator-controller
IMG_TAG ?= latest
IMG ?= $(IMG_REPOSITORY):$(IMG_TAG)

# ENVTEST_K8S_VERSION refers to the version of kubebuilder assets to be downloaded by envtest binary.
ENVTEST_K8S_VERSION = 1.28.3

@@ -114,6 +117,7 @@ vet: ## Run go vet against code.
.PHONY: test
test: manifests generate fmt vet envtest ## Run tests.
KUBEBUILDER_ASSETS="$(shell $(ENVTEST) use $(ENVTEST_K8S_VERSION) --bin-dir $(LOCALBIN) -p path)" go test $$(go list ./... | grep -v /e2e) -coverprofile cover.out
cd helm-chart/dash0-operator && helm unittest -f 'tests/**/*.yaml' .

# Invoking ginkgo via go run makes sure we use the version from go.mod and not a version installed globally, which
# would be used when simply running `ginkgo -v test/e2e`. An alternative would be to invoke ginkgo via go test, that
@@ -193,15 +197,24 @@ uninstall: manifests kustomize ## Uninstall CRDs from the K8s cluster specified
sleep 1
$(KUSTOMIZE) build config/crd | $(KUBECTL) patch CustomResourceDefinition dash0s.operator.dash0.com -p '{"metadata":{"finalizers":null}}' --type=merge

.PHONY: deploy
deploy: manifests kustomize ## Deploy controller to the K8s cluster specified in ~/.kube/config.
.PHONY: deploy-via-kustomize
deploy-via-kustomize: manifests kustomize ## Deploy the controller via kustomize to the K8s cluster specified in ~/.kube/config.
cd config/manager && $(KUSTOMIZE) edit set image controller=${IMG}
$(KUSTOMIZE) build config/default | $(KUBECTL) apply -f -

.PHONY: undeploy
undeploy: ## Undeploy controller from the K8s cluster specified in ~/.kube/config. Call with ignore-not-found=true to ignore resource not found errors during deletion.
.PHONY: undeploy-via-kustomize
undeploy-via-kustomize: ## Undeploy the controller via kustomize from the K8s cluster specified in ~/.kube/config. Call with ignore-not-found=true to ignore resource not found errors during deletion.
$(KUSTOMIZE) build config/default | $(KUBECTL) delete --ignore-not-found=$(ignore-not-found) --wait=false -f -

.PHONY: deploy-via-helm
deploy-via-helm: ## Deploy the controller via helm to the K8s cluster specified in ~/.kube/config.
helm install --namespace dash0-operator-system --create-namespace --set operator.image=${IMG} --set operator.imagePullPolicy=Never dash0-operator helm-chart/dash0-operator

.PHONY: undeploy-via-helm
undeploy-via-helm: ## Undeploy the controller via helm from the K8s cluster specified in ~/.kube/config. Call with ignore-not-found=true to ignore resource not found errors during deletion.
helm uninstall --namespace dash0-operator-system dash0-operator
$(KUBECTL) delete ns dash0-operator-system

##@ Build Dependencies

## Location to install dependencies to
13 changes: 7 additions & 6 deletions README.md
@@ -1,14 +1,15 @@
# Dash0 Kubernetes Operator

The Dash0 Kubernetes Operator makes observability easy for every Kubernetes setup.
[![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0)

The Dash0 Kubernetes Operator makes observability for Kubernetes _easy_.
Simply install the operator into your cluster to get OpenTelemetry data flowing from your applications and
infrastructure to Dash0.

## Description

The Dash0 Kubernetes operator installs an OpenTelemetry collector into your cluster that sends data to your Dash0
ingress endpoint, with authentication already configured out of the box. Additionally, it will enable gathering
OpenTelemetry data from applications deployed to the cluster for a selection of supported runtimes.
The Dash0 Kubernetes operator enables gathering OpenTelemetry data from your workloads for a selection of supported
runtimes.

The Dash0 Kubernetes operator is currently in beta.

@@ -18,5 +19,5 @@ Supported runtimes:

## Getting Started

TODO Describe installation via Helm etc.

The preferred method of installation is via the operator's
[Helm chart](https://github.com/dash0hq/dash0-operator/helm-chart/dash0-operator/README.md).
46 changes: 33 additions & 13 deletions cmd/main.go
@@ -13,6 +13,7 @@ import (
// to ensure that exec-entrypoint and run can make use of them.
_ "k8s.io/client-go/plugin/pkg/client/auth"

corev1 "k8s.io/api/core/v1"
"k8s.io/apimachinery/pkg/runtime"
utilruntime "k8s.io/apimachinery/pkg/util/runtime"
"k8s.io/client-go/kubernetes"
@@ -138,34 +139,53 @@ func startOperatorManager(
return fmt.Errorf("unable to create the clientset client")
}

operatorVersion, isSet := os.LookupEnv("DASH0_OPERATOR_VERSION")
operatorImage, isSet := os.LookupEnv("DASH0_OPERATOR_IMAGE")
if !isSet {
operatorVersion = "unknown"
return fmt.Errorf("cannot start Dash0 operator, the mandatory environment variable " +
"\"DASH0_OPERATOR_IMAGE\" is missing")
}
initContainerImageVersion, isSet := os.LookupEnv("DASH0_INIT_CONTAINER_IMAGE_VERSION")

initContainerImage, isSet := os.LookupEnv("DASH0_INIT_CONTAINER_IMAGE")
if !isSet {
return fmt.Errorf("cannot start Dash0 operator, the mandatory environment variable " +
"\"DASH0_INIT_CONTAINER_IMAGE_VERSION\" is missing")
"\"DASH0_INIT_CONTAINER_IMAGE\" is missing")
}

initContainerImagePullPolicyRaw := os.Getenv("DASH0_INIT_CONTAINER_IMAGE_PULL_POLICY")
var initContainerImagePullPolicy corev1.PullPolicy
if initContainerImagePullPolicyRaw != "" {
if initContainerImagePullPolicyRaw == string(corev1.PullAlways) ||
initContainerImagePullPolicyRaw == string(corev1.PullIfNotPresent) ||
initContainerImagePullPolicyRaw == string(corev1.PullNever) {
initContainerImagePullPolicy = corev1.PullPolicy(initContainerImagePullPolicyRaw)
} else {
setupLog.Info(
fmt.Sprintf(
"Ignoring unknown pull policy for init container image: %s.", initContainerImagePullPolicyRaw))
}
}
setupLog.Info(
"version information",
"operator version",
operatorVersion,
"init container image version",
initContainerImageVersion,
"operator image and version",
operatorImage,
"init container image and version",
initContainerImage,
"init container image pull policy override",
initContainerImagePullPolicy,
)

versions := util.Versions{
OperatorVersion: operatorVersion,
InitContainerImageVersion: initContainerImageVersion,
images := util.Images{
OperatorImage: operatorImage,
InitContainerImage: initContainerImage,
InitContainerImagePullPolicy: initContainerImagePullPolicy,
}

if err = (&controller.Dash0Reconciler{
Client: mgr.GetClient(),
ClientSet: clientSet,
Scheme: mgr.GetScheme(),
Recorder: mgr.GetEventRecorderFor("dash0-controller"),
Versions: versions,
Images: images,
}).SetupWithManager(mgr); err != nil {
return fmt.Errorf("unable to set up the Dash0 reconciler: %w", err)
}
@@ -174,7 +194,7 @@
if os.Getenv("ENABLE_WEBHOOKS") != "false" {
if err = (&dash0webhook.Handler{
Recorder: mgr.GetEventRecorderFor("dash0-webhook"),
Versions: versions,
Images: images,
}).SetupWebhookWithManager(mgr); err != nil {
return fmt.Errorf("unable to create the Dash0 webhook: %w", err)
}
8 changes: 4 additions & 4 deletions config/manager/manager.yaml
@@ -48,10 +48,10 @@ spec:
image: dash0-operator-controller:latest
name: manager
env:
- name: DASH0_OPERATOR_VERSION
value: 1.0.0
- name: DASH0_INIT_CONTAINER_IMAGE_VERSION
value: 1.0.0
- name: DASH0_OPERATOR_IMAGE
value: dash0-operator-controller:1.0.0
- name: DASH0_INIT_CONTAINER_IMAGE
value: dash0-instrumentation:1.0.0

# Note: Use "imagePullPolicy: Never" when only building the image locally without pushing them anywhere. Omit
# the attribute otherwise to use the default pull policy.
8 changes: 8 additions & 0 deletions helm-chart/dash0-operator/.helmignore
@@ -0,0 +1,8 @@
.DS_Store
.gitignore
*.swp
*.bak
*.tmp
*.orig
*~
tests/
22 changes: 22 additions & 0 deletions helm-chart/dash0-operator/Chart.yaml
@@ -0,0 +1,22 @@
apiVersion: v2
name: dash0-operator
version: "1.0.0"
description: The Dash0 Kubernetes Operator makes observability easy for every Kubernetes setup. Simply install the operator into your cluster to get OpenTelemetry data flowing from your applications and infrastructure to Dash0.
type: application
keywords:
- OpenTelemetry
- Observability
- Monitoring
- Dash0
- Node.js
# - Python
# - Java
# - .NET
home: https://www.dash0.com/
sources:
- https://github.com/dash0hq/dash0-operator
maintainers:
- name: Bastian Krol
email: [email protected]
icon: icon/logo.svg
appVersion: "1.0.0"