Deploy the CRD into the cluster: kubectl apply -f https://github.com/zalando/postgres-operator/blob/master/manifests/operatorconfiguration.crd.yaml.
Deploy a CRD-defined object with k8s-handle: an operatorconfigurations.acid.zalan.do object where .configuration.kubernetes.custom_pod_annotations: { "keya": "valuea" } is defined.
Redeploy the same object with that section changed to .configuration.kubernetes.custom_pod_annotations: { "keyb": "valueb" }.
The old and new custom_pod_annotations end up merged in the result, even though
2020-01-09 06:15:39 INFO:k8s_handle.k8s.provisioner:OperatorConfiguration "zalando-postgres-operator" already exists, replace it
is logged. (replace != merge)
kubectl -n kube-system get operatorconfigurations.acid.zalan.do zalando-postgres-operator -o jsonpath={.configuration.kubernetes.custom_pod_annotations}
map[keya:valuea keyb:valueb]
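
For reference, a minimal sketch of the deployed object (field layout assumed from the postgres-operator manifests; only custom_pod_annotations matters here):

apiVersion: acid.zalan.do/v1
kind: OperatorConfiguration
metadata:
  name: zalando-postgres-operator
  namespace: kube-system
configuration:
  kubernetes:
    custom_pod_annotations:
      keya: valuea    # changed to "keyb: valueb" for the second deploy

After the second deploy the live object carries both keys, as the jsonpath output above shows, even though the log claims a replace.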
Hi, thank you for the report. @furiousassault I think we should add a warning about labels and/or annotations, like the one we have for Service ports. Do you have other suggestions? Could we process labels/annotations and similar fields with a replace strategy instead of merge?
Good day. The problem is that k8s-handle has predefined, generalized behavior for resources of all kinds: roughly, "if the resource is not present, create it; otherwise replace (or merge) it".
For resources of specific kinds it performs additional implicit spec mutations, such as the handling of "ports" in the Service replacement logic, among others.
I don't know at this point whether we can (and should) analyze a CRD object in general and replace parts of it while merging, as we do with Services: a POC is needed, and it's not exactly clean.
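
For illustration: the observed result is consistent with a JSON merge patch being applied under the hood (an assumption about the provisioner, not confirmed from the code). Patching the live object with

configuration:
  kubernetes:
    custom_pod_annotations:
      keyb: valueb

merges maps key by key, so keya: valuea survives; a true replace submits the whole object and would drop it.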
As a cleaner solution, we could look into developing user-defined "strategies" of some sort, to let the user choose what to do with a resource where possible: patch or replace, set via config.yaml, a CLI flag, or an environment variable. That seems like it would be a good solution, but it requires more effort and discussion; a sketch follows below.
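
To make the idea concrete, a hypothetical sketch of such a setting in config.yaml (the "strategy" key is invented here for discussion; it does not exist in k8s-handle today):

staging:
  templates:
  - template: operatorconfiguration.yaml.j2  # hypothetical template name
    strategy: replace                        # hypothetical: "replace" or "patch"

A per-template setting like this would let the user opt out of merge semantics for exactly the resources where it bites, without changing the default behavior.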