
Transformed Transformers #4296

Closed
pbecotte opened this issue Nov 17, 2021 · 9 comments
Labels
kind/feature Categorizes issue or PR as related to a new feature. lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one.

Comments

@pbecotte

I have been trying to set up a multi-environment deployment of a complicated Helm chart using kustomize. I left a comment earlier on issue #4219 (comment), but have made some progress since then. I have hit another blocker and wanted to ask for feedback.

Basically the structure I want to have is:

```yaml
base/
  kustomization.yaml
    helmCharts:
      valuesInline: ...
    configMapGenerator:
    - name: some-extra-config

components/postgres/
  kustomization.yaml
    patches:
    - (some patch that adds extra values to my helmchartgenerator)
    - (some patch to an object the chart generates to work with postgres)
    resources:
    - (an extra ingress for postgres)

overlays/myapp/
  kustomization.yaml
    resources:
    - ../../base
    components:
    - ../../components/postgres
    patches:
    - (env-specific values for values.yaml in the helmchartgenerator)
```

Now, the breakthrough I had was that I finally figured out what the doc section on "Transformed Transformers" was talking about. I can put the HelmChartGenerator in a file in the base and, instead of including it under `generators`, include it under `resources`. It then gets passed up to the next level as a plain object, where it can be patched like anything else. In the next level, I have `generators: [../base]` and all is good. Or, almost.
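As a sketch of that pattern (the file name, chart name, and values are illustrative, and the exact fields the built-in generator accepts depend on your kustomize version):

```yaml
# base/chart-generator.yaml (illustrative filename)
# Built-in generator config included via `resources`, so it is emitted
# as a plain object that overlays can patch, rather than being executed.
apiVersion: builtin
kind: HelmChartInflationGenerator
metadata:
  name: mychart            # illustrative
name: mychart              # chart name (illustrative)
valuesInline:
  replicaCount: 1          # illustrative default, for overlays to patch
```

`base/kustomization.yaml` then lists `chart-generator.yaml` under `resources:`, and the consuming level points its `generators:` field at the base.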

The naive implementation (like the diagram) doesn't work, though. If I put a `generators` field in overlays/myapp/kustomization.yaml, it executes the generator immediately, so I can no longer patch it, which means the component cannot modify the values. I can work around this by adding another kustomization, such as overlays/myapp/chart, that does `resources: base` and `components: components/postgres`, and THEN in my overlay do `generators: ./chart`.
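The workaround layering, sketched (paths are illustrative):

```yaml
# overlays/myapp/chart/kustomization.yaml
# Intermediate layer: the generator resource is assembled and patched
# here but never executed, so the overlay above can consume this
# directory through its `generators` field.
resources:
- ../../../base
components:
- ../../../components/postgres
```

`overlays/myapp/kustomization.yaml` then has `generators: [./chart]`, which finally executes the fully patched generator.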

The problem now, though: the `generators` field is very strict. I can't have the chart kustomization return anything that isn't a generator, such as that extra ingress from the postgres component, or the patch I need to apply to an object the chart returns. I also can't return resources or transformers from the base directory. I am working around it with a complicated directory structure that I am 100% sure is not the best way to do it, but I keep coming back to this line in the docs:

"Everything is a transformer." That doesn't seem to be true: `Error: plugin ~G_builtin_HelmChartInflationGenerator|~X|unimportant not a transformer`. The same happens if I return a ConfigMap. What really interested me, though, is that a ConfigMapGenerator can be returned and have the name hashing applied to objects upstream, which implies that the "transformer" step of it somehow gets added to the stack to be executed later.

So my question: is there a way to have a kustomization that returns resources, generators, and transformers in YAML form, and then have a parent kustomization import them and pass them to the appropriate fields (instead of needing three kustomizations: one for resources, one for transformers, and one for generators)?

@k8s-ci-robot k8s-ci-robot added needs-kind Indicates a PR lacks a `kind/foo` label and requires one. needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. labels Nov 17, 2021
@k8s-ci-robot
Contributor

@pbecotte: This issue is currently awaiting triage.

SIG CLI takes a lead on issue triage for this repo, but any Kubernetes member can accept issues by applying the triage/accepted label.

The triage/accepted label can be added by org members by writing /triage accepted in a comment.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@kferrone

I have the same quandary. I also had the idea of kustomize returning all resources to a layer that would sort them into the correct place, i.e. transformers, generators, resources, etc. I no longer like this idea, as it seems more complicated and strange the more I think about it.

I would prefer a special annotation on generators and transformers, like:

`config.kubernetes.io/scope: transformers`

Kustomize would then determine that a transformer carrying this annotation must run against the transformer resources, which would afterwards run as normal transformers. It would thereby be a transformer of transformers.
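A sketch of what that might look like, assuming the built-in PatchTransformer. The annotation is the proposal being made here, not an existing kustomize feature, and all names and values are illustrative:

```yaml
# PROPOSAL SKETCH -- the `scope` annotation does not exist in kustomize.
apiVersion: builtin
kind: PatchTransformer
metadata:
  name: patch-chart-values                     # illustrative
  annotations:
    config.kubernetes.io/scope: transformers   # proposed annotation
target:
  kind: HelmChartInflationGenerator
patch: |-
  - op: replace
    path: /valuesInline/replicaCount
    value: 3
```

Under the proposal, kustomize would see the annotation and apply this patch to other generator/transformer configs before executing them, instead of applying it to ordinary resources.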

@natasha41575
Contributor

natasha41575 commented Dec 31, 2021

> The naive implementation (like the diagram) doesn't work. If I put a `generators` field in overlays/myapp/kustomization.yaml, it executes the generator, so I can no longer patch it, which means the component cannot modify the values. I can work around this by adding another kustomization, such as overlays/myapp/chart, that does `resources: base` and `components: components/postgres`, and THEN in my overlay do `generators: ./chart`.
>
> The problem now, though: the `generators` field is very strict. I can't have the chart kustomization return anything that isn't a generator, such as that extra ingress from the postgres component, or the patch I need to apply to an object the chart returns. I also can't return resources or transformers from the base directory. I am working around it with a complicated directory structure that I am 100% sure is not the best way to do it, but I keep coming back to this line in the docs

The problem as I understand it: you have a resource (your HelmChartGenerator) being patched throughout your kustomization stack, and by the time you get to the overlay you have no way of invoking that specific generator resource through the `generators` field, since that field expects file paths. Pointing it at the entire ./charts directory doesn't work, because that directory also creates other resources that can't be treated as generators.

There is a Composition KEP that aims to tackle plugin-based workflows with transformed transformers such as yours. In the context of this KEP, a generator is just a special type of transformer. In a Composition, you can define a list of transformers to execute. You can layer these Compositions; one Composition may import another Composition and selectively override imported transformer fields - similar to how you are patching your HelmChartGenerator. You may in particular find User Story 2 relevant as an example of this. You can add a Composition to a kustomization's resources, generators, or transformers field, so in theory you would be able to have some HelmChartGenerator defined in a base composition, selectively change some fields in an overlaying Composition, and refer to the latter Composition in your kustomization's generators field.
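To make the layering idea concrete, here is a loose sketch of what such Compositions might look like. The Composition schema was not final when this thread was written, so every field name below (`modules`, `modulesFrom`, the override syntax) is an assumption based on the KEP draft, not a shipped API:

```yaml
# HYPOTHETICAL -- field names follow the KEP-2299 draft, not a release.
# base-composition.yaml: defines the generator as a module
apiVersion: kustomize.config.k8s.io/v1alpha1
kind: Composition
modules:
- apiVersion: builtin
  kind: HelmChartInflationGenerator
  name: mychart              # illustrative
  valuesInline:
    replicaCount: 1
---
# overlay-composition.yaml: imports the base, overrides one value
apiVersion: kustomize.config.k8s.io/v1alpha1
kind: Composition
modulesFrom:
- path: base-composition.yaml
  modules:
  - kind: HelmChartInflationGenerator
    name: mychart
    valuesInline:
      replicaCount: 3        # env-specific override
```

The overlay Composition would then be the thing referenced from a kustomization's `generators` field, giving the "patch the generator, then run it" flow the issue asks for.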

This feature is still in progress (I believe @KnVerey is working on implementation), but some preliminary feedback on whether this will resolve your issue would be greatly appreciated.

/kind feature

@k8s-ci-robot k8s-ci-robot added kind/feature Categorizes issue or PR as related to a new feature. and removed needs-kind Indicates a PR lacks a `kind/foo` label and requires one. labels Dec 31, 2021
@kferrone

kferrone commented Jan 16, 2022

@natasha41575 Composition KEP link above is 404

@KnVerey
Contributor

KnVerey commented Feb 4, 2022

Here's the correct link to the KEP tracking issue: Kustomize Plugin Composition API #2299.

I agree with Natasha that the problem you're facing is one of the core use cases for this KEP. I have a complete implementation open in #4323, but it is blocked by several thorny issues linked in that PR. One of those issues is another problem you separately encountered: Kustomize internally makes a hard distinction between its own built-in transformers and generators due to an implementation detail (#4403). I'm not personally working on this right now, but I plan to come back to it later in the year, and any contributions toward clearing the blocking issues would be welcome.

> What really interested me though is that ConfigMapGenerator can be returned - and have the name hashing applied to objects upstream, which implies that the "transformer" step of that somehow gets added to the stack to get executed later.

I'm not sure what's going on here, because that generator has the same problem in its implementation, and I get an error when I put it in the `transformers` field: `Error: plugin ~G_builtin_ConfigMapGenerator|~X|mymap not a transformer`. Maybe you mean that if you have a ConfigMap generator imported (in the `generators` field) at some level, references to the resulting resource will be corrected globally. That's correct, but it's because of the name reference transformer, not anything special about the ConfigMap generator.
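A minimal illustration of that name-reference behavior (the `mymap` name is taken from the error message above; `deployment.yaml` is an assumed file):

```yaml
# kustomization.yaml
configMapGenerator:
- name: mymap
  literals:
  - key=value
resources:
- deployment.yaml   # assumed to reference ConfigMap `mymap` in its pod spec
```

`kustomize build` renames the generated ConfigMap to `mymap-<content-hash>`, and the built-in name reference transformer rewrites the Deployment's `configMapRef`/`configMapKeyRef` entries to match. The correction comes from that separate transformer, so the generator itself never has to run as one.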

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label May 5, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Jun 4, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue or PR with /reopen
  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

@k8s-ci-robot
Contributor

@k8s-triage-robot: Closing this issue.

In response to this:

> The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
>
> This bot triages issues and PRs according to the following rules:
>
>   • After 90d of inactivity, lifecycle/stale is applied
>   • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
>   • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
>
> You can:
>
>   • Reopen this issue or PR with /reopen
>   • Mark this issue or PR as fresh with /remove-lifecycle rotten
>   • Offer to help out with Issue Triage
>
> Please send feedback to sig-contributor-experience at kubernetes/community.
>
> /close



6 participants