
tests: introduce golden tests in kong chart #953

Closed
czeslavo wants to merge 2 commits into main from tests/introduce-golden

Conversation
Conversation

@czeslavo (Contributor) commented Nov 22, 2023

What this PR does / why we need it:

Introduces a golden test case that renders the kong/kong chart templates using the test ci/*-values.yaml files and compares them with golden outputs stored in the charts/kong/test/testdata/golden directory. The golden files can be regenerated with make test.golden.update.

It should allow us to easily inspect how changes to templating, default values, etc. affect the output manifests, and to catch potential mistakes.
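
For context, a minimal sketch of the golden-file pattern described here might look as follows; the flag name, paths, and direct use of helm template are illustrative assumptions, not the exact code in this PR:

```go
package kong_test

import (
	"flag"
	"os"
	"os/exec"
	"path/filepath"
	"testing"
)

// updateGolden is a hypothetical flag; `make test.golden.update` would set it.
var updateGolden = flag.Bool("update-golden", false, "regenerate golden files instead of asserting")

func TestGoldenSketch(t *testing.T) {
	valuesFiles, err := filepath.Glob("../ci/*-values.yaml")
	if err != nil {
		t.Fatal(err)
	}
	for _, values := range valuesFiles {
		golden := filepath.Join("testdata", "golden", filepath.Base(values)+".golden")

		// Render the chart with this values file.
		out, err := exec.Command("helm", "template", "../", "-f", values).Output()
		if err != nil {
			t.Fatalf("helm template failed for %s: %v", values, err)
		}

		if *updateGolden {
			// Regenerate the golden file instead of asserting against it.
			if err := os.WriteFile(golden, out, 0o644); err != nil {
				t.Fatal(err)
			}
			continue
		}

		want, err := os.ReadFile(golden)
		if err != nil {
			t.Fatal(err)
		}
		if string(want) != string(out) {
			t.Errorf("%s: rendered manifests differ from %s; run `make test.golden.update`", values, golden)
		}
	}
}
```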

Which issue this PR fixes

Follow-up to #947.

Special notes for your reviewer:

Checklist

  • PR is based off the current tip of the main branch.
  • Changes are documented under the "Unreleased" header in CHANGELOG.md
  • New or modified sections of values.yaml are documented in the README.md
  • Commits follow the Kong commit message guidelines

@czeslavo czeslavo self-assigned this Nov 22, 2023
@czeslavo czeslavo mentioned this pull request Nov 22, 2023
@czeslavo czeslavo force-pushed the tests/introduce-golden branch 3 times, most recently from 90186ba to c1e6ef8 on November 22, 2023 11:48
@czeslavo czeslavo marked this pull request as ready for review November 22, 2023 11:59
@czeslavo czeslavo requested a review from a team as a code owner November 22, 2023 11:59
@czeslavo czeslavo force-pushed the tests/introduce-golden branch 2 times, most recently from 587b96d to e2c8cc6 on November 22, 2023 12:10
@czeslavo czeslavo marked this pull request as draft November 22, 2023 12:18
@czeslavo czeslavo force-pushed the tests/introduce-golden branch 2 times, most recently from 1f547f3 to ef1689d on November 22, 2023 12:29
@czeslavo czeslavo marked this pull request as ready for review November 22, 2023 12:52
@rainest (Contributor) commented Nov 22, 2023

Is there a way to have it run automatically and add the diff set (either as an actual commit or as a PR comment)? Or trigger it on demand?

KIC changes often don't result in a change to the final rendered Kong configuration, but changes to the templates almost always do, unless they're outside test coverage. Refactors that aren't expected to result in changes (such as the consolidation of Services into a shared function) are the exception rather than the rule.

@czeslavo (Contributor, Author) commented Nov 22, 2023

> Is there a way to have it run automatically and add the diff set (either as an actual commit or as a PR comment)? Or trigger it on demand?

Do you mean some workflow that would automatically commit changes when the golden tests fail, so that no one has to run make test.golden.update locally and push the changes? It's probably doable, but it will require some work.

I wouldn't like to block #947 on it. If you agree, I could try improving the process by adding automatic commits in a separate PR. For now, let's just look at the diff I generated on this branch for #947 without depending on it. I'll try to come up with something more developer-friendly.
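
For the record, the kind of automation discussed here could look roughly like the workflow below; the file name, trigger, and commit step are assumptions sketched for illustration, not part of this PR:

```yaml
# .github/workflows/update-golden.yaml (hypothetical)
name: update-golden
on: pull_request
jobs:
  update-golden:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          # Check out the PR branch so the workflow can push to it.
          ref: ${{ github.head_ref }}
      - name: Regenerate golden files
        run: make test.golden.update
      - name: Commit and push changes, if any
        run: |
          if [ -n "$(git status --porcelain)" ]; then
            git config user.name "github-actions[bot]"
            git config user.email "github-actions[bot]@users.noreply.github.com"
            git add -A
            git commit -m "chore: regenerate golden files"
            git push
          fi
```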

@pmalek (Member) commented Dec 7, 2023

> Do you mean some workflow that would automatically commit changes when the golden tests fail, so that no one has to run make test.golden.update locally and push the changes?

I would suggest we not try to automate this to the point where no developer action is necessary to update the golden files, because that IMHO defeats their purpose. A manual target like test.golden.update hits the spot IMO, because you then have to run it deliberately after making changes that affect the rendered templates.
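
As a sketch, such a manual target could be as small as the following; the recipe and flag name are assumptions (mirroring the hypothetical -update-golden flag above), not necessarily what this PR's Makefile does:

```makefile
.PHONY: test.golden test.golden.update

# Run the golden tests, failing on any diff against the stored golden files.
test.golden:
	go test -count=1 ./charts/kong/tests/...

# Regenerate the golden files instead of asserting against them.
test.golden.update:
	go test -count=1 ./charts/kong/tests/... -update-golden
```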

@czeslavo (Contributor, Author) commented Dec 8, 2023

> I would suggest we not try to automate this to the point where no developer action is necessary to update the golden files, because that IMHO defeats their purpose. A manual target like test.golden.update hits the spot IMO, because you then have to run it deliberately after making changes that affect the rendered templates.

Yeah, the initial idea was to make it work the same way it does in KIC, and that is basically what this PR introduces. I understand Travis's point to be that remembering to run make test.golden.update after every template change may be too much. I haven't invested any time in automating that, so if we can agree it's fine as it is, I'd be more than happy. 😉

@tao12345666333 (Member) commented:

I agree with this approach; it can indeed help us move forward more safely.

I'm also wondering whether introducing a tool like helm-unittest could achieve the same effect:
https://github.com/helm-unittest/helm-unittest

@pmalek (Member) commented Dec 18, 2023

> I agree with this approach; it can indeed help us move forward more safely.
>
> I'm also wondering whether introducing a tool like helm-unittest could achieve the same effect: https://github.com/helm-unittest/helm-unittest

This looks interesting 🤔 One major difference I can immediately see between helm-unittest and this PR is that here we diff and assert on the full contents of the generated templates, whereas with helm-unittest we'd only be testing what we explicitly define in our test cases. This has both pros and cons.

I'd love to hear @czeslavo's thoughts on this, as he's the one who invested time in proposing this framework.
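
For reference, a helm-unittest snapshot suite is defined per template in a YAML file along these lines; the file name and values path are illustrative, and matchSnapshot is the assertion its docs describe for snapshot testing:

```yaml
# charts/kong/tests/deployment_test.yaml (illustrative)
suite: deployment snapshot
templates:
  - templates/deployment.yaml
tests:
  - it: matches the stored snapshot
    values:
      - ../ci/default-values.yaml
    asserts:
      - matchSnapshot: {}
```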

@czeslavo (Contributor, Author) commented Dec 18, 2023

I didn't consider helm-unittest when I was implementing this. Indeed, its snapshot testing looks like something we could use to achieve similar results. I'll give it a go and report back here.

@czeslavo (Contributor, Author) commented:

I tried using helm-unittest to achieve the same mechanics as proposed in this PR (render templates for all ci/ values files and keep golden files/snapshots), but it's not capable of that as of now. The main limitation I ran into was the inability to ignore dynamically generated content when comparing/storing snapshots, which made the tests fail every time because of generated passwords/certificates. Besides that, it also requires creating a YAML file with test definitions for every test suite, while I think we'd prefer to simply render all templates for a given set of values files, as in the proposed solution.
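
For illustration, a hand-rolled test can sidestep that limitation by normalizing dynamic fields before comparison. A minimal sketch, assuming certificate data is the only dynamic content (the fields the real test handles may differ):

```go
package kong_test

import "regexp"

// certDataRe is an assumed pattern for generated certificate/key material in
// rendered Secrets; the real test may normalize different fields.
var certDataRe = regexp.MustCompile(`(tls\.crt|tls\.key|ca\.crt): \S+`)

// scrubDynamic replaces generated values with a stable placeholder so that
// renders are comparable across test runs.
func scrubDynamic(manifest string) string {
	return certDataRe.ReplaceAllString(manifest, "$1: REDACTED")
}
```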

I also discovered we're not the first to hit these limitations: the helm-chartsnap tool was created for similar reasons (see its motivations). I tried it as well, and it almost hits the spot for us. It allows ignoring dynamically generated data, but this has to be defined for each values file separately, which in our case would again mean lots of duplication. I created an issue that could potentially bridge this gap for us (jlandowner/helm-chartsnap#30), but we'd either have to rely on the maintainer to implement it or contribute it ourselves.

I'd suggest going with the solution in this PR (it's not that much code in the end) and considering a switch to helm-chartsnap once the issue I created is resolved. WDYT?
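
For context, helm-chartsnap's per-values-file configuration (the duplication czeslavo mentions) looks roughly like the snippet below; the Secret name and JSON paths are illustrative assumptions:

```yaml
# Embedded in each ci/*-values.yaml used with helm-chartsnap (illustrative).
testSpec:
  desc: default values
  dynamicFields:
    - apiVersion: v1
      kind: Secret
      name: kong-example-cert
      jsonPath:
        - /data/tls.crt
        - /data/tls.key
```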

@pmalek (Member) commented Dec 18, 2023

That makes sense. Thanks for checking those @czeslavo! 🙇

In that case let's go with the approach proposed in this PR and see how it works for us in the long run.

Resolved (outdated) review threads on Makefile and charts/kong/tests/golden_test.go. The golden_test.go thread follows:
```go
// invoking this test (or using `make test.golden.update`).
func TestGolden(t *testing.T) {
	helm.AddRepo(t, &helm.Options{}, "bitnami", "https://charts.bitnami.com/bitnami")
	// Build the chart's dependencies up front so rendering has all subcharts available.
	_, err := helm.RunHelmCommandAndGetOutputE(t, &helm.Options{}, "dependencies", "build", "../", "--skip-refresh")
```
Review comment from a Member:

It would be nice to include a mention of why helm dependencies build is being called here.

Reply from the author (Contributor):

Yes, good point. Added a comment. 👍

@czeslavo czeslavo force-pushed the tests/introduce-golden branch 2 times, most recently from bff7684 to 44c13bd on December 19, 2023 09:19
@czeslavo czeslavo force-pushed the tests/introduce-golden branch from 44c13bd to fe6e2e7 on December 19, 2023 09:30
@czeslavo (Contributor, Author) commented Dec 21, 2023

In the meantime, our issue in helm-chartsnap (jlandowner/helm-chartsnap#30) was resolved. I posted an alternative PR that uses it instead of the Go-based test. PTAL: #978

@czeslavo (Contributor, Author) commented Jan 9, 2024

Closed in favor of #978.

@czeslavo czeslavo closed this Jan 9, 2024
@czeslavo czeslavo deleted the tests/introduce-golden branch February 3, 2025 12:06