Release v0.6.3 #2231

Closed
26 tasks done
tenzen-y opened this issue May 20, 2024 · 11 comments
@tenzen-y (Member) commented May 20, 2024

Release Checklist

  • OWNERS must LGTM the release proposal.
    At least two for minor or major releases. At least one for a patch release.
  • Verify that the changelog in this issue and in the CHANGELOG folder are up-to-date
  • For major or minor releases (v$MAJ.$MIN.0), create a new release branch
    (these and the following commands are collected in the sketch after this checklist).
    • An OWNER creates a vanilla release branch with
      git branch release-$MAJ.$MIN main
    • An OWNER pushes the new release branch with
      git push upstream release-$MAJ.$MIN
  • Update the release branch:
    • Update RELEASE_BRANCH and RELEASE_VERSION in Makefile and run make prepare-release-branch
    • Update the CHANGELOG
    • Submit a pull request with the changes: Prepare release v0.6.3 #2292
  • An OWNER prepares a draft release
    • Write the change log into the draft release.
    • Run
      make artifacts IMAGE_REGISTRY=registry.k8s.io/kueue GIT_TAG=$VERSION
      to generate the artifacts and upload the files in the artifacts folder
      to the draft release.
  • An OWNER creates a signed tag running
    git tag -s $VERSION
    and inserts the changelog into the tag description.
    To perform this step, you need a PGP key registered on GitHub.
  • An OWNER pushes the tag with
    git push upstream $VERSION
    • Triggers prow to build and publish a staging container image
      gcr.io/k8s-staging-kueue/kueue:$VERSION
  • Submit a PR against k8s.io,
    updating registry.k8s.io/images/k8s-staging-kueue/images.yaml to
    promote the container images
    to production: Promote Kueue v0.6.3 kubernetes/k8s.io#6846
  • Wait for the PR to be merged and verify that the image registry.k8s.io/kueue/kueue:$VERSION is available.
  • Publish the draft release prepared at the GitHub releases page.
    Link: https://github.com/kubernetes-sigs/kueue/releases/tag/v0.6.3
  • Run the openvex action to generate OpenVEX data. The action will add the file to the release artifacts.
  • Run the SBOM action to generate the SBOM and add it to the release.
  • For major or minor releases, merge the main branch into the website branch to publish the updated documentation.
  • Send an announcement email to [email protected] and [email protected] with the subject [ANNOUNCE] kueue $VERSION is released.
  • Update the following files with the respective values in the main branch:
    • Latest version in README.md
    • Release notes in the CHANGELOG
    • version in site/config.toml
    • appVersion in charts/kueue/Chart.yaml
    • last-updated, last-reviewed, commit-hash, project-release, and distribution-points in SECURITY-INSIGHTS.yaml
  • For a major or minor release, prepare the repo for the next version:
    • Create an unannotated devel tag in the
      main branch, on the first commit that gets merged after the release
      branch has been created (presumably the README update commit above), and push the tag:
      DEVEL=v0.$(($MIN+1)).0-devel; git tag $DEVEL main && git push upstream $DEVEL
      This ensures that the devel builds on the main branch will have a meaningful version number.
    • Create a milestone for the next minor release and update prow to set it automatically for new PRs.
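
For reference, a minimal end-to-end sketch of the commands above, assuming the remote is named `upstream` and substituting this release's values (`MAJ=0`, `MIN=6`, `VERSION=v0.6.3`); treat the remote name and concrete values as assumptions to adjust for your checkout:

```bash
# Hypothetical consolidated walkthrough of the commands in the checklist above.
# The remote name "upstream" and the concrete values below are assumptions.
MAJ=0; MIN=6; VERSION=v0.6.3

# Create and push the release branch (major/minor releases only).
git branch release-$MAJ.$MIN main
git push upstream release-$MAJ.$MIN

# Generate the release artifacts; upload the files in the artifacts folder
# to the draft release.
make artifacts IMAGE_REGISTRY=registry.k8s.io/kueue GIT_TAG=$VERSION

# Create the signed tag (requires a PGP key registered on GitHub) and push it;
# paste the changelog into the tag description in the editor that opens.
git tag -s $VERSION
git push upstream $VERSION

# After the release, create the devel tag for the next minor version on main.
DEVEL=v0.$(($MIN+1)).0-devel
git tag $DEVEL main && git push upstream $DEVEL
```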

Changelog

Changes since `v0.6.2`:

### Feature

- Improve the kubectl output for workloads using admission checks. (#2014, @vladikkuzn)

### Bug or Regression

- Change the default pprof port to 8083 to fix a bug that causes conflicting listening ports between pprof and the visibility server. (#2232, @amy)
- Check the containers' limits for used resources in the provisioning admission check controller and include them in the ProvisioningRequest as requests (#2293, @trasc)
- Consider deleted pods without `spec.nodeName` inactive and subject to pod replacement. (#2217, @trasc)
- Fix a bug that caused a reactivated Workload to be immediately deactivated even though it didn't exceed the backoffLimit. (#2220, @tenzen-y)
- Fix a bug where `.waitForPodsReady.requeuingStrategy.backoffLimitCount` is ignored when `.waitForPodsReady.requeuingStrategy.timestamp` is not set. (#2224, @tenzen-y)
- Fix chart values configuration for the number of reconcilers for the Pod integration. (#2050, @alculquicondor)
- Fix handling of eviction in StrictFIFO to ensure the evicted workload is at the head of the queue.
  Previously, in the case of priority-based preemption, it was possible that a lower-priority
  workload would get admitted while the higher-priority workload was being evicted. (#2081, @mimowo)
- Fix the preemption algorithm to reduce the number of preemptions within a ClusterQueue when reclamation is not possible and when using `.preemption.borrowWithinCohort` (#2111, @alculquicondor)
- Fix support for MPIJobs when using a ProvisioningRequest engine that applies updates only to worker templates. (#2281, @trasc)
- Fix support for jobset v0.5.x (#2271, @alculquicondor)
- Fix the resource requests computation taking into account sidecar containers. (#2159, @IrvingMg)
- Helm chart: Fix a bug where Kueue does not work with cert-manager. (#2098, @EladDolev)
- Helm chart: Fix a bug where `integrations.podOptions.namespaceSelector` is not propagated. (#2095, @EladDolev)
- JobFramework: The eviction by inactivation mechanism was moved to the workload controller.
  
  This fixes a problem where pod groups would remain with condition QuotaReserved set to True when replacement pods are missing. (#2229, @mbobrovskyi)
- Make the defaults for the PodsReadyTimeout backoff more practical: with the original values,
  the first few requeues appeared immediate to users (below 10s, which is negligible
  compared to the time spent waiting for PodsReady).
  
  The default values for the formula that determines the exponential backoff change as follows:
  - base: `1s -> 10s`
  - exponent: `1.41284738 -> 2`
  
  So the consecutive delays to requeue a workload are now 10s, 20s, 40s, ...
  (see the sketch after this list). (#2033, @mimowo)
- MultiKueue: Fix a bug that could delay a cluster joining when its MultiKueueCluster is created. (#2167, @trasc)
- Prevent Pods from being deleted when admitted via a ProvisioningRequest that applies pod updates to tolerations (#2262, @vladikkuzn)
- Use PATCH updates for pods. This fixes support for Pods when using the latest features in Kubernetes v1.29 (#2089, @mbobrovskyi)
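
A minimal sketch of the new PodsReadyTimeout backoff defaults from the entry above, assuming the delay after n previous requeues is `base * exponent^n`; the formula itself is inferred from the listed values, not stated in the entry:

```bash
# Hypothetical illustration: delay before the (n+1)-th requeue with the new
# defaults. delay = base * exponent^n is an assumption inferred from the
# 10s, 20s, 40s, ... sequence in the changelog entry above.
base=10; exponent=2
for n in 0 1 2 3; do
  echo "requeue $((n + 1)): $((base * exponent ** n))s"
done
# Output: requeue 1: 10s / requeue 2: 20s / requeue 3: 40s / requeue 4: 80s
```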

### Other (Cleanup or Flake)

- Correctly log workload status for workloads with quota reserved but awaiting admission checks. (#2080, @mimowo)
@mimowo (Contributor) commented May 20, 2024

I'm thinking about including #2216 as a nice-to-have.

@tenzen-y (Member, Author)

> I'm thinking about including #2216 as a nice-to-have.

Isn't that not a bug fix? Basically, we only cherry-pick bug fixes into patch releases, right?

@mimowo (Contributor) commented May 20, 2024

Ah, sure!

@alculquicondor (Contributor)

I added #2271 and #2260 to the list

@alculquicondor (Contributor)

I updated the release notes.
I think we can leave #2227 for the next patch release.

I'm validating something for #2278, but if it works, we can include it in the release.

@tenzen-y (Member, Author)

> I think we can leave #2227 for the next patch release.

I agree with you. We can say #2227 is an enhancement request to support elastic JobSet.

> I'm validating something for #2278, but if it works, we can include it in the release.

Thank you for sharing your progress. So, once #2278 is resolved, we can cut the patch release.

@alculquicondor (Contributor)

It doesn't work; I'll leave #2278 for the next patch release.

@alculquicondor (Contributor)

I ended up including #2278

@tenzen-y (Member, Author)

I sent an announcement email to sig-scheduling and wg-batch google groups.
/close

@k8s-ci-robot (Contributor)

@tenzen-y: Closing this issue.

In response to this:

> I sent an announcement email to sig-scheduling and wg-batch google groups.
> /close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
