Commit

Merge pull request #39124 from kubernetes/dev-1.27
Official 1.27 Release Docs
mickeyboxell authored Apr 11, 2023
2 parents 8057934 + 2e403eb commit 3397eba
Showing 143 changed files with 18,827 additions and 11,450 deletions.
19,962 changes: 11,717 additions & 8,245 deletions api-ref-assets/api/swagger.json

Large diffs are not rendered by default.

5 changes: 5 additions & 0 deletions api-ref-assets/config/fields.yaml
@@ -99,6 +99,7 @@
- initContainerStatuses
- containerStatuses
- ephemeralContainerStatuses
- resize

- definition: io.k8s.api.core.v1.Container
field_categories:
@@ -127,6 +128,7 @@
- name: Resources
fields:
- resources
- resizePolicy
- name: Lifecycle
fields:
- lifecycle
@@ -219,6 +221,9 @@
fields:
- volumeMounts
- volumeDevices
- name: Resources
fields:
- resizePolicy
- name: Lifecycle
fields:
- terminationMessagePath
19 changes: 14 additions & 5 deletions api-ref-assets/config/toc.yaml
@@ -66,18 +66,18 @@ parts:
- name: PriorityClass
group: scheduling.k8s.io
version: v1
- name: PodScheduling
- name: PodSchedulingContext
group: resource.k8s.io
version: v1alpha1
version: v1alpha2
- name: ResourceClaim
group: resource.k8s.io
version: v1alpha1
version: v1alpha2
- name: ResourceClaimTemplate
group: resource.k8s.io
version: v1alpha1
version: v1alpha2
- name: ResourceClass
group: resource.k8s.io
version: v1alpha1
version: v1alpha2
- name: Service Resources
chapters:
- name: Service
@@ -148,6 +148,12 @@ parts:
- name: CertificateSigningRequest
group: certificates.k8s.io
version: v1
- name: ClusterTrustBundle
group: certificates.k8s.io
version: v1alpha1
- name: SelfSubjectReview
group: authentication.k8s.io
version: v1beta1
- name: Authorization Resources
chapters:
- name: LocalSubjectAccessReview
@@ -191,6 +197,9 @@
- name: PodDisruptionBudget
group: policy
version: v1
- name: IPAddress
group: networking.k8s.io
version: v1alpha1
- name: Extend Resources
chapters:
- name: CustomResourceDefinition
1 change: 1 addition & 0 deletions content/en/blog/_posts/2022-05-13-grpc-probes-in-beta.md
@@ -7,6 +7,7 @@ slug: grpc-probes-now-in-beta

**Author**: Sergey Kanzhelev (Google)

_Update: Since this article was posted, the feature graduated to GA in v1.27 and doesn't require any feature gates to be enabled._

With Kubernetes 1.24 the gRPC probes functionality entered beta and is available by default.
Now you can configure startup, liveness, and readiness probes for your gRPC app
48 changes: 21 additions & 27 deletions content/en/docs/concepts/architecture/nodes.md
@@ -93,7 +93,15 @@ For self-registration, the kubelet is started with the following options:
{{< glossary_tooltip text="taints" term_id="taint" >}} (comma separated `<key>=<value>:<effect>`).

No-op if `register-node` is false.
- `--node-ip` - IP address of the node.
- `--node-ip` - Optional comma-separated list of the IP addresses for the node.
You can only specify a single address for each address family.
For example, in a single-stack IPv4 cluster, you set this value to be the IPv4 address that the
kubelet should use for the node.
See [configure IPv4/IPv6 dual stack](/docs/concepts/services-networking/dual-stack/#configure-ipv4-ipv6-dual-stack)
for details of running a dual-stack cluster.

If you don't provide this argument, the kubelet uses the node's default IPv4 address, if any;
if the node has no IPv4 addresses then the kubelet uses the node's default IPv6 address.
- `--node-labels` - {{< glossary_tooltip text="Labels" term_id="label" >}} to add when registering the node
in the cluster (see label restrictions enforced by the
[NodeRestriction admission plugin](/docs/reference/access-authn-authz/admission-controllers/#noderestriction)).
@@ -215,34 +223,20 @@ of the Node resource. For example, the following JSON structure describes a heal
]
```

If the `status` of the Ready condition remains `Unknown` or `False` for longer
than the `pod-eviction-timeout` (an argument passed to the
{{< glossary_tooltip text="kube-controller-manager" term_id="kube-controller-manager"
>}}), then the [node controller](#node-controller) triggers
{{< glossary_tooltip text="API-initiated eviction" term_id="api-eviction" >}}
for all Pods assigned to that node. The default eviction timeout duration is
**five minutes**.
In some cases when the node is unreachable, the API server is unable to communicate
with the kubelet on the node. The decision to delete the pods cannot be communicated to
the kubelet until communication with the API server is re-established. In the meantime,
the pods that are scheduled for deletion may continue to run on the partitioned node.

The node controller does not force delete pods until it is confirmed that they have stopped
running in the cluster. You can see the pods that might be running on an unreachable node as
being in the `Terminating` or `Unknown` state. In cases where Kubernetes cannot deduce from the
underlying infrastructure if a node has permanently left a cluster, the cluster administrator
may need to delete the node object by hand. Deleting the node object from Kubernetes causes
all the Pod objects running on the node to be deleted from the API server and frees up their
names.

When problems occur on nodes, the Kubernetes control plane automatically creates
[taints](/docs/concepts/scheduling-eviction/taint-and-toleration/) that match the conditions
affecting the node.
The scheduler takes the Node's taints into consideration when assigning a Pod to a Node.
Pods can also have {{< glossary_tooltip text="tolerations" term_id="toleration" >}} that let
them run on a Node even though it has a specific taint.

See [Taint Nodes by Condition](/docs/concepts/scheduling-eviction/taint-and-toleration/#taint-nodes-by-condition)
affecting the node. An example of this is when the `status` of the Ready condition
remains `Unknown` or `False` for longer than the kube-controller-manager's `NodeMonitorGracePeriod`,
which defaults to 40 seconds. This will cause either a `node.kubernetes.io/unreachable` taint, for an `Unknown` status,
or a `node.kubernetes.io/not-ready` taint, for a `False` status, to be added to the Node.

These taints affect pending pods as the scheduler takes the Node's taints into consideration when
assigning a pod to a Node. Existing pods scheduled to the node may be evicted due to the application
of `NoExecute` taints. Pods may also have {{< glossary_tooltip text="tolerations" term_id="toleration" >}} that let
them be scheduled onto, and continue running on, a Node even though it has a specific taint.

See [Taint Based Evictions](/docs/concepts/scheduling-eviction/taint-and-toleration/#taint-based-evictions) and
[Taint Nodes by Condition](/docs/concepts/scheduling-eviction/taint-and-toleration/#taint-nodes-by-condition)
for more details.
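
As an illustrative sketch (not taken from this changeset; the Pod name and image are placeholders), a Pod could tolerate the `node.kubernetes.io/unreachable` taint for a bounded time before being evicted:

```yaml
# Sketch: tolerate the node.kubernetes.io/unreachable NoExecute taint for up
# to two minutes before the Pod is evicted from the affected node.
apiVersion: v1
kind: Pod
metadata:
  name: toleration-demo   # placeholder name
spec:
  containers:
  - name: app
    image: registry.k8s.io/pause:3.9   # placeholder image
  tolerations:
  - key: "node.kubernetes.io/unreachable"
    operator: "Exists"
    effect: "NoExecute"
    tolerationSeconds: 120
```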

### Capacity and Allocatable {#capacity}
47 changes: 47 additions & 0 deletions content/en/docs/concepts/cluster-administration/system-logs.md
@@ -231,6 +231,53 @@ Similar to the container logs, you should rotate system component logs in the `/
In Kubernetes clusters created by the `kube-up.sh` script, log rotation is configured by the `logrotate` tool.
The `logrotate` tool rotates logs daily, or once the log size is greater than 100MB.

## Log query

{{< feature-state for_k8s_version="v1.27" state="alpha" >}}

To help with debugging issues on nodes, Kubernetes v1.27 introduced a feature that allows viewing logs of services
running on the node. To use the feature, ensure that the `NodeLogQuery`
[feature gate](/docs/reference/command-line-tools-reference/feature-gates/) is enabled for that node, and that the
kubelet configuration options `enableSystemLogHandler` and `enableSystemLogQuery` are both set to true. On Linux
we assume that service logs are available via journald. On Windows we assume that service logs are available
in the application log provider. On both operating systems, logs are also available by reading files within
`/var/log/`.
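
As a minimal sketch, assuming the `kubelet.config.k8s.io/v1beta1` configuration API, the prerequisites described above could be set in the kubelet configuration file like this:

```yaml
# Sketch: enable the NodeLogQuery alpha feature together with the log handler
# and log query options in the kubelet configuration.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  NodeLogQuery: true
enableSystemLogHandler: true
enableSystemLogQuery: true
```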

Provided you are authorized to interact with node objects, you can try out this alpha feature on all your nodes or
just a subset. Here is an example to retrieve the kubelet service logs from a node:
```shell
# Fetch kubelet logs from a node named node-1.example
kubectl get --raw "/api/v1/nodes/node-1.example/proxy/logs/?query=kubelet"
```

You can also fetch files, provided that the files are in a directory that the kubelet allows for log
fetches. For example, you can fetch a log from `/var/log` on a Linux node:
```shell
kubectl get --raw "/api/v1/nodes/<insert-node-name-here>/proxy/logs/?query=/<insert-log-file-name-here>"
```

The kubelet uses heuristics to retrieve logs. This helps if you are not aware whether a given system service is
writing logs to the operating system's native logger like journald or to a log file in `/var/log/`. The heuristic
first checks the native logger and, if that is not available, attempts to retrieve the first logs from
`/var/log/<servicename>`, `/var/log/<servicename>.log`, or `/var/log/<servicename>/<servicename>.log`.

The complete list of options that can be used is:

Option | Description
------ | -----------
`boot` | show messages from a specific system boot
`pattern` | filter log entries by the provided Perl-compatible regular expression
`query` | specify the service(s) or files from which to return logs (required)
`sinceTime` | an [RFC3339](https://www.rfc-editor.org/rfc/rfc3339) timestamp from which to show logs (inclusive)
`untilTime` | an [RFC3339](https://www.rfc-editor.org/rfc/rfc3339) timestamp until which to show logs (inclusive)
`tailLines` | specify how many lines from the end of the log to retrieve; the default is to fetch the whole log

Example of a more complex query:
```shell
# Fetch kubelet logs from a node named node-1.example that have the word "error"
kubectl get --raw "/api/v1/nodes/node-1.example/proxy/logs/?query=kubelet&pattern=error"
```

## {{% heading "whatsnext" %}}

* Read about the [Kubernetes Logging Architecture](/docs/concepts/cluster-administration/logging/)
Expand Down
32 changes: 21 additions & 11 deletions content/en/docs/concepts/cluster-administration/system-traces.md
@@ -9,7 +9,7 @@ weight: 90

<!-- overview -->

{{< feature-state for_k8s_version="v1.22" state="alpha" >}}
{{< feature-state for_k8s_version="v1.27" state="beta" >}}

System component traces record the latency of and relationships between operations in the cluster.

@@ -59,26 +59,24 @@ as the kube-apiserver is often a public endpoint.
#### Enabling tracing in the kube-apiserver
To enable tracing, enable the `APIServerTracing`
[feature gate](/docs/reference/command-line-tools-reference/feature-gates/)
on the kube-apiserver. Also, provide the kube-apiserver with a tracing configuration file
To enable tracing, provide the kube-apiserver with a tracing configuration file
with `--tracing-config-file=<path-to-config>`. This is an example config that records
spans for 1 in 10000 requests, and uses the default OpenTelemetry endpoint:

```yaml
apiVersion: apiserver.config.k8s.io/v1alpha1
apiVersion: apiserver.config.k8s.io/v1beta1
kind: TracingConfiguration
# default value
#endpoint: localhost:4317
samplingRatePerMillion: 100
```

For more information about the `TracingConfiguration` struct, see
[API server config API (v1alpha1)](/docs/reference/config-api/apiserver-config.v1alpha1/#apiserver-k8s-io-v1alpha1-TracingConfiguration).
[API server config API (v1beta1)](/docs/reference/config-api/apiserver-config.v1beta1/#apiserver-k8s-io-v1beta1-TracingConfiguration).

### kubelet traces

{{< feature-state for_k8s_version="v1.25" state="alpha" >}}
{{< feature-state for_k8s_version="v1.27" state="beta" >}}

The kubelet CRI interface and authenticated http servers are instrumented to generate
trace spans. As with the apiserver, the endpoint and sampling rate are configurable.
@@ -88,10 +86,7 @@ Enabled without a configured endpoint, the default OpenTelemetry Collector recei

#### Enabling tracing in the kubelet

To enable tracing, enable the `KubeletTracing`
[feature gate](/docs/reference/command-line-tools-reference/feature-gates/)
on the kubelet. Also, provide the kubelet with a
[tracing configuration](https://github.com/kubernetes/component-base/blob/release-1.25/tracing/api/v1/types.go).
To enable tracing, apply the [tracing configuration](https://github.com/kubernetes/component-base/blob/release-1.27/tracing/api/v1/types.go).
This is an example snippet of a kubelet config that records spans for 1 in 10000 requests, and uses the default OpenTelemetry endpoint:

```yaml
@@ -105,6 +100,21 @@ tracing:
samplingRatePerMillion: 100
```

If the `samplingRatePerMillion` is set to one million (`1000000`), then every
span will be sent to the exporter.
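
For example, a kubelet configuration fragment that samples every span (a sketch adapted from the snippet above) could look like:

```yaml
# Sketch: sample 100% of spans (1,000,000 spans per million).
tracing:
  samplingRatePerMillion: 1000000
```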

The kubelet in Kubernetes v{{< skew currentVersion >}} collects spans from
garbage collection, the pod synchronization routine, as well as every gRPC
method. Connected container runtimes like CRI-O and containerd can link the
traces to their exported spans to provide additional context.

Note that exporting spans always comes with a small performance overhead
on the networking and CPU side, depending on the overall configuration of the
system. If such an issue occurs in a cluster that is running with tracing
enabled, mitigate the problem by either reducing the `samplingRatePerMillion`
or by disabling tracing completely by removing the configuration.

## Stability

Tracing instrumentation is still under active development, and may change
42 changes: 42 additions & 0 deletions content/en/docs/concepts/containers/images.md
@@ -157,6 +157,48 @@ that Kubernetes will keep trying to pull the image, with an increasing back-off
Kubernetes raises the delay between each attempt until it reaches a compiled-in limit,
which is 300 seconds (5 minutes).

## Serial and parallel image pulls

By default, kubelet pulls images serially. In other words, kubelet sends only
one image pull request to the image service at a time. Other image pull requests
have to wait until the one being processed is complete.

Nodes make image pull decisions in isolation. Even when you use serialized image
pulls, two different nodes can pull the same image in parallel.

If you would like to enable parallel image pulls, you can set the field
`serializeImagePulls` to false in the [kubelet configuration](/docs/reference/config-api/kubelet-config.v1beta1/).
With `serializeImagePulls` set to false, image pull requests will be sent to the image service immediately,
and multiple images will be pulled at the same time.

When enabling parallel image pulls, please make sure the image service of your
container runtime can handle parallel image pulls.

The kubelet never pulls multiple images in parallel on behalf of one Pod. For example,
if you have a Pod that has an init container and an application container, the image
pulls for the two containers will not be parallelized. However, if you have two
Pods that use different images, the kubelet pulls the images in parallel on
behalf of the two different Pods, when parallel image pulls is enabled.
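
For instance, a kubelet configuration sketch, assuming the `kubelet.config.k8s.io/v1beta1` API linked above, that enables parallel image pulls could look like this:

```yaml
# Sketch: let the kubelet send multiple image pull requests to the container
# runtime's image service at the same time.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
serializeImagePulls: false
```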

### Maximum parallel image pulls

{{< feature-state for_k8s_version="v1.27" state="alpha" >}}

When `serializeImagePulls` is set to false, the kubelet defaults to no limit on the
maximum number of images being pulled at the same time. If you would like to
limit the number of parallel image pulls, you can set the field `maxParallelImagePulls`
in kubelet configuration. With `maxParallelImagePulls` set to _n_, only _n_ images
can be pulled at the same time, and any image pull beyond _n_ will have to wait
until at least one ongoing image pull is complete.

Limiting the number of parallel image pulls prevents image pulling from consuming
too much network bandwidth or disk I/O when parallel image pulling is enabled.

You can set `maxParallelImagePulls` to a positive number that is greater than or
equal to 1. If you set `maxParallelImagePulls` to be greater than or equal to 2, you
must set `serializeImagePulls` to false. The kubelet will fail to start with invalid
`maxParallelImagePulls` settings.
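
Putting the two fields together, a configuration sketch that caps a node at five concurrent image pulls could look like:

```yaml
# Sketch: parallel image pulls, limited to at most 5 in-flight pulls per node.
# serializeImagePulls must be false when maxParallelImagePulls is 2 or more.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
serializeImagePulls: false
maxParallelImagePulls: 5
```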

## Multi-architecture images with image indexes

As well as providing binary images, a container registry can also serve a