Suggestion: Add a `nodepool` or `agentpool` label to `ephemeral_storage_node` metrics #131
Hi @NoamVH, I appreciate you breaking these issues up. I see your issue here, but I'm wary about adding extra labels for performance reasons. That said, I am open to adding a node filter by label. What do you think?
Adding a node filter by label feels like the right idea, but the environment may indeed pose a challenge: my Kubernetes environment is based on Azure, and when running your command I can see these labels (among others) on each node:
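For illustration, the set looks roughly like this (values are made up; the exact labels vary by cluster and Kubernetes version):

```yaml
# Illustrative labels on an AKS node (values are placeholders)
agentpool: nodepool1
kubernetes.azure.com/agentpool: nodepool1
kubernetes.io/hostname: aks-nodepool1-12345678-vmss000000
kubernetes.io/os: linux
node.kubernetes.io/instance-type: Standard_D4s_v3
topology.kubernetes.io/region: westeurope
topology.kubernetes.io/zone: westeurope-1
```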
According to Kubernetes' documentation, it seems like all of the labels you wrote above are the ones the kubelet populates. From GCP's documentation, the label that is applied is `cloud.google.com/gke-nodepool`. I can't seem to find any documentation regarding this in AWS, but it's probably the same idea. So the best solution I can think of right now (which is maybe what all the other exporters do) is to simply take all of the node labels and export them as part of the metrics. ...Unless that's what you meant from the start?
Oh, so that is not what I meant. I wanted to limit the number of nodes queried for ephemeral metrics by filtering on k8s labels (similar to a node selector), using the Prometheus label in the chart's values. If that doesn't work and you need all the nodes, it seems possible to hydrate the metrics with additional labels with one of these options: https://github.com/jmcgrath207/k8s-ephemeral-storage-metrics/blob/master/chart/values.yaml#L11-L19 If this solution ends up working for you, I would be interested to know so I can add it to the docs. Thanks!
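As a rough sketch of the relabeling route, a generic Prometheus `metric_relabel_configs` block (assumed plain-Prometheus syntax, not necessarily the exact keys this chart exposes) could keep one nodepool's nodes and stamp a static label on them:

```yaml
# Sketch only: generic Prometheus scrape config, illustrative names
metric_relabel_configs:
  # Keep only series coming from one AKS nodepool's nodes
  - source_labels: [node_name]
    regex: "aks-nodepool1-.*"
    action: keep
  # Stamp a static nodepool label on the remaining series
  - source_labels: [node_name]
    regex: "aks-nodepool1-.*"
    target_label: nodepool
    replacement: "nodepool1"
```

Note that the pool name is hard-coded per rule, so this scales poorly across many pools.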
That unfortunately won't help, since what I want is to know where each metric came from, so I'll be able to do all sorts of tricks with the data, such as seeing how much of a nodepool's space one service or another takes, separating nodepools to see how services spread across my clusters, or creating tables that combine this data with other metrics outside of this exporter's scope. Regarding your suggestion, I will try looking into it later this week or at the start of next week, and update accordingly. Thank you.
Hello, unfortunately using relabeling isn't useful in this case, since there's no good way of dynamically getting the nodepool's name from the node's name with relabeling/regex manipulation (at least as far as I've seen).
I have an interesting update: the reason it's been taking me so long to get to this is that I've been working on migrating my whole metrics environment from plain Prometheus to Prometheus + Prometheus Operator. After looking at this issue again today (hence my previous comment), I remembered that I dealt for a while with a similar problem that occurred in another exporter. After spending some time looking for an elegant, "Prometheus-Operator-ish" solution, I found this configuration in both exporters:
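It looks roughly like this (a sketch assuming the Prometheus Operator's ServiceMonitor `attachMetadata` feature; resource names and chart keys here are illustrative):

```yaml
# Sketch: attach node metadata so node labels are available at relabel time
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: k8s-ephemeral-storage-metrics   # illustrative name
spec:
  attachMetadata:
    node: true   # exposes __meta_kubernetes_node_label_* to relabeling
  endpoints:
    - port: metrics
      relabelings:
        # Copy the node's agentpool label onto every scraped series
        - sourceLabels: [__meta_kubernetes_node_label_agentpool]
          targetLabel: agentpool
  selector:
    matchLabels:
      app.kubernetes.io/name: k8s-ephemeral-storage-metrics
```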
This would allow me to get Azure's `agentpool` label.
This is a great solution, since it allows simply fetching all of the labels from the nodes, and users can decide whether they're willing to do that and how to deal with the labels (though this sort of comes back to my suggestion from two months ago). Is it possible to implement something like that here? @jmcgrath207
Please keep the number of labels on each metric minimal. There is always the option to join in more labels from other sources/exporters and their metrics; see https://github.com/kubernetes/community/blob/afcc37186a32cc92b24e99e27089d10ae48ed0cf/contributors/devel/instrumentation.md#normalization or https://www.robustperception.io/exposing-the-software-version-to-prometheus/ I just opened issue #146 about aligning the identifying labels; this enables even easier joining, removing the need to do any relabeling / `label_replace` tricks.
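For example, a query-time join along these lines can attach a pool label without the exporter ever emitting it (a sketch; it assumes kube-state-metrics runs with `--metric-labels-allowlist=nodes=[agentpool]`, and the metric/label names are illustrative):

```yaml
# Sketch: recording rule joining this exporter's node metric with
# node labels exported by kube-state-metrics
groups:
  - name: ephemeral-storage-joins
    rules:
      - record: agentpool:ephemeral_storage_node_available:avg
        expr: |
          avg by (label_agentpool) (
            ephemeral_storage_node_available
            * on (node_name) group_left (label_agentpool)
            label_replace(kube_node_labels, "node_name", "$1", "node", "(.*)")
          )
```

Since `kube_node_labels` has a constant value of 1, the multiplication only grafts the `label_agentpool` label onto the storage series.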
I recently started using this exporter in my environment; however, I have a few issues and ideas with it. I will post them in separate issues for your convenience.

The first one is that when using any of the `ephemeral_storage_node` metrics, I only get `node_name` as a label, without `agentpool` or `nodepool`, so I can't easily differentiate between Kubernetes nodepools, use the `by` iterator in Grafana (`avg by`, `sort by`, etc.), or combine the metric with other metrics that relate to nodes.

Therefore, I think a `nodepool` or `agentpool` label (preferably `agentpool`) would be great.