Lens Metrics show wrong values for CPU, RAM #7299
Comments
Do you have multiple Prometheus installations on this cluster?
I have only one Prometheus, installed via kube-prometheus-stack.
I have a similar problem with the pod memory metric. At the pod level I see double the real value, while at the container level I see the right value. (The pod has just one container.) See the attached screenshot. I also have a Prometheus stack installed with kube-prometheus-stack. Lens version: 2023.3.71735-latest. It looks like a regression in the latest version of Lens, because with an older version of Lens I see the correct value.
@jweak my Prometheus is bitnami/kube-prometheus, chartVersion 8.3.12, appVersion 0.63.0.
+1
These are the values being summed together (see the related code):
The problem may have been introduced in this commit.
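One plausible mechanism for the doubled pod values, consistent with the reports above: cAdvisor typically exposes a pod-level aggregate series (with an empty `container` label) alongside the per-container series, so a query that sums every matching series counts the memory twice. The sketch below uses made-up series data to illustrate the arithmetic; it is not Lens's actual code or query.

```python
# Illustrative cAdvisor-style series for one pod: a pod-level
# aggregate (container == "") plus one series per container.
# Values are in GiB and are invented for this example.
series = [
    {"container": "", "value": 6.0},        # pod-level aggregate
    {"container": "app", "value": 5.5},     # real container
    {"container": "sidecar", "value": 0.5}, # real container
]

# Summing every series counts the aggregate on top of the
# containers, doubling the reported usage.
naive = sum(s["value"] for s in series)

# Roughly the effect of adding container!="" to the selector:
# only per-container series are summed.
filtered = sum(s["value"] for s in series if s["container"] != "")

print(naive)     # 12.0 -> the doubled value users report
print(filtered)  # 6.0  -> the pod's actual memory
```

If this is the cause, the fix is in the query rather than the data: exclude the aggregate series before summing.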
@centromere Thanks for this. I will add a way to specify which version to use. |
+1 - I am seeing double the CPU and memory requests for containers compared to what is actually requested.
Hey guys, I'm unable to reproduce this issue with https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-prometheus-stack installed. Could you share a bit more about your setup? What kube version are you using? Which version of the kube-prometheus-stack are you using? Does this bug happen with all deployments? Are you all seeing both node and pod metrics doubled? No need to post anything sensitive, but some examples would go a long way toward fixing this, thanks!
Hello, pod memory is doubled.
Same problem here!
Alright, I think there are two different issues: doubled pod metrics, which come from the commit referenced above, and doubled node metrics, which are something different. We will probably add a setting that lets you change the query, as described in PR #7777. Why the node metrics double is still unclear; that query has not changed in a while.
Any progress on this? It's been broken for quite a while.
Not sure if it is relevant, but I'm seeing an issue where the requests for Succeeded pods (e.g. from Jobs) are being included in the node-level metrics. This means the node-level graphs always overstate the current usage.
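If completed pods really are being counted, the overstatement described above can be reproduced with a toy aggregation: pods in the Succeeded (or Failed) phase no longer hold their requested resources, so they should be excluded before summing per-node requests. The data and field names below are invented for illustration; this is not the actual Lens query.

```python
# Toy per-pod CPU requests (in cores) on one node, with pod phase.
pods = [
    {"name": "web-1", "phase": "Running", "cpu_request": 0.5},
    {"name": "web-2", "phase": "Running", "cpu_request": 0.5},
    {"name": "backup-job-abc", "phase": "Succeeded", "cpu_request": 1.0},
]

# Summing everything overstates node requests by the finished
# Job pod's request.
all_requests = sum(p["cpu_request"] for p in pods)

# Only pods that still hold resources should count.
active_requests = sum(
    p["cpu_request"] for p in pods
    if p["phase"] not in ("Succeeded", "Failed")
)

print(all_requests)     # 2.0 -> overstated
print(active_requests)  # 1.0 -> what the node actually has reserved
```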
Maybe if I use metrics-server and kube-state-metrics together, it will double?
I'm using Lens 2023.9.290703-latest with kube-prometheus-stack version 51.2.0 installed in the "monitoring" namespace. I'm only having problems with the node graphs. The value doesn't seem to be exactly doubled, which doesn't make much sense to me. All the nodes show the wrong memory value in the list graph, but when I click into a node, the value there seems right. I've removed everything from namespaces that had other Prometheus installations, and everything seems to work perfectly except the memory graphs on the nodes page. Strangely, my other staging cluster with the same settings is working.
That issue may occur when you have more than one kube-prometheus kubelet in the kube-system namespace. You can see all services in kube-system by running
Anyone know why this issue is marked as closed? I'm still seeing the same on my Lens installation (version 2024.3.70925-latest) with a Desktop Pro subscription. In fact, it seems that when I close and reopen Lens, everything has doubled again... one of my pods is now shown as using 90 GB of memory (it's actually using 6-9). Would be good to know if anything is being worked on for this issue. If not, I'm going to have to cancel that subscription :-/ and I really don't want to. Lens has all the info I need, and it's presented REALLY nicely. The big problem is that it just, uhh, doesn't work...
Describe the bug
I use an m6i.large instance (AWS); it is in the general-purpose family with 2 vCPUs and 8.0 GiB RAM, but in Lens I see double that (4 vCPUs, 15.265 GiB RAM).
Screenshots
Environment (please complete the following information):