Fix pod metrics in Lens for Prometheus: they display an incorrect value that is 2x bigger than the actual value #7679
Comments
Hm, after testing I actually found that the reason for the multiplied CPU and RAM usage on a pod, compared to the sum of its containers, is not related to Thanos usage and query params, because from what I tested:
So this is a bug. I will try to debug tomorrow, query by query, to find what is wrong.
I found the issue at
Need to add @Nokel81 sorry for bothering you, but can you please check this? Thank you in advance.
I think we need to add a toggle for that, because we have added and then removed such a filter several times.
@Nokel81 you mean that previously there was a filter like
P.S.: in general, adding the option to set custom query params would be a nice feature, even if not as critical as I initially thought when creating this issue 😊
Hi. I'm observing doubled metric plots for pods too. And I found out that container!="" won't help in my case, as the cause of my duplication is two datasets with different service labels in the Prometheus output. Those services are "kubelet" and "prometheus-kube-prometheus-kubelet".
I think this is an issue with your jobs in Prometheus: you are collecting the same metrics twice. You need to properly set up your Prometheus stack.
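To see whether a metric is really being scraped by two services, as in the comment above, a quick PromQL check can group the series by their service label. This is only a sketch: the metric name is the standard cAdvisor one exposed via the kubelet, and the pod selector is a placeholder.

```promql
# Count how many services expose the same CPU series for one pod;
# more than one result row means the metric is scraped twice.
count by (service) (
  container_cpu_usage_seconds_total{pod="my-pod"}
)
```

If this returns rows for both "kubelet" and "prometheus-kube-prometheus-kubelet", the duplication comes from the scrape configuration, not from Lens.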
Same as #7679 |
Looking forward to #7777
I can confirm that if I run minikube with the docker driver, the metric
It depends on the CRI runtime: if the metric
@Nokel81 that could be the reason why some users have had trouble with this historically. #7777 would be the best solution for all.
If you do not have the container label, having a PromQL expression with {container!=""} will not break your query...
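A sketch of the filter being discussed, applied to pod CPU usage. The metric name is the standard cAdvisor one; the pod selector is a placeholder:

```promql
# Excluding container="" drops the pod-level cgroup aggregate series,
# so the per-container values are not counted a second time.
sum by (pod) (
  rate(container_cpu_usage_seconds_total{pod="my-pod", container!=""}[5m])
)
```

As noted above, on setups where the container label is absent the selector simply matches nothing extra, so the filter is safe to include unconditionally.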
After the latest Lens update (v2023.5.310801), per-container metrics are fully broken 99% of the time :( all of CPU/RAM/Filesystem say:
Just for other people who struggle with the latest version's issues: I downgraded to OpenLens 6.4.15 to get stable monitoring tabs in the Node view, Pod view, etc. None of the newer versions function properly. It still has the issue described here, but at least other things are not broken.
Any updates on this issue?
@dragoangel I have the feeling that Lens will now be developed closed source. I would not expect anything here.
Yeah, you are totally right: the latest version of Lens is 2023.10.181418 and there are no releases here, which is bad :(. And this version still has the same issues as were reported:
Try checking your Kubernetes services; I think there are duplicated services in the kube-system namespace, such as kubelet.
What would you like to be added:
Could you add an option that will allow passing custom query parameters for metrics requests?
Why is this needed:
This is required for configuring aspects of monitoring, for example passing a timeout parameter to Prometheus. Also, not having the option to set query parameters when Lens is pointed at solutions like Thanos or HA Prometheus leads to metrics being displayed incorrectly: Lens will show duplicated data from the HA pair, e.g. pod CPU and RAM usage will be multiplied by the number of replicas. To display the data correctly and not fail on partial_response,
dedup=1&partial_response=1
would help, but unfortunately Lens does not accept query parameters in PROMETHEUS SERVICE ADDRESS and does not have a separate field to add them.
Environment you are Lens application on:
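As an illustration of what passing those parameters to the Thanos query API looks like, here is a minimal Python sketch. The base URL and PromQL expression are placeholders; dedup and partial_response are the Thanos Query API flags mentioned above.

```python
from urllib.parse import urlencode

def build_thanos_query_url(base_url: str, promql: str) -> str:
    """Build a Thanos /api/v1/query URL with HA-dedup parameters."""
    params = {
        "query": promql,
        "dedup": "1",              # merge series from HA Prometheus replicas
        "partial_response": "1",   # don't fail the query if one store is down
    }
    return f"{base_url}/api/v1/query?{urlencode(params)}"

# Example with a placeholder service address:
url = build_thanos_query_url(
    "http://thanos-query.monitoring:9090",
    'sum(rate(container_cpu_usage_seconds_total{container!=""}[5m]))',
)
print(url)
```

This is exactly the kind of URL Lens would need to construct internally; today there is no field where these extra parameters can be supplied.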