micrometer exposing actuator metrics vs kube-state-metrics vs metrics-server to set pod request/limits

Question

micrometer exposing actuator metrics to set request/limits for pods in K8s vs metrics-server vs kube-state-metrics -> K8s Mixin from the kube-prometheus-stack Grafana dashboard
It is really blurry and frustrating to me why there is such a big difference between the values from the three sources in the title, how one should use the K8s Mixin to set proper requests/limits, and whether such a difference is expected at all.
I was hoping I could simply see the same data when I type kubectl top podname --containers as what I see when I open the K8s -> ComputeResources -> Pods dashboard in Grafana. But not only do the values differ by more than double, the values reported by the actuator differ from both.
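
One way to see where the numbers drift apart is to query Prometheus directly for the series the mixin memory panels are typically built on (container_memory_working_set_bytes, which is also the working set that the kubelet and metrics-server report) and put it next to kubectl top. A minimal sketch, assuming a pod named myapp-0 in namespace default and a Prometheus reachable on localhost:9090 through a port-forward; all names are placeholders:

```sh
# Port-forward the Prometheus instance from the kube-prometheus-stack release
# (service name may differ per setup; prometheus-operated is the operator default).
kubectl -n monitoring port-forward svc/prometheus-operated 9090:9090 &

# What metrics-server / kubelet report: the container working set, rounded to Mi.
kubectl top pod myapp-0 --containers

# What cAdvisor reports to Prometheus. On older clusters the labels may be
# pod_name/container_name instead of pod/container.
curl -sG 'http://localhost:9090/api/v1/query' \
  --data-urlencode 'query=sum(container_memory_working_set_bytes{namespace="default", pod="myapp-0", container!="", container!="POD"}) by (container)'

# RSS excludes page cache, so it is usually lower than the working set; this
# alone explains part of the gap between panels that plot different series.
curl -sG 'http://localhost:9090/api/v1/query' \
  --data-urlencode 'query=sum(container_memory_rss{namespace="default", pod="myapp-0", container!="", container!="POD"}) by (container)'
```

If kubectl top and the working-set query roughly agree but the dashboard does not, the panel is probably plotting a different series (for example container_memory_usage_bytes, which includes page cache) or aggregating over duplicate/stale kubelet targets.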
When exposing Spring data with Micrometer, the sum of jvm_memory_used_bytes corresponds more closely to what I get from metrics-server (0.37.0) than to what I see in Grafana on the mixin dashboards, but it is still far off.
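
It can also help to break the Micrometer figure down before comparing it with container-level numbers: jvm_memory_used_bytes only covers the JVM's heap and non-heap pools, while the container working set additionally contains thread stacks, direct buffers, other native allocations and page cache, so it is expected to be lower. A sketch under the same placeholder pod and port-forward assumptions as above, and assuming the Spring Boot Prometheus actuator endpoint is exposed on port 8080:

```sh
# Raw Micrometer output straight from the pod (assumes curl exists in the image
# and that management.endpoints.web.exposure.include contains "prometheus").
kubectl exec myapp-0 -- curl -s http://localhost:8080/actuator/prometheus \
  | grep '^jvm_memory_used_bytes'

# The same figure from Prometheus, split into heap and non-heap. The pod label
# assumes the ServiceMonitor/PodMonitor scraping the app attaches one.
curl -sG 'http://localhost:9090/api/v1/query' \
  --data-urlencode 'query=sum(jvm_memory_used_bytes{pod="myapp-0"}) by (area)'
```

The difference between that sum and container_memory_working_set_bytes is roughly the JVM's native footprint plus page cache, which is why the actuator number tends to sit below the other two.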
I am using K8s 1.14.3 on Ubuntu 18.04 LTS managed by kubespray.
kube-prometheus-stack 9.4.4 installed with Helm 2.14.3.
Spring Boot 2.0 with Micrometer. I saw the explanation in the metrics-server repo that this is the value the kubelet uses for OOMKill decisions, but again that does not help me at all: what should I do with the dashboard? What is the way to handle this?
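
As for acting on the numbers: since the working set is what the kubelet bases its OOM/eviction decisions on, a common approach is to size the memory request around the observed steady-state working set and leave headroom for spikes in the limit. The sketch below uses a hypothetical deployment name and made-up values; the quantile window and selector need adjusting per workload:

```sh
# A high percentile of the working set over a longer window is a reasonable
# starting point for the memory request.
curl -sG 'http://localhost:9090/api/v1/query' \
  --data-urlencode 'query=quantile_over_time(0.95, container_memory_working_set_bytes{namespace="default", pod=~"myapp-.*", container="myapp"}[7d])'

# Apply the chosen values (names and sizes are placeholders, not recommendations).
kubectl set resources deployment myapp \
  --requests=cpu=250m,memory=512Mi \
  --limits=cpu=1,memory=768Mi
```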

Answer 1

Score: 1

Based on what I have seen so far, I found the root cause: the kubelet service from the old chart had to be renamed to the new one so that it can be targeted by the ServiceMonitors. So for me the best solution is Grafana with kube-state-metrics, compared against what I see in the JVM dashboard.
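
For anyone hitting the same symptom, a quick way to confirm that this is the problem is to check that the kubelet Service managed by the chart/operator carries labels matching the kubelet ServiceMonitor's selector, and that the kubelet scrape targets are actually up; without them the cAdvisor series behind the mixin dashboards never reach Prometheus. Object names below depend on the Helm release name, so treat them as placeholders:

```sh
# ServiceMonitors in the monitoring namespace; find the kubelet one and its selector.
kubectl -n monitoring get servicemonitors | grep -i kubelet
kubectl -n monitoring get servicemonitors -o yaml | grep -B2 -A8 -i kubelet

# The kubelet Service lives in kube-system; its labels have to match that selector.
kubectl -n kube-system get svc --show-labels | grep -i kubelet

# List the scrape jobs Prometheus actually has (assumes the earlier port-forward);
# a kubelet job with healthy targets should be among them.
curl -s 'http://localhost:9090/api/v1/targets' | grep -o '"job":"[^"]*"' | sort | uniq -c
```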
