how to monitor request cost-time with prometheus in golang webserver
I have a couple of URLs to access, and I want to monitor each request's cost time (latency) with Prometheus, but I don't know what kind of metric to use to collect the data. Any help?
This is the demo code:
package main

import (
	"fmt"
	"io/ioutil"
	"net/http"
	"time"

	"github.com/prometheus/client_golang/prometheus"
)

var (
	resTime = prometheus.NewSummaryVec(
		prometheus.SummaryOpts{
			Name: "response_time",
			Help: "cost time per request",
		},
		[]string{"costTime"},
	)
)

func main() {
	urls := []string{"http://www.google.com", "http://www.google.com"}
	for _, url := range urls {
		request(url)
	}
}

func request(url string) {
	startTime := time.Now()
	response, err := http.Get(url)
	if err != nil {
		// Check the error before touching the response; response is nil on failure.
		fmt.Println(err)
		return
	}
	defer response.Body.Close()
	if _, err := ioutil.ReadAll(response.Body); err != nil {
		fmt.Println(err)
	}
	costTime := time.Since(startTime)
	resTime.WithLabelValues(fmt.Sprintf("%d", costTime)).Observe(costTime.Seconds())
}
Answer 1
Score: 3
Prometheus recommends you use a histogram to store such things. It essentially counts requests based on which time "bucket" they fall into. There is an example of how to use histograms in the godoc.
I prefer histograms to the "summary" type, because they are easier to aggregate when you have many servers in play. If all you keep is the average / 99th-percentile time on each server, it is hard to know the global figures from that information alone.
Histograms keep running counts per bucket per server, so you can aggregate data across servers without significant loss.
A good rundown of those types is available on this page.