Method-level execution time metrics in Golang?

Question

I am quite new to Go and I was wondering if there are any nice ways (like AOP in Java) to gather method-level execution time metrics in Go?

It would be best if such code were not placed inside the regular business-logic code.

I don't want to profile an app. I mean real, production-ready metrics that could be exported to Graphite etc., so I could monitor response-time histograms and such.

Answer 1

Score: 1

You could rewrite the source code using the Go parser (the go/parser and go/ast packages) and inject the instrumentation yourself. This is how godebug and the coverage tool work: https://github.com/mailgun/godebug.
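
A rough sketch of that idea might look like the following; timeTrack here is just a placeholder helper, and a real tool would also have to add the time import and write the instrumented file back out:

package main

import (
	"go/ast"
	"go/parser"
	"go/printer"
	"go/token"
	"os"
	"strconv"
)

const src = `package demo

func Work() {
	// business logic
}
`

func main() {
	fset := token.NewFileSet()
	file, err := parser.ParseFile(fset, "demo.go", src, parser.ParseComments)
	if err != nil {
		panic(err)
	}

	// Prepend `defer timeTrack(time.Now(), "<func name>")` to every function body.
	for _, decl := range file.Decls {
		fn, ok := decl.(*ast.FuncDecl)
		if !ok || fn.Body == nil {
			continue
		}
		deferStmt := &ast.DeferStmt{
			Call: &ast.CallExpr{
				Fun: ast.NewIdent("timeTrack"),
				Args: []ast.Expr{
					&ast.CallExpr{Fun: &ast.SelectorExpr{X: ast.NewIdent("time"), Sel: ast.NewIdent("Now")}},
					&ast.BasicLit{Kind: token.STRING, Value: strconv.Quote(fn.Name.Name)},
				},
			},
		}
		fn.Body.List = append([]ast.Stmt{deferStmt}, fn.Body.List...)
	}

	// Print the instrumented source; a real tool would write it to a build directory.
	printer.Fprint(os.Stdout, fset, file)
}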

Obviously that would be a lot of work, but I think your methodology is flawed anyway. Measuring everything means your program will spend far more time measuring than actually doing work. This is why profiling only samples.

It sounds like perhaps you're working on an HTTP project? You could easily instrument your code by wrapping all your http.Handlers or by using a framework like Negroni. Here's an example of someone doing something similar. Go also has the expvar package, which is sometimes useful for counters and the like.
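
For example, a plain http.Handler wrapper that feeds a couple of expvar counters might look roughly like this (the handler and metric names are only illustrative):

package main

import (
	"expvar"
	"fmt"
	"log"
	"net/http"
	"time"
)

// Published automatically as JSON under /debug/vars.
var (
	requestCount = expvar.NewInt("http_requests_total")
	totalMillis  = expvar.NewFloat("http_response_millis_total")
)

// timed wraps any http.Handler and records request count and total latency,
// keeping the measurement out of the business-logic handlers.
func timed(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		start := time.Now()
		next.ServeHTTP(w, r)
		requestCount.Add(1)
		totalMillis.Add(float64(time.Since(start)) / float64(time.Millisecond))
	})
}

func main() {
	mux := http.NewServeMux()
	mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintf(w, "Welcome to the home page!")
	})

	// expvar registers itself on http.DefaultServeMux; since we serve our own
	// mux here, mount its handler explicitly.
	mux.Handle("/debug/vars", expvar.Handler())

	log.Fatal(http.ListenAndServe(":3000", timed(mux)))
}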

Also worth considering is using a statsd client (which can get your data into Graphite). Here's one package which can do that: godspeed*. Calls are pretty easy:

package main

import (
	"fmt"
	"net/http"
	"time"

	"github.com/PagerDuty/godspeed"
	"github.com/codegangsta/negroni"
)

// statsdMiddleware times every request and sends the elapsed milliseconds to
// statsd (via godspeed) as a histogram tagged with the request path, keeping
// the measurement out of the business-logic handlers.
func statsdMiddleware(w http.ResponseWriter, r *http.Request, next http.HandlerFunc) {
	start := time.Now()
	next(w, r)
	elapsed := float64(time.Since(start)) / float64(time.Millisecond)

	g, err := godspeed.NewDefault()
	if err != nil {
		return
	}
	defer g.Conn.Close()

	g.Histogram("http.response.time_ms", elapsed, []string{"path:" + r.URL.Path})
}

func main() {
	mux := http.NewServeMux()
	mux.HandleFunc("/", func(w http.ResponseWriter, req *http.Request) {
		fmt.Fprintf(w, "Welcome to the home page!")
	})

	n := negroni.Classic()
	n.Use(negroni.HandlerFunc(statsdMiddleware))
	n.UseHandler(mux)
	n.Run(":3000")
}

This approach works for standard TCP servers as well. For pipelines that read from and write to queues, I usually measure how fast I'm reading data in and how fast I'm writing it out the other end, and then use back-pressure gauges for the intermediate steps, which helps me find out which step is causing a problem.
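
A rough sketch of what such a back-pressure gauge can look like for a channel-based pipeline: sample how full the buffered channels between stages are and report those numbers (in production they would go to statsd as gauges rather than the log):

package main

import (
	"log"
	"time"
)

func main() {
	// Toy two-stage pipeline: producer -> worker -> consumer.
	in := make(chan int, 100)
	out := make(chan int, 100)

	// Producer: pushes an item every millisecond.
	go func() {
		for i := 0; ; i++ {
			in <- i
			time.Sleep(time.Millisecond)
		}
	}()

	// Intermediate stage: deliberately slower than the producer, so `in` fills up.
	go func() {
		for v := range in {
			time.Sleep(2 * time.Millisecond)
			out <- v * 2
		}
	}()

	// Consumer: drains the output as fast as it arrives.
	go func() {
		for range out {
		}
	}()

	// Back-pressure gauges: how full each buffered channel is.
	for range time.Tick(time.Second) {
		log.Printf("backpressure in=%d/%d out=%d/%d", len(in), cap(in), len(out), cap(out))
	}
}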

Disclaimer: I work at DataDog
