Golang API giving higher response time with increasing number of concurrent users

Question


I am having some problems with concurrent HTTP connections in Go. Kindly read the whole question; since the actual code is quite long, I am using pseudocode.

In short, I have to create a single API that internally calls 5 other APIs, unifies their responses, and sends them back as a single response. I am using goroutines (with a timeout) to call those 5 internal APIs, and channels to ensure that every goroutine has completed; then I unify their responses and return the result.

Things go fine when I test locally: my response time is around 300 ms, which is pretty good.

The problem arises when I run a Locust load test with 200 users: my response time goes as high as 7-8 seconds. I suspect it has to do with the HTTP client waiting for resources, since we are running a high number of goroutines.

For example, one API call spins up 5 goroutines, so if each of 200 users makes requests at, say, 5 req/sec, the total number of goroutines grows very quickly. Again, this is only my assumption.

> P.S. Normally the APIs I build have pretty good response times; I am using caching and so on, and any response greater than 400 ms should not occur.
>
> So can anyone please tell me how I can tackle this problem of increasing response time as the number of concurrent users increases?


Locust test report:
(screenshot: response times climbing as the number of concurrent users increases)


Pseudocode:

Simple route:

group.POST("/test", controller.testHandler)

Controller:

type Worker struct {
    NumWorker int
    Data      chan structures.Placement
}

e := Worker{
    NumWorker: 5,                                  // number of worker goroutines
    Data:      make(chan structures.Placement, 5), // buffer size
}

// spin up the worker goroutines
for i := 0; i < e.NumWorker; i++ {
    wg.Add(1)
    go ad.GetResponses(params, resChan, &wg) // makes the HTTP call and sends the response on the channel
}

for v := range resChan {
    // unify all the responses and return them as our response
    switch v.Type {
    case A:
        finalResponse.A = v
    case B:
        finalResponse.B = v
    }
}

return finalResponse



Request HTTP client:

// I am using a global HTTP client with a custom transport so that I can use resources effectively
var client *http.Client

func init() {
	tr := &http.Transport{
		MaxIdleConns:        100,
		MaxConnsPerHost:     100,
		MaxIdleConnsPerHost: 100,
		TLSHandshakeTimeout: 0 * time.Second, // 0 means no handshake timeout
	}

	client = &http.Client{Transport: tr, Timeout: 10 * time.Second}
}

func GetResponses(params Params, resChan chan *http.Response, wg *sync.WaitGroup) {
	defer wg.Done()
	res, err := client.Do(req)
	if err != nil {
		return
	}
	resChan <- res
}

Answer 1

Score: 1


So I did some debugging and span monitoring, and it turns out Redis was the culprit. You can see the details here: https://stackoverflow.com/a/70902382/9928176

That should give an idea of how I solved it.

huangapple
  • Posted on 2022-01-20 01:48:30
  • When republishing, please keep this link: https://go.coder-hub.com/70775416.html