multi thread requesting in go and not getting high RPS

Question

I'm trying to write a multi-threaded client to test my server. When I use 2 goroutines everything is fine: I get 50k RPS and my CPU load is normal. But when I create more than 2 goroutines, the RPS drops to 3k while my CPU load rises. However, when I run the client code multiple times (for example, the same code in 3 consoles at the same time), I get a higher total, around 80k RPS.
Here is my client-side code:
package main

import (
    "fmt"
    "net/http"
    "os"
    "sync"
    "time"
)

func main() {
    requestURL := fmt.Sprintf("http://localhost:%d/home", 3333)
    var wg sync.WaitGroup
    wg.Add(4)
    req, err := http.NewRequest(http.MethodGet, requestURL, nil)
    if err != nil {
        fmt.Printf("client: could not create request: %s\n", err)
        os.Exit(1)
    }
    for i := 0; i < 4; i++ {
        go func() {
            defer wg.Done()
            client := http.Client{
                Timeout: 30 * time.Second,
            }
            for {
                client.Do(req)
            }
        }()
    }
    wg.Wait()
}
And here is my server-side code:
package main

import (
    "errors"
    "fmt"
    "log"
    "net/http"
    "os"
    "sync"

    "github.com/prometheus/client_golang/prometheus"
    "github.com/prometheus/client_golang/prometheus/promhttp"
)

// log handling
func openLogFile(path string) (*os.File, error) {
    logFile, err := os.OpenFile(path, os.O_WRONLY|os.O_APPEND|os.O_CREATE, 0644)
    if err != nil {
        return nil, err
    }
    return logFile, nil
}

// metric counter variable
var okStatusCounter = prometheus.NewCounter(
    prometheus.CounterOpts{
        Name: "ok_request_count",
        Help: "Number of 200",
    },
)

func listener(serverLog *log.Logger) http.HandlerFunc {
    return func(w http.ResponseWriter, r *http.Request) {
        // count the metric
        okStatusCounter.Inc()
        w.WriteHeader(http.StatusOK)
    }
}

func main() {
    // register the metric
    prometheus.MustRegister(okStatusCounter)
    // log handling
    fileSimpleServerLog, err := openLogFile("simpleServer/simpleServerLog.log")
    if err != nil {
        log.Fatal(err)
    }
    serverLog := log.New(fileSimpleServerLog, "[simple server]", log.LstdFlags|log.Lshortfile|log.Lmicroseconds)
    var wg sync.WaitGroup
    wg.Add(1)
    // server:
    go func() {
        defer wg.Done()
        mux := http.NewServeMux()
        mux.HandleFunc("/home", listener(serverLog))
        mux.Handle("/metrics", promhttp.Handler())
        server := http.Server{
            Addr:    fmt.Sprintf(":%d", 3333),
            Handler: mux,
        }
        if err := server.ListenAndServe(); err != nil {
            if !errors.Is(err, http.ErrServerClosed) {
                serverLog.Printf("error running http server: %s\n", err)
            }
        }
    }()
    wg.Wait()
}
At first I thought Go might use a single port for all client connections, but when I checked with netstat it was using multiple ports. I tried searching but couldn't find a proper answer.

I tried using sync.Mutex:
var mu sync.Mutex
...
for i := 0; i < 1000; i++ {
    go func() {
        defer wg.Done()
        client := http.Client{
            //Timeout: 30 * time.Second,
        }
        for {
            mu.Lock()
            _, err := client.Do(req)
            if err != nil {
                clientLog.Printf("client: error making http request: %s\n", err)
                os.Exit(1)
            }
            mu.Unlock()
        }
    }()
}
wg.Wait()
...
With the change above I get 13k RPS and my CPU load is normal, but that's still nowhere near enough.
Answer 1

Score: 1

Since you send requests to only one host, the default values of the HTTP transport are not suitable for you. It is better to set the parameters manually in your case:
t := http.DefaultTransport.(*http.Transport).Clone()
t.MaxIdleConns = 100
t.MaxConnsPerHost = 100
t.MaxIdleConnsPerHost = 100
httpClient := &http.Client{
    Timeout:   10 * time.Second,
    Transport: t,
}
For more information, you can read here.