Golang: strategies to prevent connection reset by peer errors


Question

The program spawns many goroutines (getStock) simultaneously, which I believe is causing the remote server to drop the connections immediately. I am not trying to mount a DoS attack, but I still want to fetch the data aggressively without getting 'connection reset' errors.

What are some strategies to allow at most N (e.g. 20) simultaneous connections? Is there a built-in queue for GET requests in the Go HTTP client? I'm still learning Go, so it would be great to know whether there is a better design pattern for this kind of code.

Output

$ go run s1w.go 
sl(size): 1280
body: "AAPL",17.92
body: "GOOG",32.13
body: "FB",42.02
body: "AMZN",195.83
body: "GOOG",32.13
body: "AMZN",195.83
body: "GOOG",32.13
body: "FB",42.02
body: "AAPL",17.92
2017/07/26 00:01:23 NFLX: Get http://goanuj.freeshell.org/go/NFLX.txt: read tcp 192.168.86.28:56674->205.166.94.30:80: read: connection reset by peer
2017/07/26 00:01:23 AAPL: Get http://goanuj.freeshell.org/go/AAPL.txt: read tcp 192.168.86.28:56574->205.166.94.30:80: read: connection reset by peer
2017/07/26 00:01:23 NFLX: Get http://goanuj.freeshell.org/go/NFLX.txt: read tcp 192.168.86.28:56760->205.166.94.30:80: read: connection reset by peer
2017/07/26 00:01:23 FB: Get http://goanuj.freeshell.org/go/FB.txt: read tcp 192.168.86.28:56688->205.166.94.30:80: read: connection reset by peer
2017/07/26 00:01:23 AMZN: Get http://goanuj.freeshell.org/go/AMZN.txt: read tcp 192.168.86.28:56689->205.166.94.30:80: read: connection reset by peer
2017/07/26 00:01:23 AAPL: Get http://goanuj.freeshell.org/go/AAPL.txt: read tcp 192.168.86.28:56702->205.166.94.30:80: read: connection reset by peer

s1.go

package main

import (
        "fmt"
        "io/ioutil"
        "log"
        "net/http"
        "time"
)

// https://www.youtube.com/watch?v=f6kdp27TYZs (15m)
// Generator: function that returns a channel
func getStocks(sl []string) <-chan string {
        c := make(chan string)
        for _, s := range sl {
                go getStock(s, c)
        }
        return c
}

func getStock(s string, c chan string) {
        resp, err := http.Get("http://goanuj.freeshell.org/go/" + s + ".txt")
        if err != nil {
        log.Printf("%s: %s", s, err.Error())
                c <- err.Error() // channel send
                return
        }
        body, _ := ioutil.ReadAll(resp.Body)
        resp.Body.Close() // close ASAP to prevent too many open file descriptors
        val := string(body)
        //fmt.Printf("body: %s", val)
        c <- val // channel send
        return
}

func main() {
        start := time.Now()
        var sl = []string{"AAPL", "AMZN", "GOOG", "FB", "NFLX"}
        // double the slice 8 times: 5 * 2^8 = 1280 elements
        for i := 0; i < 8; i++ {
                sl = append(sl, sl...)
        }
        fmt.Printf("sl(size): %d\n", len(sl))

        // get channel that returns only strings
        c := getStocks(sl)
        for i := 0; i < len(sl); i++ {
                fmt.Printf("%s", <-c) // channel recv
        }

        fmt.Printf("main: %.2fs elapsed.\n", time.Since(start).Seconds())
}

Answer 1

Score: 2

Instead of spinning up a new goroutine for every request, create a fixed pool of goroutines when your program starts and pass orders in over a shared channel. Each order would be a struct corresponding to the parameters currently passed to getStock. Things get more complicated if you need to be able to kill the pool, but it still isn't that hard...

Basically, your new handler would be a loop that reads an order from the channel shared by all handlers, executes it, and then sends the result on the order's response channel.

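A minimal sketch of that pattern, written as a drop-in replacement for getStocks and getStock in s1.go above (the order struct, the worker function, and the pool size of 20 are illustrative choices, not taken from the answer; main and the imports stay the same):

// order carries the parameters currently passed to getStock, plus the
// channel on which the worker should send back the result.
type order struct {
        symbol string
        result chan string
}

// worker loops over the shared orders channel, executes each order,
// and sends the result on that order's response channel.
func worker(orders <-chan order) {
        for o := range orders {
                resp, err := http.Get("http://goanuj.freeshell.org/go/" + o.symbol + ".txt")
                if err != nil {
                        o.result <- err.Error()
                        continue
                }
                body, _ := ioutil.ReadAll(resp.Body)
                resp.Body.Close()
                o.result <- string(body)
        }
}

func getStocks(sl []string) <-chan string {
        orders := make(chan order)
        c := make(chan string)

        // fixed pool created up front: at most 20 requests are in flight
        for i := 0; i < 20; i++ {
                go worker(orders)
        }

        // feed the orders from a separate goroutine so getStocks can
        // return the result channel immediately
        go func() {
                for _, s := range sl {
                        orders <- order{symbol: s, result: c}
                }
                close(orders) // workers exit their range loops when all orders are done
        }()
        return c
}

Because every order in this sketch shares the same result channel, the receive loop in main keeps working unchanged; if you need per-request replies instead, give each order its own result channel.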

Answer 2

Score: 0

You need to use a buffered channel to limit the number of parallel operations. Before starting a new goroutine in the loop, send a value into this channel, and receive from it once the call is done; that frees up a slot so the next request can start. Check out the modified version of your code on the playground.

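A minimal sketch of that idea, again as a replacement for getStocks in s1.go (the sem name, the struct{} element type, and the limit of 20 are assumptions for illustration; the answer's actual playground code may differ). The loop is wrapped in its own goroutine so that getStocks can still return the result channel immediately:

func getStocks(sl []string) <-chan string {
        c := make(chan string)
        sem := make(chan struct{}, 20) // buffered channel used as a counting semaphore
        go func() {
                for _, s := range sl {
                        sem <- struct{}{} // blocks once 20 requests are in flight
                        go func(s string) {
                                getStock(s, c)
                                <-sem // release the slot when this request is done
                        }(s)
                }
        }()
        return c
}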
