Timeout in goroutines and Http requests
Question
I am checking the status of a server. The server has a sleep of more than 15 seconds and I am checking for the timeout.
package main

import (
	"fmt"
	"net/http"
	"time"
)

var urls = []string{
	"http://site-centos-64:8080/examples/abc1.jsp",
}

type HttpResponse struct {
	url      string
	response *http.Response
	err      error
}

var ch = make(chan *HttpResponse, 100) // buffered
var count int

func asyncHttpGets(urls []string) []*HttpResponse {
	responses := []*HttpResponse{}
	count := 0
	timeout := make(chan bool, 100)
	for i := 0; i < 500; i++ {
		go func() {
			for _, url := range urls {
				resp, err := http.Get(url)
				count++
				go func() {
					time.Sleep(1 * time.Second)
					timeout <- true
				}()
				ch <- &HttpResponse{url, resp, err}
				if err != nil {
					return
				}
				resp.Body.Close()
			}
		}()
	}
	for {
		select {
		case r := <-ch:
			responses = append(responses, r)
			if count == 500 {
				return responses
			}
		case <-timeout:
			fmt.Println("Timed Out")
			if count == 500 {
				return responses
			}
		}
	}
	return responses
}

func main() {
	now := time.Now()
	results := asyncHttpGets(urls)
	for _, result := range results {
		fmt.Printf("%s status: %s\n", result.url, result.response.Status)
	}
	fmt.Println(time.Since(now))
}
But what is happening is that initially it prints "Timed Out", yet the last 150-200 requests show a "200 OK" status, which they should not. Also, when trying to do it 1,000 times, it shows "panic: runtime error: invalid memory address or nil pointer dereference".
Answer 1
Score: 2
You are doing the resp, err := http.Get(url) before you initiate the timeout goroutine. This will cause everything to block until the response is ready, then send on both channels simultaneously. Just move starting the timeout goroutine to the line before sending the request, and it will be fine, i.e.:
for _, url := range urls {
	go func() {
		time.Sleep(1 * time.Second)
		timeout <- true
		count++
	}()
	resp, err := http.Get(url)
	count++ // I think this is what you meant, right?
	ch <- &HttpResponse{url, resp, err}
	if err != nil {
		return
	}
	resp.Body.Close()
}
BTW, try to use atomic increments for the count, maybe use a WaitGroup, and a time.After channel instead of the sleep.
Answer 2
Score: 0
If you would like to avoid mixing concurrency logic with business logic, I wrote this library https://github.com/shomali11/parallelizer to help you with that. It encapsulates the concurrency logic so you do not have to worry about it.
So in your example:
package main

import (
	"fmt"
	"time"

	"github.com/shomali11/parallelizer"
)

func main() {
	urls := []string{ ... }
	results := make([]*HttpResponse, len(urls))

	options := &parallelizer.Options{Timeout: time.Second}
	group := parallelizer.NewGroup(options)
	for index, url := range urls {
		group.Add(func(index int, url string, results []*HttpResponse) func() {
			return func() {
				...
				results[index] = &HttpResponse{url, response, err}
			}
		}(index, url, results))
	}

	err := group.Run()

	fmt.Println("Done")
	fmt.Println(fmt.Sprintf("Results: %v", results))
	fmt.Printf("Error: %v", err) // nil if it completed, err if timed out
}