golang http request error panic recover

Question
I am fairly new to coding in golang and am struggling with the panic/recover process for a bad url request. Below is a script which queries a list of URLs and outputs the responses. Occasionally a bad URL is entered or a server is down, the HTTP request fails, and that causes a panic. I am not clear on how to recover from this and continue. I want the program to recover from the panic, record the bad URL and the error, and continue down the list of URLs, outputting the failed URL and error alongside the rest of the normal URL response data.
package main

import (
	"fmt"
	"net/http"
)

var urls = []string{
	"http://www.google.com",        //good url, 200
	"http://www.googlegoogle.com/", //bad url
	"http://www.zoogle.com",        //500 example
}

//CONCURRENT HTTP REQUESTS -------------------------------------------
func MakeRequest(url string, ch chan<- string) {
	resp, err := http.Get(url)
	if err != nil {
		fmt.Println("Error Triggered", err)
		ch <- fmt.Sprintf("err: %s", err)
	}
	ch <- fmt.Sprintf("url: %s, status: %s ", url, resp.Status) // put response into a channel
	resp.Body.Close()
}

func main() {
	output := make([][]string, 0) //define an array to hold responses

	//PANIC RECOVER------------------------------
	defer func() { //catch or finally
		if r := recover(); r != nil { //catch
			fmt.Println("Recover Triggered: ", r)
		}
	}()

	//MAKE URL REQUESTS----------------------------------------------
	for _, url := range urls {
		ch := make(chan string)                 //create a channel for each request
		go MakeRequest(url, ch)                 //make concurrent http request
		output = append(output, []string{<-ch}) //append output to an array
	}

	//PRINT OUTPUT ----------------------
	for _, value := range output {
		fmt.Println(value)
	}
}
I am looking for an output similar to:

[url: http://www.google.com, status: 200 OK ] [url: http://www.googlegoogle.com, err: no such host] [url: http://www.zoogle.com, status: 500 Internal Server Error ]

Answer 1

Score: 3
Thanks Jim B. I assumed the panic was triggered by the request, but it was actually the attempt to use resp.Status for a failed request, since resp doesn't exist in that case. I modified my error handling to only put resp.Status into ch if there is no error. In the case of an error, I substitute a different response into ch carrying the error value. No need to recover, since no panic is triggered.
func MakeRequest(url string, ch chan<- string) {
	resp, err := http.Get(url)
	if err != nil {
		ch <- fmt.Sprintf("url: %s, err: %s ", url, err)
	} else {
		ch <- fmt.Sprintf("url: %s, status: %s ", url, resp.Status) // put response into a channel
		defer resp.Body.Close()
	}
}
Output is now:
[url: http://www.google.com, status: 200 OK ] [url: http://www.googlegoogle.com/, err: Get http://www.googlegoogle.com/: dial tcp: lookup www.googlegoogle.com: no such host ] [url: http://www.zoogle.com, status: 500 Internal Server Error ]

Answer 2

Score: 0
The only place I would (and do) place recovers is in a "fault barrier".

A "fault barrier" is the highest available place to centrally catch problems. It is generally the place where a new goroutine gets spawned (e.g. per http accept). In a ServeHTTP method, you may want to catch and log individual panics without restarting the server (usually such panics are trivial nil pointer dereferences). You might also see recovers in places where you cannot otherwise check for a condition you need to know about, like whether a file handle is already closed (just close it and handle a possible panic).

I have a large code base that uses recover only 2 or 3 times. They are only for the cases mentioned, and I only do it to ensure that the problem is specifically logged. I would do a recover just to log the message even if I was still going to os.Exit and have the script that launched me restart me.