
Go web crawler gets stuck

Question


I'm new to Go and trying to implement a web crawler. It should asynchronously parse web pages and save their contents to files, one file per new page. But it gets stuck after I've added

    u, _ := url.Parse(uri)
    fileName := u.Host + u.RawQuery + ".html"
    body, err := ioutil.ReadAll(resp.Body)
    writes <- writer{fileName: fileName, body: body}

Can anyone help me fix this problem? Basically I want to get data from the response body, push it to the channel, and then get data from the channel and put it into a file.
It looks like the writes channel was not initialized, and sending on a nil channel blocks forever.
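For reference, a send on a nil channel really does block forever in Go. A minimal, self-contained sketch of that behavior (hypothetical code, not part of the crawler):

    package main

    func main() {
        var ch chan int // a nil channel: declared but never initialized with make
        ch <- 1         // sending on a nil channel blocks forever; with no other
                        // goroutines running, the runtime aborts with
                        // "all goroutines are asleep - deadlock!"
    }

For reference, the full program: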

    package main

    import (
        "crypto/tls"
        "flag"
        "fmt"
        "io/ioutil"
        "net/http"
        "net/url"
        "os"
        "runtime"

        "./linksCollector"
    )

    type writer struct {
        fileName string
        body     []byte
    }

    var writes = make(chan writer)

    func usage() {
        fmt.Fprintf(os.Stderr, "usage: crawl http://example.com/")
        flag.PrintDefaults()
        os.Exit(2)
    }

    func check(e error) {
        if e != nil {
            panic(e)
        }
    }

    func main() {
        runtime.GOMAXPROCS(8)
        flag.Usage = usage
        flag.Parse()
        args := flag.Args()
        fmt.Println(args)
        if len(args) < 1 {
            usage()
            fmt.Println("Please specify start page")
            os.Exit(1)
        }
        queue := make(chan string)
        filteredQueue := make(chan string)
        go func() { queue <- args[0] }()
        go filterQueue(queue, filteredQueue)
        for uri := range filteredQueue {
            go enqueue(uri, queue)
        }
        for {
            select {
            case data := <-writes:
                f, err := os.Create(data.fileName)
                check(err)
                defer f.Close()
                _, err = f.Write(data.body)
                check(err)
            }
        }
    }

    func filterQueue(in chan string, out chan string) {
        var seen = make(map[string]bool)
        for val := range in {
            if !seen[val] {
                seen[val] = true
                out <- val
            }
        }
    }

    func enqueue(uri string, queue chan string) {
        fmt.Println("fetching", uri)
        transport := &http.Transport{
            TLSClientConfig: &tls.Config{
                InsecureSkipVerify: true,
            },
        }
        client := http.Client{Transport: transport}
        resp, err := client.Get(uri)
        check(err)
        defer resp.Body.Close()
        u, _ := url.Parse(uri)
        fileName := u.Host + u.RawQuery + ".html"
        body, err := ioutil.ReadAll(resp.Body)
        writes <- writer{fileName: fileName, body: body}
        links := collectlinks.All(resp.Body)
        for _, link := range links {
            absolute := fixURL(link, uri)
            if uri != "" {
                go func() { queue <- absolute }()
            }
        }
    }

    func fixURL(href, base string) string {
        uri, err := url.Parse(href)
        if err != nil {
            return ""
        }
        baseURL, err := url.Parse(base)
        if err != nil {
            return ""
        }
        uri = baseURL.ResolveReference(uri)
        return uri.String()
    }

Answer 1

Score: 1


<strike>Your for loop ends up calling go enqueue more than once before the select receives the data causing the send to writes to crash the program, I think, I'm not really that familiar with Go's concurrency.</strike>

Update: I'm sorry for the previous answer; it was a poorly informed attempt at explaining something I have only limited knowledge about. After taking a closer look I am almost certain of two things. 1. Your writes channel is not nil; you can rely on make to initialize your channels. 2. A range loop over a channel blocks until that channel is closed. So your

    for uri := range filteredQueue {
        go enqueue(uri, queue)
    }

is blocking, so your program never reaches the select and is unable to receive from the writes channel. You can avoid this by executing the range loop in a new goroutine.

    go func() {
        for uri := range filteredQueue {
            go enqueue(uri, queue)
        }
    }()
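
As a side note on point 2 above, here is a minimal, hypothetical sketch showing that a range loop over a channel only terminates once the channel is closed:

    package main

    import "fmt"

    func main() {
        ch := make(chan int)
        go func() {
            for i := 0; i < 3; i++ {
                ch <- i
            }
            close(ch) // without this close, the range below would never exit
        }()
        for v := range ch {
            fmt.Println(v) // prints 0, 1, 2; the loop ends when ch is closed
        }
    }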

Your program, as is, will still break for other reasons, but you should be able to fix that with a little synchronization using a sync.WaitGroup.
Here's a simplified example: https://play.golang.org/p/o2Oj4g8c2y.

huangapple
  • Posted on 2017-03-29 18:29:34
  • When republishing, please keep the link to this article: https://go.coder-hub.com/43091005.html