Scrapy Crawl only first 5 pages of the site

Question

I am working on the following problem: my boss wants me to create a CrawlSpider in Scrapy that scrapes article details such as the title and description, and paginates through only the first 5 pages.

I created a CrawlSpider, but it paginates through all the pages. How can I restrict the CrawlSpider to paginate through only the latest 5 pages?

This is the markup of the site's article listing page, which opens when we click the pagination "Next" link:

Listing page markup:

<div class="list">
    <div class="snippet-content">
        <h2>
            <a href="https://example.com/article-1">Article 1</a>
        </h2>
    </div>
    <div class="snippet-content">
        <h2>
            <a href="https://example.com/article-2">Article 2</a>
        </h2>
    </div>
    <div class="snippet-content">
        <h2>
            <a href="https://example.com/article-3">Article 3</a>
        </h2>
    </div>
    <div class="snippet-content">
        <h2>
            <a href="https://example.com/article-4">Article 4</a>
        </h2>
    </div>
</div>
<ul class="pagination">
    <li class="next">
        <a href="https://www.example.com?page=2&keywords=&from=&topic=&year=&type="> Next </a>
    </li>
</ul>

For this, I am using a Rule object with the restrict_xpaths argument to get all the article links; when a link is followed, the parse_item method is executed, which gets the article title and description from the meta tags.

Rule(LinkExtractor(restrict_xpaths='//div[contains(@class, "snippet-content")]/h2/a'), callback="parse_item",
     follow=True)

Detail page markup:

<meta property="og:title" content="Article Title">
<meta property="og:description" content="Lorem Ipsum is simply dummy text of the printing and typesetting industry. Lorem Ipsum has been the industry's standard dummy text ever since the 1500s, when an unknown printer took a galley of type and scrambled it to make a type specimen book. It has survived not only five centuries, but also the leap into electronic typesetting, remaining essentially unchanged. It was popularised in the 1960s with the release of Letraset sheets containing Lorem Ipsum passages, and more recently with desktop publishing software like Aldus PageMaker including versions of Lorem Ipsum.">

After this, I added another Rule object to handle pagination. The CrawlSpider uses the "Next" link to open the next listing page and repeats the same procedure again and again.

Rule(LinkExtractor(restrict_xpaths='//ul[@class="pagination"]/li[@class="next"]/a'))

This is my CrawlSpider code:

from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule
import w3lib.html


class ExampleSpider(CrawlSpider):
    name = "example"
    allowed_domains = ["example.com"]
    start_urls = ["https://www.example.com/"]
    custom_settings = {
        'FEED_URI': 'articles.json',
        'FEED_FORMAT': 'json'
    }
    total = 0

    rules = (
        # Get the list of all articles on the one page and follow these links
        Rule(LinkExtractor(restrict_xpaths='//div[contains(@class, "snippet-content")]/h2/a'), callback="parse_item",
             follow=True),
        # After that get pagination next link get href and follow it, repeat the cycle
        Rule(LinkExtractor(restrict_xpaths='//ul[@class="pagination"]/li[@class="next"]/a'))
    )

    def parse_item(self, response):
        self.total = self.total + 1
        title = response.xpath('//meta[@property="og:title"]/@content').get() or ""
        description = w3lib.html.remove_tags(response.xpath('//meta[@property="og:description"]/@content').get()) or ""
        return {
            'id': self.total,
            'title': title,
            'description': description
        }

Is there a way we can restrict the crawler to crawl only the first 5 pages?

Answer 1

Score: 2

Solution 1: use the process_request argument of the pagination Rule.

from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor


def limit_requests(request, response):
    # here we have the page number.
    # page_number = request.url[-1]
    # if int(page_number) >= 6:
    #     return None

    # here we use a counter
    if not hasattr(limit_requests, "page_number"):
        limit_requests.page_number = 0
    limit_requests.page_number += 1

    if limit_requests.page_number >= 5:
        return None
    return request


class ExampleSpider(CrawlSpider):
    name = 'example_spider'
    start_urls = ['https://scrapingclub.com/exercise/list_basic/']
    page = 0

    rules = (
        # Get the list of all articles on the one page and follow these links
        Rule(LinkExtractor(restrict_xpaths='//div[@class="card-body"]/h4/a'), callback="parse_item",
             follow=True),
        # After that get pagination next link get href and follow it, repeat the cycle
        Rule(LinkExtractor(restrict_xpaths='//li[@class="page-item"][last()]/a'), process_request=limit_requests)
    )
    total = 0

    def parse_item(self, response):
        title = response.xpath('//h3//text()').get(default='')
        price = response.xpath('//div[@class="card-body"]/h4//text()').get(default='')
        self.total = self.total + 1
        return {
            'id': self.total,
            'title': title,
            'price': price
        }
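
As the commented-out lines in limit_requests suggest, the page number can also be read from the request URL itself. For the asker's site, where the pagination links look like https://www.example.com?page=2&keywords=..., parsing the query string is more robust than taking the last character of the URL. A minimal sketch of that variant, assuming the parameter is named page as in the listing markup above:

from urllib.parse import urlparse, parse_qs

def limit_requests(request, response):
    # Read the "page" query parameter; treat a missing parameter as page 1.
    query = parse_qs(urlparse(request.url).query)
    page_number = int(query.get("page", ["1"])[0])
    # Drop any pagination request that would go beyond page 5.
    if page_number > 5:
        return None
    return request

The function can then be passed as process_request on the pagination Rule, exactly as in Solution 1.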

Solution 2: override the _requests_to_follow method (likely slower, though).

from scrapy.http import HtmlResponse
from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor


class ExampleSpider(CrawlSpider):
    name = 'example_spider'
    start_urls = ['https://scrapingclub.com/exercise/list_basic/']

    rules = (
        # Get the list of all articles on the one page and follow these links
        Rule(LinkExtractor(restrict_xpaths='//div[@class="card-body"]/h4/a'), callback="parse_item",
             follow=True),
        # After that get pagination next link get href and follow it, repeat the cycle
        Rule(LinkExtractor(restrict_xpaths='//li[@class="page-item"][last()]/a'))
    )
    total = 0
    page = 0

    def _requests_to_follow(self, response):
        if not isinstance(response, HtmlResponse):
            return
        if self.page >= 5:  # stopping condition
            return
        seen = set()
        for rule_index, rule in enumerate(self._rules):
            links = [
                lnk
                for lnk in rule.link_extractor.extract_links(response)
                if lnk not in seen
            ]
            for link in rule.process_links(links):
                if rule_index == 1:  # assuming there's only one "next" button
                    self.page += 1
                seen.add(link)
                request = self._build_request(rule_index, link)
                yield rule.process_request(request, response)

    def parse_item(self, response):
        title = response.xpath('//h3//text()').get(default='')
        price = response.xpath('//div[@class="card-body"]/h4//text()').get(default='')
        self.total = self.total + 1
        return {
            'id': self.total,
            'title': title,
            'price': price
        }

The solutions are pretty much self-explanatory; if you want me to add something, please ask in the comments.
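
A related but much blunter option is Scrapy's built-in CLOSESPIDER_PAGECOUNT setting (part of the CloseSpider extension), which closes the spider after a fixed number of downloaded responses. It counts every response, article pages included, so it is not a drop-in replacement for the 5-listing-page limit above, but it can work as a safety cap on top of either solution. A minimal sketch, with the spider name and the cap value chosen purely for illustration:

from scrapy.spiders import CrawlSpider

class CappedSpider(CrawlSpider):
    name = 'capped_spider'  # hypothetical name, for illustration only
    custom_settings = {
        # Close the spider once 60 responses of any kind have been downloaded.
        # Note: article pages count too, so this is not the same as "5 listing pages".
        'CLOSESPIDER_PAGECOUNT': 60,
    }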
