scrape hidden pages if search yields more results than displayed

Question

Some of the search queries entered under https://www.comparis.ch/carfinder/default yield more than 1,000 results (the total is shown dynamically on the search page). The results, however, only show a maximum of 100 pages with 10 results each, so I'm trying to scrape the remaining data for a query that yields more than 1,000 results.
The code to scrape the IDs of the first 100 pages is below (it takes approx. 2 minutes to run through all 100 pages):

    from bs4 import BeautifulSoup
    import requests

    # as the max number of pages is limited to 100
    number_of_pages = 100
    # initiate empty dict
    car_dict = {}
    # parse every search results page and extract every car ID
    for page in range(0, number_of_pages + 1, 1):
        newest_secondhand_cars = 'https://www.comparis.ch/carfinder/marktplatz/occasion'
        newest_secondhand_cars = requests.get(newest_secondhand_cars + str('?page=') + str(page))
        newest_secondhand_cars = newest_secondhand_cars.content
        soup = BeautifulSoup(newest_secondhand_cars, "lxml")
        for car in list(soup.find('div', {'id': 'cf-result-list'}).find_all('h2')):
            car_id = int(car.decode().split('href="')[1].split('">')[0].split('/')[-1])
            car_dict[car_id] = {}

So I obviously tried just passing a str(page) greater than 100, which does not yield additional results.
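
For reference, a quick probe of a page past the visible limit, reusing the URL pattern and parsing logic from the snippet above, illustrates this behaviour (page 101 here is just an arbitrary example):

    import requests
    from bs4 import BeautifulSoup

    # Probe a page past the visible 100-page limit (101 is just an example)
    # and list whatever car IDs it exposes. Per the observation above, this
    # does not surface any results beyond those reachable via pages 0-100.
    url = 'https://www.comparis.ch/carfinder/marktplatz/occasion?page=101'
    soup = BeautifulSoup(requests.get(url).content, "lxml")
    result_list = soup.find('div', {'id': 'cf-result-list'})
    ids = []
    if result_list:
        for heading in result_list.find_all('h2'):
            link = heading.find('a')
            if link and link.get('href'):
                ids.append(link['href'].rstrip('/').split('/')[-1])
    print(ids)
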
How could I access the remaining results, if at all?

Answer 1

Score: 1

It seems that your website loads data while the client is browsing, so a plain requests call may not see all of the results. There are probably a number of ways to work around this. One option could be to utilize Scrapy Splash.

Assuming you use Scrapy, you can do the following:

1. Start a Splash server using Docker - make a note of the <ip-address>.

2. In settings.py add SPLASH_URL = <splash-server-ip-address>.

3. In settings.py add this code to the middlewares (a consolidated settings.py sketch follows after this list):

    DOWNLOADER_MIDDLEWARES = {
        'scrapy_splash.SplashCookiesMiddleware': 723,
        'scrapy_splash.SplashMiddleware': 725,
        'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware': 810,
    }

4. Import from scrapy_splash import SplashRequest in your spider.py.

5. Set start_urls in your spider.py to iterate over the pages, e.g. like this:

    base_url = 'https://www.comparis.ch/carfinder/marktplatz/occasion'
    start_urls = [
        base_url + '?page=' + str(page) for page in range(0, 100)
    ]

6. Redirect the URLs to the Splash server by modifying def start_requests(self):, e.g. like this:

    def start_requests(self):
        for url in self.start_urls:
            yield SplashRequest(url, self.parse,
                                endpoint='render.html',
                                args={'wait': 0.5},
                                )

7. Parse the response like you do now (a full spider.py sketch tying these steps together follows after this list).
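
For reference, steps 1 to 3 could be consolidated in settings.py roughly as sketched below. The Splash address assumes a locally running Docker container (started, for example, with: docker run -p 8050:8050 scrapinghub/splash), and the last few settings are the extras the scrapy-splash README recommends; treat this as a sketch under those assumptions rather than a definitive configuration:

    # settings.py (sketch) -- wiring Scrapy to a local Splash instance.
    # Assumes Splash was started locally with Docker and listens on port 8050,
    # e.g.: docker run -p 8050:8050 scrapinghub/splash
    SPLASH_URL = 'http://127.0.0.1:8050'  # replace with your <splash-server-ip-address>

    DOWNLOADER_MIDDLEWARES = {
        'scrapy_splash.SplashCookiesMiddleware': 723,
        'scrapy_splash.SplashMiddleware': 725,
        'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware': 810,
    }

    # Additional settings recommended by the scrapy-splash README:
    SPIDER_MIDDLEWARES = {
        'scrapy_splash.SplashDeduplicateArgsMiddleware': 100,
    }
    DUPEFILTER_CLASS = 'scrapy_splash.SplashAwareDupeFilter'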
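
A minimal spider.py sketch that ties steps 4 to 7 together might then look like the following. The spider name and allowed_domains are assumptions, and the parse method merely reuses the ID-extraction idea from the question's code on the Splash-rendered HTML:

    # spider.py (sketch) -- renders each results page through Splash and
    # collects car IDs, mirroring the extraction idea from the question.
    import scrapy
    from scrapy_splash import SplashRequest

    BASE_URL = 'https://www.comparis.ch/carfinder/marktplatz/occasion'


    class CarfinderSpider(scrapy.Spider):
        name = 'carfinder'                     # assumed spider name
        allowed_domains = ['www.comparis.ch']  # assumed
        start_urls = [BASE_URL + '?page=' + str(page) for page in range(0, 100)]

        def start_requests(self):
            # Route every page through the Splash render.html endpoint so that
            # dynamically loaded results are present in the response.
            for url in self.start_urls:
                yield SplashRequest(url, self.parse,
                                    endpoint='render.html',
                                    args={'wait': 0.5})

        def parse(self, response):
            # Each result heading contains a link whose last path segment is
            # the numeric car ID (the same structure the question's code uses).
            for href in response.css('#cf-result-list h2 a::attr(href)').getall():
                yield {'car_id': int(href.rstrip('/').split('/')[-1])}

With the assumed spider name, running scrapy crawl carfinder -o cars.json should then dump the collected IDs to a file.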

Let me know how that works out for you.
