Collect the Dropdown List from a Link Using Requests

Question


I have a link as below:

url = "https://nseindia.com/live_market/dynaContent/live_watch/option_chain/optionKeys.jsp?segmentLink=17&instrument=OPTIDX&symbol=BANKNIFTY&date=9JAN2020"

I want to collect all the Expiry Dates available, as shown in the image below:

[Screenshot: the Expiry Date dropdown on the NSE option chain page]

My Code:
import pandas as pd
from requests import Session
import os, time, sys
from datetime import datetime

s = Session()
headers = {'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64) '\
                         'AppleWebKit/537.36 (KHTML, like Gecko) '\
                         'Chrome/75.0.3770.80 Safari/537.36'}
# Add headers
s.headers.update(headers)

URL = 'https://www.nseindia.com/live_market/dynaContent/live_watch/option_chain/optionKeys.jsp'
params = {'symbolCode':9999,'symbol':'BANKNIFTY','instrument': '-','date': '9JAN2020','segmentLink': 17}
res = s.get(URL, params=params)

df1 = pd.read_html(res.content)[0]
df2 = pd.read_html(res.content)[1]

I am not able to get the values in either df1 or df2.

Answer 1

Score: 3


It needs only minimal knowledge of requests and BeautifulSoup or lxml:

import requests
import lxml.html

url = 'https://nseindia.com/live_market/dynaContent/live_watch/option_chain/optionKeys.jsp?segmentLink=17&instrument=OPTIDX&symbol=BANKNIFTY&date=9JAN2020'

r = requests.get(url)
soup = lxml.html.fromstring(r.text)

# Text of every <option> inside the form with id "ocForm" (the expiry-date dropdown)
items = soup.xpath('//form[@id="ocForm"]//option/text()')
print(items)

Result

[' Select ', '9JAN2020', '16JAN2020', '23JAN2020', '30JAN2020', '6FEB2020', '13FEB2020', '20FEB2020', '27FEB2020', '5MAR2020', '26MAR2020']
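
The answer mentions BeautifulSoup as an alternative to lxml; below is a minimal equivalent sketch, assuming the CSS selector form#ocForm option targets the same <option> elements as the XPath above:

import requests
from bs4 import BeautifulSoup

url = 'https://nseindia.com/live_market/dynaContent/live_watch/option_chain/optionKeys.jsp?segmentLink=17&instrument=OPTIDX&symbol=BANKNIFTY&date=9JAN2020'

r = requests.get(url)
soup = BeautifulSoup(r.text, 'html.parser')

# CSS selector assumed equivalent to the XPath //form[@id="ocForm"]//option/text()
items = [opt.get_text() for opt in soup.select('form#ocForm option')]
print(items)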

Answer 2

Score: 0

import pandas as pd
from requests import Session
import lxml.html

s = Session()
headers = {'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64) '\
                         'AppleWebKit/537.36 (KHTML, like Gecko) '\
                         'Chrome/75.0.3770.80 Safari/537.36'}
# Add headers
s.headers.update(headers)

URL = 'https://www.nseindia.com/live_market/dynaContent/live_watch/option_chain/optionKeys.jsp'
# Unlike the question's request, instrument is 'OPTIDX' and date is '-'
params = {'symbolCode': 9999, 'symbol': 'BANKNIFTY', 'instrument': 'OPTIDX', 'date': '-', 'segmentLink': 17}
res = s.get(URL, params=params)
soup = lxml.html.fromstring(res.text)

# Expiry dates from the <option> elements of the form with id "ocForm"
items = soup.xpath('//form[@id="ocForm"]//option/text()')
print(items)

# Row 0, column 1 of the first HTML table in the response
text = pd.read_html(res.content)[0].loc[0, 1]
print(text)
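
Once the expiry strings have been collected (by either answer), they can be converted to datetime objects; below is a minimal sketch, where the items list is only an illustrative sample of the scraped values:

from datetime import datetime

# Illustrative sample of the scraped dropdown values; the ' Select ' placeholder is skipped
items = [' Select ', '9JAN2020', '16JAN2020', '23JAN2020']

# '%d%b%Y' matches strings such as 9JAN2020 (month-name matching is case-insensitive)
expiry_dates = [datetime.strptime(d, '%d%b%Y') for d in items if d.strip() != 'Select']
print(expiry_dates)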
