python - How do I make requests with cookies in Scrapy?
Problem description
A plain Scrapy request to Xueqiu just errors out. I know you have to visit xueqiu.com once first, because the real link only opens with cookie information. Scrapy supposedly handles cookies for you and fetches them automatically. Following this link, http://stackoverflow.com/ques..., I have already enabled cookies in the middleware, but I still get a 404. I've been searching for days without finding an answer. Frustrating. Could someone share some simple sample code showing how to access it? Thanks.
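For context, the cookie handling mentioned above is controlled from settings.py. A minimal sketch using standard Scrapy settings (the User-Agent string here is just an example value):

```python
# settings.py
# Cookie handling is on by default, but being explicit helps when debugging.
COOKIES_ENABLED = True
# Log every cookie sent and received -- useful when chasing a 404 like this one.
COOKIES_DEBUG = True
# Many sites reject Scrapy's default User-Agent, so override it globally.
USER_AGENT = ('Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_3) '
              'AppleWebKit/537.36 (KHTML, like Gecko) '
              'Chrome/48.0.2564.109 Safari/537.36')
```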
class XueqiuSpider(scrapy.Spider):
    name = 'xueqiu'
    start_urls = 'https://xueqiu.com/stock/f10/finmainindex.json?symbol=SZ000001&page=1&size=1'
    headers = {
        'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8',
        'Accept-Language': 'zh-CN,zh;q=0.8',
        'Connection': 'keep-alive',
        'Host': 'www.zhihu.com',
        'Upgrade-Insecure-Requests': '1',
        'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/48.0.2564.109 Safari/537.36',
    }

    def __init__(self, url=None):
        self.user_url = url

    def start_requests(self):
        yield scrapy.Request(
            url=self.start_urls,
            headers=self.headers,
            meta={'cookiejar': 1},
            callback=self.request_captcha,
        )

    def request_captcha(self, response):
        print response
Error log:
2017-03-04 12:42:02 [scrapy.core.engine] INFO: Spider opened
2017-03-04 12:42:02 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2017-03-04 12:42:02 [scrapy.extensions.telnet] DEBUG: Telnet console listening on 127.0.0.1:6023
********Current UserAgent:Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.6 (KHTML, like Gecko) Chrome/20.0.1092.0 Safari/536.6************
2017-03-04 12:42:12 [scrapy.downloadermiddlewares.cookies] DEBUG: Received cookies from: <200 https://xueqiu.com/robots.txt>
Set-Cookie: aliyungf_tc=AQAAAGFYbBEUVAQAPSHDc8pHhpYZKUem; Path=/; HttpOnly
2017-03-04 12:42:12 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://xueqiu.com/robots.txt> (referer: None)
********Current UserAgent:Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.6 (KHTML, like Gecko) Chrome/20.0.1092.0 Safari/536.6************
2017-03-04 12:42:12 [scrapy.downloadermiddlewares.cookies] DEBUG: Received cookies from: <404 https://xueqiu.com/stock/f10/finmainindex.json?symbol=SZ000001&page=1&size=1>
Set-Cookie: aliyungf_tc=AQAAAPTfyyJNdQUAPSHDc8KmCkY5slST; Path=/; HttpOnly
2017-03-04 12:42:12 [scrapy.core.engine] DEBUG: Crawled (404) <GET https://xueqiu.com/stock/f10/finmainindex.json?symbol=SZ000001&page=1&size=1> (referer: None)
2017-03-04 12:42:12 [scrapy.spidermiddlewares.httperror] INFO: Ignoring response <404 https://xueqiu.com/stock/f10/finmainindex.json?symbol=SZ000001&page=1&size=1>: HTTP status code is not handled or not allowed
2017-03-04 12:42:12 [scrapy.core.engine] INFO: Closing spider (finished)
2017-03-04 12:42:12 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
Answers
Answer 1: I tried again... you actually don't need to log in. I was overthinking it. Just request xueqiu.com once first, and once you have the cookies, request the API URL. That's all there is to it.
============== the shameful dividing line ==============
By my verification, you do need to log in...
import scrapy
import hashlib
from scrapy.http import FormRequest, Request


class XueqiuScrapeSpider(scrapy.Spider):
    name = 'xueqiu_scrape'
    allowed_domains = ['xueqiu.com']

    def start_requests(self):
        m = hashlib.md5()
        m.update(b'your password')  # fill in your password here
        password = m.hexdigest().upper()
        form_data = {
            'telephone': 'your account',  # fill in your username here
            'password': password,
            'remember_me': str(),
            'areacode': '86',
        }
        print(form_data)
        return [FormRequest(
            url='https://xueqiu.com/snowman/login',
            formdata=form_data,
            meta={'cookiejar': 1},
            callback=self.loged_in,
        )]

    def loged_in(self, response):
        # print(response.url)
        return [Request(
            url='https://xueqiu.com/stock/f10/finmainindex.json?symbol=SZ000001&page=1&size=1',
            meta={'cookiejar': response.meta['cookiejar']},
            callback=self.get_result,
        )]

    def get_result(self, response):
        print(response.body)
One more thing: the site does validate the User-Agent. You can set it in settings.py, or write it directly in the spider file. The password is the MD5-hashed string. Also, a note: I registered with a phone number, which is why form_data has these fields; if you signed up another way, just inspect the POST request with Chrome DevTools to see which parameters it sends, and adjust form_data accordingly.
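The MD5 step described above can be checked on its own. A minimal sketch (the helper name and the literal password are placeholders, not part of any Xueqiu API):

```python
import hashlib


def hash_password(raw):
    # The login form expects the uppercase hex MD5 digest of the plain password.
    return hashlib.md5(raw.encode('utf-8')).hexdigest().upper()


print(hash_password('password'))  # → 5F4DCC3B5AA765D61D8327DEB882CF99
```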
Answer 2: Haha, thanks, that cleared up days of confusion for me. I had previously done this with requests, without logging in. Here's the code:
import requests

session = requests.Session()
session.headers = {
    'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/56.0.2924.87 Safari/537.36'
}
session.get('https://xueqiu.com')
for page in range(1, 100):
    url = 'https://xueqiu.com/stock/f10/finmainindex.json?symbol=SZ000001&page=%s&size=1' % page
    print url
    r = session.get(url)
    # print r.json().list
    a = r.text
