
python - Why does enabling USER_AGENT in scrapy's settings.py make the spider scrape nothing? As soon as it is disabled, the pages are scraped again

Views: 83 | Date: 2022-07-20 17:38:28

Problem description

I am scraping Baidu Tieba.

python 2.7.11

scrapy 1.3.3

As soon as user_agent is enabled in settings.py, nothing gets scraped at all, no matter which of the two methods below I use.

With user_agent disabled, everything is scraped normally. This is strange; what could be the reason?

USER_AGENT = 'xxxxxxxxxxxxxxxxxxxxxx'

Or write the middleware class RotateUserAgentMiddleware(UserAgentMiddleware) (full code below)

and enable it in settings.py:

DOWNLOADER_MIDDLEWARES = {
    # 'tbtest.middlewares.MyCustomDownloaderMiddleware': 543,
    'tbtest.useragent.RotateUserAgentMiddleware': 400,
}
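For reference, a minimal settings.py sketch of both approaches. The user-agent string here is only an illustrative browser value (the asker's real one is redacted above), and mapping Scrapy's built-in UserAgentMiddleware to None is a common extra step so it cannot compete with the custom one:

# settings.py -- illustrative values, not the asker's actual configuration

# Approach 1: a single fixed user agent (any realistic browser string).
USER_AGENT = ('Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/535.11 '
              '(KHTML, like Gecko) Chrome/17.0.963.56 Safari/535.11')

# Approach 2: the rotating middleware. Mapping a middleware to None
# disables it, keeping the built-in UserAgentMiddleware out of the way.
DOWNLOADER_MIDDLEWARES = {
    'tbtest.useragent.RotateUserAgentMiddleware': 400,
    'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware': None,
}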

Either way, as soon as the user agent is enabled, nothing is scraped. A run produces the following output:

E:\pypro\tbtest>scrapy crawl tbs
2017-05-11 12:20:23 [scrapy.utils.log] INFO: Scrapy 1.3.3 started (bot: tbtest)
2017-05-11 12:20:23 [scrapy.utils.log] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'tbtest.spiders', 'ROBOTSTXT_OBEY': True, 'SPIDER_MODULES': ['tbtest.spiders'], 'BOT_NAME': 'tbtest', 'COOKIES_ENABLED': False, 'DOWNLOAD_DELAY': 2}
2017-05-11 12:20:24 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.logstats.LogStats',
 'scrapy.extensions.telnet.TelnetConsole',
 'scrapy.extensions.corestats.CoreStats']
2017-05-11 12:20:26 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.robotstxt.RobotsTxtMiddleware',
 'scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
 'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
 'tbtest.useragent.RotateUserAgentMiddleware',
 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
 'scrapy.downloadermiddlewares.retry.RetryMiddleware',
 'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
 'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
 'scrapy.downloadermiddlewares.stats.DownloaderStats']
2017-05-11 12:20:26 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
 'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
 'scrapy.spidermiddlewares.referer.RefererMiddleware',
 'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
 'scrapy.spidermiddlewares.depth.DepthMiddleware']
2017-05-11 12:20:27 [scrapy.middleware] INFO: Enabled item pipelines:
['tbtest.pipelines.TbtestPipeline']
2017-05-11 12:20:27 [scrapy.core.engine] INFO: Spider opened
2017-05-11 12:20:27 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2017-05-11 12:20:27 [scrapy.extensions.telnet] DEBUG: Telnet console listening on 127.0.0.1:6023
********Current UserAgent:Mozilla/5.0 (Macintosh; Intel Mac OS X 10_8_0) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1063.0 Safari/536.3************
2017-05-11 12:20:27 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://tieba.baidu.com/robots.txt> (referer: None)
********Current UserAgent:Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.9 Safari/536.5************
2017-05-11 12:20:31 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://tieba.baidu.com/f?kw=%E5%B1%B1%E4%B8%9C%E7%90%86%E5%B7%A5%E5%A4%A7%E5%AD%A6&ie=utf-8> (referer: None)
2017-05-11 12:20:31 [scrapy.core.engine] INFO: Closing spider (finished)
2017-05-11 12:20:31 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 655,
 'downloader/request_count': 2,
 'downloader/request_method_count/GET': 2,
 'downloader/response_bytes': 87876,
 'downloader/response_count': 2,
 'downloader/response_status_count/200': 2,
 'finish_reason': 'finished',
 'finish_time': datetime.datetime(2017, 5, 11, 4, 20, 31, 375000),
 'log_count/DEBUG': 3,
 'log_count/INFO': 7,
 'response_received_count': 2,
 'scheduler/dequeued': 1,
 'scheduler/dequeued/memory': 1,
 'scheduler/enqueued': 1,
 'scheduler/enqueued/memory': 1,
 'start_time': datetime.datetime(2017, 5, 11, 4, 20, 27, 250000)}
2017-05-11 12:20:31 [scrapy.core.engine] INFO: Spider closed (finished)
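Note what the log actually shows: both requests came back 200, but zero items were scraped. So the page is downloaded, and the likely problem is that the HTML Tieba serves under a browser user agent does not match the spider's selectors. A quick, hypothetical check with the requests library (not part of the original question) compares what the server returns with and without a browser User-Agent:

# -*- coding:utf-8 -*-
# Hypothetical check, not from the original post: fetch the same Tieba
# listing with and without a browser User-Agent and compare the payloads.
# If the sizes or markup differ substantially, XPath/CSS selectors written
# against one variant will match nothing in the other.
import requests

url = ('http://tieba.baidu.com/f?kw=%E5%B1%B1%E4%B8%9C%E7%90%86%E5%B7%A5'
       '%E5%A4%A7%E5%AD%A6&ie=utf-8')
browser_ua = ('Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/535.11 '
              '(KHTML, like Gecko) Chrome/17.0.963.56 Safari/535.11')

plain = requests.get(url)                                    # requests' default UA
spoofed = requests.get(url, headers={'User-Agent': browser_ua})

print('default UA -> %d bytes' % len(plain.content))
print('browser UA -> %d bytes' % len(spoofed.content))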

# -*- coding:utf-8 -*-
# One anti-ban strategy: use a pool of user agents.
# Note: the matching entries must be configured in settings.py.
import logging
import random

from scrapy.downloadermiddlewares.useragent import UserAgentMiddleware


class RotateUserAgentMiddleware(UserAgentMiddleware):

    def __init__(self, user_agent=''):
        self.user_agent = user_agent

    def process_request(self, request, spider):
        ua = random.choice(self.user_agent_list)
        if ua:
            # Show the user agent currently in use
            print '********Current UserAgent:%s************' % ua
            # Or log it instead:
            # logging.log(logging.WARNING, 'Current UserAgent: ' + ua)
            request.headers.setdefault('User-Agent', ua)

    # The default user_agent_list mixes Chrome, IE, Firefox, Mozilla,
    # Opera and Netscape strings. More user agent strings can be found at
    # http://www.useragentstring.com/pages/useragentstring.php
    user_agent_list = [
        'Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1; AcooBrowser; .NET CLR 1.1.4322; .NET CLR 2.0.50727)',
        'Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.0; Acoo Browser; SLCC1; .NET CLR 2.0.50727; Media Center PC 5.0; .NET CLR 3.0.04506)',
        'Mozilla/4.0 (compatible; MSIE 7.0; AOL 9.5; AOLBuild 4337.35; Windows NT 5.1; .NET CLR 1.1.4322; .NET CLR 2.0.50727)',
        'Mozilla/5.0 (Windows; U; MSIE 9.0; Windows NT 9.0; en-US)',
        'Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; Win64; x64; Trident/5.0; .NET CLR 3.5.30729; .NET CLR 3.0.30729; .NET CLR 2.0.50727; Media Center PC 6.0)',
        'Mozilla/5.0 (compatible; MSIE 8.0; Windows NT 6.0; Trident/4.0; WOW64; Trident/4.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; .NET CLR 1.0.3705; .NET CLR 1.1.4322)',
        'Mozilla/4.0 (compatible; MSIE 7.0b; Windows NT 5.2; .NET CLR 1.1.4322; .NET CLR 2.0.50727; InfoPath.2; .NET CLR 3.0.04506.30)',
        'Mozilla/5.0 (Windows; U; Windows NT 5.1; zh-CN) AppleWebKit/523.15 (KHTML, like Gecko, Safari/419.3) Arora/0.3 (Change: 287 c9dfb30)',
        'Mozilla/5.0 (X11; U; Linux; en-US) AppleWebKit/527+ (KHTML, like Gecko, Safari/419.3) Arora/0.6',
        'Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1.2pre) Gecko/20070215 K-Ninja/2.1.1',
        'Mozilla/5.0 (Windows; U; Windows NT 5.1; zh-CN; rv:1.9) Gecko/20080705 Firefox/3.0 Kapiko/3.0',
        'Mozilla/5.0 (X11; Linux i686; U;) Gecko/20070322 Kazehakase/0.4.5',
        'Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.0.8) Gecko Fedora/1.9.0.8-1.fc10 Kazehakase/0.5.6',
        'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/535.11 (KHTML, like Gecko) Chrome/17.0.963.56 Safari/535.11',
        'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_7_3) AppleWebKit/535.20 (KHTML, like Gecko) Chrome/19.0.1036.7 Safari/535.20',
        'Opera/9.80 (Macintosh; Intel Mac OS X 10.6.8; U; fr) Presto/2.9.168 Version/11.52',
    ]
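Two details of this middleware are worth noting: random.choice picks a fresh user agent from the pool on every request, and headers.setdefault leaves a User-Agent that was already set explicitly on a request untouched. The log above also shows that, registered at priority 400, this class runs before Scrapy's built-in UserAgentMiddleware, whose own setdefault then finds the header already filled in.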


Answers

Answer 1:

The site you are crawling has probably put some anti-crawler measures in place.

Answer 2:

You are hitting anti-crawling. Scrapy has its own default user agent; when your setting is enabled, that value is added to the request headers, while with it disabled the header may be empty, or the site may simply not block that case. The safest approach is to build a user-agent pool that mimics real browsers and rotate the entries periodically or at random.

Answer 3:

It is User-Agent, not User_Agent. I had the same problem before, and it worked after I changed it.
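For what it's worth: the Scrapy setting itself is spelled USER_AGENT with an underscore, so this answer applies to the HTTP header name, which must be hyphenated. A small sketch (the ua value is just an example taken from the pool above):

from scrapy.http import Request

ua = ('Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/535.11 '
      '(KHTML, like Gecko) Chrome/17.0.963.56 Safari/535.11')

# Correct: HTTP header names are hyphenated.
good = Request('http://tieba.baidu.com/', headers={'User-Agent': ua})

# Wrong: this sends a header literally named 'User_Agent', which
# servers do not treat as the user agent at all.
bad = Request('http://tieba.baidu.com/', headers={'User_Agent': ua})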

Tags: Python, programming