

python - Scrapy CrawlSpider does not match any URLs

Views: 76 | Date: 2022-07-18 10:45:15

Problem description

My spider code is below. The rules never extract anything, and I don't know what the problem is.

#encoding: utf-8
import re
import requests
import time
from bs4 import BeautifulSoup
import scrapy
from scrapy.http import Request
from craler.items import CralerItem
import urllib2
from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor


class MoyanSpider(CrawlSpider):
    try:
        name = 'maoyan'
        allowed_domains = ['http://maoyan.com']
        start_urls = ['http://maoyan.com/films']
        rules = (
            Rule(LinkExtractor(allow=(r'films/\d+.*')), callback='parse_item', follow=True),
        )
    except Exception, e:
        print e.message

    # def start_requests(self):
    #     for i in range(22863):
    #         url = self.start_urls + str(i*30)
    #         yield Request(url, self.parse, headers=self.headers)

    def parse_item(self, response):
        item = CralerItem()
        # time.sleep(2)
        # moveis = BeautifulSoup(response.text, 'lxml').find('p', class_='movies-list').find_all('dd')
        try:
            time.sleep(2)
            item['name'] = response.find('p', class_='movie-brief-container').find('h3', class_='name').get_text()
            item['score'] = response.find('p', class_='movie-index-content score normal-score').find('span', class_='stonefont').get_text()
            url = 'http://maoyan.com' + response.find('p', class_='channel-detail movie-item-title').find('a')['href']
            # item['url'] = url
            item['id'] = response.url.split('/')[-1]
            # html = requests.get(url).content
            # soup = BeautifulSoup(html, 'lxml')
            temp = response.find('p', 'movie-brief-container').find('ul').get_text()
            temp = temp.split('\n')
            # item['cover'] = soup.find('p', 'avater-shadow').find('img')['src']
            item['tags'] = temp[1]
            item['countries'] = temp[3].strip()
            item['duration'] = temp[4].split('/')[-1]
            item['time'] = temp[6]
            # print item['name']
            return item
        except Exception, e:
            print e.message

The log output from the run:

C:\Python27\python.exe "C:\Program Files (x86)\JetBrains\PyCharm Community Edition 2016.2.2\helpers\pydev\pydevd.py" --multiproc --qt-support --client 127.0.0.1 --port 12779 --file D:/scrapy/craler/entrypoint.py
pydev debugger: process 30468 is connecting
Connected to pydev debugger (build 162.1967.10)
D:/scrapy/craler\craler\spiders\maoyan.py:12: ScrapyDeprecationWarning: Module `scrapy.contrib.linkextractors` is deprecated, use `scrapy.linkextractors` instead
  from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
D:/scrapy/craler\craler\spiders\maoyan.py:12: ScrapyDeprecationWarning: Module `scrapy.contrib.linkextractors.sgml` is deprecated, use `scrapy.linkextractors.sgml` instead
  from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
2017-05-08 21:58:14 [scrapy.utils.log] INFO: Scrapy 1.3.3 started (bot: craler)
2017-05-08 21:58:14 [scrapy.utils.log] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'craler.spiders', 'ROBOTSTXT_OBEY': True, 'SPIDER_MODULES': ['craler.spiders'], 'HTTPCACHE_ENABLED': True, 'BOT_NAME': 'craler', 'COOKIES_ENABLED': False, 'DOWNLOAD_DELAY': 3}
2017-05-08 21:58:14 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.logstats.LogStats',
 'scrapy.extensions.telnet.TelnetConsole',
 'scrapy.extensions.corestats.CoreStats']
2017-05-08 21:58:14 [py.warnings] WARNING: D:/scrapy/craler\craler\middlewares.py:11: ScrapyDeprecationWarning: Module `scrapy.contrib.downloadermiddleware.useragent` is deprecated, use `scrapy.downloadermiddlewares.useragent` instead
  from scrapy.contrib.downloadermiddleware.useragent import UserAgentMiddleware
2017-05-08 21:58:14 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.retry.RetryMiddleware',
 'scrapy.downloadermiddlewares.robotstxt.RobotsTxtMiddleware',
 'craler.middlewares.RotateUserAgentMiddleware',
 'scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
 'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
 'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
 'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
 'scrapy.downloadermiddlewares.stats.DownloaderStats',
 'scrapy.downloadermiddlewares.httpcache.HttpCacheMiddleware']
2017-05-08 21:58:15 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
 'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
 'scrapy.spidermiddlewares.referer.RefererMiddleware',
 'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
 'scrapy.spidermiddlewares.depth.DepthMiddleware']
2017-05-08 21:58:15 [scrapy.middleware] INFO: Enabled item pipelines:
['craler.pipelines.CralerPipeline']
2017-05-08 21:58:15 [scrapy.core.engine] INFO: Spider opened
2017-05-08 21:58:15 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2017-05-08 21:58:15 [scrapy.extensions.telnet] DEBUG: Telnet console listening on 127.0.0.1:6023
2017-05-08 21:58:15 [root] INFO: Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; 360SE)
2017-05-08 21:58:15 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://maoyan.com/robots.txt> (referer: None) ['cached']
2017-05-08 21:58:15 [root] INFO: Mozilla/5.0 (Windows; U; Windows NT 6.1; en-us) AppleWebKit/534.50 (KHTML, like Gecko) Version/5.1 Safari/534.50
2017-05-08 21:58:15 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://maoyan.com/films> (referer: None) ['cached']
2017-05-08 21:58:15 [scrapy.spidermiddlewares.offsite] DEBUG: Filtered offsite request to 'maoyan.com': <GET http://maoyan.com/films/248683>
2017-05-08 21:58:15 [scrapy.core.engine] INFO: Closing spider (finished)
2017-05-08 21:58:15 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 534,
 'downloader/request_count': 2,
 'downloader/request_method_count/GET': 2,
 'downloader/response_bytes': 6913,
 'downloader/response_count': 2,
 'downloader/response_status_count/200': 2,
 'finish_reason': 'finished',
 'finish_time': datetime.datetime(2017, 5, 8, 13, 58, 15, 357000),
 'httpcache/hit': 2,
 'log_count/DEBUG': 4,
 'log_count/INFO': 9,
 'log_count/WARNING': 1,
 'offsite/domains': 1,
 'offsite/filtered': 30,
 'request_depth_max': 1,
 'response_received_count': 2,
 'scheduler/dequeued': 1,
 'scheduler/dequeued/memory': 1,
 'scheduler/enqueued': 1,
 'scheduler/enqueued/memory': 1,
 'start_time': datetime.datetime(2017, 5, 8, 13, 58, 15, 140000)}
2017-05-08 21:58:15 [scrapy.core.engine] INFO: Spider closed (finished)
Process finished with exit code 0

Answers

Answer 1:

The problem is mainly with allowed_domains; your extraction rule is fine. Written like this, the code crawls the links:

# encoding: utf-8
import time
from tutorial.items import CrawlerItem
from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor


class MoyanSpider(CrawlSpider):
    name = 'maoyan'
    allowed_domains = ['maoyan.com']
    start_urls = ['http://maoyan.com/films']
    rules = (
        Rule(LinkExtractor(allow=(r'films/\d+.*')), callback='parse_item', follow=True),
    )

    def parse_item(self, response):
        print(response.url)
        item = CrawlerItem()
        try:
            time.sleep(2)
            item['name'] = response.text.find('p', class_='movie-brief-container').find('h3', class_='name').get_text()
            item['score'] = response.text.find('p', class_='movie-index-content score normal-score').find('span', class_='stonefont').get_text()
            url = 'http://maoyan.com' + response.text.find('p', class_='channel-detail movie-item-title').find('a')['href']
            item['id'] = response.url.split('/')[-1]
            temp = response.text.find('p', 'movie-brief-container').find('ul').get_text()
            temp = temp.split('\n')
            item['tags'] = temp[1]
            item['countries'] = temp[3].strip()
            item['duration'] = temp[4].split('/')[-1]
            item['time'] = temp[6]
            return item
        except Exception as e:
            print(e)

The key point is that allowed_domains must not include the 'http://' string; that is why the log shows "Filtered offsite request to 'maoyan.com'".
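To see why, here is a simplified sketch of the host check that Scrapy's OffsiteMiddleware applies to every extracted request (the helper name is mine, not Scrapy API; Python 2 style to match the asker's environment). The hostname is compared against allowed_domains, never the scheme, so a domain entry containing 'http://' can never match:

# Simplified version of OffsiteMiddleware's domain check (sketch, not Scrapy API).
import re
from urlparse import urlparse

def is_offsite(url, allowed_domains):
    # Scrapy builds a regex like ^(.*\.)?(domain1|domain2)$ and matches it
    # against the request's hostname only.
    host = urlparse(url).hostname or ''
    pattern = r'^(.*\.)?(%s)$' % '|'.join(re.escape(d) for d in allowed_domains)
    return re.match(pattern, host) is None

print is_offsite('http://maoyan.com/films/248683', ['http://maoyan.com'])  # True: filtered
print is_offsite('http://maoyan.com/films/248683', ['maoyan.com'])         # False: crawled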

Also, your parsing module has some problems. I haven't fixed those for you; once you're getting data, you should be able to fix them yourself. See the sketch below for one direction.
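As a starting point, here is one possible rewrite of parse_item using Scrapy's selector API instead of calling .find() on the response (the immediate bug: a Scrapy Response has no .find() method, and response.text is a plain string whose str.find() takes no class_ argument). The CSS class names are copied from the question and assume maoyan.com's markup at the time; treat them as unverified:

    def parse_item(self, response):
        # Sketch only: verify the selectors against the live page.
        item = CrawlerItem()
        item['id'] = response.url.split('/')[-1]
        item['name'] = response.css('.movie-brief-container h3.name::text').extract_first()
        item['score'] = response.css('.movie-index-content span.stonefont::text').extract_first()
        # Non-empty text nodes of the info list (tags, country/duration, release date);
        # index them the way the question's temp[...] offsets intended.
        info = [t.strip() for t in response.css('.movie-brief-container ul ::text').extract() if t.strip()]
        item['tags'] = info[0] if info else None
        return item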

And a complaint about the earlier answerers: they didn't debug the code at all and still forced out an answer, which is clearly misleading.

Answer 2:

Several of the modules you import have been deprecated; the warnings are telling you to switch to the equivalent replacement modules.
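Concretely, the warnings in the log name the old scrapy.contrib.* import paths; since Scrapy 1.0 the same classes live at the top level, and SgmlLinkExtractor itself is superseded by LinkExtractor:

# Replacements for the deprecated imports named in the warnings:
from scrapy.linkextractors import LinkExtractor  # replaces scrapy.contrib.linkextractors.sgml.SgmlLinkExtractor
from scrapy.downloadermiddlewares.useragent import UserAgentMiddleware  # replaces scrapy.contrib.downloadermiddleware.useragent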

Answer 3:

Those are only warnings, not errors. The site you are crawling may have anti-scraping measures that prevent you from fetching it normally.
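For what it's worth, if anti-scraping measures really were the blocker, the usual first knobs are in the project's settings.py. The asker's log already shows DOWNLOAD_DELAY 3 and COOKIES_ENABLED False set, so the values below are only illustrative, not the asker's actual config:

# Illustrative settings.py entries often used against basic anti-bot checks.
USER_AGENT = 'Mozilla/5.0 (Windows NT 6.1; rv:52.0) Gecko/20100101 Firefox/52.0'
DOWNLOAD_DELAY = 3        # slow down to look less like a bot
COOKIES_ENABLED = False   # avoid server-side session tracking
ROBOTSTXT_OBEY = True     # keep honoring robots.txt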

Tags: Python, programming