

python - Help with a Scrapy pipeline error

Views: 108  Date: 2022-08-09 08:55:51

Problem description

Because I don't really understand the mechanism by which data is passed around, I've been stuck on this Scrapy question for nearly half a month. I've gone through a lot of material and still don't get it; my fundamentals are weak, so I'm asking for your help. Without any customization, taking Scrapy's default setup as an example: what format should the things a spider returns be in? A dict like {a:1, b:2, ...}, or a list like [{a:1, aa:11}, {b:2, bb:22}, ...]? And where does the returned thing go? Is it the item in the code below?

class Pipeline(object):
    def process_item(self, item, spider):
        ...

I'm really a beginner, but I genuinely want to learn and hope to get your help. Below is my code; please point out its flaws.

spider:

# -*- coding: utf-8 -*-
import scrapy
from pm25.items import Pm25Item
import re

class InfospSpider(scrapy.Spider):
    name = 'infosp'
    allowed_domains = ['pm25.com']
    start_urls = [
        'http://www.pm25.com/rank/1day.html',
    ]

    def parse(self, response):
        item = Pm25Item()
        re_time = re.compile(r'\d+-\d+-\d+')
        # parse out the date separately
        date = response.xpath('/html/body/p[4]/p/p/p[2]/span').extract()[0]
        # items = []
        # narrow down the part of the response to parse
        selector = response.selector.xpath('/html/body/p[5]/p/p[3]/ul[2]/li')
        for subselector in selector:  # parse entry by entry within that scope
            try:  # guard against [0] raising an error
                rank = subselector.xpath('span[1]/text()').extract()[0]
                quality = subselector.xpath('span/em/text()')[0].extract()
                city = subselector.xpath('a/text()').extract()[0]
                province = subselector.xpath('span[3]/text()').extract()[0]
                aqi = subselector.xpath('span[4]/text()').extract()[0]
                pm25 = subselector.xpath('span[5]/text()').extract()[0]
            except IndexError:
                print(rank, quality, city, province, aqi, pm25)
            item['date'] = re_time.findall(date)[0]
            item['rank'] = rank
            item['quality'] = quality
            item['province'] = city
            item['city'] = province
            item['aqi'] = aqi
            item['pm25'] = pm25
            # items.append(item)
            # I don't understand how this should be used or what format comes out;
            # some tutorials return items instead, so pointers are welcome
            yield item

pipeline:

import time

class Pm25Pipeline(object):
    def process_item(self, item, spider):
        today = time.strftime('%y%m%d', time.localtime())
        fname = str(today) + '.txt'
        with open(fname, 'a') as f:
            # not sure whether this is right; my understanding is that the
            # spider yields items as a list of dicts like
            # [{a:1, aa:11}, {b:2, bb:22}, ...]
            for tmp in item:
                f.write(tmp['date'] + '\t' +
                        tmp['rank'] + '\t' +
                        tmp['quality'] + '\t' +
                        tmp['province'] + '\t' +
                        tmp['city'] + '\t' +
                        tmp['aqi'] + '\t' +
                        tmp['pm25'] + '\n')
            f.close()
        return item

items:

import scrapy

class Pm25Item(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    date = scrapy.Field()
    rank = scrapy.Field()
    quality = scrapy.Field()
    province = scrapy.Field()
    city = scrapy.Field()
    aqi = scrapy.Field()
    pm25 = scrapy.Field()
    pass

Partial error output from the run:

Traceback (most recent call last):
  File "d:\python35\lib\site-packages\twisted\internet\defer.py", line 653, in _runCallbacks
    current.result = callback(current.result, *args, **kw)
  File "D:\pypro\pm25\pm25\pipelines.py", line 23, in process_item
    tmp['pm25'] + '\n'
TypeError: string indices must be integers
2017-04-03 10:23:14 [scrapy.core.scraper] ERROR: Error processing {'aqi': '30', 'city': '新疆', 'date': '2017-04-02', 'pm25': '13 ', 'province': '伊犁哈薩克州', 'quality': '优', 'rank': '357'}
2017-04-03 10:23:14 [scrapy.core.scraper] ERROR: Error processing {'aqi': '28', 'city': '西藏', 'date': '2017-04-02', 'pm25': '11 ', 'province': '林芝', 'quality': '优', 'rank': '358'}
2017-04-03 10:23:14 [scrapy.core.scraper] ERROR: Error processing {'aqi': '28', 'city': '云南', 'date': '2017-04-02', 'pm25': '11 ', 'province': '麗江', 'quality': '优', 'rank': '359'}
2017-04-03 10:23:14 [scrapy.core.scraper] ERROR: Error processing {'aqi': '27', 'city': '云南', 'date': '2017-04-02', 'pm25': '15 ', 'province': '玉溪', 'quality': '优', 'rank': '360'}
2017-04-03 10:23:14 [scrapy.core.scraper] ERROR: Error processing {'aqi': '26', 'city': '云南', 'date': '2017-04-02', 'pm25': '10 ', 'province': '楚雄州', 'quality': '优', 'rank': '361'}
2017-04-03 10:23:14 [scrapy.core.scraper] ERROR: Error processing {'aqi': '24', 'city': '云南', 'date': '2017-04-02', 'pm25': '11 ', 'province': '迪慶州', 'quality': '优', 'rank': '362'}
2017-04-03 10:23:14 [scrapy.core.scraper] ERROR: Error processing {'aqi': '22', 'city': '云南', 'date': '2017-04-02', 'pm25': '9 ', 'province': '怒江州', 'quality': '优', 'rank': '363'}
2017-04-03 10:23:14 [scrapy.core.engine] INFO: Closing spider (finished)
2017-04-03 10:23:14 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 328,
 'downloader/request_count': 1,
 'downloader/request_method_count/GET': 1,
 'downloader/response_bytes': 38229,
 'downloader/response_count': 1,
 'downloader/response_status_count/200': 1,
 'finish_reason': 'finished',
 'finish_time': datetime.datetime(2017, 4, 3, 2, 23, 14, 972356),
 'log_count/DEBUG': 2,
 'log_count/ERROR': 363,
 'log_count/INFO': 7,
 'response_received_count': 1,
 'scheduler/dequeued': 1,
 'scheduler/dequeued/memory': 1,
 'scheduler/enqueued': 1,
 'scheduler/enqueued/memory': 1,
 'start_time': datetime.datetime(2017, 4, 3, 2, 23, 13, 226730)}
2017-04-03 10:23:14 [scrapy.core.engine] INFO: Spider closed (finished)

Hoping to get your help. Thanks again!

Answers

Answer 1:

Just write it directly; there is no need for a loop. Items are processed one at a time, not as the list you imagined:

import time

class Pm25Pipeline(object):
    def process_item(self, item, spider):
        today = time.strftime('%y%m%d', time.localtime())
        fname = str(today) + '.txt'
        with open(fname, 'a') as f:
            f.write(item['date'] + '\t' +
                    item['rank'] + '\t' +
                    item['quality'] + '\t' +
                    item['province'] + '\t' +
                    item['city'] + '\t' +
                    item['aqi'] + '\t' +
                    item['pm25'] + '\n')
        return item
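As an optional follow-up (not part of the original answer), the same write can be expressed a bit more compactly by joining the fields. This is only a sketch: the FIELDS order is assumed here, and it relies on every field already being a string, as in the error log above.

import time

class Pm25Pipeline(object):
    # Output column order; assumed here, adjust as needed.
    FIELDS = ['date', 'rank', 'quality', 'province', 'city', 'aqi', 'pm25']

    def process_item(self, item, spider):
        # Each yielded Item arrives here on its own; no loop over items is needed.
        fname = time.strftime('%y%m%d', time.localtime()) + '.txt'
        with open(fname, 'a') as f:
            # Join the fields in a fixed order; the with block closes the file for us.
            f.write('\t'.join(item[k] for k in self.FIELDS) + '\n')
        return item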

Answer 2:

Search for "TypeError: string indices must be integers", work out what the problem is, locate the offending line, and fix it.

Answer 3:

Scrapy's Item is like a Python dict, just extended with a few extra features.

By Scrapy's design, each Item that is generated is passed to the pipeline and handled on its own. The "for tmp in item" you wrote iterates over the item's keys, and keys are strings; when you then apply __getitem__ syntax to one of them, Python complains that the index you used is not an integer.
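To illustrate that point (a minimal sketch, not from the original answer; DemoItem is a made-up item class), iterating over an Item yields its field names, i.e. strings:

import scrapy

class DemoItem(scrapy.Item):
    date = scrapy.Field()
    pm25 = scrapy.Field()

item = DemoItem(date='2017-04-02', pm25='13')

for tmp in item:
    print(tmp)            # prints the keys: 'date', 'pm25'
    # tmp is a str here, so tmp['pm25'] would raise
    # TypeError: string indices must be integers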

Answer 4:

You can think of an item as a dictionary; it is essentially a dict-like class. When you iterate directly over the item in the pipeline, each tmp you get is actually one of its keys, which is a string, so an operation like tmp['pm25'] raises TypeError: string indices must be integers.
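If a loop over the fields is really wanted, iterating over key/value pairs (or converting to a plain dict) avoids that mistake. A minimal sketch with a hypothetical item:

import scrapy

class DemoItem(scrapy.Item):
    pm25 = scrapy.Field()
    aqi = scrapy.Field()

item = DemoItem(pm25='13', aqi='30')

# items() gives (key, value) pairs, since Item supports the mapping protocol.
for key, value in item.items():
    print(key, value)     # e.g. pm25 13, aqi 30

# Converting to a plain dict also works.
print(dict(item))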

Tags: Python, Programming