A recent project required a crawler to scrape some question banks. Until now I had always written crawlers in Node or PHP, but having often heard that Python excels at web scraping, I decided to pick up Scrapy, Python's crawler framework.
Below is a brief introduction to Scrapy's directory structure and usage.
First, install the Scrapy framework:
pip install scrapy
Next, use the scrapy command to create a crawler project:
scrapy startproject questions
A quick overview of the generated files:
scrapy.cfg: the project's configuration file
questions/: the project's Python module; your code goes here
questions/items.py: the project's item definitions (see the sketch below)
questions/pipelines.py: the project's item pipelines
questions/settings.py: the project's settings
questions/spiders/: the directory that holds the spider code
questions/spiders/xueersi.py: the main spider implementation
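The spider below fills in a number of fields on QuestionsItem. The original items.py is not shown here, so the following is a minimal sketch inferred from the fields the spider assigns; the field names come from the spider code, everything else is an assumption:

# items.py -- minimal sketch, inferred from the fields the spider sets
import scrapy

class QuestionsItem(scrapy.Item):
    source = scrapy.Field()      # raw HTML of the question
    content = scrapy.Field()     # question text with tags stripped
    subject = scrapy.Field()     # 英語 / 語文 / 數學
    level = scrapy.Field()       # difficulty, 1 (easy) to 3 (hard)
    options = scrapy.Field()     # answer choices A-D
    answer = scrapy.Field()      # correct answer letter
    analysis = scrapy.Field()    # explanation, if the page provides one
    answer_url = scrapy.Field()  # URL of the answer page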
xueersi.py, the spider itself:
# -*- coding: utf-8 -*-
import re
import scrapy
from questions.items import QuestionsItem

class xueersiSpider(scrapy.Spider):
    name = "xueersi"                        # spider name
    allowed_domains = ["tiku.xueersi.com"]  # domains the spider may crawl
    # target URLs to start crawling from
    start_urls = [
        "http://tiku.xueersi.com/shiti/list_1_1_0_0_4_0_1",
        "http://tiku.xueersi.com/shiti/list_1_2_0_0_4_0_1",
        "http://tiku.xueersi.com/shiti/list_1_3_0_0_4_0_1",
    ]
    levels = ['偏易', '中檔', '偏難']    # difficulty labels: easy / medium / hard
    subjects = ['英語', '語文', '數學']  # subjects: English / Chinese / Maths

    # start_requests() is called automatically when the spider starts;
    # if it is not defined, parse() is used as the default callback.
    # def start_requests(self):
    #     yield scrapy.Request('http://tiku.xueersi.com/shiti/list_1_2_0_0_4_0_39', callback=self.getquestion)

    # parse() is called automatically because start_requests() is not defined
    def parse(self, response):
        # XPath selector syntax is not covered here; see the official docs
        arr = response.xpath("//ul[@class='pagination']/li/a/text()").extract()
        total_page = arr[3]  # total number of listing pages
        # issue a new request for each listing page to fetch all its questions
        for index in range(int(total_page)):
            yield scrapy.Request(response.url.replace('_0_0_4_0_1', "_0_0_4_0_" + str(index)),
                                 callback=self.getquestion)

    # extract the questions on a listing page
    def getquestion(self, response):
        for res in response.xpath('//div[@class="main-wrap"]/ul[@class="items"]/li'):
            item = QuestionsItem()  # instantiate the Item class
            # extract the question body
            questions = res.xpath('./div[@class="content-area"]').re(
                r'<div class="content-area">?([\s\S]+?)<(table|\/td|div|br)')
            if len(questions):
                question = questions[0].strip()
                item['source'] = question
                dr = re.compile(r'<[^>]+>', re.S)
                question = dr.sub('', question)  # strip HTML tags
                content = res.extract()
                item['content'] = question
                # extract the subject from the URL
                subject = re.findall(r'http:\/\/tiku\.xueersi\.com\/shiti\/list_1_(\d+)', response.url)
                item['subject'] = self.subjects[int(subject[0]) - 1]
                # extract the difficulty level
                levels = res.xpath('//div[@class="info"]').re(r'難度:([\s\S]+?)<')
                item['level'] = self.levels.index(levels[0]) + 1
                # extract the answer options
                options = re.findall(r'[A-D][\..]([\s\S]+?)<(\/td|\/p|br)', content)
                item['options'] = options
                if len(options):
                    url = res.xpath('./div[@class="info"]/a/@href').extract()[0]
                    request = scrapy.Request(url, callback=self.getanswer)
                    request.meta['item'] = item  # stash the item and pass it to the next request
                    yield request

    # extract the answer
    def getanswer(self, response):
        res = response.xpath('//div[@class="part"]').re(r'<td>([\s\S]+?)<\/td>')
        con = re.findall(r'([\s\S]+?)<br>[\s\S]+?([A-D])', res[0])  # answers that include an analysis
        if con:
            answer = con[0][1]
            analysis = con[0][0]  # the analysis text
        else:
            answer = res[0]
            analysis = ''
        if answer:
            item = response.meta['item']  # retrieve the item stashed by getquestion()
            item['answer'] = answer.strip()
            item['analysis'] = analysis.strip()
            item['answer_url'] = response.url
            yield item  # the item pipeline (pipelines.py) receives this automatically
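The original pipelines.py is not shown either. As a rough sketch, a minimal pipeline that appends each received item to a JSON-lines file might look like this; QuestionsPipeline and the questions.jl filename are assumptions for illustration, not the project's actual code:

# pipelines.py -- minimal sketch, assuming JSON-lines output
import json

class QuestionsPipeline(object):
    def open_spider(self, spider):
        # open the output file once when the spider starts
        self.file = open('questions.jl', 'w')

    def close_spider(self, spider):
        self.file.close()

    def process_item(self, item, spider):
        # write one JSON object per line, keeping the Chinese text readable
        self.file.write(json.dumps(dict(item), ensure_ascii=False) + '\n')
        return item

A pipeline only runs once it is registered in settings.py:

ITEM_PIPELINES = {
    'questions.pipelines.QuestionsPipeline': 300,
}

With that in place, run the spider from the project root:

scrapy crawl xueersi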