Within each <li>
tag, we simply walk down step by step from the <ol>
element to reach the movie's name, i.e. the line <span class="title">肖申克的救贖</span>
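That traversal can be sketched on a tiny synthetic fragment (the HTML string below is a hand-made stand-in for Douban's real markup, and the stdlib `html.parser` is used here so no extra parser needs to be installed):

```python
from bs4 import BeautifulSoup

# A hand-made fragment mimicking the list structure described above
html = '''
<ol class="grid_view">
  <li>
    <div class="hd">
      <span class="title">肖申克的救贖</span>
    </div>
  </li>
</ol>
'''

soup = BeautifulSoup(html, 'html.parser')
ol = soup.find('ol', attrs={'class': 'grid_view'})   # outermost <ol>
li = ol.find('li')                                   # one movie entry
name = li.find('span', attrs={'class': 'title'}).getText()
print(name)  # 肖申克的救贖
```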
Next, take a look at the markup behind the page's "next page" (后页) button; following it leads to a URL such as https://movie.douban.com/top250?start=25&filter=
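The pagination check can be sketched as follows (both HTML fragments and the `next_url` helper are made up for illustration, and the stdlib `html.parser` is used instead of `lxml`):

```python
from bs4 import BeautifulSoup

URL = 'https://movie.douban.com/top250'

# A middle page: the "next" span still contains a link
middle = '<span class="next"><a href="?start=25&amp;filter=">后页&gt;</a></span>'
# The last page: the span remains, but the <a> inside it is gone
last = '<span class="next">后页&gt;</span>'

# Hypothetical helper: return the absolute next-page URL, or None on the last page
def next_url(html):
    next_page = BeautifulSoup(html, 'html.parser').find('span', class_='next').find('a')
    return URL + next_page['href'] if next_page else None

print(next_url(middle))  # https://movie.douban.com/top250?start=25&filter=
print(next_url(last))    # None
```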
On the last page this tag is absent, so the lookup yields None, and that tells us when to stop turning pages. Straight to the code. Fetching the HTML is easy with the requests module:

```python
import requests

# Fetch the target page's HTML
def download_page(url):
    # Masquerade as a browser
    headers = {
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/56.0.2924.87 Safari/537.36'
    }
    data = requests.get(url, headers=headers).content
    return data
```

Parsing the HTML. With the source in hand, we parse it using the BeautifulSoup module:

```python
from bs4 import BeautifulSoup

URL = 'https://movie.douban.com/top250'

# Parse the HTML, method 1 (this version follows a blogger's code)
def parse_html(html):
    # Build a BeautifulSoup object
    soup = BeautifulSoup(html, 'lxml')
    movie_name_list = []
    # First grab the outermost <ol>
    movie_list_soup = soup.find('ol', attrs={'class': 'grid_view'})
    # Then walk every <li> entry in the list
    for movie_li in movie_list_soup.find_all('li'):
        detail = movie_li.find('div', attrs={'class': 'hd'})
        # Use getText() to pull the text content out of the tag
        movie_name = detail.find('span', attrs={'class': 'title'}).getText()
        movie_name_list.append(movie_name)
    next_page = soup.find('span', attrs={'class': 'next'}).find('a')
    if next_page:
        return movie_name_list, URL + next_page['href']
    return movie_name_list, None
```

Method 2 uses some newer BeautifulSoup features (CSS selectors), which are more convenient:

```python
from bs4 import BeautifulSoup

URL = 'https://movie.douban.com/top250'

# Parse the HTML, method 2: CSS selectors via select()
def parse_html1(html):
    soup = BeautifulSoup(html, 'lxml')
    movie_names = []
    movie_list = soup.select('ol.grid_view li div.item div.info div.hd a')
    for movie_title in movie_list:
        movie_name = movie_title.find('span', class_='title')
        movie_names.append(movie_name.getText())
    next_page = soup.find('span', class_='next').find('a')
    if next_page:
        return movie_names, URL + next_page['href']
    return movie_names, None
```

Putting it all together, and writing the collected names to a file:

```python
import codecs

import requests
from bs4 import BeautifulSoup

URL = 'https://movie.douban.com/top250'

# Fetch the target page's HTML
def download_page(url):
    headers = {
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/56.0.2924.87 Safari/537.36'
    }
    data = requests.get(url, headers=headers).content
    return data

# Parse the HTML
def parse_html1(html):
    soup = BeautifulSoup(html, 'lxml')
    movie_names = []
    movie_list = soup.select('ol.grid_view li div.item div.info div.hd a')
    for movie_title in movie_list:
        movie_name = movie_title.find('span', class_='title')
        movie_names.append(movie_name.getText())
    next_page = soup.find('span', class_='next').find('a')
    if next_page:
        return movie_names, URL + next_page['href']
    return movie_names, None

def main():
    url = URL
    with codecs.open('e:/movies.txt', 'w', encoding='utf-8') as fp:
        while url:
            html = download_page(url)
            movies, url = parse_html1(html)
            for movie_name in movies:
                fp.write(movie_name)
                fp.write('\r\n')

if __name__ == '__main__':
    main()
```
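The `while url:` loop in `main()` can be exercised without touching the network by faking `download_page` with two in-memory "pages" (everything below — the fake page markup, `fake_download`, and the temp-file path — is made up for illustration, and `html.parser` stands in for `lxml`):

```python
import codecs
import os
import tempfile

from bs4 import BeautifulSoup

URL = 'https://movie.douban.com/top250'

# Two fake pages: the first links to the second; the second is the last page
PAGES = {
    URL: '<ol class="grid_view"><li><div class="item"><div class="info"><div class="hd">'
         '<a><span class="title">A</span></a></div></div></div></li></ol>'
         '<span class="next"><a href="?start=25">next</a></span>',
    URL + '?start=25':
         '<ol class="grid_view"><li><div class="item"><div class="info"><div class="hd">'
         '<a><span class="title">B</span></a></div></div></div></li></ol>'
         '<span class="next">next</span>',
}

def fake_download(url):
    # Stand-in for download_page(): serve the canned HTML instead of requests.get
    return PAGES[url]

def parse_html1(html):
    # Same contract as in the article: return (names, next_url_or_None)
    soup = BeautifulSoup(html, 'html.parser')
    movie_names = []
    for a in soup.select('ol.grid_view li div.item div.info div.hd a'):
        movie_names.append(a.find('span', class_='title').getText())
    next_page = soup.find('span', class_='next').find('a')
    if next_page:
        return movie_names, URL + next_page['href']
    return movie_names, None

path = os.path.join(tempfile.gettempdir(), 'movies.txt')
url = URL
with codecs.open(path, 'w', encoding='utf-8') as fp:
    while url:                      # runs until parse_html1 returns None
        html = fake_download(url)
        movies, url = parse_html1(html)
        for name in movies:
            fp.write(name)
            fp.write('\r\n')

with open(path, encoding='utf-8') as f:
    print(f.read())  # A and B, one per line
```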