A crawler's basic workflow has three steps: fetch, parse, store. Assume here that fetching and storing are I/O-bound (network access and database writes) while parsing is CPU-bound. There are then two main ways to design a multi-threaded crawler: in the first design, each thread performs all three steps itself, and several such threads run in parallel; in the second design, each step gets its own threads, for example N threads fetching, a single thread parsing (switching between many parser threads would only add overhead), and N threads storing.
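To make the two designs concrete before the full implementations below, here is a minimal, runnable sketch. fetch, parse and store are placeholder stubs rather than the ChemBridge-specific logic, and the worker counts are arbitrary:

    # Minimal sketch of the two threading designs; fetch/parse/store are stand-in stubs.
    import queue
    import threading
    import time

    def fetch(term):   # stand-in for the I/O-bound page download
        time.sleep(0.1)
        return '<html>%s</html>' % term

    def parse(html):   # stand-in for the CPU-bound parsing step
        return [(html, 'amount', 'price', 'qty')]

    def store(rows):   # stand-in for the I/O-bound database insert
        print('stored:', rows)

    # Design 1: every thread runs the whole fetch -> parse -> store pipeline.
    def pipeline_worker(task_q):
        while True:
            try:
                term = task_q.get_nowait()
            except queue.Empty:
                return
            store(parse(fetch(term)))

    # Design 2: one thread (or pool) per stage, linked by queues; a None sentinel ends a stage.
    def fetcher(url_q, html_q):
        while True:
            term = url_q.get()
            if term is None:
                return
            html_q.put(fetch(term))

    def parser(html_q, item_q):
        while True:
            html = html_q.get()
            if html is None:
                return
            item_q.put(parse(html))

    def saver(item_q):
        while True:
            rows = item_q.get()
            if rows is None:
                return
            store(rows)

    if __name__ == '__main__':
        # Demo of design 1: three identical workers share one task queue.
        task_q = queue.Queue()
        for t in ['MFCD00000001', 'MFCD00000002', 'MFCD00000003']:  # made-up example terms
            task_q.put(t)
        workers = [threading.Thread(target=pipeline_worker, args=(task_q,)) for _ in range(3)]
        for w in workers:
            w.start()
        for w in workers:
            w.join()

For design 2, the fetcher/parser/saver workers would be started in the same way, with a None sentinel pushed into each queue once the upstream stage has finished.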

Below we crawl the in-stock compound information from http://www.chembridge.com/.

First, the URL is http://www.chembridge.com/search/search.php?searchType=MFCD&query='+line+'&type=phrase&results=10&search=1, where line is the search term (the terms to search for are stored in a local .txt file). The requests library is used for the HTTP request; the code that fetches the page is:

    url = 'http://www.chembridge.com/search/search.php?searchType=MFCD&query=' + line + '&type=phrase&results=10&search=1'
    response = requests.get(url, headers=self.headers[0], timeout=20)
    html_doc = response.text
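Since line is concatenated directly into the query string, this only works cleanly for plain identifiers such as MFCD numbers; a term containing spaces or other special characters would break the URL. In that case it is safer to let requests build and encode the query string via its params argument. A small sketch (headers omitted, and the MFCD number is a made-up example):

    import requests

    line = 'MFCD00000001'  # example term; real terms come from the local txt file
    params = {
        'searchType': 'MFCD',
        'query': line,
        'type': 'phrase',
        'results': 10,
        'search': 1,
    }
    # requests builds and URL-encodes the query string from params
    response = requests.get('http://www.chembridge.com/search/search.php',
                            params=params, timeout=20)
    html_doc = response.text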

Pages are parsed with the BeautifulSoup library; the relevant part of the code is shown below. The loop walks the search-result links and hands each one to get_page_link:

    soup = BeautifulSoup(html_doc, 'lxml')
    div = soup.find(id='BBResults')
    if div:
        links = div.select('a.chemical')
        for link in links:
            try:
                self.get_page_link(link, line)
            except Exception as e:
                print('DB insert failed for %s:' % line, e)
                time.sleep(self.relay * 2)
                print('retrying DB insert for %s' % line)
                self.get_page_link(link, line)
                continue
    print('finished searching %s' % line)

get_page_link follows each result link, extracts the rows of the price table and collects them in res:

    def get_page_link(self, link, line):
        res = []
        href = link.get('href')
        print(href)
        time.sleep(self.relay * 2 * random.randint(5, 15) / 10)
        r = requests.get(href, headers=self.headers[1], timeout=20)
        if r.status_code == 200:
            parse_html = r.text
            soup1 = BeautifulSoup(parse_html, 'lxml')
            catalogs = [catalog.get_text() for catalog in soup1.select('form div.matter h2')]  # get the catalog numbers
            table_headers = [table_header.get_text(strip=True) for table_header in soup1.select('form .matter thead tr')]
            if 'AmountPriceQty.' in table_headers:
                index = table_headers.index('AmountPriceQty.')
                catalog = catalogs[0]
                trs = soup1.select('.form tbody tr')
                if len(catalogs) > 1:
                    catalog = catalogs[index]
                for tr in trs:
                    if len(tr.select('td')) > 1:
                        row = tuple([catalog]) + tuple(td.get_text("|", strip=True) for td in tr.select('td'))
                        res.append(row)

Finally, res is saved to the MySQL database:

    conn = mysql.connector.connect(host='localhost', user='root', passwd='password', db='test')
    cursor = conn.cursor()
    sql = 'INSERT INTO chembridge VALUES(%s,%s,%s,%s)'
    cursor.executemany(sql, res)
    print('inserting into DB')
    conn.commit()
    cursor.close()
    conn.close()
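The INSERT statement assumes the chembridge table already exists with four columns. The post does not show the schema; a minimal table matching the (catalog, amount, price, qty) rows collected above might be created like this (the column types and sizes are assumptions):

    import mysql.connector

    conn = mysql.connector.connect(host='localhost', user='root', passwd='password', db='test')
    cursor = conn.cursor()
    # Column names follow the rows built above; the VARCHAR sizes are guesses.
    cursor.execute('''
        CREATE TABLE IF NOT EXISTS chembridge (
            catalog VARCHAR(64),
            amount  VARCHAR(64),
            price   VARCHAR(64),
            qty     VARCHAR(64)
        )
    ''')
    conn.commit()
    cursor.close()
    conn.close()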

I. The complete single-threaded crawler, wrapped in a class:

    # -*- coding:utf-8 -*-
    import requests, random, time
    from bs4 import BeautifulSoup
    import mysql.connector

    class Spider:
        def __init__(self):
            self.headers = [{
                'Host': 'www.chembridge.com',
                'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; WOW64; rv:53.0) Gecko/20100101 Firefox/53.0',
                'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
                'Accept-Language': 'zh-CN,zh;q=0.8,en-US;q=0.5,en;q=0.3',
                'Accept-Encoding': 'gzip, deflate',
                'Referer': 'http://www.chembridge.com/search/search.php?search=1',
                'Connection': 'keep-alive',
                'Upgrade-Insecure-Requests': ''
            },
            {
                'Host': 'www.hit2lead.com',
                'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; WOW64; rv:53.0) Gecko/20100101 Firefox/53.0',
                'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
                'Accept-Language': 'zh-CN,zh;q=0.8,en-US;q=0.5,en;q=0.3',
                'Accept-Encoding': 'gzip, deflate, br'
            }]
            self.filename = 'MDL.txt'

        def get_page_link(self, link):
            res = []
            href = link.get('href')
            print(href)
            parse_html = requests.get(href, headers=self.headers[1]).text
            soup1 = BeautifulSoup(parse_html, 'lxml')
            catalogs = [catalog.get_text() for catalog in soup1.select('form div.matter h2')]  # get the catalog numbers
            print(catalogs)
            table_headers = [table_header.get_text(strip=True) for table_header in soup1.select('form .matter thead tr')]
            print(table_headers)
            index = table_headers.index('AmountPriceQty.')
            catalog = catalogs[0]
            trs = soup1.select('.form tbody tr')
            if len(catalogs) > 1:
                catalog = catalogs[index]
            for tr in trs:
                if len(tr.select('td')) > 1:
                    row = tuple([catalog]) + tuple(td.get_text("|", strip=True) for td in tr.select('td'))
                    res.append(row)
            print(res)
            conn = mysql.connector.connect(host='localhost', user='root', passwd='password', db='test')
            cursor = conn.cursor()
            sql = 'INSERT INTO chembridge_test2 VALUES(%s,%s,%s,%s)'
            cursor.executemany(sql, res)
            conn.commit()
            cursor.close()
            conn.close()

        def get_page(self, line):
            url = 'http://www.chembridge.com/search/search.php?searchType=MFCD&query=' + line + '&type=phrase&results=10&search=1'
            try:
                response = requests.get(url, headers=self.headers[0], timeout=20)
                print(response.status_code)
                html_doc = response.text
                soup = BeautifulSoup(html_doc, 'lxml')
                div = soup.find(id='BBResults')
                if div:
                    links = div.select('a.chemical')
                    for link in links:
                        self.get_page_link(link)
                        relay = random.randint(2, 5) / 10
                        print(relay)
                        time.sleep(relay)
            except Exception as e:
                print('except:', e)

        def get_file(self, filename):
            i = 0
            f = open(filename, 'r')
            for line in f.readlines():
                line = line.strip()
                print(line)
                self.get_page(line)
                i = i + 1
                print('item #%s' % i)
            f.close()

        def run(self):
            self.get_file(self.filename)

    spider = Spider()
    starttime = time.time()
    spider.run()
    print('Elapsed: %f s' % (time.time() - starttime))

II. Multi-threaded crawler designs

1. An implementation of the first design:

    # -*- coding:utf-8 -*-
    from threading import Thread
    import threading
    from queue import Queue, Empty
    import os, time, random
    import requests, mysql.connector
    from bs4 import BeautifulSoup
    from openpyxl.workbook import Workbook
    from openpyxl.styles import Font

    class ThreadCrawl(Thread):
        def __init__(self, tname, relay):
            Thread.__init__(self)
            self.relay = relay * random.randint(5, 15) / 10
            self.tname = tname
            self.num_retries = 3  # maximum number of retries per term
            self.headers = [{
                'Host': 'www.chembridge.com',
                'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; WOW64; rv:53.0) Gecko/20100101 Firefox/53.0',
                'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
                'Accept-Language': 'zh-CN,zh;q=0.8,en-US;q=0.5,en;q=0.3',
                'Accept-Encoding': 'gzip, deflate',
                'Referer': 'http://www.chembridge.com/search/search.php?search=1',
                'Connection': 'keep-alive',
                'Upgrade-Insecure-Requests': ''
            },
            {
                'Host': 'www.hit2lead.com',
                'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; WOW64; rv:53.0) Gecko/20100101 Firefox/53.0',
                'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
                'Accept-Language': 'zh-CN,zh;q=0.8,en-US;q=0.5,en;q=0.3',
                'Accept-Encoding': 'gzip, deflate, br'
            }]

        def run(self):
            print('%s started crawling' % self.tname)
            while True:
                # take the next term while holding the lock, so two threads never pop the same item
                lock.acquire()
                if not words:
                    lock.release()
                    break
                line = words.pop(0)
                lock.release()
                self.get_page(line, self.num_retries)
                time.sleep(self.relay * random.randint(5, 15) / 10)

            # second pass: re-crawl the terms that failed during the first pass
            while True:
                try:
                    line = my_queue.get_nowait()  # non-blocking: several threads may drain this queue
                except Empty:
                    break
                print('re-crawling %s...' % line)
                self.get_page(line, num_retries=1)
            print('%s finished' % self.tname)

        # fetch the search-result page
        def get_page(self, line, num_retries=2):
            print('%s is searching %s...' % (self.tname, line))
            url = 'http://www.chembridge.com/search/search.php?searchType=MFCD&query=' + line + '&type=phrase&results=10&search=1'
            try:
                response = requests.get(url, headers=self.headers[0], timeout=20)
                status = response.status_code
                if status == 200:
                    html_doc = response.text
                    soup = BeautifulSoup(html_doc, 'lxml')
                    div = soup.find(id='BBResults')
                    if div:
                        links = div.select('a.chemical')
                        for link in links:
                            try:
                                self.get_page_link(link, line)
                            except Exception as e:
                                print('DB insert failed for %s:' % line, e)
                                time.sleep(self.relay * 2)
                                print('retrying DB insert for %s' % line)
                                self.get_page_link(link, line)
                                continue
                    print('finished searching %s' % line)
                    global count
                    lock.acquire()
                    count = count + 1
                    print('completed %s so far' % count)
                    lock.release()
                else:
                    print('%s: network error searching %s, status code: %s' % (self.tname, line, status))
                    if num_retries > 0:
                        print('%s retrying search for %s' % (self.tname, line))
                        time.sleep(self.relay * random.randint(5, 15) / 10)
                        self.get_page(line, num_retries - 1)
                    else:
                        print('search for %s failed four times!!!' % line)
                        my_queue.put(line)

            except Exception as e:
                print('%s: exception while searching %s, error:' % (self.tname, line), e)
                if num_retries > 0:
                    print('%s retrying search for %s' % (self.tname, line))
                    time.sleep(self.relay * random.randint(5, 15) / 10)
                    self.get_page(line, num_retries - 1)
                else:
                    print('search for %s failed four times!!!' % line)
                    my_queue.put(line)

        # follow the detail-page link, parse it and save to the database
        def get_page_link(self, link, line):
            res = []
            href = link.get('href')
            print(href)
            time.sleep(self.relay * 2 * random.randint(5, 15) / 10)
            r = requests.get(href, headers=self.headers[1], timeout=20)
            if r.status_code == 200:
                parse_html = r.text
                soup1 = BeautifulSoup(parse_html, 'lxml')
                catalogs = [catalog.get_text() for catalog in soup1.select('form div.matter h2')]  # get the catalog numbers
                table_headers = [table_header.get_text(strip=True) for table_header in soup1.select('form .matter thead tr')]
                if 'AmountPriceQty.' in table_headers:
                    index = table_headers.index('AmountPriceQty.')
                    catalog = catalogs[0]
                    trs = soup1.select('.form tbody tr')
                    if len(catalogs) > 1:
                        catalog = catalogs[index]
                    for tr in trs:
                        if len(tr.select('td')) > 1:
                            row = tuple([catalog]) + tuple(td.get_text("|", strip=True) for td in tr.select('td'))
                            res.append(row)
                    lock.acquire()
                    conn = mysql.connector.connect(host='localhost', user='root', passwd='password', db='test')
                    cursor = conn.cursor()
                    try:
                        print('%s: %s inserting into DB...' % (line, catalog))
                        sql = 'INSERT INTO chembridge VALUES(%s,%s,%s,%s)'
                        cursor.executemany(sql, res)
                        conn.commit()
                    except Exception as e:
                        print(e)
                    finally:
                        cursor.close()
                        conn.close()
                        lock.release()

    def writeToExcel(datas, filename):
        # create a workbook in memory
        result_wb = Workbook()
        # take the first worksheet and set its title
        ws1 = result_wb.worksheets[0]
        ws1.title = "Crawl results"
        row0 = ['catalog', 'amount', 'price', 'qty']
        ft = Font(name='Arial', size=11, bold=True)
        for k in range(len(row0)):
            ws1.cell(row=1, column=k + 1).value = row0[k]
            ws1.cell(row=1, column=k + 1).font = ft
        for i in range(1, len(datas) + 1):
            for j in range(1, len(row0) + 1):
                ws1.cell(row=i + 1, column=j).value = datas[i - 1][j - 1]
        # save the workbook to disk
        result_wb.save(filename=filename)

    if __name__ == '__main__':
        starttime = time.time()
        lock = threading.Lock()

        words = []  # search terms
        basedir = os.path.abspath(os.path.dirname(__file__))
        filename = 'MDL.txt'
        file = os.path.join(basedir, filename)  # file path
        f = open(file, 'r')
        for line in f.readlines():
            line = line.strip()
            words.append(line)
        f.close()

        count = 0  # progress counter
        my_queue = Queue()  # FIFO queue for terms whose first round of searches failed
        error_list = []  # terms that ultimately failed
        threads = []

        # clear the chembridge table before starting
        conn = mysql.connector.connect(host='localhost', user='root', passwd='password', db='test')
        cursor = conn.cursor()
        print('Clearing table...')
        cursor.execute('delete from chembridge')
        conn.commit()
        cursor.close()
        conn.close()

        num_threads = 10  # number of crawler threads
        relay = 10  # crawl delay; actual delay = relay * (random value between 0.5 and 1.5)
        threadList = []
        for i in range(1, num_threads + 1):
            threadList.append('Crawler-%s' % i)
        # start the threads
        for tName in threadList:
            thread = ThreadCrawl(tName, relay)
            thread.daemon = True
            thread.start()
            threads.append(thread)
            time.sleep(1)
        # block the main thread until all worker threads finish
        for t in threads:
            t.join()

        # save the results to Excel
        conn = mysql.connector.connect(host='localhost', user='root', passwd='password', db='test')
        cursor = conn.cursor()
        cursor.execute('select * from chembridge')
        datas = cursor.fetchall()
        conn.commit()
        cursor.close()
        conn.close()
        writeToExcel(datas, 'result.xlsx')

        # report the results
        while not my_queue.empty():
            error_line = my_queue.get()
            error_list.append(error_line)
        print('Crawling finished!\n')
        if len(error_list) == 0:
            print('Failed searches: 0')
        else:
            print('%s searches failed in total:' % len(error_list), ','.join(error_list))
        print('Elapsed: %f s' % (time.time() - starttime))

words is the list of search terms. When a search fails it is retried immediately; num_retries is the maximum number of retries for each term. Terms that still fail after the retries are exhausted are pushed onto the my_queue queue.

Once all the terms in words have been consumed, each thread re-searches the terms waiting in my_queue, looping until my_queue is empty (i.e. until every term has been searched successfully).

Note: Python's GIL does not make shared state safe; whenever several threads modify the same global variable, the update must be protected by a lock.
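The GIL only serializes individual bytecode operations; a compound update such as count = count + 1 is still a read followed by a write, so two threads can read the same old value and one increment gets lost. A minimal sketch of the locking pattern used in get_page() above:

    import threading

    count = 0              # shared progress counter
    lock = threading.Lock()

    def finish_one():
        global count
        lock.acquire()     # same pattern as in get_page() above
        try:
            count = count + 1
        finally:
            lock.release()

    threads = [threading.Thread(target=finish_one) for _ in range(100)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(count)  # always 100 with the lock; without it, increments may be lost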

Screenshot of a sample run (not reproduced here).

2. An implementation of the second design

The three queues urls_queue, html_queue and item_queue hold, respectively, the URLs to visit, the pages waiting to be parsed, and the scraped results. Three classes are defined accordingly: Fetcher downloads the page for each URL, Parser parses the downloaded content and produces the items to be saved, and Saver writes the items to the database. When urls_queue, html_queue and item_queue are all empty at the same time, every worker thread is stopped and the job is finished.

    # coding=utf-8
    import threading
    import queue, requests
    import time, random
    import mysql.connector
    from bs4 import BeautifulSoup

    class Fetcher(threading.Thread):
        def __init__(self, urls_queue, html_queue):
            threading.Thread.__init__(self)
            self.__running = threading.Event()
            self.__running.set()
            self.urls_queue = urls_queue
            self.html_queue = html_queue
            self.num_retries = 3  # maximum number of retries per term
            self.headers = {
                'Host': 'www.chembridge.com',
                'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; WOW64; rv:53.0) Gecko/20100101 Firefox/53.0',
                'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
                'Accept-Language': 'zh-CN,zh;q=0.8,en-US;q=0.5,en;q=0.3',
                'Accept-Encoding': 'gzip, deflate',
                'Referer': 'http://www.chembridge.com/search/search.php?search=1',
                'Connection': 'keep-alive',
                'Upgrade-Insecure-Requests': ''
            }

        def run(self):
            while True:
                try:
                    # non-blocking get so the thread exits cleanly once the queue drains
                    line = self.urls_queue.get_nowait()
                except queue.Empty:
                    break
                print(line)
                time.sleep(2 * random.randint(5, 15) / 10)
                self.get_page(line, self.num_retries)

        def get_page(self, line, num_retries=2):
            url = 'http://www.chembridge.com/search/search.php?searchType=MFCD&query=' + line + '&type=phrase&results=10&search=1'
            try:
                response = requests.get(url, headers=self.headers, timeout=20)
                status = response.status_code
                if status == 200:
                    html_doc = response.text
                    self.html_queue.put(html_doc)
                    print('finished searching %s' % line)
                else:
                    print('network error searching %s, status code: %s' % (line, status))
                    if num_retries > 0:
                        print('retrying search for %s' % line)
                        time.sleep(2 * random.randint(5, 15) / 10)
                        self.get_page(line, num_retries - 1)
                    else:
                        print('search for %s failed four times!!!' % line)
                        self.urls_queue.put(line)

            except Exception as e:
                print('exception while searching %s, error:' % line, e)
                if num_retries > 0:
                    print('retrying search for %s' % line)
                    time.sleep(2 * random.randint(5, 15) / 10)
                    self.get_page(line, num_retries - 1)
                else:
                    print('search for %s failed four times!!!' % line)
                    self.urls_queue.put(line)

        def stop(self):
            self.__running.clear()

    class Parser(threading.Thread):
        def __init__(self, html_queue, item_queue):
            threading.Thread.__init__(self)
            self.__running = threading.Event()
            self.__running.set()
            self.html_queue = html_queue
            self.item_queue = item_queue
            self.num_retries = 3  # maximum number of retries per link
            self.headers = {
                'Host': 'www.hit2lead.com',
                'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; WOW64; rv:53.0) Gecko/20100101 Firefox/53.0',
                'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
                'Accept-Language': 'zh-CN,zh;q=0.8,en-US;q=0.5,en;q=0.3',
                'Accept-Encoding': 'gzip, deflate, br'
            }

        def run(self):
            while self.__running.is_set():
                print('html_queue size: ', self.html_queue.qsize())
                try:
                    # use a timeout so the thread can notice stop() instead of blocking forever
                    html_doc = self.html_queue.get(timeout=1)
                except queue.Empty:
                    continue
                try:
                    soup = BeautifulSoup(html_doc, 'lxml')
                    div = soup.find(id='BBResults')
                    if div:
                        links = div.select('a.chemical')
                        for link in links:
                            self.get_page_link(link, self.num_retries)
                            relay = random.randint(20, 50) / 10
                            time.sleep(relay)
                except Exception:
                    # parsing failed: put the page back and try again later
                    self.html_queue.put(html_doc)

        def get_page_link(self, link, num_retries=2):
            time.sleep(2 * random.randint(5, 15) / 10)
            res = []
            href = link.get('href')
            print(href)
            response = requests.get(href, headers=self.headers, timeout=20)
            status = response.status_code
            if status == 200:
                parse_html = response.text
                soup1 = BeautifulSoup(parse_html, 'lxml')
                catalogs = [catalog.get_text() for catalog in soup1.select('form div.matter h2')]  # get the catalog numbers
                table_headers = [table_header.get_text(strip=True) for table_header in soup1.select('form .matter thead tr')]
                if 'AmountPriceQty.' in table_headers:
                    index = table_headers.index('AmountPriceQty.')
                    catalog = catalogs[0]
                    trs = soup1.select('.form tbody tr')
                    if len(catalogs) > 1:
                        catalog = catalogs[index]
                    for tr in trs:
                        if len(tr.select('td')) > 1:
                            row = tuple([catalog]) + tuple(td.get_text("|", strip=True) for td in tr.select('td'))
                            res.append(row)
                    self.item_queue.put(res)
            else:
                print('network error fetching %s, status code: %s' % (link, status))
                if num_retries > 0:
                    print('retrying %s' % link)
                    time.sleep(random.randint(5, 15) / 10)
                    self.get_page_link(link, num_retries - 1)
                else:
                    print('%s failed four times!!!' % link)

        def stop(self):
            self.__running.clear()

    class Saver(threading.Thread):
        def __init__(self, item_queue):
            threading.Thread.__init__(self)
            self.__running = threading.Event()
            self.__running.set()
            self.item_queue = item_queue

        def run(self):
            while self.__running.is_set():
                print('item_queue size: ', self.item_queue.qsize())
                try:
                    # use a timeout so the thread can notice stop() instead of blocking forever
                    res = self.item_queue.get(timeout=1)
                except queue.Empty:
                    continue
                print(res)
                conn = mysql.connector.connect(host='localhost', user='root', passwd='password', db='test')
                cursor = conn.cursor()
                sql = 'INSERT INTO chembridge_test2 VALUES(%s,%s,%s,%s)'
                cursor.executemany(sql, res)
                print('inserting into DB')
                conn.commit()
                cursor.close()
                conn.close()

        def stop(self):
            self.__running.clear()

    if __name__ == '__main__':
        starttime = time.time()
        lock = threading.Lock()
        urls_queue = queue.Queue()
        html_queue = queue.Queue()
        item_queue = queue.Queue()

        # clear the target table before starting
        conn = mysql.connector.connect(host='localhost', user='root', passwd='password', db='test')
        cursor = conn.cursor()
        print('Clearing table...')
        cursor.execute('delete from chembridge_test2')
        conn.commit()
        cursor.close()
        conn.close()

        print('start...')

        f = open('MDL1.txt', 'r')
        for line in f.readlines():
            line = line.strip()
            urls_queue.put(line)
        f.close()

        threads = []
        for j in range(8):
            thread1 = Fetcher(urls_queue, html_queue)
            thread1.daemon = True
            thread1.start()
            threads.append(thread1)
        for j in range(1):
            thread1 = Parser(html_queue, item_queue)
            thread1.daemon = True
            thread1.start()
            threads.append(thread1)
        for j in range(2):
            thread1 = Saver(item_queue)
            thread1.daemon = True
            thread1.start()
            threads.append(thread1)

        # wait until all three queues are empty at the same time
        while True:
            time.sleep(0.5)
            if urls_queue.empty() and html_queue.empty() and item_queue.empty():
                break

        print('Done!')
        for t in threads:
            t.stop()
        for t in threads:
            t.join()
        print('end')
        print('Elapsed: %f s' % (time.time() - starttime))

Tune the number of threads to your network conditions, so that threads do not sit blocked for long stretches waiting on requests calls.

For comparison, here is an implementation using Scrapy.

items.py

    import scrapy

    class ChemItem(scrapy.Item):
        # define the fields for the item
        catalog = scrapy.Field()
        amount = scrapy.Field()
        price = scrapy.Field()
        qty = scrapy.Field()

quotes_spider.py

    # -*- coding: utf-8 -*-
    import scrapy
    from tutorial.items import ChemItem

    class QuotesSpider(scrapy.Spider):
        name = "quotes"
        headers = [{
            'Host': 'www.chembridge.com',
            'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; WOW64; rv:53.0) Gecko/20100101 Firefox/53.0',
            'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
            'Accept-Language': 'zh-CN,zh;q=0.8,en-US;q=0.5,en;q=0.3',
            'Accept-Encoding': 'gzip, deflate',
            'Referer': 'http://www.chembridge.com/search/search.php?search=1',
            'Connection': 'keep-alive',
            'Upgrade-Insecure-Requests': ''
        },
        {
            'Host': 'www.hit2lead.com',
            'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; WOW64; rv:53.0) Gecko/20100101 Firefox/53.0',
            'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
            'Accept-Language': 'zh-CN,zh;q=0.8,en-US;q=0.5,en;q=0.3',
            'Accept-Encoding': 'gzip, deflate, br'
        }]

        def start_requests(self):
            start_urls = []
            f = open('MDL.txt', 'r')
            for line in f.readlines():
                line = line.strip()
                print(line)
                start_urls.append('http://www.chembridge.com/search/search.php?searchType=MFCD&query=' + line + '&type=phrase&results=10&search=1')
            f.close()
            for url in start_urls:
                yield scrapy.Request(url=url, callback=self.parse, headers=self.headers[0])

        def parse(self, response):
            links = response.css('#BBResults a.chemical::attr(href)').extract()
            for link in links:
                yield scrapy.Request(url=link, callback=self.parse_dir_contents, headers=self.headers[1])

        def parse_dir_contents(self, response):
            catalogs = response.css('form div.matter h2::text').extract()
            table_headers = [''.join(res.re(r'>(.*)</td>')) for res in response.css('form div.matter thead tr')]
            print(table_headers)
            index = table_headers.index('AmountPriceQty.')
            catalog = catalogs[0]
            trs = response.css('.form tbody tr')
            if len(catalogs) > 1:
                catalog = catalogs[index]
            for tr in trs:
                if len(tr.css('td')) > 1:
                    item = ChemItem()
                    item['catalog'] = catalog
                    item['amount'] = tr.css('td')[0].css('::text').extract()[0]
                    item['price'] = '|'.join(tr.css('td')[1].css('::text').extract())
                    item['qty'] = tr.css('td')[2].css('::text').extract()[0] if len(tr.css('td')[2].css('::text').extract()) == 1 else tr.css('td')[2].css('::attr(value)').extract()[0]
                    yield item

pipelines.py

    # store items in the MySQL database
    from twisted.enterprise import adbapi
    import MySQLdb
    import MySQLdb.cursors
    from scrapy import log

    class MySQLStorePipeline(object):
        def __init__(self, dbpool):
            self.dbpool = dbpool

        # database parameters are read from settings.py
        @classmethod
        def from_settings(cls, settings):
            dbargs = dict(
                host=settings['MYSQL_HOST'],
                db=settings['MYSQL_DBNAME'],
                user=settings['MYSQL_USER'],
                passwd=settings['MYSQL_PASSWD'],
                charset='utf8',
                cursorclass=MySQLdb.cursors.DictCursor,
                use_unicode=True,
            )
            dbpool = adbapi.ConnectionPool('MySQLdb', **dbargs)
            return cls(dbpool)

        # the default pipeline entry point
        def process_item(self, item, spider):
            res = self.dbpool.runInteraction(self.insert_into_table, item)
            res.addErrback(self.handle_error)
            return item

        # target table; it must be created beforehand
        def insert_into_table(self, conn, item):
            conn.execute('insert into chembridge(catalog, amount, price, qty) values(%s,%s,%s,%s)', (
                item['catalog'],
                item['amount'],
                item['price'],
                item['qty']
            ))

        def handle_error(self, e):
            log.err(e)

settings.py

    FEED_EXPORTERS = {
        # 'tutorial' is the project name; a sketch of this exporter module is given below
        'csv': 'tutorial.spiders.csv_item_exporter.MyProjectCsvItemExporter',
    }

    FIELDS_TO_EXPORT = [
        'catalog',
        'amount',
        'price',
        'qty'
    ]

    LINETERMINATOR = '\n'

    ITEM_PIPELINES = {
        'tutorial.pipelines.MySQLStorePipeline': 300,
    }

    # MySQL database settings
    MYSQL_HOST = 'localhost'
    MYSQL_DBNAME = 'test'
    MYSQL_USER = 'root'
    MYSQL_PASSWD = 'password'
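The FEED_EXPORTERS entry refers to tutorial/spiders/csv_item_exporter.py, which the post does not include. What follows is an assumed sketch of that module, a thin CsvItemExporter subclass that fixes the CSV column order to match FIELDS_TO_EXPORT; it is not the author's actual file:

    # tutorial/spiders/csv_item_exporter.py -- assumed content, not taken from the original post
    from scrapy.exporters import CsvItemExporter  # scrapy.contrib.exporter in very old Scrapy versions

    class MyProjectCsvItemExporter(CsvItemExporter):
        def __init__(self, *args, **kwargs):
            # fix the CSV column order to match FIELDS_TO_EXPORT in settings.py
            kwargs.setdefault('fields_to_export', ['catalog', 'amount', 'price', 'qty'])
            super(MyProjectCsvItemExporter, self).__init__(*args, **kwargs)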

main.py

    # -*- coding: utf-8 -*-
    from scrapy import cmdline
    cmdline.execute("scrapy crawl quotes -o items.csv -t csv".split())

Finally, run main.py; the results are written to items.csv and to the MySQL database at the same time.
