ProxyPool
A proxy IP pool for Python web crawlers.
Installation
- Download the code:
$ git clone git@github.com:jhao104/proxy_pool.git
- Install the dependencies:
$ pip install -r requirements.txt
- Update the configuration (these settings live in setting.py):

HOST = "0.0.0.0"                       # bind address for the web API
PORT = 5010                            # API port (the spider example below assumes 5010)
DB_CONN = 'redis://@127.0.0.1:8888'    # connection string for the proxy store
PROXY_FETCHER = [                      # enabled free-proxy fetch methods
    "freeProxy01",
    "freeProxy02",
    # ....
]
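
DB_CONN is a connection string pointing at the proxy store. A couple of illustrative values, assuming a Redis or SSDB backend (host, port, password, and db index here are placeholders to adjust to your deployment):

# Redis with a password and an explicit db index (placeholder values)
DB_CONN = 'redis://:password@127.0.0.1:6379/0'

# SSDB backend (placeholder values)
DB_CONN = 'ssdb://:password@127.0.0.1:8888'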
- Start the project:
$ python proxyPool.py schedule    # start the scheduler (proxy fetching and validation)
$ python proxyPool.py server      # start the web API
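
Once both processes are running, you can sanity-check the service; a minimal sketch using requests (the URL assumes the HOST/PORT configured above):

import requests

# fetch one proxy from the pool; the response is a JSON object
# with a "proxy" field of the form "host:port"
print(requests.get("http://127.0.0.1:5010/get").json())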
Usage
- API: the service exposes simple HTTP GET endpoints, e.g. /get (return a random proxy; the optional ?type=https parameter filters for HTTPS-capable proxies) and /delete/?proxy=host:port (remove a proxy from the pool); both are used in the spider example below.
- Spider:
import requests


def get_proxy():
    # ask the pool for a random HTTPS-capable proxy
    return requests.get("http://127.0.0.1:5010/get?type=https").json()


def delete_proxy(proxy):
    # remove a proxy from the pool
    requests.get("http://127.0.0.1:5010/delete/?proxy={}".format(proxy))


# your spider code

def getHtml():
    # ....
    retry_count = 5
    proxy = get_proxy().get("proxy")
    while retry_count > 0:
        try:
            # request the page through the proxy
            html = requests.get('https://www.example.com',
                                proxies={"http": "http://{}".format(proxy),
                                         "https": "https://{}".format(proxy)})
            return html
        except Exception:
            retry_count -= 1
    # all retries failed: remove the proxy from the pool
    delete_proxy(proxy)
    return None
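
The loop above retries the same proxy and only removes it from the pool once every attempt has failed. As an alternative, here is a sketch (not part of the project) that rotates to a fresh proxy after each failure, reusing the get_proxy and delete_proxy helpers defined above; the timeout value is an arbitrary choice:

import requests

def get_html_rotating(url, retries=5):
    # try up to `retries` different proxies from the pool
    for _ in range(retries):
        proxy = get_proxy().get("proxy")
        if not proxy:
            break  # the pool returned no proxy
        try:
            return requests.get(url,
                                proxies={"http": "http://{}".format(proxy),
                                         "https": "https://{}".format(proxy)},
                                timeout=10)
        except requests.RequestException:
            # drop the dead proxy and move on to the next one
            delete_proxy(proxy)
    return None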