Queue Example - a Concurrent Web Spider
Tornado's tornado.queues module implements an asynchronous producer / consumer pattern for coroutines, analogous to the pattern implemented for threads by the Python standard library's queue module.
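To make the pattern concrete, here is a minimal producer/consumer sketch. It is not part of the spider example below; the coroutine names and the None sentinel are illustrative, but the Queue calls are the real tornado.queues API:

from tornado import gen, ioloop, queues

q = queues.Queue()

@gen.coroutine
def producer():
    for i in range(5):
        yield q.put(i)          # never pauses here: the queue is unbounded

@gen.coroutine
def consumer():
    while True:
        item = yield q.get()    # pauses whenever the queue is empty
        if item is None:        # sentinel: no more work
            return
        print('consumed %d' % item)

@gen.coroutine
def main():
    c = consumer()              # start the consumer; it pauses on the empty queue
    yield producer()            # feed five items through the queue
    yield q.put(None)           # tell the consumer to stop
    yield c                     # wait for the consumer to finish

ioloop.IOLoop.current().run_sync(main)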
A coroutine that yields Queue.get pauses until there is an item in the queue. If the queue has a maximum size set, a coroutine that yields Queue.put pauses until there is room for another item.
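The pausing behavior can be observed directly with a bounded queue. A small sketch, assuming a maxsize of 1 (the scenario is illustrative; put's timeout parameter and gen.TimeoutError are part of the library): the second put finds the queue full and, given a timeout, raises instead of pausing forever.

from datetime import timedelta
from tornado import gen, ioloop, queues

@gen.coroutine
def main():
    q = queues.Queue(maxsize=1)

    yield q.put('a')            # room available: returns immediately
    try:
        # The queue is now full, so this put would pause indefinitely;
        # the timeout turns the pause into a gen.TimeoutError instead.
        yield q.put('b', timeout=timedelta(seconds=1))
    except gen.TimeoutError:
        print('queue full; put timed out')

    print((yield q.get()))      # an item is waiting, so get returns at once

ioloop.IOLoop.current().run_sync(main)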
A Queue maintains a count of unfinished tasks, which begins at zero. put increments the count; task_done decrements it.
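A short sketch of the counter semantics (illustrative, not from the original example): note that get alone does not decrement the count; only task_done does, which is what lets join distinguish "queue drained" from "all work finished".

from tornado import gen, ioloop, queues

@gen.coroutine
def main():
    q = queues.Queue()
    q.put_nowait('job')         # unfinished-task count: 0 -> 1
    yield q.get()               # removes the item but does NOT touch the count
    # A bare `yield q.join()` here would pause forever: the count is still 1.
    q.task_done()               # count: 1 -> 0
    yield q.join()              # count is zero, so this returns immediately

ioloop.IOLoop.current().run_sync(main)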
In the web-spider example here, the queue begins containing only base_url. When a worker fetches a page it parses the links and puts new ones in the queue, then calls task_done to decrement the counter once. Eventually, a worker fetches a page whose URLs have all been seen before, and there is also no work left in the queue. Thus that worker's call to task_done decrements the counter to zero. The main coroutine, which is waiting for join, is unpaused and finishes.
import time
from datetime import timedelta

try:
    from HTMLParser import HTMLParser
    from urlparse import urljoin, urldefrag
except ImportError:
    from html.parser import HTMLParser
    from urllib.parse import urljoin, urldefrag

from tornado import httpclient, gen, ioloop, queues

base_url = 'http://www.tornadoweb.org/en/stable/'
concurrency = 10


@gen.coroutine
def get_links_from_url(url):
    """Download the page at `url` and parse it for links.

    Returned links have had the fragment after `#` removed, and have been made
    absolute so, e.g. the URL 'gen.html#tornado.gen.coroutine' becomes
    'http://www.tornadoweb.org/en/stable/gen.html'.
    """
    try:
        response = yield httpclient.AsyncHTTPClient().fetch(url)
        print('fetched %s' % url)

        html = response.body if isinstance(response.body, str) \
            else response.body.decode()
        urls = [urljoin(url, remove_fragment(new_url))
                for new_url in get_links(html)]
    except Exception as e:
        print('Exception: %s %s' % (e, url))
        raise gen.Return([])

    raise gen.Return(urls)


def remove_fragment(url):
    pure_url, frag = urldefrag(url)
    return pure_url


def get_links(html):
    class URLSeeker(HTMLParser):
        def __init__(self):
            HTMLParser.__init__(self)
            self.urls = []

        def handle_starttag(self, tag, attrs):
            href = dict(attrs).get('href')
            if href and tag == 'a':
                self.urls.append(href)

    url_seeker = URLSeeker()
    url_seeker.feed(html)
    return url_seeker.urls


@gen.coroutine
def main():
    q = queues.Queue()
    start = time.time()
    fetching, fetched = set(), set()

    @gen.coroutine
    def fetch_url():
        current_url = yield q.get()
        try:
            if current_url in fetching:
                return

            print('fetching %s' % current_url)
            fetching.add(current_url)
            urls = yield get_links_from_url(current_url)
            fetched.add(current_url)

            for new_url in urls:
                # Only follow links beneath the base URL
                if new_url.startswith(base_url):
                    yield q.put(new_url)

        finally:
            q.task_done()

    @gen.coroutine
    def worker():
        while True:
            yield fetch_url()

    q.put(base_url)

    # Start workers, then wait for the work queue to be empty.
    for _ in range(concurrency):
        worker()
    yield q.join(timeout=timedelta(seconds=300))
    assert fetching == fetched
    print('Done in %d seconds, fetched %s URLs.' % (
        time.time() - start, len(fetched)))


if __name__ == '__main__':
    import logging
    logging.basicConfig()
    io_loop = ioloop.IOLoop.current()
    io_loop.run_sync(main)
Source:
https://tornado-zh-cn.readthedocs.io/zh_CN/latest/guide/queues.html