Scrapy 1.4 Official Documentation Summary (2): Tutorial

Author: SeanCheney
Link: https://www.jianshu.com/p/7cc649becf86
Source: Jianshu (简书)


This is a summary of the official documentation's Tutorial (https://docs.scrapy.org/en/latest/intro/tutorial.html).

Creating a project

Use the command:

scrapy startproject tutorial

This generates the project skeleton.
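For reference, in Scrapy 1.4 the startproject command typically produces the following layout (the inline comments are descriptive annotations, not command output):

tutorial/
    scrapy.cfg            # deploy configuration file
    tutorial/             # project's Python module; you import your code from here
        __init__.py
        items.py          # project items definition file
        middlewares.py    # project middlewares file
        pipelines.py      # project pipelines file
        settings.py       # project settings file
        spiders/          # directory where you will put your spiders
            __init__.py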

Create a new file quotes_spider.py in the tutorial/spiders folder with the following code:

import scrapy


class QuotesSpider(scrapy.Spider):
    name = "quotes"

    def start_requests(self):
        urls = [
            'http://quotes.toscrape.com/page/1/',
            'http://quotes.toscrape.com/page/2/',
        ]
        for url in urls:
            yield scrapy.Request(url=url, callback=self.parse)

    def parse(self, response):
        page = response.url.split("/")[-2]
        filename = 'quotes-%s.html' % page
        with open(filename, 'wb') as f:
            f.write(response.body)
        self.log('Saved file %s' % filename)

The start_requests method yields scrapy.Request objects. As each one completes, Scrapy instantiates a Response object and calls the callback bound to the request (here, parse), passing the response as the argument.

Change to the project's top-level directory and run the spider:

scrapy crawl quotes

Scrapy prints its crawl log to the console (output omitted here).

Two files appear in the top-level directory: quotes-1.html and quotes-2.html.

An alternative is to define a start_urls class attribute containing the URLs. parse() is Scrapy's default callback, so it runs even when no callback is specified:

import scrapy


class QuotesSpider(scrapy.Spider):
    name = "quotes"
    start_urls = [
        'http://quotes.toscrape.com/page/1/',
        'http://quotes.toscrape.com/page/2/',
    ]

    def parse(self, response):
        page = response.url.split("/")[-2]
        filename = 'quotes-%s.html' % page
        with open(filename, 'wb') as f:
            f.write(response.body)

Extracting data

The best way to learn how Scrapy extracts data is the Scrapy shell. In a Windows shell, run:

scrapy shell "http://quotes.toscrape.com/page/1/"

Or, in Git Bash (note the difference between single and double quotes):

scrapy shell 'http://quotes.toscrape.com/page/1/'

The shell starts and prints its banner (output omitted here).

Extract with CSS selectors:

>>> response.css('title')
[<Selector xpath='descendant-or-self::title' data='<title>Quotes to Scrape</title>'>]

Extract only the title's text:

>>> response.css('title::text').extract()
['Quotes to Scrape']

::text means only the text is extracted; drop it and you get the full element:

>>> response.css('title').extract()
['<title>Quotes to Scrape</title>']

Since the returned object is a list, to take only the first element use:

>>> response.css('title::text').extract_first()
'Quotes to Scrape'

Or use an index:

>>> response.css('title::text')[0].extract()
'Quotes to Scrape'

The former is better, as it avoids a potential IndexError when there is no match.
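extract_first() also accepts a default value to return when nothing matches, which makes the no-match case explicit; a quick shell check (noelement is just a deliberately non-matching selector used for illustration):

>>> response.css('noelement::text').extract_first(default='not-found')
'not-found'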

Besides extract() and extract_first(), you can also use regular expressions via re():

>>> response.css('title::text').re(r'Quotes.*')
['Quotes to Scrape']
>>> response.css('title::text').re(r'Q\w+')
['Quotes']
>>> response.css('title::text').re(r'(\w+) to (\w+)')
['Quotes', 'Scrape']

A brief introduction to XPath

Scrapy also supports XPath:

>>> response.xpath('//title')
[<Selector xpath='//title' data='<title>Quotes to Scrape</title>'>]
>>> response.xpath('//title/text()').extract_first()
'Quotes to Scrape'

Under the hood, CSS selectors are converted to XPath. XPath is more powerful, though; for example, it can select a link by the text it contains, such as the "next page" link. For more, see "Using XPath with Scrapy Selectors" in the official documentation.
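For instance, selecting the pager link by its visible text, something plain CSS selectors cannot express (matching on "Next" is an assumption based on this site's link text; run in the same shell session):

>>> response.xpath('//a[contains(text(), "Next")]/@href').extract_first()
'/page/2/'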

Extracting the quotes

Each quote on http://quotes.toscrape.com has this HTML structure:

<div class="quote">
    <span class="text">“The world as we have created it is a process of our
    thinking. It cannot be changed without changing our thinking.”</span>
    <span>
        by <small class="author">Albert Einstein</small>
        <a href="/author/Albert-Einstein">(about)</a>
    </span>
    <div class="tags">
        Tags:
        <a class="tag" href="/tag/change/page/1/">change</a>
        <a class="tag" href="/tag/deep-thoughts/page/1/">deep-thoughts</a>
        <a class="tag" href="/tag/thinking/page/1/">thinking</a>
        <a class="tag" href="/tag/world/page/1/">world</a>
    </div>
</div>

Open a shell on the site:

$ scrapy shell "http://quotes.toscrape.com"

Extract the quote elements as a list of selectors:

response.css("div.quote")

To take only the first one:

quote = response.css("div.quote")[0]

Extract the quote text, the author, and the tags:

>>> title = quote.css("span.text::text").extract_first()
>>> title
'“The world as we have created it is a process of our thinking. It cannot be changed without changing our thinking.”'
>>> author = quote.css("small.author::text").extract_first()
>>> author
'Albert Einstein'

The tags are a list of strings:

>>> tags = quote.css("div.tags a.tag::text").extract()
>>> tags
['change', 'deep-thoughts', 'thinking', 'world']

Now that extracting a single quote is clear, extract all of them:

>>> for quote in response.css("div.quote"):
...     text = quote.css("span.text::text").extract_first()
...     author = quote.css("small.author::text").extract_first()
...     tags = quote.css("div.tags a.tag::text").extract()
...     print(dict(text=text, author=author, tags=tags))
{'tags': ['change', 'deep-thoughts', 'thinking', 'world'], 'author': 'Albert Einstein', 'text': '“The world as we have created it is a process of our thinking. It cannot be changed without changing our thinking.”'}
{'tags': ['abilities', 'choices'], 'author': 'J.K. Rowling', 'text': '“It is our choices, Harry, that show what we truly are, far more than our abilities.”'}
... a few more of these, omitted for brevity
>>>

Extracting data in the spider

Use Python's yield to emit each quote as a dictionary:

import scrapy


class QuotesSpider(scrapy.Spider):
    name = "quotes"
    start_urls = [
        'http://quotes.toscrape.com/page/1/',
        'http://quotes.toscrape.com/page/2/',
    ]

    def parse(self, response):
        for quote in response.css('div.quote'):
            yield {
                'text': quote.css('span.text::text').extract_first(),
                'author': quote.css('small.author::text').extract_first(),
                'tags': quote.css('div.tags a.tag::text').extract(),
            }

Run the spider; the log contains lines like:

2016-09-19 18:57:19 [scrapy.core.scraper] DEBUG: Scraped from <200 http://quotes.toscrape.com/page/1/>
{'tags': ['life', 'love'], 'author': 'André Gide', 'text': '“It is better to be hated for what you are than to be loved for what you are not.”'}
2016-09-19 18:57:19 [scrapy.core.scraper] DEBUG: Scraped from <200 http://quotes.toscrape.com/page/1/>
{'tags': ['edison', 'failure', 'inspirational', 'paraphrased'], 'author': 'Thomas A. Edison', 'text': "“I have not failed. I've just found 10,000 ways that won't work.”"}

Storing the data

The most convenient way is a feed export. To save as JSON, run:

scrapy crawl quotes -o quotes.json

To save as JSON Lines:

scrapy crawl quotes -o quotes.jl

To save as CSV:

scrapy crawl quotes -o quotes.csv
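Note that for historical reasons -o appends to an existing file, so running the crawl twice against the same quotes.json leaves behind broken JSON; JSON Lines avoids this because each record sits on its own line. A minimal sketch for reading the file back, assuming the quotes.jl produced above:

import json

# each line of a .jl file is one standalone JSON document
with open('quotes.jl') as f:
    quotes = [json.loads(line) for line in f]

print(quotes[0]['author'])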

Following the next page

First, inspect the markup of the next-page link:

<ul class="pager">
    <li class="next">
        <a href="/page/2/">Next <span aria-hidden="true">&rarr;</span></a>
    </li>
</ul>

Extract it:

>>> response.css('li.next a').extract_first()
'<a href="/page/2/">Next <span aria-hidden="true">→</span></a>'

To take only the href:

>>> response.css('li.next a::attr(href)').extract_first()
'/page/2/'
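response.urljoin() resolves such a relative URL against the URL of the current page; in the same shell session:

>>> response.urljoin('/page/2/')
'http://quotes.toscrape.com/page/2/'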

Use urljoin to build the full URL and yield a request for the next page; the spider then crawls in a loop:

import scrapy


class QuotesSpider(scrapy.Spider):
    name = "quotes"
    start_urls = [
        'http://quotes.toscrape.com/page/1/',
    ]

    def parse(self, response):
        for quote in response.css('div.quote'):
            yield {
                'text': quote.css('span.text::text').extract_first(),
                'author': quote.css('small.author::text').extract_first(),
                'tags': quote.css('div.tags a.tag::text').extract(),
            }

        next_page = response.css('li.next a::attr(href)').extract_first()
        if next_page is not None:
            next_page = response.urljoin(next_page)
            yield scrapy.Request(next_page, callback=self.parse)

A more concise alternative is response.follow, which accepts relative URLs directly, so no urljoin call is needed:

import scrapy


class QuotesSpider(scrapy.Spider):
    name = "quotes"
    start_urls = [
        'http://quotes.toscrape.com/page/1/',
    ]

    def parse(self, response):
        for quote in response.css('div.quote'):
            yield {
                'text': quote.css('span.text::text').extract_first(),
                'author': quote.css('span small::text').extract_first(),
                'tags': quote.css('div.tags a.tag::text').extract(),
            }

        next_page = response.css('li.next a::attr(href)').extract_first()
        if next_page is not None:
            yield response.follow(next_page, callback=self.parse)

You can also pass the selector itself to response.follow, without extracting the string first:

for href in response.css('li.next a::attr(href)'):
    yield response.follow(href, callback=self.parse)

For <a> elements, response.follow uses their href attribute automatically, which is more concise still:

for a in response.css('li.next a'):
    yield response.follow(a, callback=self.parse)

The following spider extracts author information, combining a second callback with automatic pagination:

import scrapy


class AuthorSpider(scrapy.Spider):
    name = 'author'

    start_urls = ['http://quotes.toscrape.com/']

    def parse(self, response):
        # follow links to author pages
        for href in response.css('.author + a::attr(href)'):
            yield response.follow(href, self.parse_author)

        # follow pagination links
        for href in response.css('li.next a::attr(href)'):
            yield response.follow(href, self.parse)

    def parse_author(self, response):
        def extract_with_css(query):
            return response.css(query).extract_first().strip()

        yield {
            'name': extract_with_css('h3.author-title::text'),
            'birthdate': extract_with_css('.author-born-date::text'),
            'bio': extract_with_css('.author-description::text'),
        }
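Even though many quotes link to the same author page, the author pages are not scraped repeatedly: by default, Scrapy filters out requests to URLs it has already visited.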

Using spider arguments

Pass arguments on the command line by adding -a:

scrapy crawl quotes -o quotes-humor.json -a tag=humor

This passes humor to the spider as its tag attribute; the spider below picks it up with getattr and then crawls only http://quotes.toscrape.com/tag/humor:

import scrapy


class QuotesSpider(scrapy.Spider):
    name = "quotes"

    def start_requests(self):
        url = 'http://quotes.toscrape.com/'
        tag = getattr(self, 'tag', None)
        if tag is not None:
            url = url + 'tag/' + tag
        yield scrapy.Request(url, self.parse)

    def parse(self, response):
        for quote in response.css('div.quote'):
            yield {
                'text': quote.css('span.text::text').extract_first(),
                'author': quote.css('small.author::text').extract_first(),
            }

        next_page = response.css('li.next a::attr(href)').extract_first()
        if next_page is not None:
            yield response.follow(next_page, self.parse)

More examples

The quotesbot project at https://github.com/scrapy/quotesbot implements the same spider in both CSS and XPath versions:

import scrapy


class ToScrapeCSSSpider(scrapy.Spider):
    name = "toscrape-css"
    start_urls = [
        'http://quotes.toscrape.com/',
    ]

    def parse(self, response):
        for quote in response.css("div.quote"):
            yield {
                'text': quote.css("span.text::text").extract_first(),
                'author': quote.css("small.author::text").extract_first(),
                'tags': quote.css("div.tags > a.tag::text").extract()
            }

        next_page_url = response.css("li.next > a::attr(href)").extract_first()
        if next_page_url is not None:
            yield scrapy.Request(response.urljoin(next_page_url))

And the same spider written with XPath:

import scrapy


class ToScrapeSpiderXPath(scrapy.Spider):
    name = 'toscrape-xpath'
    start_urls = [
        'http://quotes.toscrape.com/',
    ]

    def parse(self, response):
        for quote in response.xpath('//div[@class="quote"]'):
            yield {
                'text': quote.xpath('./span[@class="text"]/text()').extract_first(),
                'author': quote.xpath('.//small[@class="author"]/text()').extract_first(),
                'tags': quote.xpath('.//div[@class="tags"]/a[@class="tag"]/text()').extract()
            }

        next_page_url = response.xpath('//li[@class="next"]/a/@href').extract_first()
        if next_page_url is not None:
            yield scrapy.Request(response.urljoin(next_page_url))

