In Scrapy, how do I crawl the content behind links found on a crawled page (i.e. follow links within links)?
def parse(self, response):
    for each in response.xpath('//tr[@class="even"] | //tr[@class="odd"]'):
        # Create a fresh item for every row; reusing one item object across
        # rows would leave every yielded item pointing at the last row's data.
        item = TencentItem()
        item['Positionname'] = each.xpath('./td[1]/a/text()').extract_first()
        item['Detailslink'] = 'http://hr.tencent.com/' + each.xpath('./td[1]/a/@href').extract_first()
        item['Positioncategory'] = each.xpath('./td[2]/text()').extract_first()
        item['peoplenumber'] = each.xpath('./td[3]/text()').extract_first()
        item['Workingplace'] = each.xpath('./td[4]/text()').extract_first()
        item['Releasetime'] = each.xpath('./td[5]/text()').extract_first()
        yield item
1 reply
一只写程序的猿 ("The mark of a mature paladin is that he no longer explains sunlight to the blind." Official account: Python攻城狮) answered 2018-01-18