Crawling each category one by one isn't practical. Change the angle: look at the search page instead, because it lets you filter by a time range, and that is the key.
Look closely at the URL:
https://zzk.cnblogs.com/s/blogpost?Keywords=python&datetimerange=Customer&from=2019-01-01&to=2019-01-01
Once you have this URL, a fairly simple idea gets you every Python-related post: iterate over the date, one day at a time.
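To make the idea concrete, here is a minimal sketch (plain Python, outside Scrapy) that builds one search URL per day; the query parameters are taken from the URL above, everything else is illustrative.

import datetime

SEARCH_URL = ("https://zzk.cnblogs.com/s/blogpost?Keywords={kw}"
              "&datetimerange=Customer&from={day}&to={day}")

def daily_urls(keyword, start, end):
    # yield one search URL per calendar day in [start, end]
    day = datetime.date.fromisoformat(start)
    stop = datetime.date.fromisoformat(end)
    while day <= stop:
        yield SEARCH_URL.format(kw=keyword, day=day.isoformat())
        day += datetime.timedelta(days=1)

for u in daily_urls("python", "2019-01-01", "2019-01-03"):
    print(u)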
Below is the core code; the most important points are called out separately.
Because the search page is protected, you must grab the cookies from your own local browser session, which is easy to do.
This is also a good moment to review dict comprehension syntax, as shown below.
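For example, the raw Cookie header value copied from the browser's developer tools can be turned into the dict that Scrapy expects with a one-line dict comprehension (the cookie names and values here are placeholders):

cookie_str = "foo=1; bar=abc=def"   # placeholder; paste your real Cookie header value
cookies = {item.split("=", 1)[0]: item.split("=", 1)[1] for item in cookie_str.split("; ")}
print(cookies)  # {'foo': '1', 'bar': 'abc=def'}

Using split("=", 1) keeps any '=' characters that appear inside cookie values intact.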
import scrapy
from scrapy import Request,Selector
import time
import datetime
class BlogsSpider(scrapy.Spider):
    name = 'Blogs'
    allowed_domains = ['zzk.cnblogs.com']
    start_urls = ['http://zzk.cnblogs.com/']
    from_time = "2010-01-01"
    end_time = "2010-01-01"
    keywords = "python"
    page = 1
    url = "https://zzk.cnblogs.com/s/blogpost?Keywords={keywords}&datetimerange=Customer&from={from_time}&to={end_time}&pageindex={page}"
    custom_settings = {
        "DEFAULT_REQUEST_HEADERS": {
            "HOST": "zzk.cnblogs.com",
            "TE": "Trailers",
            "referer": "https://zzk.cnblogs.com/s/blogpost?w=python",
            "upgrade-insecure-requests": "1",
            "user-agent": "Mozilla/5.0 Gecko/20100101 Firefox/64.0"
        }
    }

    def start_requests(self):
        cookie_str = "get this from your own browser"
        # split("=", 1) keeps '=' characters inside cookie values intact
        self.cookies = {item.split("=", 1)[0]: item.split("=", 1)[1] for item in cookie_str.split("; ")}
        yield Request(
            self.url.format(keywords=self.keywords, from_time=self.from_time, end_time=self.end_time, page=self.page),
            cookies=self.cookies, callback=self.parse)
After a page is fetched it has to be parsed to work out how many result pages there are, and then the date is advanced by one day. In the code below, pay special attention to the date-increment part.
    def parse(self, response):
        print("crawling", response.url)
        # total result count for the current day; defaults to 0 when the element is missing
        count = int(response.css('#CountOfResults::text').extract_first(default='0'))
        if count > 0:
            # 10 results per page, so request every result page for this day
            for page in range(1, int(count / 10) + 2):
                yield Request(
                    self.url.format(keywords=self.keywords, from_time=self.from_time, end_time=self.end_time, page=page),
                    cookies=self.cookies, callback=self.parse_detail, dont_filter=True)
                time.sleep(2)
        # move on to the next day and schedule the search page again
        d = datetime.datetime.strptime(self.from_time, '%Y-%m-%d')
        delta = datetime.timedelta(days=1)
        d = d + delta
        self.from_time = d.strftime('%Y-%m-%d')
        self.end_time = self.from_time
        yield Request(
            self.url.format(keywords=self.keywords, from_time=self.from_time, end_time=self.end_time, page=self.page),
            cookies=self.cookies, callback=self.parse, dont_filter=True)
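A quick sanity check of the date increment used above: the strptime / timedelta / strftime round trip rolls over month and year boundaries correctly, so the day-by-day iteration never gets stuck at the end of a month.

import datetime

d = datetime.datetime.strptime("2010-12-31", "%Y-%m-%d")
print((d + datetime.timedelta(days=1)).strftime("%Y-%m-%d"))  # 2011-01-01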
Parsing the pages and storing the results
There is nothing complicated in this part: just follow the flow, run the code, and let it churn for a while. Then check MongoDB:
db.getCollection('dict').count({})
which returns 372,352 records.
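The same count can also be checked from Python with pymongo; the connection URI and database name below are assumptions, only the collection name 'dict' comes from the shell query above.

from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")      # assumed local MongoDB
print(client["cnblogs"]["dict"].count_documents({}))   # "cnblogs" is a hypothetical database name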
    def parse_detail(self, response):
        items = response.xpath('//div[@class="searchItem"]')
        for item in items:
            title = item.xpath('h3[@class="searchItemTitle"]/a//text()').extract()
            title = "".join(title)
            author = item.xpath(".//span[@class='searchItemInfo-userName']/a/text()").extract_first()
            public_date = item.xpath(".//span[@class='searchItemInfo-publishDate']/text()").extract_first()
            pv = item.xpath(".//span[@class='searchItemInfo-views']/text()").extract_first()
            if pv:
                pv = pv[3:-1]  # strip the leading label and parentheses, e.g. "浏览(123)" -> "123"
            url = item.xpath(".//span[@class='searchURL']/text()").extract_first()
            # print(title, author, public_date, pv)
            yield {
                "title": title,
                "author": author,
                "public_date": public_date,
                "pv": pv,
                "url": url
            }
Storing the data
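Scrapy hands every yielded dict to the item pipeline; the original post doesn't show that part, so here is a minimal sketch using pymongo (the project/module names, connection URI and database name are assumptions; the collection name 'dict' matches the count query above):

# pipelines.py -- minimal MongoDB pipeline sketch
import pymongo

class MongoPipeline:
    def open_spider(self, spider):
        self.client = pymongo.MongoClient("mongodb://localhost:27017")  # assumed local MongoDB
        self.collection = self.client["cnblogs"]["dict"]  # hypothetical database; collection matches the query above

    def close_spider(self, spider):
        self.client.close()

    def process_item(self, item, spider):
        # each item is the dict yielded by parse_detail
        self.collection.insert_one(dict(item))
        return item

Remember to enable it in settings.py, e.g. ITEM_PIPELINES = {"myproject.pipelines.MongoPipeline": 300}, where "myproject" stands for your own project package.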
Run it all and the data is in hand. Afterwards we can do some simple analysis on it.