Scrapy in Practice: Building an Efficient Distributed Crawler for Kuaikan Manhua
2025-08-28 16:52:52

1. Scrapy Framework Overview

Scrapy is an application framework for crawling websites and extracting structured data. It offers powerful data extraction, a flexible extension mechanism, and efficient asynchronous processing. Its core architecture consists of:

● Engine: controls the data flow between all components and triggers events when certain actions occur
● Scheduler: receives requests from the Engine, enqueues them, and feeds them back to the Engine on demand
● Downloader: downloads web pages and returns the responses to the Engine, which passes them to the Spider
● Spider: user-written classes that parse responses and extract items and follow-up URLs
● Item Pipeline: processes the items extracted by the Spider, handling data cleaning, validation, and storage

2. Setting Up the Project Environment

First, we need to install Scrapy and the related dependency libraries; a typical pip command is sketched below.

For the distributed crawler, we also need to install and configure a Redis server to act as the scheduling queue.
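A minimal install covering the libraries used in this article (scrapy-redis pulls in the Redis client; the Redis server itself is installed separately, e.g. through your system package manager):

pip install scrapy scrapy-redis pymongo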

3. Creating the Scrapy Project

Create the project with the Scrapy command-line tool:

scrapy startproject kuaikan_crawler
cd kuaikan_crawler
scrapy genspider kuaikan www.kuaikanmanhua.com
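These commands generate the standard Scrapy project layout (kuaikan.py is the spider skeleton created by genspider):

kuaikan_crawler/
    scrapy.cfg
    kuaikan_crawler/
        __init__.py
        items.py
        middlewares.py
        pipelines.py
        settings.py
        spiders/
            __init__.py
            kuaikan.py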

4. Defining the Data Model

Define the data structure we want to scrape in items.py:

import scrapy


class ComicItem(scrapy.Item):
    title = scrapy.Field()        # comic title
    author = scrapy.Field()       # author
    description = scrapy.Field()  # description
    cover_url = scrapy.Field()    # cover image URL
    tags = scrapy.Field()         # tag list
    likes = scrapy.Field()        # like count
    comments = scrapy.Field()     # comment count
    chapters = scrapy.Field()     # chapter list
    source_url = scrapy.Field()   # source page URL
    crawl_time = scrapy.Field()   # crawl timestamp

5. Writing the Core Spider Logic

Write the spider's main logic in spiders/kuaikan.py:

import scrapy
from datetime import datetime
from urllib.parse import urljoin

from kuaikan_crawler.items import ComicItem


class KuaikanSpider(scrapy.Spider):
    name = 'kuaikan'
    allowed_domains = ['www.kuaikanmanhua.com']
    start_urls = ['https://www.kuaikanmanhua.com/web/topic/all/']

    def parse(self, response):
        # Parse the comic listing page
        comics = response.css('.TopicList .topic-item')
        for comic in comics:
            detail_url = comic.css('a::attr(href)').get()
            if detail_url:
                yield scrapy.Request(
                    url=urljoin(response.url, detail_url),
                    callback=self.parse_comic_detail
                )

        # Follow pagination
        next_page = response.css('.next-page::attr(href)').get()
        if next_page:
            yield scrapy.Request(
                url=urljoin(response.url, next_page),
                callback=self.parse
            )

    def parse_comic_detail(self, response):
        # Parse the comic detail page
        item = ComicItem()

        # Basic information
        item['title'] = response.css('.comic-title::text').get()
        item['author'] = response.css('.author-name::text').get()
        item['description'] = response.css('.comic-description::text').get()
        item['cover_url'] = response.css('.cover img::attr(src)').get()
        item['tags'] = response.css('.tags .tag::text').getall()
        item['likes'] = response.css('.like-count::text').get()
        item['comments'] = response.css('.comment-count::text').get()
        item['source_url'] = response.url
        item['crawl_time'] = datetime.now().isoformat()

        # Chapter information
        chapters = []
        for chapter in response.css('.chapter-list li'):
            chapter_info = {
                'title': chapter.css('.chapter-title::text').get(),
                'url': urljoin(response.url, chapter.css('a::attr(href)').get()),
                'update_time': chapter.css('.update-time::text').get()
            }
            chapters.append(chapter_info)
        item['chapters'] = chapters

        yield item
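With the standalone spider in place, a single-node crawl can be started from the project root; the JSON feed export is optional and the file name is only illustrative:

scrapy crawl kuaikan -o comics.json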

6. Making the Spider Distributed

To convert the spider to distributed mode, we use the scrapy-redis component:

1. Modify the settings.py configuration file:

# Enable the scrapy-redis scheduler
SCHEDULER = "scrapy_redis.scheduler.Scheduler"

# Enable the scrapy-redis duplicate filter
DUPEFILTER_CLASS = "scrapy_redis.dupefilter.RFPDupeFilter"

# Redis connection
REDIS_URL = 'redis://localhost:6379/0'

# Keep the Redis queues across runs so crawls can be paused and resumed
SCHEDULER_PERSIST = True

# Item pipelines
ITEM_PIPELINES = {
    'scrapy_redis.pipelines.RedisPipeline': 300,
    'kuaikan_crawler.pipelines.MongoPipeline': 400,
}

2. Modify the spider to inherit from RedisSpider:

from scrapy_redis.spiders import RedisSpider


class DistributedKuaikanSpider(RedisSpider):
    name = 'distributed_kuaikan'
    redis_key = 'kuaikan:start_urls'

    def __init__(self, *args, **kwargs):
        super(DistributedKuaikanSpider, self).__init__(*args, **kwargs)
        self.allowed_domains = ['www.kuaikanmanhua.com']

    def parse(self, response):
        # Same parsing logic as the standalone KuaikanSpider above
        pass
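Worker nodes running this spider idle until a start URL appears under the redis_key. With scrapy-redis, the queue is usually seeded from redis-cli; the URL below is the listing page used by the standalone spider:

redis-cli lpush kuaikan:start_urls https://www.kuaikanmanhua.com/web/topic/all/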

7. Data Storage Pipeline

Create a MongoDB storage pipeline in pipelines.py:

import pymongo


class MongoPipeline:
    def __init__(self, mongo_uri, mongo_db):
        self.mongo_uri = mongo_uri
        self.mongo_db = mongo_db

    @classmethod
    def from_crawler(cls, crawler):
        return cls(
            mongo_uri=crawler.settings.get('MONGO_URI'),
            mongo_db=crawler.settings.get('MONGO_DATABASE', 'scrapy')
        )

    def open_spider(self, spider):
        self.client = pymongo.MongoClient(self.mongo_uri)
        self.db = self.client[self.mongo_db]

    def close_spider(self, spider):
        self.client.close()

    def process_item(self, item, spider):
        # Use the item class name (e.g. ComicItem) as the collection name
        collection_name = item.__class__.__name__
        self.db[collection_name].insert_one(dict(item))
        return item

Add the MongoDB configuration to settings.py:

MONGO_URI = 'mongodb://localhost:27017'
MONGO_DATABASE = 'kuaikan_comics'
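To spot-check what the pipeline has written, a short pymongo snippet is enough. This is a minimal sketch; it assumes the collection is named ComicItem, which follows from the pipeline using the item class name as the collection name:

import pymongo

# Connect to the same MongoDB instance configured in settings.py
client = pymongo.MongoClient('mongodb://localhost:27017')
db = client['kuaikan_comics']

# Count stored comics and peek at one title
print(db['ComicItem'].count_documents({}))
print(db['ComicItem'].find_one({}, {'title': 1, '_id': 0}))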

8. Middleware and Anti-Crawling Strategies

To cope with the site's anti-crawling mechanisms, we add some middleware, then register it in settings.py (see the snippet after the user-agent list):

# Random User-Agent middleware in middlewares.py
import random


class RandomUserAgentMiddleware:
    def __init__(self, user_agents):
        self.user_agents = user_agents

    @classmethod
    def from_crawler(cls, crawler):
        return cls(
            user_agents=crawler.settings.get('USER_AGENTS', [])
        )

    def process_request(self, request, spider):
        # Guard against an empty list so random.choice does not raise
        if self.user_agents:
            request.headers['User-Agent'] = random.choice(self.user_agents)

# User-agent pool configured in settings.py
USER_AGENTS = [
    'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36',
    'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15',
    # add more user agents...
]
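The middleware only takes effect once it is registered in DOWNLOADER_MIDDLEWARES. The article does not show this step, so the priority value below is a reasonable placeholder rather than a required setting:

# Register the middleware in settings.py
DOWNLOADER_MIDDLEWARES = {
    'kuaikan_crawler.middlewares.RandomUserAgentMiddleware': 543,
}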

Summary

This article walked through building an efficient distributed comic crawler with the Scrapy framework. By combining scrapy-redis for distributed crawling, MongoDB for data storage, and several anti-anti-crawling measures, we can assemble a stable and efficient crawling system. The architecture is not limited to comic sites; with suitable adjustments it can be applied to scraping tasks on many other kinds of websites.

 