Scrapy: A Fast and Powerful Scraping and Web Crawling Framework

An open source and collaborative framework for extracting the data you need from websites, in a fast, simple, yet extensible way. Maintained by Zyte (formerly Scrapinghub) and many other contributors.

Scrapy (/ˈskreɪpaɪ/ SKRAY-peye) is a free and open-source web-crawling framework written in Python. Originally designed for web scraping, it can also be used to extract data using APIs or as a general-purpose web crawler.

Scrapy is a fast high-level web crawling and web scraping framework, used to crawl websites and extract structured data from their pages. It can be used for a wide range of purposes, from data mining to monitoring and automated testing.
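The crawl-and-extract step that Scrapy automates can be sketched with the standard library alone. This is an illustrative toy, not Scrapy's actual API (a real spider subclasses `scrapy.Spider` and yields items from its `parse` method); the HTML snippet and class name here are made up for the example.

```python
from html.parser import HTMLParser

# Toy extractor: pulls <a href> links and <h1> text from a page,
# mimicking the "extract structured data" step a Scrapy spider performs.
class PageExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []
        self.titles = []
        self._in_h1 = False

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href")
            if href:
                self.links.append(href)
        elif tag == "h1":
            self._in_h1 = True

    def handle_endtag(self, tag):
        if tag == "h1":
            self._in_h1 = False

    def handle_data(self, data):
        if self._in_h1:
            self.titles.append(data.strip())

html = '<html><body><h1>Quotes</h1><a href="/page/2/">Next</a></body></html>'
parser = PageExtractor()
parser.feed(html)
print(parser.titles)  # ['Quotes']
print(parser.links)   # ['/page/2/']
```

Scrapy layers scheduling, concurrent downloading, and pipelines on top of this basic fetch-parse-extract loop.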


Getting help¶

Having trouble? We’d like to help!

  • Try the FAQ – it’s got answers to some common questions.

  • Looking for specific information? Try the Index or Module Index.

  • Ask or search questions on Stack Overflow using the scrapy tag.

  • Ask or search questions in the Scrapy subreddit.

  • Search for questions on the archives of the scrapy-users mailing list.

  • Ask a question in the #scrapy IRC channel.

  • Report bugs with Scrapy in our issue tracker.

First steps¶

Scrapy at a glance

Understand what Scrapy is and how it can help you.

Installation guide

Get Scrapy installed on your computer.

Scrapy Tutorial

Write your first Scrapy project.


Examples

Learn more by playing with a pre-made Scrapy project.

Basic concepts¶

Command line tool

Learn about the command-line tool used to manage your Scrapy project.


Spiders

Write the rules to crawl your websites.


Selectors

Extract the data from web pages using XPath.
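Scrapy's selectors offer full XPath support via its own selector library; the standard library's `xml.etree.ElementTree` covers only a small XPath subset, but it is enough to illustrate the idea on well-formed markup (the snippet and class names below are invented for the example):

```python
import xml.etree.ElementTree as ET

# A well-formed fragment standing in for a downloaded page.
doc = ET.fromstring(
    '<div><span class="text">Hello</span>'
    '<span class="author">World</span></div>'
)

# ElementTree supports a limited XPath subset, including
# descendant search and attribute predicates:
texts = [el.text for el in doc.findall('.//span[@class="text"]')]
print(texts)  # ['Hello']
```

In a real spider you would call `response.xpath(...)` on a downloaded response instead of parsing a string by hand.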

Scrapy shell

Test your extraction code in an interactive environment.


Items

Define the data you want to scrape.

Item Loaders

Populate your items with the extracted data.

Item Pipeline

Post-process and store your scraped data.

Feed exports

Output your scraped data using different formats and storages.

Requests and Responses

Understand the classes used to represent HTTP requests and responses.

Link Extractors

Convenient classes to extract links to follow from pages.


Settings

Learn how to configure Scrapy and see all available settings.


Exceptions

See all available exceptions and their meaning.

Built-in services¶


Logging

Learn how to use Python’s built-in logging on Scrapy.

Stats Collection

Collect statistics about your scraping crawler.

Sending e-mail


Send email notifications when certain events occur.

Telnet Console

Inspect a running crawler using a built-in Python console.

Web Service

Monitor and control a crawler using a web service.

Solving specific problems¶

Frequently Asked Questions

Get answers to most frequently asked questions.

Debugging Spiders

Learn how to debug common problems of your Scrapy spider.

Spiders Contracts

Learn how to use contracts for testing your spiders.

Common Practices

Get familiar with some Scrapy common practices.

Broad Crawls

Tune Scrapy for crawling a lot of domains in parallel.

Using your browser’s Developer Tools for scraping

Learn how to scrape with your browser’s developer tools.

Selecting dynamically-loaded content

Read webpage data that is loaded dynamically.

Debugging memory leaks

Learn how to find and get rid of memory leaks in your crawler.

Downloading and processing files and images

Download files and/or images associated with your scraped items.

Deploying Spiders

Deploy your Scrapy spiders and run them on a remote server.

AutoThrottle extension

Adjust crawl rate dynamically based on load.


Benchmarking

Check how Scrapy performs on your hardware.

Jobs: pausing and resuming crawls

Learn how to pause and resume crawls for large spiders.


Coroutines

Use the coroutine syntax.


asyncio

Use asyncio and asyncio-powered libraries.

Extending Scrapy¶

Architecture overview

Understand the Scrapy architecture.

Downloader Middleware

Customize how pages get requested and downloaded.

Spider Middleware

Customize the input and output of your spiders.


Extensions

Extend Scrapy with your custom functionality.

Core API

Use it on extensions and middlewares to extend Scrapy functionality.


Signals

See all available signals and how to work with them.

Scheduler

Understand the scheduler component.

Item Exporters

Quickly export your scraped items to a file (XML, CSV, etc).
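What a CSV item exporter produces can be approximated with the standard library's `csv` module. This is a sketch of the output format only, not Scrapy's `CsvItemExporter` API; the item dicts and field names are invented for the example.

```python
import csv
import io

# Scraped items represented as plain dicts.
items = [
    {"title": "Quote 1", "author": "A"},
    {"title": "Quote 2", "author": "B"},
]

# Write a header row followed by one row per item,
# as a CSV feed export would.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["title", "author"])
writer.writeheader()
writer.writerows(items)
print(buf.getvalue())
```

In Scrapy itself you would normally configure a feed export (e.g. `scrapy crawl spider -O items.csv`) rather than write this by hand.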

All the rest¶

Release notes

See what has changed in recent Scrapy versions.

Contributing to Scrapy

Learn how to contribute to the Scrapy project.

Versioning and API stability

Understand Scrapy versioning and API stability.
