2
votes

I've been learning how to use Scrapy, though I had minimal experience in Python to begin with. I started by learning how to scrape using the BaseSpider. Now I'm trying to crawl websites, but I've run into a problem that has really confused me. Here is the example code from the official site at http://doc.scrapy.org/topics/spiders.html.

from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.selector import HtmlXPathSelector
from scrapy.item import Item

class MySpider(CrawlSpider):
    name = 'example.com'
    allowed_domains = ['example.com']
    start_urls = ['http://www.example.com']

    rules = (
        # Extract links matching 'category.php' (but not matching 'subsection.php')
        # and follow links from them (since no callback means follow=True by default).
        Rule(SgmlLinkExtractor(allow=('category\.php', ), deny=('subsection\.php', ))),

        # Extract links matching 'item.php' and parse them with the spider's method parse_item
        Rule(SgmlLinkExtractor(allow=('item\.php', )), callback='parse_item'),
    )

    def parse_item(self, response):
        print "WHY WONT YOU WORK!!!!!!!!"
        self.log('Hi, this is an item page! %s' % response.url)

        hxs = HtmlXPathSelector(response)
        item = TestItem()
        item['id'] = hxs.select('//td[@id="item_id"]/text()').re(r'ID: (\d+)')
        item['name'] = hxs.select('//td[@id="item_name"]/text()').extract()
        item['description'] = hxs.select('//td[@id="item_description"]/text()').extract()
        return item

The only change I made was adding the statement:

print "WHY WONT YOU WORK!!!!!!!!"

But since I never see this print output at runtime, I fear that this function isn't being reached. This is code I took directly from the official Scrapy site. What am I doing wrong or misunderstanding?

2
@buffer: Based on the metadata, it does seem like the links are extracted but not being passed to parse_item. My rule looks like this: `rules = ( Rule(SgmlLinkExtractor(), follow=True, callback="parse_item"), )` – ProgrammingAnt

2 Answers

1
votes
start_urls = ['http://www.example.com']

example.com doesn't have any links for categories or items; it's just a placeholder for what a scraped site's URL might look like.

This is a non-working example in the documentation.
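To see why the rules never fire, you can check the `allow`/`deny` patterns against the URLs that actually exist on the start page. Here is a minimal sketch of that filtering logic using only the standard-library `re` module (not Scrapy's actual SgmlLinkExtractor, and the URLs are hypothetical ones invented to fit the documentation example):

```python
import re

def matches_rule(url, allow, deny):
    """Mimic a link extractor's filtering: keep a URL only if it
    matches at least one allow pattern and no deny pattern."""
    if not any(re.search(p, url) for p in allow):
        return False
    if any(re.search(p, url) for p in deny):
        return False
    return True

# Hypothetical URLs on a site shaped like the documentation example
urls = [
    'http://www.example.com/category.php?id=1',
    'http://www.example.com/subsection.php?id=2',
    'http://www.example.com/item.php?id=3',
    'http://www.example.com/about.html',
]

# First rule: follow category pages, skipping subsections
followed = [u for u in urls if matches_rule(u, [r'category\.php'], [r'subsection\.php'])]
# Second rule: send item pages to parse_item
parsed = [u for u in urls if matches_rule(u, [r'item\.php'], [])]

print(followed)  # only the category.php URL survives
print(parsed)    # only the item.php URL survives
```

On the real example.com there are no URLs containing `category.php` or `item.php` at all, so both lists come out empty and `parse_item` is never invoked.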

0
votes

You might try making a spider that you know works and see if print statements do anything where you have them. I seem to remember trying the same thing a long time ago and finding that they won't show up, even if the code is executed.
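This is why the example uses `self.log()` rather than `print`: output routed through a logging system can be captured and verified, while bare prints may be swallowed by the framework's output handling. Scrapy of that era had its own `scrapy.log` module, so the following stdlib `logging` sketch only illustrates the general idea, not Scrapy's actual plumbing:

```python
import logging
from io import StringIO

# Route log records to an in-memory buffer so we can prove they arrived --
# the same idea as trusting self.log() over print inside a spider callback.
buf = StringIO()
handler = logging.StreamHandler(buf)
logger = logging.getLogger('spider_demo')
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.info('Hi, this is an item page! %s', 'http://www.example.com/item.php')

print(buf.getvalue().strip())
```

If a message like this shows up in the captured output but a `print` in the same callback does not, the callback is being reached and only the print output is being lost.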