
I am writing a Scrapy crawler. I have a Python module that fetches URLs from a database and should configure Scrapy to start a spider for each of those URLs. Because I start Scrapy from my script, I don't know how to pass it arguments the way the command-line switch -a does, so that each call receives a different URL.

Here is the code for the Scrapy caller:

    import os

    import _mysql
    from scrapy.crawler import CrawlerProcess
    from scrapy.settings import Settings

    def scrape_next_url():
        # Claim the lowest-numbered unprocessed URL (FOR UPDATE locks the row).
        conn = _mysql.connect(host, username, password, database_name)
        conn.query("select min(sortorder) from url_queue where processed = false for update")
        query_result = conn.store_result()
        url_index = query_result.fetch_row()[0][0]

        conn.query("select url from url_queue where sortorder = " + str(url_index))
        query_result = conn.store_result()
        url_at_index = query_result.fetch_row()[0][0]

        # Mark the row processed so the next call moves on.
        conn.query("update url_queue set processed = true where sortorder = " + str(url_index))
        conn.commit()
        conn.close()

        # Load the project settings and start a crawl for the claimed URL.
        settings = Settings()
        os.environ['SCRAPY_SETTINGS_MODULE'] = 'webscraper.settings'
        settings.setmodule(os.environ['SCRAPY_SETTINGS_MODULE'], priority='project')

        process = CrawlerProcess(settings)
        ImageSpider.start_urls.append(url_at_index)
        process.crawl(ImageSpider)
        process.start()
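The claim-next-URL logic above can be sketched as a self-contained stand-in using sqlite3 instead of MySQL. The table name mirrors the original, but the schema is an assumption, and SQLite has no FOR UPDATE row lock, so this only illustrates the select-then-mark flow (with parameterized queries instead of string concatenation):

```python
import sqlite3

def claim_next_url(conn):
    """Return the unprocessed URL with the lowest sortorder and mark it done."""
    cur = conn.cursor()
    cur.execute(
        "SELECT sortorder, url FROM url_queue "
        "WHERE processed = 0 ORDER BY sortorder LIMIT 1"
    )
    row = cur.fetchone()
    if row is None:
        return None  # queue exhausted
    sortorder, url = row
    # Parameterized update instead of building the statement by concatenation.
    cur.execute("UPDATE url_queue SET processed = 1 WHERE sortorder = ?", (sortorder,))
    conn.commit()
    return url

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE url_queue (sortorder INTEGER, url TEXT, processed INTEGER)")
conn.executemany("INSERT INTO url_queue VALUES (?, ?, 0)",
                 [(1, "http://example.com/a"), (2, "http://example.com/b")])
print(claim_next_url(conn))  # http://example.com/a
print(claim_next_url(conn))  # http://example.com/b
print(claim_next_url(conn))  # None
```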

Help!

Note: I came across one question (Scrapy: Pass arguments to cmdline.execute()), but I would like to do it programmatically, if possible.

Edit:

I have followed your suggestion and now have the following spider code:

    def __init__(self, url=None, *pargs, **kwargs):
        super(ImageSpider, self).__init__(*pargs, **kwargs)
        self.start_urls.append(url.strip())

On the caller I have:

    process = CrawlerProcess(settings)
    process.crawl(ImageSpider, url=url_at_index)

I know the argument is being passed to __init__, because the url.strip() call fails when it is absent. But the result is that the spider runs without crawling anything:

(webcrawler) faisca:webscraper dlsa$ python scraper_launcher.py 
2017-07-25 00:42:16 [scrapy.utils.log] INFO: Scrapy 1.4.0 started (bot: webscraper)
2017-07-25 00:42:16 [scrapy.utils.log] INFO: Overridden settings: {'BOT_NAME': 'webscraper', 'NEWSPIDER_MODULE': 'webscraper.spiders', 'SPIDER_MODULES': ['webscraper.spiders']}
2017-07-25 00:42:16 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.corestats.CoreStats',
 'scrapy.extensions.telnet.TelnetConsole',
 'scrapy.extensions.logstats.LogStats',
 'scrapy.extensions.memusage.MemoryUsage']
2017-07-25 00:42:16 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
 'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
 'scrapy.downloadermiddlewares.retry.RetryMiddleware',
 'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
 'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
 'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
 'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
 'scrapy.downloadermiddlewares.stats.DownloaderStats']
2017-07-25 00:42:16 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
 'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
 'scrapy.spidermiddlewares.referer.RefererMiddleware',
 'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
 'scrapy.spidermiddlewares.depth.DepthMiddleware']
2017-07-25 00:42:16 [scrapy.middleware] INFO: Enabled item pipelines:
['webscraper.pipelines.WebscraperPipeline']
2017-07-25 00:42:16 [scrapy.core.engine] INFO: Spider opened
2017-07-25 00:42:16 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2017-07-25 00:42:16 [scrapy.extensions.telnet] DEBUG: Telnet console listening on 127.0.0.1:6023
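One thing I noticed while experimenting (not necessarily the cause of the empty crawl, just a gotcha with this pattern): if start_urls is a class attribute, self.start_urls.append() mutates the one list shared by the class and every instance, so URLs accumulate across spiders in the same process. A minimal illustration in plain Python, with made-up class names and no Scrapy involved:

```python
class AppendingSpider:
    start_urls = []  # shared, class-level list

    def __init__(self, url):
        self.start_urls.append(url)  # mutates the class attribute

class AssigningSpider:
    start_urls = []

    def __init__(self, url):
        self.start_urls = [url]  # fresh per-instance list

AppendingSpider("http://a.example")
AppendingSpider("http://b.example")
print(AppendingSpider.start_urls)  # ['http://a.example', 'http://b.example']

AssigningSpider("http://a.example")
AssigningSpider("http://b.example")
print(AssigningSpider.start_urls)  # [] -- the class list is untouched
```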

1 Answer


Pass the arguments like this. Note that process.crawl() expects the spider class, not an instance, so pass MySpider rather than MySpider():

    process.crawl(MySpider, limit=query_to_run, cursor=cursor, conn=conn)

And then, in your spider, receive them in __init__ and keep them on the instance:

    from scrapy.spiders import CrawlSpider

    class MySpider(CrawlSpider):
        # some code here
        def __init__(self, limit=None, cursor=None, conn=None, *args, **kwargs):
            super(MySpider, self).__init__(*args, **kwargs)
            self.limit = limit
            self.cursor = cursor
            self.conn = conn
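To see how keyword arguments passed to the crawl call land on the spider, here is a minimal stand-in in plain Python. BaseSpider and crawl are illustrative names, not Scrapy's actual API; the point is the *args/**kwargs forwarding and why the class itself, not an instance, is what gets passed:

```python
class BaseSpider:
    def __init__(self, name=None, **kwargs):
        self.name = name

class MySpider(BaseSpider):
    def __init__(self, limit=None, cursor=None, conn=None, *args, **kwargs):
        super(MySpider, self).__init__(*args, **kwargs)
        # Keep the extra arguments on the instance for later use in callbacks.
        self.limit = limit
        self.cursor = cursor
        self.conn = conn

def crawl(spider_cls, *args, **kwargs):
    # Like Scrapy's process.crawl, this receives the class and instantiates
    # it itself -- which is why you pass MySpider, not MySpider().
    return spider_cls(*args, **kwargs)

spider = crawl(MySpider, limit="select * from url_queue", name="images")
print(spider.limit)  # select * from url_queue
print(spider.name)   # images
```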