python 2.7 - Scrapy first tutorial dmoz returning an error "TypeError: Can't use implementer with classes. Use one of the class-declaration functions instead."




Getting an error when running the first Scrapy tutorial.

    Scrapy   : 0.22.2
    lxml     : 3.3.5.0
    libxml2  : 2.7.8
    Twisted  : 12.0.0
    Python   : 2.7.2 (default, Oct 11 2012, 20:14:37) [GCC 4.2.1 Compatible Apple Clang 4.0 (tags/Apple/clang-418.0.60)]
    Platform : Darwin-12.5.0-x86_64-i386-64bit

This is my file items.py:

    from scrapy.item import Item, Field

    class DmozItem(Item):
        title = Field()
        link = Field()
        desc = Field()

This is my file dmoz_spider.py:

    from scrapy.spider import BaseSpider

    class DmozSpider(BaseSpider):
        name = "dmoz"
        allowed_domains = ["dmoz.org"]
        start_urls = [
            "http://www.dmoz.org/computers/programming/languages/python/books/",
            "http://www.dmoz.org/computers/programming/languages/python/resources/"
        ]

        def parse(self, response):
            filename = response.url.split("/")[-2]
            open(filename, 'wb').write(response.body)
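The parse() callback derives the output filename from the URL: splitting on "/" and taking the second-to-last segment yields the last directory name, because the trailing slash makes the final element an empty string. A minimal stdlib-only sketch of that logic (the URL is one of the start_urls above):

```python
# Filename logic used in parse(): take the second-to-last path segment.
url = "http://www.dmoz.org/computers/programming/languages/python/books/"
parts = url.split("/")   # trailing slash -> last element is ''
filename = parts[-2]     # second-to-last segment: the directory name
print(filename)          # books
```

So the first start URL is saved to a file named "books" and the second to "resources".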

This is the error message I get when running "scrapy crawl dmoz":

    foolios-imac-2:tutorial foolio$ scrapy crawl dmoz
    /usr/local/share/tutorial/tutorial/spiders/dmoz_spider.py:3: ScrapyDeprecationWarning: tutorial.spiders.dmoz_spider.DmozSpider inherits from deprecated class scrapy.spider.BaseSpider, please inherit from scrapy.spider.Spider. (warning only on first subclass, there may be others)
      class DmozSpider(BaseSpider):

    2014-06-19 14:53:00-0500 [scrapy] INFO: Scrapy 0.22.2 started (bot: tutorial)
    2014-06-19 14:53:00-0500 [scrapy] INFO: Optional features available: ssl, http11
    2014-06-19 14:53:00-0500 [scrapy] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'tutorial.spiders', 'SPIDER_MODULES': ['tutorial.spiders'], 'BOT_NAME': 'tutorial'}
    2014-06-19 14:53:00-0500 [scrapy] INFO: Enabled extensions: LogStats, TelnetConsole, CloseSpider, WebService, CoreStats, SpiderState
    Traceback (most recent call last):

      File "/usr/local/bin/scrapy", line 5, in <module>
        pkg_resources.run_script('Scrapy==0.22.2', 'scrapy')
      File "/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/pkg_resources.py", line 489, in run_script
        self.require(requires)[0].run_script(script_name, ns)
      File "/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/pkg_resources.py", line 1207, in run_script
        execfile(script_filename, namespace, namespace)
      File "/Library/Python/2.7/site-packages/Scrapy-0.22.2-py2.7.egg/EGG-INFO/scripts/scrapy", line 4, in <module>
        execute()
      File "/Library/Python/2.7/site-packages/Scrapy-0.22.2-py2.7.egg/scrapy/cmdline.py", line 143, in execute
        _run_print_help(parser, _run_command, cmd, args, opts)
      File "/Library/Python/2.7/site-packages/Scrapy-0.22.2-py2.7.egg/scrapy/cmdline.py", line 89, in _run_print_help
        func(*a, **kw)
      File "/Library/Python/2.7/site-packages/Scrapy-0.22.2-py2.7.egg/scrapy/cmdline.py", line 150, in _run_command
        cmd.run(args, opts)
      File "/Library/Python/2.7/site-packages/Scrapy-0.22.2-py2.7.egg/scrapy/commands/crawl.py", line 50, in run
        self.crawler_process.start()
      File "/Library/Python/2.7/site-packages/Scrapy-0.22.2-py2.7.egg/scrapy/crawler.py", line 92, in start
        if self.start_crawling():
      File "/Library/Python/2.7/site-packages/Scrapy-0.22.2-py2.7.egg/scrapy/crawler.py", line 124, in start_crawling
        return self._start_crawler() is not None
      File "/Library/Python/2.7/site-packages/Scrapy-0.22.2-py2.7.egg/scrapy/crawler.py", line 139, in _start_crawler
        crawler.configure()
      File "/Library/Python/2.7/site-packages/Scrapy-0.22.2-py2.7.egg/scrapy/crawler.py", line 47, in configure
        self.engine = ExecutionEngine(self, self._spider_closed)
      File "/Library/Python/2.7/site-packages/Scrapy-0.22.2-py2.7.egg/scrapy/core/engine.py", line 63, in __init__
        self.downloader = Downloader(crawler)
      File "/Library/Python/2.7/site-packages/Scrapy-0.22.2-py2.7.egg/scrapy/core/downloader/__init__.py", line 73, in __init__
        self.handlers = DownloadHandlers(crawler)
      File "/Library/Python/2.7/site-packages/Scrapy-0.22.2-py2.7.egg/scrapy/core/downloader/handlers/__init__.py", line 18, in __init__
        cls = load_object(clspath)
      File "/Library/Python/2.7/site-packages/Scrapy-0.22.2-py2.7.egg/scrapy/utils/misc.py", line 40, in load_object
        mod = import_module(module)
      File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/importlib/__init__.py", line 37, in import_module
        __import__(name)
      File "/Library/Python/2.7/site-packages/Scrapy-0.22.2-py2.7.egg/scrapy/core/downloader/handlers/s3.py", line 4, in <module>
        from .http import HTTPDownloadHandler
      File "/Library/Python/2.7/site-packages/Scrapy-0.22.2-py2.7.egg/scrapy/core/downloader/handlers/http.py", line 5, in <module>
        from .http11 import HTTP11DownloadHandler as HTTPDownloadHandler
      File "/Library/Python/2.7/site-packages/Scrapy-0.22.2-py2.7.egg/scrapy/core/downloader/handlers/http11.py", line 15, in <module>
        from scrapy.xlib.tx import Agent, ProxyAgent, ResponseDone, \
      File "/Library/Python/2.7/site-packages/Scrapy-0.22.2-py2.7.egg/scrapy/xlib/tx/__init__.py", line 6, in <module>
        from . import client, endpoints
      File "/Library/Python/2.7/site-packages/Scrapy-0.22.2-py2.7.egg/scrapy/xlib/tx/client.py", line 37, in <module>
        from .endpoints import TCP4ClientEndpoint, SSL4ClientEndpoint
      File "/Library/Python/2.7/site-packages/Scrapy-0.22.2-py2.7.egg/scrapy/xlib/tx/endpoints.py", line 222, in <module>
        interfaces.IProcessTransport, '_process')):
      File "/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/zope/interface/declarations.py", line 495, in __call__
        raise TypeError("Can't use implementer with classes. Use one of "
    TypeError: Can't use implementer with classes. Use one of the class-declaration functions instead.

Try updating zope.interface and then run your code again:

sudo pip install --upgrade zope.interface

or

sudo easy_install --upgrade zope.interface
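The TypeError comes from an old zope.interface release in which the implementer decorator could not yet be applied to classes; upgrading pulls in a release that supports it. As a rough sanity check you can compare the installed version string against a minimum numerically rather than lexically (the versions below are hypothetical placeholders, and the required minimum is an assumption, not from the original post):

```python
def parse_version(v):
    """Turn a dotted version string like '3.6.0' into a comparable tuple."""
    return tuple(int(part) for part in v.split("."))

# Hypothetical versions: a stale install vs. an assumed minimum that
# supports implementer on classes.
installed = "3.3.0"
required = "3.6.0"
print(parse_version(installed) < parse_version(required))  # True -> upgrade needed
```

Tuple comparison avoids the classic string-comparison trap where "3.10.0" would sort before "3.6.0".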

Tags: python-2.7, scrapy, dmoz
