
[Solved] Running a Scrapy project fails with: KeyError: 'Spider not found: manta'


[Problem]

While working on:

[Record] Crawling manta.com with Scrapy

I ran the Scrapy project and it failed with an error:

E:\Dev_Root\python\Scrapy\manta\manta>scrapy crawl manta -o respBody -t json
2013-05-24 22:51:47+0800 [scrapy] INFO: Scrapy 0.16.2 started (bot: EchO!/2.0)
2013-05-24 22:51:48+0800 [scrapy] DEBUG: Enabled extensions: FeedExporter, LogStats, TelnetConsole, CloseSpider, WebService, CoreStats, SpiderState
2013-05-24 22:51:48+0800 [scrapy] DEBUG: Enabled downloader middlewares: HttpAuthMiddleware, DownloadTimeoutMiddleware, UserAgentMiddleware, DefaultHeadersMiddleware, RedirectMiddleware, CookiesMiddleware, HttpCompressionMiddleware, ChunkedTransferMiddleware, DownloaderStats
2013-05-24 22:51:48+0800 [scrapy] DEBUG: Enabled spider middlewares: HttpErrorMiddleware, OffsiteMiddleware, RefererMiddleware, UrlLengthMiddleware, DepthMiddleware
2013-05-24 22:51:48+0800 [scrapy] DEBUG: Enabled item pipelines:
Traceback (most recent call last):
  File "E:\dev_install_root\Python27\lib\runpy.py", line 162, in _run_module_as_main
    "__main__", fname, loader, pkg_name)
  File "E:\dev_install_root\Python27\lib\runpy.py", line 72, in _run_code
    exec code in run_globals
  File "E:\dev_install_root\Python27\lib\site-packages\scrapy\cmdline.py", line 156, in <module>
    execute()
  File "E:\dev_install_root\Python27\lib\site-packages\scrapy\cmdline.py", line 131, in execute
    _run_print_help(parser, _run_command, cmd, args, opts)
  File "E:\dev_install_root\Python27\lib\site-packages\scrapy\cmdline.py", line 76, in _run_print_help
    func(*a, **kw)
  File "E:\dev_install_root\Python27\lib\site-packages\scrapy\cmdline.py", line 138, in _run_command
    cmd.run(args, opts)
  File "E:\dev_install_root\Python27\lib\site-packages\scrapy\commands\crawl.py", line 43, in run
    spider = self.crawler.spiders.create(spname, **opts.spargs)
  File "E:\dev_install_root\Python27\lib\site-packages\scrapy\spidermanager.py", line 43, in create
    raise KeyError("Spider not found: %s" % spider_name)
KeyError: 'Spider not found: manta'

E:\Dev_Root\python\Scrapy\manta\manta>
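
The last frame of the traceback points at what is happening: Scrapy's spider manager builds a registry of the spiders it finds in the project, keyed by each spider's name attribute, and raises KeyError when the name passed to scrapy crawl is not in that registry. Below is a minimal illustrative sketch of that lookup, not Scrapy's actual implementation; the class and registry names here are made up:

class MantaSpider(object):
    name = "mantaspider"      # the name the spider declares for itself

# hypothetical registry: spider name -> spider class
_spiders = {MantaSpider.name: MantaSpider}

def create(spider_name):
    # mimics the failing lookup reported from scrapy/spidermanager.py
    if spider_name not in _spiders:
        raise KeyError("Spider not found: %s" % spider_name)
    return _spiders[spider_name]()

create("mantaspider")   # succeeds: the name is in the registry
create("manta")         # raises KeyError: 'Spider not found: manta'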

[Solution Process]

1. I later realized that I had hit a similar error before, in:

[Record] Working Through the Scrapy Tutorial

and had already solved it there, but the error this time turned out not to be that one.

2. Referring to:

Scrapy spider not found error

I went and looked at the content of mantaspider.py:

from scrapy.contrib.spiders import CrawlSpider
from scrapy.selector import HtmlXPathSelector
from scrapy.crawler import CrawlerProcess
from scrapy.http import Request
import re

class spider(CrawlSpider):

    # this string is what "scrapy crawl <name>" looks up
    name = "mantaspider"

    start_urls = ["http://www.manta.com/mb_44_A0139_01/radio_television_and_publishers_advertising_representatives/alabama"]

    def parse(self, response):
        hxs = HtmlXPathSelector(response)

        # dump the raw page body for offline inspection
        f = open("responsebody", "w")
        f.write(response.body)
        f.close()

        # the "next page" link, if present
        next_href_list = hxs.select("//a[@class='nextYes']/@href")

        is_next = False

        #print response.body

        if len(next_href_list):
            is_next = True
            next_href = next_href_list[0].extract()
            print "*****************************"
            print next_href
            print "*****************************"
            # note: the Request is built but never returned/yielded,
            # so Scrapy never actually schedules it
            Request(str(next_href), self.parse)

        # links to each company's detail page (also built but never scheduled)
        internal_page_urls_list = hxs.select("//a[@class='url']/@href")

        for internal_page_url in internal_page_urls_list:
            internal_page_url = internal_page_url.extract()
            Request(str(internal_page_url), self.parse_internal)

    def parse_internal(self, response):
        hxs = HtmlXPathSelector(response)

        try:
            item = {}
            item["title"] = hxs.select("//h1[@class='profile-company_name']/text()").extract()
            print "*****************************"
            print item
            print "*****************************"

        except:
            print "All items not found here"

So the crawler's name is mantaspider; trying that name instead, it at least ran normally:

E:\Dev_Root\python\Scrapy\manta\manta>scrapy crawl mantaspider -o respBody -t json
2013-05-24 22:57:22+0800 [scrapy] INFO: Scrapy 0.16.2 started (bot: EchO!/2.0)
2013-05-24 22:57:23+0800 [scrapy] DEBUG: Enabled extensions: FeedExporter, LogStats, TelnetConsole, CloseSpider, WebService, CoreStats, SpiderState
2013-05-24 22:57:23+0800 [scrapy] DEBUG: Enabled downloader middlewares: HttpAuthMiddleware, DownloadTimeoutMiddleware, UserAgentMiddleware, DefaultHeadersMiddleware, RedirectMiddleware, CookiesMiddleware, HttpCompressionMiddleware, ChunkedTransferMiddleware, DownloaderStats
2013-05-24 22:57:23+0800 [scrapy] DEBUG: Enabled spider middlewares: HttpErrorMiddleware, OffsiteMiddleware, RefererMiddleware, UrlLengthMiddleware, DepthMiddleware
2013-05-24 22:57:23+0800 [scrapy] DEBUG: Enabled item pipelines:
2013-05-24 22:57:23+0800 [mantaspider] INFO: Spider opened
2013-05-24 22:57:23+0800 [mantaspider] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2013-05-24 22:57:23+0800 [scrapy] DEBUG: Telnet console listening on 0.0.0.0:6023
2013-05-24 22:57:23+0800 [scrapy] DEBUG: Web service listening on 0.0.0.0:6080
2013-05-24 22:57:26+0800 [mantaspider] DEBUG: Received cookies from: <200 http://www.manta.com/mb_44_A0139_01/radio_television_and_publishers_advertising_representatives/alabama>
        Set-Cookie: SPSI=0ec1404bb15c17798c2f9f6663a83413 ; path=/; domain=.manta.com
        Set-Cookie: adOtr=obsvl ; expires=Tue, 21 May 2013 14:55:14 GMT; path=/; domain=.manta.com
        Set-Cookie: UTGv2=D-h42844ef562b838353f6aaddae89054fa846 ; expires=Sat, 24 May 2014 14:55:14 GMT; path=/; domain=.manta.com
        Set-Cookie: abtest_v=quick_claim&quick_claim&static_page&ppc_landing_ads.mantahas&profile_stats&advertise_your_profile&version&97&site_wide&member_service.ms3&leadgen&leadgen.v3&upsell_test&upsell_control&ppc_login&ppc_login.ppc2&adsense&b&afs_split_test&afs_split_test.treatmentc&upsellbutton&upsellbutton.b&social_share&social_share_lite&status_activity_stream_split_test&status_activity_stream_split_test.treatment2&activity_stream_split_test&activity_stream_split_test.treatmenta&mobile_adsense&d; domain=.manta.com; path=/
        Set-Cookie: abtest_v=quick_claim&quick_claim&static_page&ppc_landing_ads.mantahas&profile_stats&advertise_your_profile&version&97&site_wide&member_service.ms3&leadgen&leadgen.v3&upsell_test&upsell_control&ppc_login&ppc_login.ppc2&adsense&b&afs_split_test&afs_split_test.treatmentc&upsellbutton&upsellbutton.b&social_share&social_share_lite&status_activity_stream_split_test&status_activity_stream_split_test.treatment2&activity_stream_split_test&activity_stream_split_test.treatmenta&mobile_adsense&d; path=/
        Set-Cookie: member_session=V2%5BE%5DRmRlaHFxbVVtNldINkNXeml1TTBGZz09JlpXeVhjbEhXRmpDSFlCZFhWUkMxZXhPVmh0dk1zQVRqeTFHR29SZXk5N2NaYnN3dVUzTEc4b3ZhZVZPcDdEV0N2YVJpUk40bnowbXpIYng3ZWxYVkFKYTMrKzdIV0h6MnRCQ0RuYjhtZjhhZk9yWGtESTlJK1g1MXVYblYvTnZxZCtLaVZ6VXR4TlJ2dmRLRklBVmsrN2NKTmdrS0htODRXRStzK0NheVFNaFZsVjE0ZnRSd0VsUG02NkVqS3phS3dyUVVvUGcwc3Ezd1d3VGkyK0ZLaXRZME9BT0ZLN1hWdG00aE9RcE5YSm89; domain=.manta.com; path=/; expires=Thu, 22-Aug-2013 14:55:14 GMT
        Set-Cookie: refer_id=0000; domain=.manta.com; path=/
        Set-Cookie: refer_id_persistent=0000; domain=.manta.com; path=/; expires=Sun, 24-May-2015 14:55:14 GMT
        Set-Cookie: cust_id=1369407314.875430-629; domain=.manta.com; path=/; expires=Sun, 24-May-2015 14:55:14 GMT
2013-05-24 22:57:26+0800 [mantaspider] DEBUG: Crawled (200) <GET http://www.manta.com/mb_44_A0139_01/radio_television_and_publishers_advertising_representatives/alabama> (referer: None)
*****************************
http://www.manta.com/mb_44_A0139_01/radio_television_and_publishers_advertising_representatives/alabama?pg=2
*****************************
2013-05-24 22:57:26+0800 [mantaspider] INFO: Closing spider (finished)
2013-05-24 22:57:26+0800 [mantaspider] INFO: Dumping Scrapy stats:
        {'downloader/request_bytes': 425,
         'downloader/request_count': 1,
         'downloader/request_method_count/GET': 1,
         'downloader/response_bytes': 28268,
         'downloader/response_count': 1,
         'downloader/response_status_count/200': 1,
         'finish_reason': 'finished',
         'finish_time': datetime.datetime(2013, 5, 24, 14, 57, 26, 890000),
         'log_count/DEBUG': 8,
         'log_count/INFO': 4,
         'response_received_count': 1,
         'scheduler/dequeued': 1,
         'scheduler/dequeued/memory': 1,
         'scheduler/enqueued': 1,
         'scheduler/enqueued/memory': 1,
         'start_time': datetime.datetime(2013, 5, 24, 14, 57, 23, 825000)}
2013-05-24 22:57:26+0800 [mantaspider] INFO: Spider closed (finished)

E:\Dev_Root\python\Scrapy\manta\manta>

 

[Summary]

When running a Scrapy project, the command is not:

scrapy crawl <project folder name> -o respBody -t json

Instead, first find the spider file, typically named something like xxxxSpider.py, which contains code along the lines of:

class spider(xxxxSpider):

    name="yourRealSpiderName"

The string assigned to name (yourRealSpiderName here) is the spider's actual name, and the command that works is:

scrapy crawl yourRealSpiderName -o respBody -t json
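
As a side note, instead of reading the spider source to find the name, Scrapy's built-in list command, run from inside the project directory, prints the names of all spiders it has registered for the project; with the path used in this article that would be:

E:\Dev_Root\python\Scrapy\manta\manta>scrapy list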

