I am new to programming; before this I studied a little Python. While working through lessons on Scrapy, I came across the following example:
```python
# -*- coding: utf-8 -*-
import scrapy


class QuotesSpider(scrapy.Spider):
    name = 'quotes'
    allowed_domains = ['quotes.toscrape.com']
    start_urls = ['http://quotes.toscrape.com/']

    def parse(self, response):
        quotes = response.xpath('//*[@class="quote"]')
        for quote in quotes:
            text = quote.xpath('.//*[@class="text"]/text()').extract_first()
            author = quote.xpath('.//*[@itemprop="author"]/text()').extract_first()
            tags = quote.xpath('.//*[@itemprop="keywords"]/@content').extract_first()
            print '\n'
            print text
            print author
            print tags
            print '\n'
```

When I run the `scrapy crawl quotes` command in the console, I get the following errors:
```
2019-01-06 19:44:57 [scrapy.utils.log] INFO: Scrapy 1.5.1 started (bot: quotes_spider)
2019-01-06 19:44:57 [scrapy.utils.log] INFO: Versions: lxml 4.2.5.0, libxml2 2.9.8, cssselect 1.0.3, parsel 1.5.1, w3lib 1.19.0, Twisted 18.9.0, Python 2.7.15 |Anaconda, Inc.| (default, Dec 10 2018, 21:57:18) [MSC v.1500 64 bit (AMD64)], pyOpenSSL 18.0.0 (OpenSSL 1.0.2p 14 Aug 2018), cryptography 2.4.2, Platform Windows-7-6.1.7601-SP1
2019-01-06 19:44:57 [scrapy.crawler] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'quotes_spider.spiders', 'SPIDER_MODULES': ['quotes_spider.spiders'], 'BOT_NAME': 'quotes_spider'}
2019-01-06 19:44:58 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.logstats.LogStats',
 'scrapy.extensions.telnet.TelnetConsole',
 'scrapy.extensions.corestats.CoreStats']
2019-01-06 19:44:58 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
 'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
 'scrapy.downloadermiddlewares.retry.RetryMiddleware',
 'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
 'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
 'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
 'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
 'scrapy.downloadermiddlewares.stats.DownloaderStats']
2019-01-06 19:44:58 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
 'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
 'scrapy.spidermiddlewares.referer.RefererMiddleware',
 'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
 'scrapy.spidermiddlewares.depth.DepthMiddleware']
2019-01-06 19:44:58 [scrapy.middleware] INFO: Enabled item pipelines: []
2019-01-06 19:44:58 [scrapy.core.engine] INFO: Spider opened
2019-01-06 19:44:58 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2019-01-06 19:44:58 [scrapy.extensions.telnet] DEBUG: Telnet console listening on 127.0.0.1:6023
2019-01-06 19:44:59 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://quotes.toscrape.com/> (referer: None)
2019-01-06 19:44:59 [scrapy.core.scraper] ERROR: Spider error processing <GET http://quotes.toscrape.com/> (referer: None)
Traceback (most recent call last):
  File "f:\downloads\anaconda2.7\downloads\lib\site-packages\twisted\internet\defer.py", line 654, in _runCallbacks
    current.result = callback(current.result, *args, **kw)
  File "F:\quotes_spider\quotes_spider\spiders\quotes.py", line 18, in parse
    print text
  File "f:\downloads\anaconda2.7\downloads\lib\encodings\cp866.py", line 12, in encode
    return codecs.charmap_encode(input,errors,encoding_map)
UnicodeEncodeError: 'charmap' codec can't encode character u'\u201c' in position 0: character maps to <undefined>
2019-01-06 19:44:59 [scrapy.core.engine] INFO: Closing spider (finished)
2019-01-06 19:44:59 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 218,
 'downloader/request_count': 1,
 'downloader/request_method_count/GET': 1,
 'downloader/response_bytes': 2333,
 'downloader/response_count': 1,
 'downloader/response_status_count/200': 1,
 'finish_reason': 'finished',
 'finish_time': datetime.datetime(2019, 1, 6, 17, 44, 59, 581000),
 'log_count/DEBUG': 2,
 'log_count/ERROR': 1,
 'log_count/INFO': 7,
 'response_received_count': 1,
 'scheduler/dequeued': 1,
 'scheduler/dequeued/memory': 1,
 'scheduler/enqueued': 1,
 'scheduler/enqueued/memory': 1,
 'spider_exceptions/UnicodeEncodeError': 1,
 'start_time': datetime.datetime(2019, 1, 6, 17, 44, 58, 672000)}
2019-01-06 19:44:59 [scrapy.core.engine] INFO: Spider closed (finished)
```

The structure of my project is as follows:
Question:
Why do these errors occur and what am I doing wrong?
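For reference, the failing operation from the traceback can be reproduced outside Scrapy. This is a minimal sketch: the sample quote text is my stand-in for what the spider extracts, and cp866 is the console code page named in the traceback (`encodings\cp866.py`).

```python
# -*- coding: utf-8 -*-
# The first quote on the page begins with U+201C, a left curly quotation mark.
text = u'\u201cThe world as we have created it...\u201d'

# cp866 (a DOS Cyrillic code page) has no mapping for U+201C, so encoding
# to it fails with the same error the spider raises when it prints:
try:
    text.encode('cp866')
except UnicodeEncodeError as exc:
    print(exc)

# Encoding with errors='replace' substitutes '?' for unmappable
# characters instead of raising:
print(text.encode('cp866', errors='replace').decode('cp866'))
```

The key point the sketch illustrates: the error comes from encoding the extracted Unicode text to the console's code page at print time, not from Scrapy's crawling itself.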
