Scrapy login fails
The website does have a hidden authentication token, but the [docs][1] seem to suggest I don't need to override the default here, and only need to pass the username and password.
Looking at the network tab, I also noticed that, in addition to the authentication token being posted, there are a number of cookies. I'm not sure whether I have to do anything with those.
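For what it's worth, Scrapy's cookies middleware is enabled by default and should carry the session cookies set by the sign-in response over to later requests on its own. A minimal sketch of the relevant settings (both are real Scrapy settings; turning on the debug flag is just to verify the login session is being kept):

```python
# settings.py -- sketch for inspecting Scrapy's automatic cookie handling.
COOKIES_ENABLED = True  # default; CookiesMiddleware stores and resends session cookies
COOKIES_DEBUG = True    # log every Cookie / Set-Cookie header per request
```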
My code, cobbled together from various other people's previous attempts:
```python
import scrapy
from scrapy.http import Request, FormRequest

from dealinfo.items import DealinfoItem


class DealinfoSpider(scrapy.Spider):
    name = 'dealinfo'
    allowed_domains = ['dealinfo.com']
    # start_urls must be a list, not a bare string
    start_urls = ['https://dealinfo.com/organizations/xxxx/member_landing']

    def start_requests(self):
        # Fetch the sign-in page first, then post the credentials
        return [Request(url='https://dealinfo.com/users/sign_in', callback=self.login)]

    def login(self, response):
        return FormRequest(
            'https://dealinfo.com/users/sign_in',
            formdata={
                'user[email]': 'xxxxx',
                'user[password]': 'xxxxx'
            },
            callback=self.after_login)

    def after_login(self, response):
        # Check the response body; a bare string literal is always truthy
        if b"authentication failed" in response.body:
            self.logger.error("Login failed")
            return
        self.logger.info('Login successful. Parsing all other URLs')
        for url in self.start_urls:
            yield Request(url, callback=self.parse)

    def parse(self, response):
        deal_list = response.xpath('//table[@id="deal_list"]/tbody[@class="deal-list__row"]/tr[@class="deal"]')
        for deal_row in deal_list:
            item = DealinfoItem()
            item['capital_seeking'] = deal_row.xpath('td[2]/text()').extract()
            yield item
```
You should try FormRequest.from_response() – Verz1Lka
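Following that suggestion, a minimal sketch of what the login callback might look like with FormRequest.from_response(), which builds the POST from the actual `<form>` on the fetched sign-in page, so hidden inputs such as the authentication token are picked up automatically and only the credentials need to be supplied:

```python
def login(self, response):
    # from_response() pre-fills the request from the sign-in form in the
    # response, including hidden fields (e.g. the auth/CSRF token);
    # formdata only overrides the named fields.
    return scrapy.FormRequest.from_response(
        response,
        formdata={
            'user[email]': 'xxxxx',
            'user[password]': 'xxxxx',
        },
        callback=self.after_login,
    )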