2017-09-06

Computing skewness: I have a problem with the following code, using spark.sql and Cloudant:

def skewTemperature(cloudantdata, spark):
    return spark.sql("""SELECT (1/count(temperature)) * (sum(pow(temperature - %s, 3)) / pow(%s, 3)) AS skew FROM washing""" % (meanTemperature(cloudantdata, spark), sdTemperature(cloudantdata, spark))).first().skew

meanTemperature and sdTemperature both work fine, but with the query above I get the following error:

Py4JJavaError: An error occurred while calling o2849.collectToPython. 
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 3 in stage 315.0 failed 10 times, most recent failure: Lost task 3.9 in stage 315.0 (TID 1532, yp-spark-dal09-env5-0045): java.lang.RuntimeException: Database washing request error: {"error":"too_many_requests","reason":"You've exceeded your current limit of 5 requests per second for query class. Please try later.","class":"query","rate":5 

Does anyone know how to fix this?

Please clarify; the question is unclear – Kondal

Answer


The error indicates that you have exceeded the Cloudant API call threshold for the query class, which appears to be 5 requests per second on the service plan you are using. One potential solution is to limit the number of partitions by setting the jsonstore.rdd.partitions configuration property, as shown in the following Spark 2 example:

spark = SparkSession\
    .builder\
    .appName("Cloudant Spark SQL Example in Python using dataframes")\
    .config("cloudant.host", "ACCOUNT.cloudant.com")\
    .config("cloudant.username", "USERNAME")\
    .config("cloudant.password", "PASSWORD")\
    .config("jsonstore.rdd.partitions", 5)\
    .getOrCreate()

Start with 5 and work your way down to 1 if the error persists. This setting essentially limits how many concurrent requests are sent to Cloudant. If a value of 1 does not resolve the problem, you may need to consider upgrading to a service plan with a higher threshold.
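The error message itself ("Please try later") suggests another complementary mitigation: retrying the failing call with exponential backoff. This is a generic sketch, not Cloudant-specific; TooManyRequests is a hypothetical stand-in for whatever exception your client raises on an HTTP 429 / "too_many_requests" response:

```python
import time

class TooManyRequests(Exception):
    """Stand-in for the client error raised on a rate-limit response."""

def call_with_backoff(fn, max_retries=5, base_delay=0.2):
    """Call fn, retrying with exponentially growing waits when rate-limited."""
    for attempt in range(max_retries):
        try:
            return fn()
        except TooManyRequests:
            if attempt == max_retries - 1:
                raise  # give up after the final attempt
            time.sleep(base_delay * (2 ** attempt))  # 0.2s, 0.4s, 0.8s, ...
```

Wrapping each of the three temperature queries this way would smooth out bursts, though lowering jsonstore.rdd.partitions remains the more direct fix since it reduces the request rate at the source.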