2016-06-21

I am using a ParamGridBuilder with Spark 1.6.1 and a LinearRegression, and I get a scala.MatchError:

val paramGrid = new ParamGridBuilder() 
    .addGrid(lr.regParam, Array(0.1, 0.01)) 
    .addGrid(lr.fitIntercept) 
    .addGrid(lr.elasticNetParam, Array(0.0, 0.5, 1.0)) 
    .build() 
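For context, the grid built above is typically handed to a CrossValidator together with the estimator and an evaluator. A minimal sketch, assuming `lr` is the LinearRegression from the question and `training` is a DataFrame with "features" and "label" columns (both names are assumptions here):

```scala
import org.apache.spark.ml.evaluation.RegressionEvaluator
import org.apache.spark.ml.tuning.CrossValidator

// Wire the param grid into cross-validation over the estimator.
val cv = new CrossValidator()
  .setEstimator(lr)                    // the LinearRegression from the question
  .setEstimatorParamMaps(paramGrid)    // the grid built above
  .setEvaluator(new RegressionEvaluator())
  .setNumFolds(3)

// Fit selects the best parameter combination by cross-validated metric.
val cvModel = cv.fit(training)
```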

The scala.MatchError is:

org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 57.0 failed 1 times, most recent failure: Lost task 0.0 in stage 57.0 (TID 257, localhost): 
scala.MatchError: [280000,1.0,[2400.0,9373.0,3.0,1.0,1.0,0.0,0.0,0.0]] (of class org.apache.spark.sql.catalyst.expressions.GenericRowWithSchema) 

Full code

The question is how to use ParamGridBuilder in this case.

Answer


The problem here is the input schema, not the ParamGridBuilder. The price column is loaded as an integer, while LinearRegression expects a double. You can fix it by explicitly casting the column to the required type:

val houses = sqlContext.read.format("com.databricks.spark.csv") 
    .option("header", "true") 
    .option("inferSchema", "true") 
    .load(...) 
    .withColumn("price", $"price".cast("double")) 
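To confirm the cast took effect before training, you can inspect the schema of the loaded DataFrame (a sketch, assuming `houses` was loaded as above):

```scala
// printSchema shows each column with its resolved type;
// after the cast, "price" should be reported as double.
houses.printSchema()
```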

Thanks, I missed the cast to double from the original example. – oluies


You're welcome. No comment on the original example. It should be validated against the schema instead of throwing an exception inside the job. Unfortunately, ML is full of failures like this. – zero323


Seems to work: https://databricks-prod-cloudfront.cloud.databricks.com/public/4027ec902e239c93eaaa8714f173bcfc/1221303294178191/1275177332049116/6190062569763605/latest.html – oluies