
I was testing a Spark application in the all-spark-notebook Docker container when I hit: Caused by: java.lang.ClassCastException: Person cannot be cast to Person. The Scala code is:

val p = spark.sparkContext.textFile("../Data/person.txt")
val pmap = p.map(_.split(","))
pmap.collect()

The output is: Array(Array(Barack, Obama, 53), Array(George, Bush, 68), Array(Bill, Clinton, 68))

case class Person(first_name: String, last_name: String, age: Int)
val personRDD = pmap.map(p => Person(p(0), p(1), p(2).toInt))
val personDF = personRDD.toDF
personDF.collect()

The error message is:

Name: org.apache.spark.SparkException 
Message: Job aborted due to stage failure: Task 1 in stage 12.0 failed 1 times, most recent failure: Lost task 1.0 in stage 12.0 (TID 17, localhost, executor driver): java.lang.ClassCastException: $line145.$read$$iw$$iw$Person cannot be cast to $line145.$read$$iw$$iw$Person 
    ................ 
Caused by: java.lang.ClassCastException: Person cannot be cast to Person 

For what it's worth, I tried running this code with spark-shell and it ran correctly, so I suspect the error above is related to the Docker environment rather than the code itself. I also tried to show personRDD with:

personRDD.collect 

The error message I got is:

org.apache.spark.SparkDriverExecutionException: Execution error 
    at org.apache.spark.scheduler.DAGScheduler.handleTaskCompletion(DAGScheduler.scala:1186) 
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1711) 
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1669) 
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1658) 
    at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48) 
    at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:630) 
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:2022) 
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:2043) 
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:2062) 
    at org.apache.spark.rdd.RDD$$anonfun$take$1.apply(RDD.scala:1354) 
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151) 
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112) 
    at org.apache.spark.rdd.RDD.withScope(RDD.scala:362) 
    at org.apache.spark.rdd.RDD.take(RDD.scala:1327) 
    ... 37 elided 
Caused by: java.lang.ArrayStoreException: [LPerson; 
    at scala.runtime.ScalaRunTime$.array_update(ScalaRunTime.scala:90) 
    at org.apache.spark.SparkContext$$anonfun$runJob$4.apply(SparkContext.scala:2043) 
    at org.apache.spark.SparkContext$$anonfun$runJob$4.apply(SparkContext.scala:2043) 
    at org.apache.spark.scheduler.JobWaiter.taskSucceeded(JobWaiter.scala:59) 
    at org.apache.spark.scheduler.DAGScheduler.handleTaskCompletion(DAGScheduler.scala:1182) 
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1711) 
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1669) 
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1658) 
    at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48) 

I cannot figure out why this problem occurs. Can anyone give me some clues? Thanks.


How about using SparkSQL directly instead of applying a case class to a txt file? –


Your code runs perfectly fine on my machine :) Maybe you are doing something nasty elsewhere in the code. Did you put the case class outside the code being executed? –
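A minimal sketch of what this comment suggests, assuming the case class is declared in its own notebook cell before any cell that uses it (the cell layout and the show() call here are illustrative assumptions, not the poster's actual notebook):

// Cell 1: define the case class on its own, before any code that uses it.
// In REPL/notebook environments each evaluation may be wrapped in its own
// generated class, so defining and using Person in the same cell can end up
// referring to two different Person classes ("Person cannot be cast to Person").
case class Person(first_name: String, last_name: String, age: Int)

// Cell 2: build the RDD and the DataFrame with the already-defined Person.
import spark.implicits._

val personRDD = spark.sparkContext
  .textFile("../Data/person.txt")
  .map(_.split(","))
  .map(p => Person(p(0), p(1), p(2).toInt))

val personDF = personRDD.toDF()
personDF.show()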


I am using old code to test the Docker environment; it ran correctly in spark-shell before, but now it does not. –

Answer


As cricket_007 suggested in his comment to use sqlContext, you should be using sparkSQL.

Given an input data file with a header:

first_name,last_name,age 
Barack,Obama,53 
George,Bush,68 
Bill,Clinton,68 

you can do the following

val df = sqlContext.read 
    .format("com.databricks.spark.csv") 
    .option("header", true) 
    .load("../Data/person.txt") 

to get the dataframe as

+----------+---------+---+
|first_name|last_name|age|
+----------+---------+---+
|Barack    |Obama    |53 |
|George    |Bush     |68 |
|Bill      |Clinton  |68 |
+----------+---------+---+

with the schema generated as
root 
|-- first_name: string (nullable = true) 
|-- last_name: string (nullable = true) 
|-- age: string (nullable = true)

You can instead define a schema and apply it as

import org.apache.spark.sql.types.{StructType, StructField, StringType, IntegerType}
val schema = StructType(Array(StructField("first_name", StringType, true), StructField("last_name", StringType, true), StructField("age", IntegerType, true)))

val df = sqlContext.read 
    .format("com.databricks.spark.csv") 
    .option("header", true) 
    .option("inferSchema", "true") 
    .schema(schema) 
    .load("/home/anahcolus/IdeaProjects/scalaTest/src/test/resources/t1.csv") 

and you should then have the schema as

root 
|-- first_name: string (nullable = true) 
|-- last_name: string (nullable = true) 
|-- age: integer (nullable = true) 

If your file does not have a header, you can simply remove the header option.
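For instance, a minimal sketch of the headerless case, reusing the schema defined above; the file name person_no_header.txt is just a placeholder:

// No header row: drop the header option; the explicit schema supplies the column names.
val dfNoHeader = sqlContext.read
    .format("com.databricks.spark.csv")
    .schema(schema)
    .load("../Data/person_no_header.txt")

dfNoHeader.printSchema()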
