
IntelliJ IDEA: running Spark code causes java.lang.VerifyError

In IntelliJ IDEA, I am trying to execute a Java file containing Spark code, and it fails with a java.lang.VerifyError.

The stack trace is as follows:

ERROR Executor: Exception in task 0.0 in stage 1.0 (TID 2)
java.lang.VerifyError: (class: org/apache/spark/sql/catalyst/expressions/GeneratedClass$SpecificOrdering, method: tryCompare signature: (Ljava/lang/Object;Ljava/lang/Object;)Lscala/Some;) Wrong return type in function
    at org.apache.spark.sql.catalyst.expressions.GeneratedClass.generate(Unknown Source)
    at org.apache.spark.sql.catalyst.expressions.GeneratedClass.generate(Unknown Source)
    at org.apache.spark.sql.catalyst.expressions.codegen.GenerateOrdering$.create(GenerateOrdering.scala:138)
    at org.apache.spark.sql.catalyst.expressions.codegen.GenerateOrdering$.create(GenerateOrdering.scala:37)
    at org.apache.spark.sql.catalyst.expressions.codegen.CodeGenerator.generate(CodeGenerator.scala:588)
    at org.apache.spark.sql.catalyst.expressions.codegen.CodeGenerator.generate(CodeGenerator.scala:585)

My understanding of the error: there is some version conflict between the Scala version used on the client side (IntelliJ) and the Scala version that Spark 1.6.1 was built against.

The same code worked fine the day before, but now it gives the above error. Surprisingly, the same code runs fine in Eclipse.

Full stack trace:

16/08/20 22:25:39 ERROR Executor: Exception in task 0.0 in stage 1.0 (TID 2) 
java.lang.VerifyError: (class: org/apache/spark/sql/catalyst/expressions/GeneratedClass$SpecificOrdering, method: tryCompare signature: (Ljava/lang/Object;Ljava/lang/Object;)Lscala/Some;) Wrong return type in function 
    at org.apache.spark.sql.catalyst.expressions.GeneratedClass.generate(Unknown Source) 
    at org.apache.spark.sql.catalyst.expressions.GeneratedClass.generate(Unknown Source) 
    at org.apache.spark.sql.catalyst.expressions.codegen.GenerateOrdering$.create(GenerateOrdering.scala:138) 
    at org.apache.spark.sql.catalyst.expressions.codegen.GenerateOrdering$.create(GenerateOrdering.scala:37) 
    at org.apache.spark.sql.catalyst.expressions.codegen.CodeGenerator.generate(CodeGenerator.scala:588) 
    at org.apache.spark.sql.catalyst.expressions.codegen.CodeGenerator.generate(CodeGenerator.scala:585) 
    at org.apache.spark.sql.execution.SparkPlan.newOrdering(SparkPlan.scala:257) 
    at org.apache.spark.sql.execution.Sort$$anonfun$1.apply(Sort.scala:65) 
    at org.apache.spark.sql.execution.Sort$$anonfun$1.apply(Sort.scala:64) 
    at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$21.apply(RDD.scala:728) 
    at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$21.apply(RDD.scala:728) 
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38) 
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306) 
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:270) 
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38) 
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306) 
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:270) 
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38) 
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306) 
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:270) 
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38) 
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306) 
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:270) 
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38) 
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306) 
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:270) 
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) 
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) 
    at org.apache.spark.scheduler.Task.run(Task.scala:89) 
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) 
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) 
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) 
    at java.lang.Thread.run(Thread.java:745) 
16/08/20 22:25:39 ERROR Executor: Exception in task 1.0 in stage 1.0 (TID 3) 
java.lang.VerifyError: (class: org/apache/spark/sql/catalyst/expressions/GeneratedClass$SpecificOrdering, method: tryCompare signature: (Ljava/lang/Object;Ljava/lang/Object;)Lscala/Some;) Wrong return type in function 
    at org.apache.spark.sql.catalyst.expressions.GeneratedClass.generate(Unknown Source) 
    at org.apache.spark.sql.catalyst.expressions.GeneratedClass.generate(Unknown Source) 
    at org.apache.spark.sql.catalyst.expressions.codegen.GenerateOrdering$.create(GenerateOrdering.scala:138) 
    at org.apache.spark.sql.catalyst.expressions.codegen.GenerateOrdering$.create(GenerateOrdering.scala:37) 
    at org.apache.spark.sql.catalyst.expressions.codegen.CodeGenerator.generate(CodeGenerator.scala:588) 
    at org.apache.spark.sql.catalyst.expressions.codegen.CodeGenerator.generate(CodeGenerator.scala:585) 
    at org.apache.spark.sql.execution.SparkPlan.newOrdering(SparkPlan.scala:257) 
    at org.apache.spark.sql.execution.Sort$$anonfun$1.apply(Sort.scala:65) 
    at org.apache.spark.sql.execution.Sort$$anonfun$1.apply(Sort.scala:64) 
    at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$21.apply(RDD.scala:728) 
    at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$21.apply(RDD.scala:728) 
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38) 
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306) 
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:270) 
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38) 
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306) 
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:270) 
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38) 
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306) 
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:270) 
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38) 
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306) 
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:270) 
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38) 
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306) 
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:270) 
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) 
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) 
    at org.apache.spark.scheduler.Task.run(Task.scala:89) 
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) 
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) 
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) 
    at java.lang.Thread.run(Thread.java:745) 
16/08/20 22:25:39 ERROR SparkUncaughtExceptionHandler: Uncaught exception in thread Thread[Executor task launch worker-0,5,main] 
java.lang.VerifyError: (class: org/apache/spark/sql/catalyst/expressions/GeneratedClass$SpecificOrdering, method: tryCompare signature: (Ljava/lang/Object;Ljava/lang/Object;)Lscala/Some;) Wrong return type in function 
    at org.apache.spark.sql.catalyst.expressions.GeneratedClass.generate(Unknown Source) 
    at org.apache.spark.sql.catalyst.expressions.GeneratedClass.generate(Unknown Source) 
    at org.apache.spark.sql.catalyst.expressions.codegen.GenerateOrdering$.create(GenerateOrdering.scala:138) 
    at org.apache.spark.sql.catalyst.expressions.codegen.GenerateOrdering$.create(GenerateOrdering.scala:37) 
    at org.apache.spark.sql.catalyst.expressions.codegen.CodeGenerator.generate(CodeGenerator.scala:588) 
    at org.apache.spark.sql.catalyst.expressions.codegen.CodeGenerator.generate(CodeGenerator.scala:585) 
    at org.apache.spark.sql.execution.SparkPlan.newOrdering(SparkPlan.scala:257) 
    at org.apache.spark.sql.execution.Sort$$anonfun$1.apply(Sort.scala:65) 
    at org.apache.spark.sql.execution.Sort$$anonfun$1.apply(Sort.scala:64) 
    at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$21.apply(RDD.scala:728) 
    at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$21.apply(RDD.scala:728) 
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38) 
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306) 
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:270) 
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38) 
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306) 
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:270) 
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38) 
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306) 
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:270) 
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38) 
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306) 
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:270) 
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38) 
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306) 
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:270) 
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) 
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) 
    at org.apache.spark.scheduler.Task.run(Task.scala:89) 
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) 
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) 
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) 
    at java.lang.Thread.run(Thread.java:745) 
16/08/20 22:25:39 ERROR SparkUncaughtExceptionHandler: [Container in shutdown] Uncaught exception in thread Thread[Executor task launch worker-1,5,main] 
java.lang.VerifyError: (class: org/apache/spark/sql/catalyst/expressions/GeneratedClass$SpecificOrdering, method: tryCompare signature: (Ljava/lang/Object;Ljava/lang/Object;)Lscala/Some;) Wrong return type in function 
    at org.apache.spark.sql.catalyst.expressions.GeneratedClass.generate(Unknown Source) 
    at org.apache.spark.sql.catalyst.expressions.GeneratedClass.generate(Unknown Source) 
    at org.apache.spark.sql.catalyst.expressions.codegen.GenerateOrdering$.create(GenerateOrdering.scala:138) 
    at org.apache.spark.sql.catalyst.expressions.codegen.GenerateOrdering$.create(GenerateOrdering.scala:37) 
    at org.apache.spark.sql.catalyst.expressions.codegen.CodeGenerator.generate(CodeGenerator.scala:588) 
    at org.apache.spark.sql.catalyst.expressions.codegen.CodeGenerator.generate(CodeGenerator.scala:585) 
    at org.apache.spark.sql.execution.SparkPlan.newOrdering(SparkPlan.scala:257) 
    at org.apache.spark.sql.execution.Sort$$anonfun$1.apply(Sort.scala:65) 
    at org.apache.spark.sql.execution.Sort$$anonfun$1.apply(Sort.scala:64) 
    at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$21.apply(RDD.scala:728) 
    at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$21.apply(RDD.scala:728) 
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38) 
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306) 
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:270) 
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38) 
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306) 
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:270) 
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38) 
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306) 
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:270) 
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38) 
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306) 
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:270) 
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38) 
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306) 
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:270) 
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) 
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) 
    at org.apache.spark.scheduler.Task.run(Task.scala:89) 
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) 
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) 
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) 
    at java.lang.Thread.run(Thread.java:745) 
16/08/20 22:25:39 INFO SparkContext: Invoking stop() from shutdown hook 
16/08/20 22:25:39 WARN TaskSetManager: Lost task 1.0 in stage 1.0 (TID 3, localhost): java.lang.VerifyError: (class: org/apache/spark/sql/catalyst/expressions/GeneratedClass$SpecificOrdering, method: tryCompare signature: (Ljava/lang/Object;Ljava/lang/Object;)Lscala/Some;) Wrong return type in function 
    at org.apache.spark.sql.catalyst.expressions.GeneratedClass.generate(Unknown Source) 
    at org.apache.spark.sql.catalyst.expressions.GeneratedClass.generate(Unknown Source) 
    at org.apache.spark.sql.catalyst.expressions.codegen.GenerateOrdering$.create(GenerateOrdering.scala:138) 
    at org.apache.spark.sql.catalyst.expressions.codegen.GenerateOrdering$.create(GenerateOrdering.scala:37) 
    at org.apache.spark.sql.catalyst.expressions.codegen.CodeGenerator.generate(CodeGenerator.scala:588) 
    at org.apache.spark.sql.catalyst.expressions.codegen.CodeGenerator.generate(CodeGenerator.scala:585) 
    at org.apache.spark.sql.execution.SparkPlan.newOrdering(SparkPlan.scala:257) 
    at org.apache.spark.sql.execution.Sort$$anonfun$1.apply(Sort.scala:65) 
    at org.apache.spark.sql.execution.Sort$$anonfun$1.apply(Sort.scala:64) 
    at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$21.apply(RDD.scala:728) 
    at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$21.apply(RDD.scala:728) 
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38) 
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306) 
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:270) 
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38) 
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306) 
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:270) 
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38) 
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306) 
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:270) 
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38) 
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306) 
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:270) 
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38) 
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306) 
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:270) 
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) 
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) 
    at org.apache.spark.scheduler.Task.run(Task.scala:89) 
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) 
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) 
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) 
    at java.lang.Thread.run(Thread.java:745) 

16/08/20 22:25:39 ERROR TaskSetManager: Task 1 in stage 1.0 failed 1 times; aborting job 
16/08/20 22:25:39 INFO TaskSchedulerImpl: Removed TaskSet 1.0, whose tasks have all completed, from pool 
16/08/20 22:25:39 INFO TaskSetManager: Lost task 0.0 in stage 1.0 (TID 2) on executor localhost: java.lang.VerifyError ((class: org/apache/spark/sql/catalyst/expressions/GeneratedClass$SpecificOrdering, method: tryCompare signature: (Ljava/lang/Object;Ljava/lang/Object;)Lscala/Some;) Wrong return type in function) [duplicate 1] 
16/08/20 22:25:39 INFO TaskSchedulerImpl: Removed TaskSet 1.0, whose tasks have all completed, from pool 
16/08/20 22:25:39 INFO TaskSchedulerImpl: Cancelling stage 1 
16/08/20 22:25:39 INFO DAGScheduler: ShuffleMapStage 1 (map at LinearRegression.scala:161) failed in 0.406 s 
16/08/20 22:25:39 INFO DAGScheduler: Job 1 failed: first at LinearRegression.scala:163, took 0.437687 s 

org.apache.spark.SparkException: Job aborted due to stage failure: Task 1 in stage 1.0 failed 1 times, most recent failure: Lost task 1.0 in stage 1.0 (TID 3, localhost): java.lang.VerifyError: (class: org/apache/spark/sql/catalyst/expressions/GeneratedClass$SpecificOrdering, method: tryCompare signature: (Ljava/lang/Object;Ljava/lang/Object;)Lscala/Some;) Wrong return type in function 
    at org.apache.spark.sql.catalyst.expressions.GeneratedClass.generate(Unknown Source) 
    at org.apache.spark.sql.catalyst.expressions.GeneratedClass.generate(Unknown Source) 
    at org.apache.spark.sql.catalyst.expressions.codegen.GenerateOrdering$.create(GenerateOrdering.scala:138) 
    at org.apache.spark.sql.catalyst.expressions.codegen.GenerateOrdering$.create(GenerateOrdering.scala:37) 
    at org.apache.spark.sql.catalyst.expressions.codegen.CodeGenerator.generate(CodeGenerator.scala:588) 
    at org.apache.spark.sql.catalyst.expressions.codegen.CodeGenerator.generate(CodeGenerator.scala:585) 
    at org.apache.spark.sql.execution.SparkPlan.newOrdering(SparkPlan.scala:257) 
    at org.apache.spark.sql.execution.Sort$$anonfun$1.apply(Sort.scala:65) 
    at org.apache.spark.sql.execution.Sort$$anonfun$1.apply(Sort.scala:64) 
    at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$21.apply(RDD.scala:728) 
    at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$21.apply(RDD.scala:728) 
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38) 
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306) 
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:270) 
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38) 
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306) 
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:270) 
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38) 
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306) 
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:270) 
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38) 
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306) 
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:270) 
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38) 
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306) 
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:270) 
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) 
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) 
    at org.apache.spark.scheduler.Task.run(Task.scala:89) 
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) 
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) 
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) 
    at java.lang.Thread.run(Thread.java:745) 

Driver stacktrace: 

    at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1431) 
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1419) 
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1418) 
    at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59) 
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47) 
    at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1418) 
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:799) 
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:799) 
    at scala.Option.foreach(Option.scala:236) 
    at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:799) 
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1640) 
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1599) 
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1588) 
    at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48) 
    at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:620) 
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1832) 
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1845) 
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1858) 
    at org.apache.spark.rdd.RDD$$anonfun$take$1.apply(RDD.scala:1328) 
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150) 
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:111) 
    at org.apache.spark.rdd.RDD.withScope(RDD.scala:316) 
    at org.apache.spark.rdd.RDD.take(RDD.scala:1302) 
    at org.apache.spark.rdd.RDD$$anonfun$first$1.apply(RDD.scala:1342) 
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150) 
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:111) 
    at org.apache.spark.rdd.RDD.withScope(RDD.scala:316) 
    at org.apache.spark.rdd.RDD.first(RDD.scala:1341) 
    at org.apache.spark.ml.regression.LinearRegression.train(LinearRegression.scala:163) 
    at org.apache.spark.ml.regression.LinearRegression.train(LinearRegression.scala:67) 
    at org.apache.spark.ml.Predictor.fit(Predictor.scala:90) 
    at com.datarpm.insights.ensemble.scorer.PipelineEnsembleScorerTest.regressionTests(PipelineEnsembleScorerTest.java:73) 
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) 
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 
    at java.lang.reflect.Method.invoke(Method.java:606) 
    at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44) 
    at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15) 
    at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41) 
    at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20) 
    at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:31) 
    at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:263) 
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:69) 
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:48) 
    at org.junit.runners.ParentRunner$3.run(ParentRunner.java:231) 
    at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:60) 
    at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:229) 
    at org.junit.runners.ParentRunner.access$000(ParentRunner.java:50) 
    at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:222) 
    at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28) 
    at org.junit.runners.ParentRunner.run(ParentRunner.java:292) 
    at org.junit.runner.JUnitCore.run(JUnitCore.java:157) 
    at com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:117) 
    at com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:42) 
    at com.intellij.rt.execution.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:262) 
    at com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStarter.java:84) 
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) 
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 
    at java.lang.reflect.Method.invoke(Method.java:606) 
    at com.intellij.rt.execution.application.AppMain.main(AppMain.java:147) 
Caused by: java.lang.VerifyError: (class: org/apache/spark/sql/catalyst/expressions/GeneratedClass$SpecificOrdering, method: tryCompare signature: (Ljava/lang/Object;Ljava/lang/Object;)Lscala/Some;) Wrong return type in function 
    at org.apache.spark.sql.catalyst.expressions.GeneratedClass.generate(Unknown Source) 
    at org.apache.spark.sql.catalyst.expressions.GeneratedClass.generate(Unknown Source) 
    at org.apache.spark.sql.catalyst.expressions.codegen.GenerateOrdering$.create(GenerateOrdering.scala:138) 
    at org.apache.spark.sql.catalyst.expressions.codegen.GenerateOrdering$.create(GenerateOrdering.scala:37) 
    at org.apache.spark.sql.catalyst.expressions.codegen.CodeGenerator.generate(CodeGenerator.scala:588) 
    at org.apache.spark.sql.catalyst.expressions.codegen.CodeGenerator.generate(CodeGenerator.scala:585) 
    at org.apache.spark.sql.execution.SparkPlan.newOrdering(SparkPlan.scala:257) 
    at org.apache.spark.sql.execution.Sort$$anonfun$1.apply(Sort.scala:65) 
    at org.apache.spark.sql.execution.Sort$$anonfun$1.apply(Sort.scala:64) 
    at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$21.apply(RDD.scala:728) 
    at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$21.apply(RDD.scala:728) 
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38) 
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306) 
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:270) 
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38) 
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306) 
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:270) 
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38) 
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306) 
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:270) 
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38) 
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306) 
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:270) 
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38) 
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306) 
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:270) 
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73) 
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) 
    at org.apache.spark.scheduler.Task.run(Task.scala:89) 
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) 
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) 
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) 
    at java.lang.Thread.run(Thread.java:745) 

This issue turned out to be caused by an incorrect version of the Spark (Scala 2.11) jars, identified by analyzing the Maven dependency tree. However, I still cannot tell why it works fine in Eclipse but fails in IntelliJ. –


Hi, I am getting the same exception, but I am already using Scala 2.11.8 with Spark 2.0. Could you share a pom file with the correct, working dependencies? – dmreshet


Make sure all of your dependency jars are compiled against Scala 2.11.8, then re-import the module into IntelliJ. I also cleared my local .m2 repository, but that may not have been necessary. –
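One way to verify this, assuming a Maven build like the one described here (this check is only an illustrative sketch, not part of the original comment), is to filter the dependency tree for Scala artifacts and look for mismatched versions, e.g. 2.11.4 sitting next to 2.11.8:

    # List every Scala artifact on the classpath; -Dverbose also keeps
    # the conflicting versions that Maven has already resolved away.
    mvn dependency:tree -Dverbose | grep "org.scala-lang"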

Answers


I had almost the same problem. The root cause was that both Cascading and spark-catalyst pull in Janino as a dependency, and as a result Spark ended up using an unexpected Janino library version. Please see the output of mvn dependency:tree -Dverbose:

[INFO] +- org.apache.spark:spark-sql_2.11:jar:2.0.0:compile 
[INFO] | +- org.apache.spark:spark-catalyst_2.11:jar:2.0.0:compile 
[INFO] | | +- (org.codehaus.janino:janino:jar:2.7.8:compile - omitted for conflict with 2.7.5)  
[INFO] +- cascading:cascading-core:jar:2.6.1:compile 
[INFO] | \- org.codehaus.janino:janino:jar:2.7.5:compile 
[INFO] |  \- org.codehaus.janino:commons-compiler:jar:2.7.5:compile 
[INFO] +- cascading:cascading-core:jar:tests:2.6.1:compile 
[INFO] | \- (org.codehaus.janino:janino:jar:2.7.5:compile - omitted for duplicate) 
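If you cannot simply drop Cascading, one common workaround (a sketch based on the coordinates shown in the tree above; adjust them to your own pom) is to exclude the older Janino from cascading-core so that the 2.7.8 version required by spark-catalyst wins:

    <dependency>
      <groupId>cascading</groupId>
      <artifactId>cascading-core</artifactId>
      <version>2.6.1</version>
      <exclusions>
        <!-- Drop Cascading's Janino 2.7.5 so Spark's codegen uses the
             2.7.8 version pulled in by spark-catalyst. -->
        <exclusion>
          <groupId>org.codehaus.janino</groupId>
          <artifactId>janino</artifactId>
        </exclusion>
        <exclusion>
          <groupId>org.codehaus.janino</groupId>
          <artifactId>commons-compiler</artifactId>
        </exclusion>
      </exclusions>
    </dependency>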

For me, the problem was conflicting jars.
I had an internal project dependency whose jar was compiled against an older Scala version, 2.11.4, while Spark was compiled against 2.11.8.

Recompiling all of the related jars against the same Java and Scala target versions solved my problem.
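One way to keep everything on the same Scala release in a Maven build is to declare the version in a single property and reference it from every Scala dependency and every _2.11 Spark artifact. This is only an illustrative sketch, not the original poster's pom:

    <properties>
      <!-- Single source of truth for the Scala version used by all modules. -->
      <scala.version>2.11.8</scala.version>
      <scala.binary.version>2.11</scala.binary.version>
    </properties>

    <dependencies>
      <dependency>
        <groupId>org.scala-lang</groupId>
        <artifactId>scala-library</artifactId>
        <version>${scala.version}</version>
      </dependency>
      <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-sql_${scala.binary.version}</artifactId>
        <version>2.0.0</version>
      </dependency>
    </dependencies>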