
I need to convert SQL rows whose columns are filled with var values into a vector. I am using the following Spark Scala code to convert the RDD of SQL rows to a vector:

val df = sqlContext.sql("SELECT age,gender FROM test.test2") 
val rows: org.apache.spark.rdd.RDD[org.apache.spark.sql.Row] = df.rdd 
val doubVals = rows.map{ row => row.getDouble(0) } 
val vector = Vectors.dense{ doubVals.collect} 

These steps, however, throw many exceptions, such as ClassNotFoundException:

scala> val vector = Vectors.dense{ doubVals.collect} 
WARN 2017-07-14 02:12:09,477 org.apache.spark.scheduler.TaskSetManager: 
Lost task 0.0 in stage 2.0 (TID 7, 192.168.110.200): 
java.lang.ClassNotFoundException: $line31.$read$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$anonfun$1 
    at java.net.URLClassLoader.findClass(URLClassLoader.java:381) 
    at java.lang.ClassLoader.loadClass(ClassLoader.java:424) 
    at java.lang.ClassLoader.loadClass(ClassLoader.java:357) 
    at java.lang.Class.forName0(Native Method) 
    at java.lang.Class.forName(Class.java:348) 
    at org.apache.spark.serializer.JavaDeserializationStream$$anon$1.resolveClass(JavaSerializer.scala:67) 
    at java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1826) 
    at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1713) 
    at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2000) 
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1535) 
    at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2245) 
    at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2169) 
    at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2027) 
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1535) 
    at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2245) 
    at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2169) 
    at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2027) 
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1535) 
    at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2245) 
    at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2169) 
    at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2027) 
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1535) 
    at java.io.ObjectInputStream.readObject(ObjectInputStream.java:422) 
    at org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:75) 
    at org.apache.spark.serializer.JavaSerializerInstance.deserialize(JavaSerializer.scala:114) 
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66) 
    at org.apache.spark.scheduler.Task.run(Task.scala:86) 
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274) 
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
    at java.lang.Thread.run(Thread.java:748) 

    [Stage 2:>               (0 + 3)/7]
    ERROR 2017-07-14 02:12:09,787 org.apache.spark.scheduler.TaskSetManager: Task 2 in stage 2.0 failed 4 times; aborting job 
    org.apache.spark.SparkException: Job aborted due to stage failure: Task 2 in stage 2.0 failed 4 times, most recent failure: Lost task 2.3 in stage 2.0 (TID 21, 192.168.110.200): java.lang.ClassNotFoundException: $anonfun$1 
    at java.net.URLClassLoader.findClass(URLClassLoader.java:381) 
    at java.lang.ClassLoader.loadClass(ClassLoader.java:424) 
    at java.lang.ClassLoader.loadClass(ClassLoader.java:357) 
    at java.lang.Class.forName0(Native Method) 
    at java.lang.Class.forName(Class.java:348) 
    at org.apache.spark.serializer.JavaDeserializationStream$$anon$1.resolveClass(JavaSerializer.scala:67) 
    at java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1826) 
    at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1713) 
    at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2000) 
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1535) 
    at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2245) 
    at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2169) 
    at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2027) 
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1535) 
    at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2245) 
    at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2169) 
    at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2027) 
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1535) 
    at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2245) 
    at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2169) 
    at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2027) 
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1535) 
    at java.io.ObjectInputStream.readObject(ObjectInputStream.java:422) 
    at org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:75) 
    at org.apache.spark.serializer.JavaSerializerInstance.deserialize(JavaSerializer.scala:114) 
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66) 
    at org.apache.spark.scheduler.Task.run(Task.scala:86) 
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274) 
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
    at java.lang.Thread.run(Thread.java:748) 

But it gives me the exception: ClassNotFoundException.

Could you please help me solve this problem?


Can you post the complete error message? And your code after creating the vector? –


OK, I will update it – Amalo


Did you miss this: 'import org.apache.spark.mllib.linalg.Vectors'? – philantrovert

Answer


Take a look at the following steps (they work for me):

scala> val df = Seq(2.0,3.0,3.2,2.3,1.2).toDF("col") 
df: org.apache.spark.sql.DataFrame = [col: double] 

scala> import org.apache.spark.mllib.linalg.Vectors 
import org.apache.spark.mllib.linalg.Vectors 

scala> val rows = df.rdd 
rows: org.apache.spark.rdd.RDD[org.apache.spark.sql.Row] = MapPartitionsRDD[3] at rdd at <console>:31 

scala> val doubVals = rows.map{ row => row.getDouble(0) } 
doubVals: org.apache.spark.rdd.RDD[Double] = MapPartitionsRDD[4] at map at <console>:33 

scala> val vector = Vectors.dense{ doubVals.collect} 
vector: org.apache.spark.mllib.linalg.Vector = [2.0,3.0,3.2,2.3,1.2] 

This should give you a hint for debugging yours.
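For reference, here is a minimal sketch of the same steps applied to the query from the question. It assumes the table test.test2 exists and that age is a numeric column; using getAs[Number] with doubleValue() is my own guess so the code also tolerates age being stored as an integer rather than a double:

import org.apache.spark.mllib.linalg.Vectors

// Re-run of the question's query with the missing import added (assumed setup).
val df = sqlContext.sql("SELECT age,gender FROM test.test2")

// getAs[Number].doubleValue() accepts int, long, or double storage for age.
val doubVals = df.rdd.map { row => row.getAs[Number](0).doubleValue() }

// collect() pulls every value to the driver, so this only works when the
// column fits in driver memory.
val vector = Vectors.dense(doubVals.collect())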


What about this? Where is my mistake? – Amalo


My answer shows you that if you follow these steps, you will not get the error. Did you do the same? –


I followed these steps and got the same error – Amalo
