Zeppelin on a Spark-Cassandra cluster: ClassNotFoundException
I recently started using Zeppelin on a Spark-Cassandra cluster (one master and three workers) to run simple machine learning algorithms with the MLlib library.
Here are the libraries I loaded into Zeppelin:
%dep
z.load("com.datastax.spark:spark-cassandra-connector_2.10:1.4.0-M1")
z.load("org.apache.spark:spark-core_2.10:1.4.1")
z.load("com.datastax.cassandra:cassandra-driver-core:2.1.3")
z.load("org.apache.thrift:libthrift:0.9.2")
z.load("org.apache.spark:spark-mllib_2.10:1.4.0")
z.load("cassandra-clientutil-2.1.3.jar")
z.load("joda-time-2.3.jar")
I am trying to implement a linear regression script, but when I run it I get the following error message:
org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 4 times, most recent failure: Lost task 0.3 in stage 0.0 (TID 3, 192.xxx.xxx.xxx): java.lang.ClassNotFoundException: $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$anonfun$1
at java.net.URLClassLoader$1.run(URLClassLoader.java:372)
at java.net.URLClassLoader$1.run(URLClassLoader.java:361)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:360)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:344)
at org.apache.spark.serializer.JavaDeserializationStream$$anon$1.resolveClass(JavaSerializer.scala:66)
at java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1613)
at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1518)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1774)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1351)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1993)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1918)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1801)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1351)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1993)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1918)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1801)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1351)
...
What confuses me is that the same script runs without any problem when launched with spark-submit.
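For reference, the spark-submit invocation is along these lines (the main class and application jar names below are placeholders):
spark-submit \
  --master spark://xxx.xxx.xxx.xxx:7077 \
  --class Demo \
  --jars spark-cassandra-connector_2.10-1.4.0-M1.jar,cassandra-driver-core-2.1.3.jar,cassandra-clientutil-2.1.3.jar,joda-time-2.3.jar \
  demo-assembly.jar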
Here is the code I am trying to execute:
import org.apache.spark.SparkContext
import org.apache.spark.SparkConf
import com.datastax.spark.connector._
import com.datastax.spark.connector.cql.CassandraConnector
import org.apache.spark.mllib.regression.{LinearRegressionWithSGD, LinearRegressionModel, LabeledPoint}
import org.apache.spark.mllib.linalg.Vectors
import org.apache.spark.rdd.RDD
sc.stop()
val conf = new SparkConf(true).set("spark.cassandra.connection.host", "xxx.xxx.xxx.xxx").setMaster("spark://xxx.xxx.xxx.xxx:7077").setAppName("DEMONSTRATION")
val sc = new SparkContext(conf)
case class Fact(numdoc:String, numl:String, year:String, creator:Double, date:Double, day:Double, user:Double, workingday:Double, total:String)
val data = sc.textFile("~/Input/Data.csv")
val parsed = data.filter(!_.isEmpty).map {row =>
val splitted = row.split(",")
val Array(nd, nl, yr)=splitted.slice(0,3)
val Array(cr, dt, wd, us, wod)=splitted.slice(3,8).map(_.toDouble)
Fact (nd, nl, yr, cr, dt, wd, us, wod, splitted(8))
}
val class2id = parsed.map(_.total.toDouble).distinct.collect.zipWithIndex.map{case (k,v) => (k, v.toDouble)}.toMap
val id2class = class2id.map(_.swap)
val parsedData = parsed.map { i =>
  // numeric class label plus the selected features packed into an MLlib vector
  LabeledPoint(class2id(i.total.toDouble), Vectors.dense(i.creator, i.date, i.day, i.workingday))
}
val model: LinearRegressionModel = LinearRegressionWithSGD.train(parsedData, 3)
Thanks in advance!
Could you share the linear regression script? To me it looks as if some user code classes are not loaded correctly on the cluster. –
Hi Till, the problem starts when Spark tries to apply the map method to the dataset. I think the workers cannot load the additional libraries, so I tried the 'addJar' and 'setJars' methods and set the 'SPARK_CLASSPATH' variable, but unfortunately that did not work either. – Med3
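A minimal sketch of those attempts (jar paths are placeholders, assuming the dependency jars exist on the driver machine):
val conf = new SparkConf(true)
  .set("spark.cassandra.connection.host", "xxx.xxx.xxx.xxx")
  .setMaster("spark://xxx.xxx.xxx.xxx:7077")
  .setAppName("DEMONSTRATION")
  .setJars(Seq("/path/to/spark-cassandra-connector_2.10-1.4.0-M1.jar")) // ship the jar to the executors
val sc = new SparkContext(conf)
sc.addJar("/path/to/joda-time-2.3.jar") // or register individual jars after the context is created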