2017-02-22

I want to pull data from Twitter via streaming, and I am collecting it in the twt variable. In Scala (Zeppelin) I get a "Task not serializable" error.

val ssc = new StreamingContext(sc, Seconds(60)) 
val tweets = TwitterUtils.createStream(ssc, None, Array("#hadoop", "#bigdata", "#spark", "#hortonworks", "#HDP")) 
//tweets.saveAsObjectFiles("/models/Twitter_files_", ".txt") 
case class Tweet(createdAt:Long, text:String, screenName:String) 

val twt = tweets.window(Seconds(60)) 
//twt.foreach(status => println(status.getText())) 

import sqlContext.implicits._ 

val temp = twt.map(status=> 
    Tweet(status.getCreatedAt().getTime()/1000,status.getText(), status.getUser().getScreenName()) 
    ).foreachRDD(rdd=> 
     rdd.toDF().registerTempTable("tweets") 
    ) 
twt.print 

ssc.start() 

Here is the error:

org.apache.spark.SparkException: Task not serializable 
     at org.apache.spark.util.ClosureCleaner$.ensureSerializable(ClosureCleaner.scala:304) 
     at org.apache.spark.util.ClosureCleaner$.org$apache$spark$util$ClosureCleaner$$clean(ClosureCleaner.scala:294) 
     at org.apache.spark.util.ClosureCleaner$.clean(ClosureCleaner.scala:122) 
     at org.apache.spark.SparkContext.clean(SparkContext.scala:2032) 
     at org.apache.spark.streaming.dstream.DStream$$anonfun$map$1.apply(DStream.scala:528) 
     at org.apache.spark.streaming.dstream.DStream$$anonfun$map$1.apply(DStream.scala:528) 
     at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:147) 
     at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:108) 
     at org.apache.spark.SparkContext.withScope(SparkContext.scala:709) 
     at org.apache.spark.streaming.StreamingContext.withScope(StreamingContext.scala:266) 

Caused by: java.io.NotSerializableException: org.apache.spark.streaming.StreamingContext 

Answer


The Tweet class is not Serializable, so make it extend Serializable.

This is a common Spark problem, and the stack trace tells you exactly what it was trying to serialize; it has behaved this way since Spark 1.3, I believe.
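As the stack trace's "Caused by" line shows, the object Spark failed to serialize is the StreamingContext itself. Scala case classes are already Serializable out of the box, so the usual culprit is a closure that captures a non-serializable value from the enclosing scope (in Zeppelin, the notebook scope holding ssc). A minimal sketch using plain JVM serialization, no Spark; FakeContext is a hypothetical stand-in for StreamingContext:

```scala
import java.io._

// Case classes mix in Serializable automatically.
case class Tweet(createdAt: Long, text: String, screenName: String)

// Hypothetical stand-in for a non-serializable object such as StreamingContext.
class FakeContext

object ClosureDemo {
  // Returns true if obj survives JVM serialization,
  // false if it raises NotSerializableException.
  def serializable(obj: AnyRef): Boolean =
    try {
      new ObjectOutputStream(new ByteArrayOutputStream()).writeObject(obj)
      true
    } catch {
      case _: NotSerializableException => false
    }

  def main(args: Array[String]): Unit = {
    // The case class itself serializes fine...
    println(serializable(Tweet(0L, "hello", "user")))

    // ...but a closure that captures a non-serializable value does not,
    // which is exactly what Spark's ClosureCleaner rejects.
    val ctx = new FakeContext
    val badMap: String => Tweet = s => Tweet(ctx.hashCode.toLong, s, "user")
    println(serializable(badMap))
  }
}
```

The first check prints true, the second false: the lambda drags its captured ctx into the serialized closure, just as the map function in the question drags in the notebook scope.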


I have added it like this: case class Tweet(createdAt: Long, text: String, screenName: String) extends Serializable. Is this the correct way? Because it gives me the same error. – Bond


Yes, that is correct. Can you show more of the stack trace? –


http://i.imgur.com/eAT3tCr.png – Bond
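Since the NotSerializableException blames StreamingContext rather than Tweet, a common workaround (my assumption, not confirmed in this thread) is to keep the non-serializable reference out of whatever gets serialized, for example by marking it @transient. A self-contained sketch with plain JVM serialization; Heavy and Job are hypothetical names:

```scala
import java.io._

// Hypothetical stand-in for a non-serializable resource such as StreamingContext.
class Heavy

// Marking the reference @transient excludes it from the serialized form,
// so the class can be shipped without dragging the resource along.
class Job(@transient val ctx: Heavy) extends Serializable {
  def describe: String = "tweet-job"
}

object TransientDemo {
  // Serialize obj to bytes with standard JVM serialization.
  def bytes(obj: AnyRef): Array[Byte] = {
    val buf = new ByteArrayOutputStream()
    val out = new ObjectOutputStream(buf)
    out.writeObject(obj)
    out.close()
    buf.toByteArray
  }

  // Round-trip a Job through serialization.
  def roundTrip(job: Job): Job =
    new ObjectInputStream(new ByteArrayInputStream(bytes(job)))
      .readObject().asInstanceOf[Job]

  def main(args: Array[String]): Unit = {
    val restored = roundTrip(new Job(new Heavy))
    println(restored.describe)    // the serializable state survives
    println(restored.ctx == null) // the transient field is dropped
  }
}
```

After the round trip the transient field comes back null, so any method that needs the resource must re-acquire it on the executor side rather than rely on the shipped copy.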
