Apache Spark: NPE while recovering state from a checkpoint

We are building a simple Streaming application that joins an HBase RDD with the incoming DStream. Sample code:
import org.apache.hadoop.hbase.client.Result
import org.apache.hadoop.hbase.io.ImmutableBytesWritable
import org.apache.hadoop.hbase.mapreduce.TableInputFormat

// Load the HBase table as an RDD of (rowkey, row) pairs to serve as index state
val indexState = sc.newAPIHadoopRDD(
  conf,
  classOf[TableInputFormat],
  classOf[ImmutableBytesWritable],
  classOf[Result]).map { case (rowkey, v) => /* some logic */ }

// Join every incoming batch against the HBase-backed index state
val result = dStream.transform { rdd =>
  rdd.leftOuterJoin(indexState)
}
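For completeness, conf here is the usual HBase input configuration; a minimal sketch of how it is built (the table name is a placeholder):

import org.apache.hadoop.hbase.HBaseConfiguration
import org.apache.hadoop.hbase.mapreduce.TableInputFormat

// Standard HBase input configuration; "index_table" is a placeholder name
val conf = HBaseConfiguration.create()
conf.set(TableInputFormat.INPUT_TABLE, "index_table")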
This works fine, but once we enable checkpointing on the StreamingContext and let the application recover from a previously created checkpoint, it always throws a NullPointerException:
ERROR streaming.StreamingContext: Error starting the context, marking it as stopped
java.lang.NullPointerException
at org.apache.hadoop.hbase.mapreduce.TableInputFormat.setConf(TableInputFormat.java:119)
at org.apache.spark.rdd.NewHadoopRDD.getPartitions(NewHadoopRDD.scala:120)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237)
at scala.Option.getOrElse(Option.scala:120)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:237)
at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237)
at scala.Option.getOrElse(Option.scala:120)
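For reference, we enable checkpointing and recovery with the standard StreamingContext.getOrCreate pattern; a simplified sketch (checkpointDir, createContext, and the directory path are placeholder names):

import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

val checkpointDir = "hdfs:///tmp/checkpoints" // placeholder path

def createContext(): StreamingContext = {
  val sparkConf = new SparkConf().setAppName("hbase-join-streaming")
  val ssc = new StreamingContext(sparkConf, Seconds(10))
  ssc.checkpoint(checkpointDir)
  // dStream, indexState and the transform/leftOuterJoin above are defined here
  ssc
}

// Fresh start: createContext() runs. Restart: the StreamingContext
// (including the RDD lineage above) is rebuilt from the checkpoint,
// which is when the NPE above is thrown.
val ssc = StreamingContext.getOrCreate(checkpointDir, createContext _)
ssc.start()
ssc.awaitTermination()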
Has anyone run into the same problem? Versions:
- Spark 1.6.x
- Hadoop 2.7.x
Thanks!
When you say "previously created checkpoint", do you mean the streaming job was stopped and resubmitted? – ImDarrenG