Generic parser class: `Task not serializable`

I want to build a class that takes a parser as a parameter and applies that parser to every line. Below is the simplest example, which you can paste straight into spark-shell:
import scala.util.{Success, Failure, Try}
import scala.reflect.ClassTag

class Reader[T : ClassTag](makeParser: () => (String => Try[T])) {
  def read(): Seq[T] = {
    val rdd = sc.parallelize(Seq("1", "2", "oops", "4")) mapPartitions { lines =>
      // Since making a parser can be expensive, we want to make only one per partition.
      val parser: String => Try[T] = makeParser()
      lines flatMap { line =>
        parser(line) match {
          case Success(record) => Some(record)
          case Failure(_) => None
        }
      }
    }
    rdd.collect()
  }
}

class IntParser extends (String => Try[Int]) with Serializable {
  // There could be an expensive setup operation here...
  def apply(s: String): Try[Int] = Try { s.toInt }
}
However, when I try to run something like `new Reader(() => new IntParser).read()` (which type-checks just fine), I get the dreaded `org.apache.spark.SparkException: Task not serializable` error about closures.

Why does the error occur, and is there a way to redesign the above to avoid it (while keeping `Reader` generic)?
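For context, a likely cause (my reading, not stated in the question itself): in Scala, a constructor parameter such as `makeParser` that is referenced inside a method body is compiled to a field, so the closure passed to `mapPartitions` captures the enclosing `Reader` instance (`this`), and `Reader` is not `Serializable`. A minimal sketch of one common workaround, copying the parameter into a local val so that the closure captures only that (serializable) function value:

import scala.util.{Success, Failure, Try}
import scala.reflect.ClassTag

class Reader[T : ClassTag](makeParser: () => (String => Try[T])) {
  def read(): Seq[T] = {
    // Copy the constructor parameter into a local val: the closure below
    // then captures this function value directly instead of `this`.
    val localMakeParser = makeParser
    val rdd = sc.parallelize(Seq("1", "2", "oops", "4")) mapPartitions { lines =>
      // Still one parser per partition, as intended.
      val parser: String => Try[T] = localMakeParser()
      lines flatMap { line =>
        parser(line) match {
          case Success(record) => Some(record)
          case Failure(_) => None
        }
      }
    }
    rdd.collect()
  }
}

Making `Reader` extend `Serializable` should also silence the error, since the captured `makeParser` field (`() => new IntParser`) is itself serializable, but the local-val copy avoids shipping the whole `Reader` instance to the executors.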
Strange. That function only closes over `makeParser`, but `() => new IntParser` should be serializable. What happens if you replace the passed `makeParser` with `parser` as the parameter? –
@AlexeyRomanov If I just take `parser` as the parameter of `Reader[T]`, I still get the same message (a slightly different trace, but still about closures). – Alec
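For illustration, the variant Alec describes presumably looks like the sketch below (`Reader2` is a hypothetical name; same imports as in the question). It fails the same way, because `parser`, as a constructor parameter referenced in `read()`, is still a field, so the closure still drags in the non-serializable enclosing instance:

class Reader2[T : ClassTag](parser: String => Try[T]) {
  def read(): Seq[T] = {
    val rdd = sc.parallelize(Seq("1", "2", "oops", "4")) mapPartitions { lines =>
      // `parser` here is really `this.parser`, so `this` (a Reader2,
      // which is not Serializable) gets pulled into the closure.
      lines flatMap { line => parser(line).toOption }
    }
    rdd.collect()
  }
}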
@Alec - Quick fix: move the line `val parser: String => Try[T] = makeParser()` to before `val rdd = ...` – Knight71
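A sketch of what Knight71's quick fix appears to suggest (my reading of the comment; same imports as in the question): build the parser on the driver before defining the RDD, so the closure captures the parser value itself, which is `Serializable`. The trade-off is that `makeParser()` now runs once on the driver rather than once per partition:

class Reader[T : ClassTag](makeParser: () => (String => Try[T])) {
  def read(): Seq[T] = {
    // Built once on the driver; the closure captures only this value,
    // which works as long as the parser (e.g. IntParser) is Serializable.
    val parser: String => Try[T] = makeParser()
    val rdd = sc.parallelize(Seq("1", "2", "oops", "4")) mapPartitions { lines =>
      lines flatMap { line => parser(line).toOption }
    }
    rdd.collect()
  }
}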