I think spark-csv would help here, but this is a pure-Scala approach.
When you say "empty space", I assume you literally mean there is some whitespace there, i.e. the line does not simply end with a comma.
case class Doctor(age: Int, part: String, day: String, value: Double)

val line = "9,elbow,Mon Aug 15 00:00:00 EDT 3399, "
val data = line.split(",").map(_.trim).map {
  case "" => "0.0"   // replace a blank field with a parseable default
  case x  => x
}
val doc = Doctor(data(0).toInt, data(1), data(2), data(3).toDouble)
Output:
data: Array[String] = Array(9, elbow, Mon Aug 15 00:00:00 EDT 3399, 0.0)
doc: Doctor = Doctor(9,elbow,Mon Aug 15 00:00:00 EDT 3399,0.0)
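One caveat worth flagging (my own note, not part of the original answer): if a line ends in a bare comma with no trailing whitespace, `String.split` drops the trailing empty field, so `data(3)` would throw. Passing a negative limit keeps trailing empty fields:

```scala
// A line that ends with a bare comma, no trailing space:
val bare = "9,elbow,Mon Aug 15 00:00:00 EDT 3399,"

// Default split discards trailing empty fields:
val dropped = bare.split(",")      // length 3, the empty field is gone

// A negative limit preserves them:
val kept = bare.split(",", -1)     // length 4, last element is ""
```

With `split(",", -1)` the `case "" => "0.0"` branch then handles both the bare-comma and trailing-whitespace cases uniformly.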
As for Spark... this produces an RDD[Doctor]:
case class Doctor(age: Int, part: String, day: String, value: Double)

sc.textFile(fileName).map { line =>
  val data = line.split(",").map(_.trim).map {
    case "" => "0.0"
    case x  => x
  }
  Doctor(data(0).toInt, data(1), data(2), data(3).toDouble)
}
Can you do this on an RDD? –
Sure. Something like `sc.textFile("file.txt").map { line => ... }`? –
That's what I've been trying, but I can't keep all the elements while dropping the empty strings, because the field is a Double and we can't call double.isEmpty() –
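On that last comment: rather than testing the Double for emptiness, one option is to parse defensively with `scala.util.Try` and fall back to a default. A minimal sketch (the helper name `toDoubleOrZero` is mine, not from the original answer):

```scala
import scala.util.Try

// Hypothetical helper: parse a field as Double, falling back to 0.0
// when the field is blank or not a valid number.
def toDoubleOrZero(s: String): Double =
  Try(s.trim.toDouble).getOrElse(0.0)

val line   = "9,elbow,Mon Aug 15 00:00:00 EDT 3399, "
val fields = line.split(",")

// The blank last field parses to the default instead of throwing:
val value = toDoubleOrZero(fields(3))   // 0.0
```

This sidesteps the string-matching step entirely, at the cost of silently coercing any malformed number to 0.0.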