SparkContext parallelize acts lazily per the Spark source comments - behavior unexplained

SparkContext.scala has:
/** Distribute a local Scala collection to form an RDD.
*
* @note Parallelize acts lazily. If `seq` is a mutable collection and is altered after the call
* to parallelize and before the first action on the RDD, the resultant RDD will reflect the
* modified collection. Pass a copy of the argument to avoid this.
* @note avoid using `parallelize(Seq())` to create an empty `RDD`. Consider `emptyRDD` for an
* RDD with no partitions, or `parallelize(Seq[T]())` for an RDD of `T` with empty partitions.
*/
So I thought I would run a simple test:
scala> var c = List("a0", "b0", "c0", "d0", "e0", "f0", "g0")
c: List[String] = List(a0, b0, c0, d0, e0, f0, g0)
scala> var crdd = sc.parallelize(c)
crdd: org.apache.spark.rdd.RDD[String] = ParallelCollectionRDD[0] at parallelize at <console>:26
scala> c = List("x1", "y1")
c: List[String] = List(x1, y1)
scala> crdd.foreach(println)
d0
a0
b0
e0
f0
g0
c0
scala>
Based on the lazy behavior of parallelize, I was expecting crdd.foreach(println) to output "x1" and "y1".
What am I doing wrong?
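Note that the doc comment only applies to *mutable* collections that are mutated in place. Scala's `List` is immutable, and reassigning the `var c` rebinds the variable to a brand-new list; the reference captured by `parallelize` still points at the original object. A minimal sketch of the distinction, in plain Scala without Spark (the captured `val`s below stand in for the reference that `parallelize` holds):

```scala
import scala.collection.mutable.ArrayBuffer

object ParallelizeNoteDemo extends App {
  // Case 1: rebinding a var does NOT affect a previously captured reference.
  var c = List("a0", "b0")
  val captured = c               // parallelize captures a reference like this
  c = List("x1", "y1")           // rebinds the var; `captured` is untouched
  assert(captured == List("a0", "b0"))

  // Case 2: the doc-comment note is about mutating a mutable collection in place.
  val buf = ArrayBuffer("a0", "b0")
  val captured2 = buf
  buf(0) = "x1"                  // in-place mutation is visible through the reference
  assert(captured2(0) == "x1")

  println("both assertions passed")
}
```

So with a mutable collection (e.g. an `ArrayBuffer`) mutated between `parallelize` and the first action, the RDD would reflect the change; with an immutable `List` and a `var` reassignment, it cannot.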
That explains it. Thanks. –