3

Parallelize / avoid a foreach loop in Spark

I wrote a class that takes a DataFrame, does some calculations on it, and can export the results. The DataFrames are generated from a list of keys. I know that I am doing this in a very inefficient way right now:

var l = List(34, 32, 132, 352)  // Scala List 

l.foreach{i => 
    val data:DataFrame = DataContainer.getDataFrame(i) // get DataFrame 
    val x = new MyClass(data)      // initialize MyClass with new Object 
    x.setSettings(...) 
    x.calcSomething() 
    x.saveResults()        // writes the Results into another Dataframe that is saved to HDFS 
} 

I don't think foreach on the Scala List runs in parallel, so how can I avoid using foreach here? The DataFrame calculations could run in parallel, since the result of one calculation is not an input for the next DataFrame. How can I achieve this?

Thank you very much!

Edit:

What I tried to do:

val l = List(34, 32, 132, 352)  // Scala List 
var l_DF:List[DataFrame] = List() 
l.foreach{ i => 
    DataContainer.getDataFrame(i)::l  //append DataFrame to List of Dataframes 
} 

val rdd:DataFrame = sc.parallelize(l) 
rdd.foreach(data => 
    val x = new MyClass(data) 
) 

but this gives

Invalid tree; null: 
null 
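
(For reference: the line DataContainer.getDataFrame(i)::l above prepends to the immutable list and discards the result, so l_DF stays empty, and a DataFrame is a driver-side handle that cannot be distributed as an RDD element via sc.parallelize. A minimal sketch of just the corrected list-building step, under those assumptions:)

val l = List(34, 32, 132, 352)
// build the list with map instead of a side-effecting foreach;
// the DataFrames themselves stay on the driver
val l_DF: List[DataFrame] = l.map(i => DataContainer.getDataFrame(i))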

Edit 2: OK, I don't understand how everything works under the hood. ...

1) Everything works fine when I execute this in the spark-shell:

spark-shell --driver-memory 10g 
//... 
var l = List(34, 32, 132, 352)  // Scala List 

l.foreach{i => 
    val data:DataFrame = AllData.where($"a" === i) // get DataFrame 
    val x = new MyClass(data)      // initialize MyClass  with new Object 
    x.calcSomething() 
} 

2) Error when I start the same thing with:

spark-shell --master yarn-client --num-executors 10 --driver-memory 10g 
// same code as above 
java.util.concurrent.RejectedExecutionException: Task [email protected] rejected from [email protected][Terminated, pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 1263] 
    at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2047) 
    at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:823) 
    at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1369) 
    at scala.concurrent.impl.ExecutionContextImpl$$anon$1.execute(ExecutionContextImpl.scala:133) 
    at scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:40) 
    at scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:248) 
    at scala.concurrent.Promise$class.complete(Promise.scala:55) 
    at scala.concurrent.impl.Promise$DefaultPromise.complete(Promise.scala:153) 
    at scala.concurrent.Future$$anonfun$recover$1.apply(Future.scala:324) 
    at scala.concurrent.Future$$anonfun$recover$1.apply(Future.scala:324) 
    at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:32) 
    at org.spark-project.guava.util.concurrent.MoreExecutors$SameThreadExecutorService.execute(MoreExecutors.java:293) 
    at scala.concurrent.impl.ExecutionContextImpl$$anon$1.execute(ExecutionContextImpl.scala:133) 
    at scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:40) 
    at scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:248) 
    at scala.concurrent.Promise$class.complete(Promise.scala:55) 
    at scala.concurrent.impl.Promise$DefaultPromise.complete(Promise.scala:153) 
    at scala.concurrent.Future$$anonfun$map$1.apply(Future.scala:235) 
    at scala.concurrent.Future$$anonfun$map$1.apply(Future.scala:235) 
    at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:32) 

3) When I try to parallelize it, I get an error, too:

spark-shell --master yarn-client --num-executors 10 --driver-memory 10g 
//... 
var l = List(34, 32, 132, 352).par 
// same code as above, just parallelized before calling foreach 
// I can see the parallel execution from the console messages (my class prints some, and they now appear in parallel instead of sequentially) 

scala.collection.parallel.CompositeThrowable: Multiple exceptions thrown during a parallel computation: java.lang.IllegalStateException: SparkContext has been shutdown 
org.apache.spark.SparkContext.runJob(SparkContext.scala:1816) 
org.apache.spark.SparkContext.runJob(SparkContext.scala:1837) 
org.apache.spark.SparkContext.runJob(SparkContext.scala:1850) 
org.apache.spark.sql.execution.SparkPlan.executeTake(SparkPlan.scala:215) 
    org.apache.spark.sql.execution.Limit.executeCollect(basicOperators.scala:207) 
org.apache.spark.sql.DataFrame$$anonfun$collect$1.apply(DataFrame.scala:1385) 
org.apache.spark.sql.DataFrame$$anonfun$collect$1.apply(DataFrame.scala:1385) 
org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:56) 
org.apache.spark.sql.DataFrame.withNewExecutionId(DataFrame.scala:1903) 
org.apache.spark.sql.DataFrame.collect(DataFrame.scala:1384) 
. 
. 
. 

java.lang.IllegalStateException: Cannot call methods on a stopped SparkContext 
    org.apache.spark.SparkContext.org$apache$spark$SparkContext$$assertNotStopped(SparkContext.scala:104) 
org.apache.spark.SparkContext.broadcast(SparkContext.scala:1320) 
    org.apache.spark.sql.execution.datasources.DataSourceStrategy$.apply(DataSourceStrategy.scala:104) 
org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$1.apply(QueryPlanner.scala:58) 
org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$1.apply(QueryPlanner.scala:58) 
scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) 
org.apache.spark.sql.catalyst.planning.QueryPlanner.plan(QueryPlanner.scala:59) 
org.apache.spark.sql.catalyst.planning.QueryPlanner.planLater(QueryPlanner.scala:54) 
org.apache.spark.sql.execution.SparkStrategies$EquiJoinSelection$.makeBroadcastHashJoin(SparkStrategies.scala:92) 
org.apache.spark.sql.execution.SparkStrategies$EquiJoinSelection$.apply(SparkStrategies.scala:104) 

There are actually more than 10 executors, but on 4 nodes. I never configure the spark-context myself; it is already provided when the shell starts.

+0

Please provide the full stack trace of the error. Also, the line 'DataContainer.getDataFrame(i)::l' does not look right. –

Answer

2

You can use Scala's parallel collections to achieve foreach parallelism on the driver side.

val l = List(34, 32, 132, 352).par 
l.foreach { i => 
  // your code to be run in parallel for each i 
} 

However, a word of caution: is your cluster capable of running jobs in parallel? You can submit the jobs to your Spark cluster in parallel, but they may end up queued on the cluster and executed sequentially.
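
If you also want to cap the driver-side parallelism so you do not flood the scheduler, a minimal sketch using the parallel collection's task support (the pool size of 4 is just an example):

import scala.collection.parallel.ForkJoinTaskSupport 
import scala.concurrent.forkjoin.ForkJoinPool 

val l = List(34, 32, 132, 352).par 
// limit how many jobs are submitted concurrently from the driver 
l.tasksupport = new ForkJoinTaskSupport(new ForkJoinPool(4)) 
l.foreach { i => 
  // your code to be run in parallel for each i 
} 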

+0

Thanks! The cluster I'm using has several executors. Is this the most efficient way? And what about my solution (see the edit)? – johntechendso

+1

Please see this from the Spark documentation - http://spark.apache.org/docs/latest/job-scheduling.html#scheduling-within-an-application Here is the relevant quote: "By default, Spark's scheduler runs jobs in FIFO fashion. [...] Starting in Spark 0.8, it is also possible to configure fair sharing between jobs. Under fair sharing, Spark assigns tasks between jobs in a 'round robin' fashion, so that all jobs get a roughly equal share of cluster resources. To enable the fair scheduler, simply set the spark.scheduler.mode property to FAIR when configuring a SparkContext." –

+0

Are you using a Spark standalone cluster or YARN? –

0

You can use Scala's Futures together with Spark's fair scheduling, e.g.:

import scala.concurrent._ 
import scala.concurrent.duration._ 
import ExecutionContext.Implicits.global 
import org.apache.spark.sql.DataFrame 

object YourApp extends App { 
    val sc = ... // SparkContext, be sure to set spark.scheduler.mode=FAIR 

    // this gives every job its own pool; a thread-safe counter, since the 
    // Futures run concurrently (you can wrap it to limit the no. of pools) 
    val pool = new java.util.concurrent.atomic.AtomicInteger(0) 
    def poolId = pool.incrementAndGet() 

    def runner(i: Int) = Future { 
        // setLocalProperty expects a String value 
        sc.setLocalProperty("spark.scheduler.pool", poolId.toString) 
        val data: DataFrame = DataContainer.getDataFrame(i) // get DataFrame 
        val x = new MyClass(data)       // initialize MyClass with new Object 
        x.setSettings(...) 
        x.calcSomething() 
        x.saveResults() 
    } 

    val l = List(34, 32, 132, 352)  // Scala List 
    val futures = l.map(i => runner(i)) 

    // now you need to wait for all your futures to complete 
    futures.foreach(f => Await.ready(f, Duration.Inf)) 
} 
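
The "val sc = ..." line above glosses over the FAIR setting; a minimal sketch of constructing such a SparkContext (the app name is illustrative):

import org.apache.spark.{SparkConf, SparkContext} 

val conf = new SparkConf() 
    .setAppName("YourApp")                 // illustrative name 
    .set("spark.scheduler.mode", "FAIR")   // enable the fair scheduler 
val sc = new SparkContext(conf) 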

With the fair scheduler and a different pool per job, each concurrent job will get a fair share of the Spark cluster's resources.

Some references on Scala Futures here. You may also need to add the necessary callbacks on completion, success, and/or failure.
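
For instance, a minimal callback sketch, reusing runner from above and the implicit global execution context:

import scala.util.{Success, Failure} 

runner(34).onComplete { 
    case Success(_) => println("job for key 34 finished") 
    case Failure(e) => println(s"job for key 34 failed: ${e.getMessage}") 
} 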