
I want to use my own custom SerDe in HiveQL on Spark (it works fine with plain Hive). I followed these instructions: https://cwiki.apache.org/confluence/display/Hive/Hive+on+Spark%3A+Getting+Started (Hive on Spark > YARN mode > Spark configuration), but what value should I give spark.master?

But I am very confused by this part: "Start Spark cluster (both standalone and Spark on YARN are supported)." My understanding is that we only need to start a Spark cluster if Spark runs in standalone mode. Since I intend to run Spark on YARN, do I still need to start a Spark cluster? What I actually did was just start Hadoop YARN, and because I really didn't know what to set the property spark.master to, I simply did not set it.
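For reference, here is roughly all I started, a minimal sketch assuming the standard Hadoop sbin scripts are on the PATH (no Spark standalone daemons at all):

    # start HDFS and YARN only; in Spark-on-YARN mode there is no Spark
    # standalone master/worker to launch -- YARN hosts the Spark executors
    start-dfs.sh
    start-yarn.sh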

The thing is: possibly because of this setting, I get the following error message when I run a Hive query that uses my own SerDe:

2015-10-05 20:42:07,184 INFO [main]: status.SparkJobMonitor (RemoteSparkJobMonitor.java:startMonitor(67)) - Job hasn't been submitted after 61s. Aborting it.

2015-10-05 20:42:07,184 ERROR [main]: status.SparkJobMonitor (SessionState.java:printError(960)) - Status: SENT 
2015-10-05 20:42:07,184 INFO [main]: log.PerfLogger (PerfLogger.java:PerfLogEnd(148)) - </PERFLOG method=SparkRunJob start=1444066866174 end=1444066927184 duration=61010 from=org.apache.hadoop.hive.ql.exec.spark.status.SparkJobMonitor> 
2015-10-05 20:42:07,300 ERROR [main]: ql.Driver (SessionState.java:printError(960)) - FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.spark.SparkTask 
2015-10-05 20:42:07,300 INFO [main]: log.PerfLogger (PerfLogger.java:PerfLogEnd(148)) - </PERFLOG method=Driver.execute start=1444066848958 end=1444066927300 duration=78342 from=org.apache.hadoop.hive.ql.Driver> 

...

At the end there is also the following difference:

2015-10-05 20:42:16,658 INFO [stderr-redir-1]: client.SparkClientImpl (SparkClientImpl.java:run(569)) - 15/10/05 20:42:16 INFO yarn.Client: Application report for application_1444066615793_0001 (state: ACCEPTED) 
2015-10-05 20:42:17,337 WARN [main]: client.SparkClientImpl (SparkClientImpl.java:stop(154)) - Timed out shutting down remote driver, interrupting... 
2015-10-05 20:42:17,337 WARN [Driver]: client.SparkClientImpl (SparkClientImpl.java:run(430)) - Waiting thread interrupted, killing child process. 
2015-10-05 20:42:17,345 WARN [stderr-redir-1]: client.SparkClientImpl (SparkClientImpl.java:run(572)) - Error in redirector thread. 
java.io.IOException: Stream closed 
    at java.io.BufferedInputStream.getBufIfOpen(BufferedInputStream.java:162) 
    at java.io.BufferedInputStream.read1(BufferedInputStream.java:272) 
    at java.io.BufferedInputStream.read(BufferedInputStream.java:334) 
    at sun.nio.cs.StreamDecoder.readBytes(StreamDecoder.java:283) 
    at sun.nio.cs.StreamDecoder.implRead(StreamDecoder.java:325) 
    at sun.nio.cs.StreamDecoder.read(StreamDecoder.java:177) 
    at java.io.InputStreamReader.read(InputStreamReader.java:184) 
    at java.io.BufferedReader.fill(BufferedReader.java:154) 
    at java.io.BufferedReader.readLine(BufferedReader.java:317) 
    at java.io.BufferedReader.readLine(BufferedReader.java:382) 
    at org.apache.hive.spark.client.SparkClientImpl$Redirector.run(SparkClientImpl.java:568) 
    at java.lang.Thread.run(Thread.java:745) 

2015-10-05 20:42:17,371 INFO [Thread-15]: session.SparkSessionManagerImpl (SparkSessionManagerImpl.java:shutdown(146)) - Closing the session manager.

I sincerely hope someone can offer some advice. Many thanks in advance.

Answers


Please try set spark.master=yarn-client;
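A minimal sketch of applying this for a single session from the shell, assuming the Hive CLI and an already-configured Spark engine; the table name and query are hypothetical placeholders:

    # set the engine and master for this session only, then run the query
    hive -e "
    set hive.execution.engine=spark;
    set spark.master=yarn-client;
    SELECT * FROM my_table LIMIT 10;
    "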


Per the official Spark on YARN documentation, the master will basically be one of the following (a quick spark-submit illustration follows the list):

  • yarn-cluster: if you want to submit the job so that it is launched from within the cluster, or
  • yarn-client: if you want the SparkContext to be instantiated locally
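To illustrate the distinction with plain spark-submit (a sketch only; the class and jar names are hypothetical, and yarn-cluster/yarn-client are the Spark 1.x master strings):

    # driver runs inside the YARN cluster
    spark-submit --master yarn-cluster --class com.example.App app.jar
    # driver, and therefore the SparkContext, runs on the local machine
    spark-submit --master yarn-client --class com.example.App app.jar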

Don't forget to have the configuration files (core-site.xml, hdfs-site.xml, yarn-site.xml, mapred-site.xml, hive-site.xml, etc.) available through HADOOP_CONF_DIR and YARN_CONF_DIR. You can set these variables in <spark_home>/conf/spark-env.sh.
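For example, a minimal <spark_home>/conf/spark-env.sh could look like the following, assuming the Hadoop client configuration lives under /etc/hadoop/conf (adjust the path to your installation):

    # point Spark at the Hadoop/YARN client configuration files
    export HADOOP_CONF_DIR=/etc/hadoop/conf
    export YARN_CONF_DIR=/etc/hadoop/conf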