
Spark HiveContext - delimiter problem reading an external partitioned Hive table. I have an external partitioned Hive table whose underlying files use ROW FORMAT DELIMITED FIELDS TERMINATED BY '|'. Reading the data directly through Hive works fine, but when the table is read through Spark's DataFrame API, the delimiter '|' is not taken into account.

Create the external partitioned table:

hive> create external table external_delimited_table(value1 string, value2 string) 
partitioned by (year string, month string, day string) ROW FORMAT DELIMITED FIELDS TERMINATED BY '|' 
location '/client/edb/poc_database/external_delimited_table'; 

Create a data file containing just one row and place it in the external partitioned table's location:

shell>echo "one|two" >> table_data.csv 
shell>hadoop fs -mkdir -p /client/edb/poc_database/external_delimited_table/year=2016/month=08/day=20 
shell>hadoop fs -copyFromLocal table_data.csv /client/edb/poc_database/external_delimited_table/year=2016/month=08/day=20 

Add the partition to make it active:

hive> alter table external_delimited_table add partition (year='2016',month='08',day='20'); 
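
If a Spark application holding a HiveContext is already running when the partition is added, its cached table metadata can go stale. A minimal sketch, assuming the table above, of forcing a refresh from the Spark side (HiveContext.refreshTable has existed since Spark 1.3):

import org.apache.spark.sql.hive.HiveContext 
import org.apache.spark.{SparkContext, SparkConf} 

// Sketch only: invalidate Spark's cached metadata so the partition 
// added through Hive becomes visible to this running application. 
val spark = new SparkContext(new SparkConf().setAppName("Refresh Metadata")) 
val hiveContext = new HiveContext(spark) 
hiveContext.refreshTable("external_delimited_table") 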

Sanity check:

hive> select * from external_delimited_table; 
+----------------------------------+----------------------------------+--------------------------------+---------------------------------+-------------------------------+--+ 
| external_delimited_table.value1  | external_delimited_table.value2  | external_delimited_table.year  | external_delimited_table.month  | external_delimited_table.day  | 
+----------------------------------+----------------------------------+--------------------------------+---------------------------------+-------------------------------+--+ 
| one                              | two                              | 2016                           | 08                              | 20                            | 
+----------------------------------+----------------------------------+--------------------------------+---------------------------------+-------------------------------+--+ 

Spark code:

import org.apache.spark.sql.DataFrame 
import org.apache.spark.sql.hive.HiveContext 
import org.apache.spark.{SparkContext, SparkConf} 
object TestHiveContext { 

  def main(args: Array[String]): Unit = { 

    val conf = new SparkConf().setAppName("Test Hive Context") 

    val spark = new SparkContext(conf) 
    val hiveContext = new HiveContext(spark) 

    val dataFrame: DataFrame = hiveContext.sql("SELECT * FROM external_delimited_table") 
    dataFrame.show() 

    spark.stop() 
  } 
} 

dataFrame.show() output:

+-------+------+----+-----+---+ 
| value1|value2|year|month|day| 
+-------+------+----+-----+---+ 
|one|two| null|2016| 08| 20| 
+-------+------+----+-----+---+ 
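
The whole line "one|two" ends up in value1 and value2 comes back null, while the partition columns are fine: the schema is resolved correctly, but the '|' field delimiter is ignored when the data files are read. A quick check, using the same dataFrame as above, that makes this visible:

// Diagnostic sketch: the schema resolves to five columns, but the first 
// row carries the unsplit line, showing the SerDe delimiter was not applied. 
dataFrame.printSchema() 
dataFrame.head() // on 1.5.0: Row("one|two", null, "2016", "08", "20") instead of Row("one", "two", ...) 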

Answer


This turned out to be a problem with Spark version 1.5.0. The problem does not occur in version 1.6.0:

scala> sqlContext.sql("select * from external_delimited_table") 
res2: org.apache.spark.sql.DataFrame = [value1: string, value2: string, year: string, month: string, day: string] 

scala> res2.show 
+------+------+----+-----+---+ 
|value1|value2|year|month|day| 
+------+------+----+-----+---+ 
| one| two|2016| 08| 20| 
+------+------+----+-----+---+
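
For anyone stuck on 1.5.x, one possible workaround is to bypass the Hive SerDe and split the raw partition files by hand. A minimal sketch, assuming the partition path from the question and the spark (SparkContext) and hiveContext values from the question's code; note the partition columns are lost this way and would have to be re-attached, e.g. as literal columns:

// Workaround sketch for Spark 1.5.x: read the raw files and apply 
// the '|' delimiter manually instead of relying on the Hive SerDe. 
val path = "/client/edb/poc_database/external_delimited_table/year=2016/month=08/day=20" 
val rows = spark.textFile(path).map(_.split('|')).map(a => (a(0), a(1))) 
val df = hiveContext.createDataFrame(rows).toDF("value1", "value2") 
df.show() // | one| two| 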