2017-07-14 185 views

I am using Apache Spark 2.1.1 and want to set it up with an external Hive metastore (specifically for the Spark Thrift Server). How do I use a Hive metastore backed by MySQL (for the Thrift Server or spark-shell)?

I have added the following hive-site.xml to the $SPARK_HOME/conf folder:

<?xml version="1.0"?> 
<configuration> 
    <property> 
    <name>javax.jdo.option.ConnectionURL</name> 
    <value>jdbc:mysql://home.cu:3306/hive_metastore?createDatabaseIfNotExist=true&amp;useLegacyDatetimeCode=false&amp;serverTimezone=Europe/Berlin&amp;nullNamePatternMatchesAll=true</value> 
    <description>JDBC connect string for a JDBC metastore</description> 
    </property> 

    <property> 
    <name>javax.jdo.option.ConnectionDriverName</name> 
    <value>com.mysql.jdbc.Driver</value> 
    <description>Driver class name for a JDBC metastore</description> 
    </property> 

    <property> 
    <name>javax.jdo.option.ConnectionUserName</name> 
    <value>hive</value> 
    <description>username to use against metastore database</description> 
    </property> 

    <property> 
    <name>javax.jdo.option.ConnectionPassword</name> 
    <value>hive</value> 
    <description>password to use against metastore database</description> 
    </property> 
    <property> 
    <name>hive.metastore.schema.verification</name> 
    <value>false</value> 
    <description>disable metastore schema version verification</description> 
    </property> 

    <property> 
    <name>hive.metastore.warehouse.dir</name> 
    <value>hdfs://spark-master.cu:9000/value_iq/hive_warehouse/</value> 
    <description>Warehouse Location</description> 
    </property> 
</configuration> 
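The failure below happens while DataNucleus tries to auto-create and migrate the metastore tables. A common way to avoid that entirely is to initialize the schema up front with Hive's schematool, so Spark only ever connects to an already-populated metastore. This is a sketch, assuming a separate Hive installation whose conf/ directory contains the same hive-site.xml (Spark 2.1 uses a Hive 1.2.x client, so the Hive version should match):

```shell
# Sketch: create the metastore schema in MySQL before starting Spark,
# so DataNucleus never has to auto-create tables at runtime.
# Assumes $HIVE_HOME points at a Hive install configured with the same
# hive-site.xml (JDBC URL, user, password) as above.
$HIVE_HOME/bin/schematool -dbType mysql -initSchema

# Verify which schema version was written:
$HIVE_HOME/bin/schematool -dbType mysql -info
```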

Whenever I try to run spark-shell or the Spark Thrift Server, it tries to create the Hive metastore in MySQL (since no metastore exists yet) and fails with the following error:

17/07/13 19:57:55 ERROR Datastore: Error thrown executing ALTER TABLE `PARTITIONS` ADD COLUMN `TBL_ID` BIGINT NULL : Table 'hive_metastore.partitions' doesn't exist 
java.sql.SQLSyntaxErrorException: Table 'hive_metastore.partitions' doesn't exist 
     at com.mysql.cj.jdbc.exceptions.SQLError.createSQLException(SQLError.java:536) 
     at com.mysql.cj.jdbc.exceptions.SQLError.createSQLException(SQLError.java:513) 
     at com.mysql.cj.jdbc.exceptions.SQLExceptionsMapping.translateException(SQLExceptionsMapping.java:115) 
     at com.mysql.cj.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:1983) 
     at com.mysql.cj.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:1936) 
     at com.mysql.cj.jdbc.StatementImpl.executeInternal(StatementImpl.java:891) 
     at com.mysql.cj.jdbc.StatementImpl.execute(StatementImpl.java:795) 
     at com.jolbox.bonecp.StatementHandle.execute(StatementHandle.java:254) 
     at org.datanucleus.store.rdbms.table.AbstractTable.executeDdlStatement(AbstractTable.java:760) 
     at org.datanucleus.store.rdbms.table.AbstractTable.executeDdlStatementList(AbstractTable.java:711) 
     at org.datanucleus.store.rdbms.table.TableImpl.validateColumns(TableImpl.java:259) 
     at org.datanucleus.store.rdbms.RDBMSStoreManager$ClassAdder.performTablesValidation(RDBMSStoreManager.java:3393) 
     at org.datanucleus.store.rdbms.RDBMSStoreManager$ClassAdder.addClassTablesAndValidate(RDBMSStoreManager.java:3190) 
     at org.datanucleus.store.rdbms.RDBMSStoreManager$ClassAdder.run(RDBMSStoreManager.java:2841) 
     at org.datanucleus.store.rdbms.AbstractSchemaTransaction.execute(AbstractSchemaTransaction.java:122) 
     at org.datanucleus.store.rdbms.RDBMSStoreManager.addClasses(RDBMSStoreManager.java:1605) 
     at org.datanucleus.store.AbstractStoreManager.addClass(AbstractStoreManager.java:954) 
     at org.datanucleus.store.rdbms.RDBMSStoreManager.getDatastoreClass(RDBMSStoreManager.java:679) 
     at org.datanucleus.store.rdbms.query.RDBMSQueryUtils.getStatementForCandidates(RDBMSQueryUtils.java:408) 
     at org.datanucleus.store.rdbms.query.JDOQLQuery.compileQueryFull(JDOQLQuery.java:947) 
     at org.datanucleus.store.rdbms.query.JDOQLQuery.compileInternal(JDOQLQuery.java:370) 
     at org.datanucleus.store.query.Query.executeQuery(Query.java:1744) 
     at org.datanucleus.store.query.Query.executeWithArray(Query.java:1672) 
     at org.datanucleus.store.query.Query.execute(Query.java:1654) 
     at org.datanucleus.api.jdo.JDOQuery.execute(JDOQuery.java:221) 

Answers


I have found the problem: it was related to the MySQL driver. I was using mysql-connector-java-6.0.6-bin.jar; I replaced it with the older mysql-connector-java-5.1.23-bin.jar and now it works.
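In practice the swap amounts to putting the 5.x Connector/J jar where Spark loads its jars and removing the 6.x one. The paths here are assumptions; adjust them to where the jar actually lives on your system:

```shell
# Sketch: replace the Connector/J 6.x jar with a 5.x jar in Spark's jar dir.
rm -f $SPARK_HOME/jars/mysql-connector-java-6.0.6-bin.jar
cp mysql-connector-java-5.1.23-bin.jar $SPARK_HOME/jars/

# Note the driver class name differs between the two major versions;
# hive-site.xml above uses the 5.x name:
#   com.mysql.jdbc.Driver      (Connector/J 5.x)
#   com.mysql.cj.jdbc.Driver   (Connector/J 6.x and later)
$SPARK_HOME/sbin/start-thriftserver.sh
```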


I don't think your warehouse directory property is configured correctly; it should be a path on HDFS:
<configuration> 
<property> 
    <name>hive.metastore.uris</name> 
    <value>thrift://maprdemo:9083</value> 
</property> 
<property> 
    <name>hive.metastore.warehouse.dir</name> 
    <value>/user/hive/warehouse</value> 
</property> 
</configuration> 
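Note that `hive.metastore.uris` in this configuration points Spark at a standalone metastore service over Thrift instead of a direct JDBC connection; such a service would have to be running separately. A minimal sketch, assuming a Hive installation configured with the MySQL connection settings:

```shell
# Sketch: run a standalone Hive metastore service that Spark reaches via
# hive.metastore.uris (thrift://<host>:9083). Assumes $HIVE_HOME is a Hive
# install whose hive-site.xml carries the MySQL JDBC settings.
$HIVE_HOME/bin/hive --service metastore -p 9083 &
```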


But it is a path on HDFS: `hive.metastore.warehouse.dir` is `hdfs://spark-master.cu:9000/value_iq/hive_warehouse/` (the warehouse location) – Jose


Make sure it is accessible; if you have Hue, navigate to it there. If it is correct, then a `hadoop fs -ls` should show the warehouse directory, but I would leave it like hdfs://value_iq/hive_warehouse/ (dropping the address and port) – dumitru


Yes, it is accessible, but what does that have to do with the fact that the metastore creation script (or whatever they use) fails because it tries to alter the PARTITIONS table, which does not exist: `ERROR Datastore: Error thrown executing ALTER TABLE PARTITIONS ADD COLUMN TBL_ID BIGINT NULL : Table 'hive_metastore.partitions' doesn't exist java.sql.SQLSyntaxErrorException: Table 'hive_metastore.partitions' doesn't exist` – Jose
