
Hive ClassNotFoundException even though all the jars are added to the Maven repository

I have added all the jars this project needs, but I still cannot get past this exception. Can anyone give me some advice on it? Could you also tell me how to grant access rights to the Hive database? Thanks in advance.

java.lang.ClassNotFoundException: org.apache.hadoop.hive.jdbc.HiveDriver 
    at java.net.URLClassLoader$1.run(URLClassLoader.java:366) 
    at java.net.URLClassLoader$1.run(URLClassLoader.java:355) 
    at java.security.AccessController.doPrivileged(Native Method) 
    at java.net.URLClassLoader.findClass(URLClassLoader.java:354) 
    at java.lang.ClassLoader.loadClass(ClassLoader.java:425) 
    at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308) 
    at java.lang.ClassLoader.loadClass(ClassLoader.java:358) 
    at java.lang.Class.forName0(Native Method) 
    at java.lang.Class.forName(Class.java:190) 
    at org.ezytruk.com.CreateHiveExternalTable.createHiveExternalTable(CreateHiveExternalTable.java:20) 
    at org.ezytruk.com.CreateHiveExternalTable.main(CreateHiveExternalTable.java:53) 
Exception in thread "main" java.sql.SQLException: No suitable driver found for jdbc:hive://localhost/EZYTRUK 
    at java.sql.DriverManager.getConnection(DriverManager.java:596) 
    at java.sql.DriverManager.getConnection(DriverManager.java:215) 
    at org.ezytruk.com.CreateHiveExternalTable.createHiveExternalTable(CreateHiveExternalTable.java:39) 
    at org.ezytruk.com.CreateHiveExternalTable.main(CreateHiveExternalTable.java:53) 

pom.xml:

<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <groupId>BigData</groupId>
    <artifactId>BigData</artifactId>
    <version>0.0.1-SNAPSHOT</version>
    <packaging>war</packaging>

    <properties>
        <slf4j.version>1.6.1</slf4j.version>
        <hadoop-version>2.6.0</hadoop-version>
        <mysql-connector-version>5.1.40</mysql-connector-version>
        <sqoop-core-version>1.99.3</sqoop-core-version>
        <zookeeper-version>3.4.9</zookeeper-version>
        <hive-jdbc-version>1.2.1</hive-jdbc-version>
        <commons-io-version>2.2</commons-io-version>
        <commons-logging.version>1.2</commons-logging.version>
    </properties>

    <dependencies>
        <dependency>
            <groupId>commons-io</groupId>
            <artifactId>commons-io</artifactId>
            <version>${commons-io-version}</version>
        </dependency>
        <dependency>
            <groupId>commons-logging</groupId>
            <artifactId>commons-logging</artifactId>
            <version>${commons-logging.version}</version>
        </dependency>
        <dependency>
            <groupId>mysql</groupId>
            <artifactId>mysql-connector-java</artifactId>
            <version>${mysql-connector-version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-common</artifactId>
            <version>${hadoop-version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-client</artifactId>
            <version>${hadoop-version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-hdfs</artifactId>
            <version>${hadoop-version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-mapreduce-client-core</artifactId>
            <version>${hadoop-version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-yarn-common</artifactId>
            <version>${hadoop-version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-core</artifactId>
            <version>1.2.1</version>
        </dependency>
        <dependency>
            <groupId>org.apache.sqoop</groupId>
            <artifactId>sqoop-core</artifactId>
            <version>${sqoop-core-version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.sqoop</groupId>
            <artifactId>sqoop-client</artifactId>
            <version>${sqoop-core-version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.sqoop</groupId>
            <artifactId>sqoop-common</artifactId>
            <version>${sqoop-core-version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.sqoop.connector</groupId>
            <artifactId>sqoop-connector-generic-jdbc</artifactId>
            <version>${sqoop-core-version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.sqoop</groupId>
            <artifactId>sqoop</artifactId>
            <version>1.4.1-incubating</version>
        </dependency>
        <dependency>
            <groupId>org.apache.zookeeper</groupId>
            <artifactId>zookeeper</artifactId>
            <version>${zookeeper-version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.hive</groupId>
            <artifactId>hive-jdbc</artifactId>
            <version>${hive-jdbc-version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.hive</groupId>
            <artifactId>hive-exec</artifactId>
            <version>${hive-jdbc-version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.hive</groupId>
            <artifactId>hive-metastore</artifactId>
            <version>${hive-jdbc-version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.hive</groupId>
            <artifactId>hive-common</artifactId>
            <version>${hive-jdbc-version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.hive</groupId>
            <artifactId>hive-service</artifactId>
            <version>${hive-jdbc-version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.hive</groupId>
            <artifactId>hive-shims</artifactId>
            <version>${hive-jdbc-version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.hive</groupId>
            <artifactId>hive-serde</artifactId>
            <version>${hive-jdbc-version}</version>
        </dependency>
    </dependencies>

    <build>
        <sourceDirectory>src</sourceDirectory>
        <plugins>
            <plugin>
                <artifactId>maven-compiler-plugin</artifactId>
                <version>3.3</version>
                <configuration>
                    <source>1.7</source>
                    <target>1.7</target>
                </configuration>
            </plugin>
            <plugin>
                <artifactId>maven-war-plugin</artifactId>
                <version>2.6</version>
                <configuration>
                    <warSourceDirectory>WebContent</warSourceDirectory>
                </configuration>
            </plugin>
        </plugins>
    </build>
</project>

Program:

package org.hive.com;

import java.io.FileNotFoundException;
import java.io.IOException;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.sql.Statement;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;

public class CreateHiveExternalTable {

    public static String driverName = "org.apache.hadoop.hive.jdbc.HiveDriver";

    public static void createHiveExternalTable() throws FileNotFoundException, IOException, SQLException {
        try {
            Class.forName(driverName);
        } catch (ClassNotFoundException e) {
            e.printStackTrace();
        }

        // Load the cluster configuration from the Hadoop config files.
        Configuration config = new Configuration();
        config.addResource(new Path("/usr/local/hadoop/etc/hadoop/conf/core-site.xml"));
        config.addResource(new Path("/usr/local/hadoop/etc/hadoop/conf/hdfs-site.xml"));

        Connection connect = DriverManager.getConnection("jdbc:hive://localhost/hivedb", "hive", "");
        Statement stmt = connect.createStatement();
        //String tableName = properties.getProperty("hive_table_name");
        stmt.execute("CREATE EXTERNAL TABLE IF NOT EXISTS "
                + "SHIPPER(S_ID INT, S_NAME VARCHAR(100), S_ADDR VARCHAR(100), S_CITY VARCHAR(100)) "
                + "ROW FORMAT DELIMITED FIELDS TERMINATED BY ',' "
                + "LOCATION 'hdfs://localhost://hive'");

        System.out.println("Table created.");
        connect.close();
    }

    public static void main(String[] args) throws FileNotFoundException, IOException, SQLException {
        createHiveExternalTable();
    }
}

Any code sample that you are using? – Yeikel


public class CreateHiveExternalTable { public static String driverName = "org.apache.hadoop.hive.jdbc.HiveDriver"; public static void createHiveExternalTable() throws FileNotFoundException, IOException, SQLException { try { Class.forName(driverName); } catch (ClassNotFoundException e) { // TODO Auto-generated catch block e.printStackTrace(); } } – Yasodhara


You can see the code I added above. – Yasodhara

Answers


hive.server2.thrift.port is the property in which you can check the port.

In the Hive shell, issue the command "set hive.server2.thrift.port" and it will show you the port Hive is listening on.

The default Hive port number is 10000, but you can double-check it in the Hive shell using the command above.
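If you prefer to verify it from Java, the same property can be read over JDBC once a connection succeeds. A minimal sketch, assuming HiveServer2 is already reachable on the default port 10000 with user "hive" and an empty password (the class name and connection details are placeholders to adapt):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class CheckHivePort {
    public static void main(String[] args) throws Exception {
        // HiveServer2 driver from the org.apache.hive:hive-jdbc artifact.
        Class.forName("org.apache.hive.jdbc.HiveDriver");
        try (Connection conn = DriverManager.getConnection(
                     "jdbc:hive2://localhost:10000/default", "hive", "");
             Statement stmt = conn.createStatement();
             // Executing "set <property>" over JDBC returns a single "key=value" row.
             ResultSet rs = stmt.executeQuery("set hive.server2.thrift.port")) {
            while (rs.next()) {
                System.out.println(rs.getString(1)); // e.g. hive.server2.thrift.port=10000
            }
        }
    }
}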


Thank you so much – Yasodhara


From this post: Connect from Java to Hive using JDBC

Try

private static String driverName = "org.apache.hive.jdbc.HiveDriver";

instead of

private static String driverName = "org.apache.hadoop.hive.jdbc.HiveDriver"; 

I hope you have already added the Class.forName(driverName) statement in your code.

Also, use

Connection connect = DriverManager.getConnection("jdbc:hive2://localhost:HIVEPORT/hivedb","hive",""); 

instead of

Connection connect = DriverManager.getConnection("jdbc:hive://localhost/hivedb","hive",""); 

I am not sure which port your Hive is running on, but remember to change this part:

localhost:HIVEPORT 
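Putting both changes together, here is a minimal sketch of the corrected connection and table-creation flow, assuming HiveServer2 listens on the default port 10000 (the class name, port, database, credentials, and HDFS location are placeholders to adapt):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class HiveConnectSketch {
    public static void main(String[] args) throws Exception {
        // HiveServer2 driver class, shipped in the org.apache.hive:hive-jdbc artifact.
        Class.forName("org.apache.hive.jdbc.HiveDriver");
        // jdbc:hive2:// targets HiveServer2; the old jdbc:hive:// scheme only worked with HiveServer1.
        Connection connect = DriverManager.getConnection(
                "jdbc:hive2://localhost:10000/hivedb", "hive", "");
        Statement stmt = connect.createStatement();
        stmt.execute("CREATE EXTERNAL TABLE IF NOT EXISTS "
                + "SHIPPER(S_ID INT, S_NAME VARCHAR(100), S_ADDR VARCHAR(100), S_CITY VARCHAR(100)) "
                + "ROW FORMAT DELIMITED FIELDS TERMINATED BY ',' "
                + "LOCATION '/hive'"); // assumed HDFS directory for the external table
        System.out.println("Table created.");
        stmt.close();
        connect.close();
    }
}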

Could you please tell me how to find the Hive port number? Until now I have only used Hive directly through the terminal, not from Java... – Yasodhara
