2016-09-23

Exception while loading MySQL data from Spark. I am getting the exception below when running Spark code that fetches data from MySQL. Can someone please help?

The code is as follows:

import java.util.List;
import java.util.Properties;

import org.apache.log4j.Logger;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class sparksql1 {

    private static final Logger LOGGER = Logger.getLogger(sparksql1.class);

    private static final String MYSQL_CONNECTION_URL = "jdbc:mysql://localhost:3306/company";
    private static final String MYSQL_USERNAME = "test";
    private static final String MYSQL_PWD = "test123";

    private static final SparkSession sparkSession =
        SparkSession.builder().master("local[*]").appName("Spark2JdbcDs")
            .config("spark.sql.warehouse.dir", "file:///tmp/tmp_warehouse")
            .getOrCreate();

    public static void main(String[] args) {
        // JDBC connection properties
        final Properties connectionProperties = new Properties();
        connectionProperties.put("user", MYSQL_USERNAME);
        connectionProperties.put("password", MYSQL_PWD);

        // Read the emp table from MySQL over JDBC (this is the
        // DataFrameReader.jdbc call at sparksql1.java:40 in the trace below)
        Dataset<Row> jdbcDF =
            sparkSession.read().jdbc(MYSQL_CONNECTION_URL, "emp", connectionProperties);

        List<Row> employeeFullNameRows = jdbcDF.collectAsList();

        for (Row employeeFullNameRow : employeeFullNameRows) {
            LOGGER.info(employeeFullNameRow);
        }
    }
}

16/09/23 13:17:55 INFO internal.SharedState: Warehouse path is 'file:///tmp/tmp_warehouse'.
16/09/23 13:17:55 INFO execution.SparkSqlParser: Parsing command: SELECT * FROM emp
Exception in thread "main" java.lang.UnsupportedOperationException: Not implemented by the DistributedFileSystem FileSystem implementation
    at org.apache.hadoop.fs.FileSystem.getScheme(FileSystem.java:217)
    at org.apache.hadoop.fs.FileSystem.loadFileSystems(FileSystem.java:2624)
    at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2634)
    at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2651)
    at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:92)
    at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2687)
    at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2669)
    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:371)
    at org.apache.hadoop.fs.Path.getFileSystem(Path.java:295)
    at org.apache.spark.sql.catalyst.catalog.SessionCatalog.makeQualifiedPath(SessionCatalog.scala:115)
    at org.apache.spark.sql.catalyst.catalog.SessionCatalog.createDatabase(SessionCatalog.scala:145)
    at org.apache.spark.sql.catalyst.catalog.SessionCatalog.<init>(SessionCatalog.scala:89)
    at org.apache.spark.sql.internal.SessionState.catalog$lzycompute(SessionState.scala:95)
    at org.apache.spark.sql.internal.SessionState.catalog(SessionState.scala:95)
    at org.apache.spark.sql.internal.SessionState$$anon$1.<init>(SessionState.scala:112)
    at org.apache.spark.sql.internal.SessionState.analyzer$lzycompute(SessionState.scala:112)
    at org.apache.spark.sql.internal.SessionState.analyzer(SessionState.scala:111)
    at org.apache.spark.sql.execution.QueryExecution.assertAnalyzed(QueryExecution.scala:49)
    at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:64)
    at org.apache.spark.sql.SparkSession.baseRelationToDataFrame(SparkSession.scala:382)
    at org.apache.spark.sql.DataFrameReader.jdbc(DataFrameReader.scala:238)
    at org.apache.spark.sql.DataFrameReader.jdbc(DataFrameReader.scala:194)
    at sparksql1.main(sparksql1.java:40)

Below is the pom file:

<!-- Hadoop Mapreduce Client Core --> 
    <dependency> 
     <groupId>org.apache.hadoop</groupId> 
     <artifactId>hadoop-mapreduce-client-core</artifactId> 
     <version>2.7.1</version> 
    </dependency> 

    <dependency> 
     <groupId>org.apache.hadoop</groupId> 
     <artifactId>hadoop-common</artifactId> 
     <version>2.7.1</version> 
    </dependency> 

    <!-- Hadoop Core --> 
    <dependency> 
     <groupId>org.apache.hadoop</groupId> 
     <artifactId>hadoop-core</artifactId> 
     <version>1.2.1</version> 
    </dependency> 

    <!-- Spark --> 
    <dependency> 
     <groupId>org.apache.spark</groupId> 
     <artifactId>spark-core_2.10</artifactId> 
     <version>2.0.0</version> 
    </dependency> 

    <!-- Spark SQL --> 
    <dependency> 
     <groupId>org.apache.spark</groupId> 
     <artifactId>spark-sql_2.10</artifactId> 
     <version>2.0.0</version> 
    </dependency> 

    <!-- https://mvnrepository.com/artifact/mysql/mysql-connector-java --> 
    <dependency> 
     <groupId>mysql</groupId> 
     <artifactId>mysql-connector-java</artifactId> 
     <version>5.1.20</version> 
    </dependency> 

You're missing slashes: "file:///tmp/tmp_warehouse" – lege


Tried that as well, no luck. – Mike

Answer


You have added both hadoop-core and hadoop-common to your pom.xml. Remove hadoop-core and try again.
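For reference, a minimal sketch of the cleaned-up Hadoop section of the pom, assuming the diagnosis above is right: hadoop-core 1.2.1 bundles pre-2.x FileSystem classes that clash with the 2.7.1 artifacts, and an old DistributedFileSystem without a getScheme() override would produce exactly the UnsupportedOperationException in the trace.

<!-- Hadoop Mapreduce Client Core --> 
    <dependency> 
     <groupId>org.apache.hadoop</groupId> 
     <artifactId>hadoop-mapreduce-client-core</artifactId> 
     <version>2.7.1</version> 
    </dependency> 

    <dependency> 
     <groupId>org.apache.hadoop</groupId> 
     <artifactId>hadoop-common</artifactId> 
     <version>2.7.1</version> 
    </dependency> 

    <!-- hadoop-core removed: the 1.2.1 artifact pre-dates the 2.x FileSystem API --> 

If something else still pulls in hadoop-core transitively, running mvn dependency:tree will show where it comes from.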


Did this work? –
