On EMR Spark, JDBC load fails the first time, then works

2017-02-28

I am using spark-shell with Spark 2.1.0 on AWS Elastic MapReduce 5.3.1 to load data from a Postgres database over JDBC. loader.load always fails the first time and then succeeds when called again. Why does this happen?

[hadoop@[SNIP] ~]$ SPARK_PRINT_LAUNCH_COMMAND=1 spark-shell --driver-class-path ~/postgresql-42.0.0.jar 
Spark Command: /etc/alternatives/jre/bin/java -cp /home/hadoop/postgresql-42.0.0.jar:/usr/lib/spark/conf/:/usr/lib/spark/jars/*:/etc/hadoop/conf/ -Dscala.usejavacp=true -Xmx640M -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=70 -XX:MaxHeapFreeRatio=70 -XX:+CMSClassUnloadingEnabled -XX:OnOutOfMemoryError=kill -9 %p org.apache.spark.deploy.SparkSubmit --conf spark.driver.extraClassPath=/home/hadoop/postgresql-42.0.0.jar --class org.apache.spark.repl.Main --name Spark shell spark-shell 
======================================== 
Setting default log level to "WARN". 
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel). 
17/02/28 17:17:52 WARN Client: Neither spark.yarn.jars nor spark.yarn.archive is set, falling back to uploading libraries under SPARK_HOME. 
17/02/28 17:18:56 WARN ObjectStore: Failed to get database global_temp, returning NoSuchObjectException 
Spark context Web UI available at http://[SNIP] 
Spark context available as 'sc' (master = yarn, app id = application_1487878172787_0014). 
Spark session available as 'spark'. 
Welcome to 
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/ '_/
   /___/ .__/\_,_/_/ /_/\_\   version 2.1.0
      /_/

Using Scala version 2.11.8 (OpenJDK 64-Bit Server VM, Java 1.8.0_121) 
Type in expressions to have them evaluated. 
Type :help for more information. 

scala> val loader = spark.read.format("jdbc") // connection options removed 
loader: org.apache.spark.sql.DataFrameReader = org.apache.spark.sql.DataFrameReader@... 

scala> loader.load 
java.sql.SQLException: No suitable driver 
    at java.sql.DriverManager.getDriver(DriverManager.java:315) 
    at org.apache.spark.sql.execution.datasources.jdbc.JDBCOptions$$anonfun$7.apply(JDBCOptions.scala:84) 
    at org.apache.spark.sql.execution.datasources.jdbc.JDBCOptions$$anonfun$7.apply(JDBCOptions.scala:84) 
    at scala.Option.getOrElse(Option.scala:121) 
    at org.apache.spark.sql.execution.datasources.jdbc.JDBCOptions.<init>(JDBCOptions.scala:83) 
    at org.apache.spark.sql.execution.datasources.jdbc.JDBCOptions.<init>(JDBCOptions.scala:34) 
    at org.apache.spark.sql.execution.datasources.jdbc.JdbcRelationProvider.createRelation(JdbcRelationProvider.scala:32) 
    at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:330) 
    at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:152) 
    at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:125) 
    ... 48 elided 

scala> loader.load 
res1: org.apache.spark.sql.DataFrame = [id: int, fsid: string ... 4 more fields] 
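
The answers below converge on naming the JDBC driver class explicitly when building the reader. As a minimal sketch of what that looks like for this Postgres case (the connection options here are placeholders standing in for the ones removed above):

// spark is the SparkSession that spark-shell provides.
val loader = spark.read.format("jdbc")
  .option("url", "jdbc:postgresql://<host>:5432/<db>")   // placeholder connection details
  .option("dbtable", "<schema>.<table>")
  .option("user", "<user>")
  .option("password", "<password>")
  .option("driver", "org.postgresql.Driver")             // naming the driver class sidesteps the DriverManager lookup that fails above
val df = loader.load()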

Did you ever find a solution? I'm seeing the same behaviour on the current EMR release. Also pinging @Raje. – kadrach


Solved my problem :) – kadrach

Answers


I ran into the same problem. I was trying to connect from Spark to Vertica over JDBC, using spark-shell with Spark 2.2.0 and Java 1.8.

External jars used for the connection: vertica-8.1.1_spark2.1_scala2.11-20170623.jar and vertica-jdbc-8.1.1-0.jar.

Code used to connect:

import java.sql.DriverManager 
import java.util.Properties 
import com.vertica.jdbc.Driver 


val jdbcUsername = "<username>" 
val jdbcPassword = "<password>" 
val jdbcHostname = "<vertica server>" 
val jdbcPort = <vertica port> 
val jdbcDatabase ="<vertica DB>" 
val jdbcUrl = s"jdbc:vertica://${jdbcHostname}:${jdbcPort}/${jdbcDatabase}?user=${jdbcUsername}&password=${jdbcPassword}" 

val connectionProperties = new Properties() 
connectionProperties.put("user", jdbcUsername) 
connectionProperties.put("password", jdbcPassword) 

val connection = DriverManager.getConnection(jdbcUrl, connectionProperties) 
java.sql.SQLException: No suitable driver found for jdbc:vertica://${jdbcHostname}:${jdbcPort}/${jdbcDatabase}?user=${jdbcUsername}&password=${jdbcPassword} 

    at java.sql.DriverManager.getConnection(Unknown Source) 
    at java.sql.DriverManager.getConnection(Unknown Source) 
    ... 56 elided 

If I run the same command a second time, I get the output below and the connection is established:

scala> val connection = DriverManager.getConnection(jdbcUrl, connectionProperties) 
connection: java.sql.Connection = ... 
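
With plain DriverManager, one common way to make the first call succeed as well is to load the driver class explicitly before requesting a connection. A minimal sketch under that assumption (host, port, and database are placeholders):

import java.sql.DriverManager
import java.util.Properties

// Loading the class forces the Vertica driver to register itself with DriverManager.
Class.forName("com.vertica.jdbc.Driver")

val props = new Properties()
props.put("user", "<username>")        // placeholder credentials
props.put("password", "<password>")

val connection = DriverManager.getConnection("jdbc:vertica://<host>:<port>/<db>", props)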

I hit this today using PySpark with the sqlserver JDBC driver. At first I built a simple workaround of catching the Py4JJavaException and retrying, since the second attempt always worked.

The trick is to specify the driver class in the DataFrameReader.jdbc call.

Using pyspark:

spark.read.jdbc(..., properties={'driver': 'com.microsoft.sqlserver.jdbc.SQLServerDriver'}) 

Then all that is needed is:

spark-submit --jars s3://somebucket/sqljdbc42.jar script.py 

Using Scala and @Raje's example above, connectionProperties.put("driver", "...") works as well.
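
Putting the pieces together, a minimal Scala sketch of the same idea through spark.read.jdbc, with the driver class named in the connection properties (URL and table name are placeholders):

import java.util.Properties

val jdbcUrl = "jdbc:vertica://<host>:<port>/<db>"               // placeholder URL
val connectionProperties = new Properties()
connectionProperties.put("user", "<username>")
connectionProperties.put("password", "<password>")
connectionProperties.put("driver", "com.vertica.jdbc.Driver")   // the explicit driver class

// With the driver named up front, the first load succeeds as well.
val df = spark.read.jdbc(jdbcUrl, "<schema>.<table>", connectionProperties)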