
Spark writing to PostgreSQL: BatchUpdateException?

I have a simple Spark job that reads large log files, filters them, and writes the results to a new table. The simplified Scala driver code is:

import java.util.Properties

// Read, parse, and filter the raw log files.
val sourceRdd = sc.textFile(sourcePath)
val parsedRdd = sourceRdd.flatMap(parseRow)
val filteredRdd = parsedRdd.filter(l => filterLogEntry(l, beginDateTime, endDateTime))

// Convert to a DataFrame and write it to PostgreSQL over JDBC.
val dataFrame = sqlContext.createDataFrame(filteredRdd)
val writer = dataFrame.write

val properties = new Properties()
properties.setProperty("user", "my_user")
properties.setProperty("password", "my_password")
writer.jdbc("jdbc:postgresql://ip_address/database_name", "my_table", properties)
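For context, the jdbc() call above inserts each DataFrame partition through a plain JDBC PreparedStatement and executeBatch(), which is where the failure below originates. A rough, hypothetical sketch of that per-partition loop (simplified, not the actual Spark JdbcUtils source; the column list is invented):

import java.sql.{Connection, DriverManager}
import java.util.Properties

// Sketch only: the (client, host, request) columns are placeholders.
def savePartitionSketch(rows: Iterator[(String, String, String)],
                        url: String, props: Properties): Unit = {
  val conn: Connection = DriverManager.getConnection(url, props)
  try {
    val stmt = conn.prepareStatement(
      "INSERT INTO my_table (client, host, request) VALUES (?, ?, ?)")
    var count = 0
    rows.foreach { case (client, host, request) =>
      stmt.setString(1, client)
      stmt.setString(2, host)
      stmt.setString(3, request)
      stmt.addBatch()
      count += 1
      // executeBatch() is the call that throws BatchUpdateException in the trace below.
      if (count % 1000 == 0) stmt.executeBatch()
    }
    stmt.executeBatch()
    stmt.close()
  } finally {
    conn.close()
  }
}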

This works perfectly for small batches. On a large batch, after two hours of execution, I see about 8 million records in the target table and the Spark job has failed with the following error:

Caused by: java.sql.BatchUpdateException: Batch entry 524 INSERT INTO my_table <snip> was aborted. Call getNextException to see the cause. 
    at org.postgresql.jdbc.BatchResultHandler.handleError(BatchResultHandler.java:136) 
    at org.postgresql.core.v3.QueryExecutorImpl$ErrorTrackingResultHandler.handleError(QueryExecutorImpl.java:308) 
    at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:2004) 
    at org.postgresql.core.v3.QueryExecutorImpl.flushIfDeadlockRisk(QueryExecutorImpl.java:1187) 
    at org.postgresql.core.v3.QueryExecutorImpl.sendQuery(QueryExecutorImpl.java:1212) 
    at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:351) 
    at org.postgresql.jdbc.PgStatement.executeBatch(PgStatement.java:1019) 
    at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$.savePartition(JdbcUtils.scala:210) 
    at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anonfun$saveTable$1.apply(JdbcUtils.scala:277) 
    at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anonfun$saveTable$1.apply(JdbcUtils.scala:276) 
    at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$33.apply(RDD.scala:920) 
    at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$33.apply(RDD.scala:920) 
    at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1858) 
    at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1858) 
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66) 
    at org.apache.spark.scheduler.Task.run(Task.scala:89) 
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) 
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
    at java.lang.Thread.run(Thread.java:745) 

If I copy and paste the given SQL INSERT statement into a SQL console, it runs fine. In the PostgreSQL server log I see:

(This is the unmodified/unanonymized log.)

2016-04-26 22:38:09 GMT [3769-12] [email protected] ERROR: syntax error at or near "was" at character 544 
2016-04-26 22:38:09 GMT [3769-13] [email protected] STATEMENT: INSERT INTO log_entries2 (client,host,req_t,request,seg,server,timestamp_value) VALUES ('68.67.161.5','"204.13.197.104"','0.000s','"GET /bid?apnx_id=&ip=67.221.130.195&idfa=&dmd5=&daid=&lt=32.90630&lg=-95.57920&u=branovate.com&ua=Mozilla%2F5.0+%28Linux%3B+Android+5.1%3B+XT1254+Build%2FSU3TL-39%3B+wv%29+AppleWebKit%2F537.36+%28KHTML%2C+like+Gecko%29+Version%2F4.0+Chrome%2F44.0.2403.90+Mobile+Safari%2F537.36+%5BFB_IAB%2FFB4A%3BFBAV%2F39.0.0.36.238%3B%5D&ap=&c=1&dmdl=&dmk= HTTP/1.1"','samba_info_has_geo','','2015-08-02T20:24:30.482000112') was aborted. Call getNextException to see the cause. 

It looks as though Spark is sending the text "was aborted. Call getNextException ..." to PostgreSQL, which then triggers this particular syntax error. That seems like a genuine Spark bug. The second question is why Spark aborted the batch in the first place.

So, AFAIK, I can't call getNextException, because I'm not using JDBC directly but going through Spark.
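One workaround for inspecting getNextException is to replay a small sample of the failing rows with plain JDBC outside Spark. A minimal sketch, assuming the same connection settings as above and an invented three-column subset of the table (sampleRows is a hypothetical stand-in for rows collected with dataFrame.take(...)):

import java.sql.{BatchUpdateException, DriverManager}
import java.util.Properties

// Hypothetical sample; in practice these rows would come from the failing data.
val sampleRows: Seq[(String, String, String)] =
  Seq(("68.67.161.5", "204.13.197.104", "GET /bid HTTP/1.1"))

val props = new Properties()
props.setProperty("user", "my_user")
props.setProperty("password", "my_password")

val conn = DriverManager.getConnection("jdbc:postgresql://ip_address/database_name", props)
try {
  val stmt = conn.prepareStatement(
    "INSERT INTO my_table (client, host, request) VALUES (?, ?, ?)")
  sampleRows.foreach { case (client, host, request) =>
    stmt.setString(1, client)
    stmt.setString(2, host)
    stmt.setString(3, request)
    stmt.addBatch()
  }
  try {
    stmt.executeBatch()
  } catch {
    case e: BatchUpdateException =>
      // The chained exception carries the real server-side error message.
      println(e.getNextException)
  }
  stmt.close()
} finally {
  conn.close()
}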

FYI, this is Spark 1.6.1 and Scala 2.11.


The items in the VALUES clause are quoted in a strange way, e.g. VALUES('"2016-04-27"', ...), when the server needs to interpret them as dates. – wildplasser


So, yes, there is an extra-quoting issue in the "request" and "host" text fields that should be cleaned up, but it doesn't cause any errors. The one date field is currently treated as a string and doesn't have the extra-quoting issue. It would probably be better to make the date field a proper PostgreSQL date/timestamp type instead of a string (a typed sketch follows these comments), but that isn't what's causing the problem. – clay


For strings, embedded quotes are allowed. For dates, times, and timestamps, the quotes will make them unparsable. – wildplasser
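One way to give the timestamp field a proper column type, as suggested in the comments, is to carry it as java.sql.Timestamp in the row type that parseRow produces; Spark's case-class reflection then maps it to TimestampType, and the JDBC writer creates a TIMESTAMP column instead of TEXT. A minimal sketch, assuming a hypothetical LogEntry shape (field names invented):

import java.sql.Timestamp

// Hypothetical row type; field names are assumptions, not the original parseRow output.
case class LogEntry(client: String,
                    host: String,
                    request: String,
                    timestampValue: Timestamp)  // maps to a TIMESTAMP column, not TEXT

// Parse the ISO-like string once in Scala, e.g. "2015-08-02T20:24:30.482000112".
def toTimestamp(s: String): Timestamp = Timestamp.valueOf(s.replace('T', ' '))

With filteredRdd as an RDD[LogEntry], sqlContext.createDataFrame(filteredRdd) would carry the timestamp type through to the JDBC writer unchanged.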

Answer


In case anyone else searches and hits this: my database server (running in a VM) had hit its disk space limit. Spark seemed to get confused by that error, failed to log the real cause, triggered a different internal error, and logged the result of that instead. Technically, this is probably an internal Spark bug in how it responds to an uncommon database disk-full error.