1

My Scala Spark code for reading JSON is as follows:

import org.apache.spark.{SparkConf, SparkContext}

val sparkConf = new SparkConf().setAppName("Json Test").setMaster("local[*]")
val sc = new SparkContext(sparkConf)
val sqlContext = new org.apache.spark.sql.SQLContext(sc)
import sqlContext.implicits._

val path = "/path/log.json"
val df = sqlContext.read.json(path)
df.show()

Sample JSON data:

{"IFAM":"EQR","KTM":1430006400000 "COL":21,"DATA":[{"MLrate":"30","Nrout":"0","up":null,"Crate":"2"},{"MLrate":"31","Nrout","up":null,"Crate":"2"},{"MLrate":"30","Nrout":"5","up":null,"Crate":"2"},{"MLrate":"34","Nrout":"0","up":null,"Crate":"4"},{"MLrate":"33","Nrout":"0","up":null,"Crate":"2"},{"MLrate":"30","Nrout":"8","up":null,"Crate":"2"}]}

In the Scala IDE, an error occurs that I cannot understand:

INFO SharedState: Warehouse path is 'file:/C:/Users/ben53/workspace/Demo/spark-warehouse/'.
Exception in thread "main" java.util.ServiceConfigurationError: org.apache.spark.sql.sources.DataSourceRegister: Provider org.apache.spark.sql.hive.orc.DefaultSource could not be instantiated
    at java.util.ServiceLoader.fail(Unknown Source)
    at java.util.ServiceLoader.access$100(Unknown Source)
    at java.util.ServiceLoader$LazyIterator.nextService(Unknown Source)
    at java.util.ServiceLoader$LazyIterator.next(Unknown Source)
    at java.util.ServiceLoader$1.next(Unknown Source)
    at scala.collection.convert.Wrappers$JIteratorWrapper.next(Wrappers.scala:43)
    at scala.collection.Iterator$class.foreach(Iterator.scala:893)
    at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
    at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
    at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
    at scala.collection.TraversableLike$class.filterImpl(TraversableLike.scala:247)
    at scala.collection.TraversableLike$class.filter(TraversableLike.scala:259)
    at scala.collection.AbstractTraversable.filter(Traversable.scala:104)
    at org.apache.spark.sql.execution.datasources.DataSource$.lookupDataSource(DataSource.scala:575)
    at org.apache.spark.sql.execution.datasources.DataSource.providingClass$lzycompute(DataSource.scala:86)
    at org.apache.spark.sql.execution.datasources.DataSource.providingClass(DataSource.scala:86)
    at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:325)
    at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:152)
    at org.apache.spark.sql.DataFrameReader.json(DataFrameReader.scala:298)
    at org.apache.spark.sql.DataFrameReader.json(DataFrameReader.scala:251)
    at com.dataflair.spark.QueryLog$.main(QueryLog.scala:27)
    at com.dataflair.spark.QueryLog.main(QueryLog.scala)
Caused by: java.lang.VerifyError: Bad return type
Exception Details:
  Location:
    org/apache/spark/sql/hive/orc/DefaultSource.createRelation(Lorg/apache/spark/sql/SQLContext;[Ljava/lang/String;Lscala/Option;Lscala/Option;Lscala/collection/immutable/Map;)Lorg/apache/spark/sql/sources/HadoopFsRelation; @35: areturn
  Reason:
    Type 'org/apache/spark/sql/hive/orc/OrcRelation' (current frame, stack[0]) is not assignable to 'org/apache/spark/sql/sources/HadoopFsRelation' (from method signature)
  Current Frame:
    bci: @35
    flags: { }
    locals: { 'org/apache/spark/sql/hive/orc/DefaultSource', 'org/apache/spark/sql/SQLContext', '[Ljava/lang/String;', 'scala/Option', 'scala/Option', 'scala/collection/immutable/Map' }
    stack: { 'org/apache/spark/sql/hive/orc/OrcRelation' }
  Bytecode:
    0x0000000: b200 1c2b c100 1ebb 000e 592a b700 22b6
    0x0000010: 0026 bb00 2859 2c2d b200 2d19 0419 052b
    0x0000020: b700 30b0
    at java.lang.Class.getDeclaredConstructors0(Native Method)
    at java.lang.Class.privateGetDeclaredConstructors(Unknown Source)
    at java.lang.Class.getConstructor0(Unknown Source)
    at java.lang.Class.newInstance(Unknown Source)
    ... 20 more


+0

Is your JSON valid? –

+0

Can you share your pom or sbt file and a sample of the JSON file? –

+0

Yeah, it is valid JSON. I compile and run it with the Eclipse Scala IDE, Scala version 2.1 – hpkong

Answers

0

I am pretty sure your path is incorrect. Check whether the file actually exists at the specified path (a quick check is sketched below). The JSON is valid.
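
As a minimal sketch of that check (using the path from the question, and assuming a local-mode run, since it inspects the driver's filesystem):

// sanity-check that the file exists and is readable before calling read.json
val f = new java.io.File("/path/log.json")
println(s"exists: ${f.exists}, readable: ${f.canRead}")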

+0

My path is 100% correct – hpkong

+0

I tried with your JSON file. It worked for me –

1

The path should be fine, but the provided JSON is invalid. Please correct the sample JSON and then try again. You can validate the JSON at https://jsonlint.com/

It highlights the invalid parts of the JSON; a corrected version is sketched below.
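
For reference, a version of the sample that passes jsonlint might look like the following. The comma after the KTM value and the "0" for the second object's Nrout are assumptions about what the original data intended:

{"IFAM":"EQR","KTM":1430006400000,"COL":21,"DATA":[{"MLrate":"30","Nrout":"0","up":null,"Crate":"2"},{"MLrate":"31","Nrout":"0","up":null,"Crate":"2"},{"MLrate":"30","Nrout":"5","up":null,"Crate":"2"},{"MLrate":"34","Nrout":"0","up":null,"Crate":"4"},{"MLrate":"33","Nrout":"0","up":null,"Crate":"2"},{"MLrate":"30","Nrout":"8","up":null,"Crate":"2"}]}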

I did try the sample, though, and got the following output:

+---+--------------------+----+-------------+
|COL|                DATA|IFAM|          KTM|
+---+--------------------+----+-------------+
| 21|[[2,30,0,null], [...| EQR|1430006400000|
+---+--------------------+----+-------------+

The code used is as follows:

import org.apache.spark.{SparkConf, SparkContext}

object Test {

  def main(args: Array[String]) {
    val sparkConf = new SparkConf().setAppName("Json Test").setMaster("local[*]")
    val sc = new SparkContext(sparkConf)
    val sqlContext = new org.apache.spark.sql.SQLContext(sc)
    import sqlContext.implicits._

    val path = "/home/test/Desktop/test.json"
    val df = sqlContext.read.json(path)
    df.show()
  }
}
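
If you are on Spark 2.x (the SharedState line in your log is a 2.x class), the SparkSession entry point can be used instead of SQLContext. A minimal sketch, assuming a consistent 2.x spark-sql dependency and the same test file path:

import org.apache.spark.sql.SparkSession

// build a SparkSession, the Spark 2.x entry point; SQLContext still works but is legacy
val spark = SparkSession.builder()
  .appName("Json Test")
  .master("local[*]")
  .getOrCreate()

// read the same JSON file into a DataFrame and print it
val df = spark.read.json("/home/test/Desktop/test.json")
df.show()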
+0

The JSON is correct. I have tested it, and the dataframe was formed with the above code and JSON. :) –

+0

I tested the JSON; https://jsonlint.com/ says it is incorrect. – Sonu

+0

My updated JSON is correct. I think a moderator edited the JSON part by mistake. The error does not seem to be about the JSON format; it is about the Spark setup – hpkong
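
The VerifyError in the trace ('OrcRelation ... is not assignable to ... HadoopFsRelation') typically points to mixed Spark versions on the classpath, for example a 1.x spark-hive jar loaded alongside a 2.x spark-sql jar, rather than to the JSON itself. A minimal build.sbt sketch that pins every Spark module to a single version (2.1.0 here is an assumption; use whatever version matches your environment):

// build.sbt: keep all Spark modules on one version so the ServiceLoader
// does not pick up an incompatible 1.x DataSource provider
val sparkVersion = "2.1.0" // assumed version; align with your cluster

libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-core" % sparkVersion,
  "org.apache.spark" %% "spark-sql"  % sparkVersion,
  "org.apache.spark" %% "spark-hive" % sparkVersion // only if Hive support is actually needed
)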