
Apache Spark: loading a file from the local file system instead of HDFS throws IllegalArgumentException

I am new to Apache Spark and am trying to load a file from my local file system. I am following Hadoop: The Definitive Guide.

Here are the environment variables I have set:

export JAVA_HOME=/Library/Java/JavaVirtualMachines/jdk1.8.0_91.jdk/Contents/Home 
export HADOOP_HOME=/Users/bng/Documents/hadoop-2.6.4 
export PATH=$PATH:$HADOOP_HOME/bin 
export PATH=$PATH:$HADOOP_HOME/sbin 
export HADOOP_PREFIX=/Users/bng/Documents/hadoop-2.6.4 
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop 
export HADOOP_MAPRED_HOME=$HADOOP_HOME 
export HADOOP_COMMON_HOME=$HADOOP_HOME 
export HADOOP_HDFS_HOME=$HADOOP_HOME 
export YARN_HOME=$HADOOP_HOME 

export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native 
export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib" 
export HADOOP_CLASSPATH=${JAVA_HOME}/lib/tools.jar 

export PATH=/usr/local/mysql/bin:/Users/bng/Documents/mongodb/bin:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH 
export GOOGLE_APPLICATION_CREDENTIALS=/Users/bng/Downloads/googleCredentials 

export FLUME_HOME=/Users/bng/Documents/apache-flume-1.7.0-bin 
export PATH=$PATH:$FLUME_HOME/bin 

export SQOOP_HOME=/Users/bng/Documents/sqoop-1.4.6.bin__hadoop-2.0.4-alpha 
export PATH=$PATH:$SQOOP_HOME/bin 

export PIG_HOME=/Users/bng/Documents/pig-0.16.0 
export PATH=$PATH:$PIG_HOME/bin 

export HIVE_HOME=/Users/bng/Documents/apache-hive-1.2.2-bin 
export PATH=$PATH:$HIVE_HOME/bin 

export SPARK_HOME=/Users/bng/Documents/spark-1.6.3-bin-hadoop2.6 
export PATH=$PATH:$SPARK_HOME/bin 

With this in place, I executed the following commands:

val lines = sc.textFile("Users/bng/Documents/hContent/input/ncdc/micro-tab/sample.txt"); 
val records = lines.map(_.split("\t")); 
val filters = records.filter(rec => (rec(1) != "9999" && rec(2).matches("[01459]"))); 
val tuples = filters.map(rec => (rec(0).toInt, rec(1).toInt)); 
val maxTemps = tuples.reduceByKey((a,b) => Math.max(a,b)); 
maxTemps.foreach(println(_)); 

The path in the sc.textFile command above is on my local file system, but it somehow points to HDFS, and I get the following error:

org.apache.hadoop.mapred.InvalidInputException: Input path does not exist: hdfs://172.**.**.168/user/KV/Users/bng/Documents/hContent/input/ncdc/micro-tab/sample.txt 
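
(In hindsight, a path without a URI scheme is resolved against whatever fs.defaultFS the shell picks up from HADOOP_CONF_DIR. A minimal check from the spark-shell, assuming only the setup above:)

// Hadoop configuration as seen by this SparkContext
sc.hadoopConfiguration.get("fs.defaultFS")
// given the error above, this should print something like hdfs://172.**.**.168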

So, figuring it was pointing at my HDFS file system, I manually added a file at the "/user/hive/warehouse/records" location in HDFS and tried the following command:

val lines = sc.textFile("/user/hive/warehouse/records"); 

And everything worked fine.

But I want to load the file from the local file system. Searching around, I found that I need to prefix the path with the "file://" URI scheme, so I tried the following command:

val localLines = sc.textFile("file://Users/bng/Documents/hContent/input/ncdc/micro-tab/sample.txt"); 
localLines.foreach(println(_)); 

But I still got the following exception:

java.lang.IllegalArgumentException: Wrong FS: file://Users/bng/Documents/hContent/input/ncdc/micro-tab/sample.txt 
at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:645) 
at org.apache.hadoop.fs.RawLocalFileSystem.pathToFile(RawLocalFileSystem.java:80) 
at org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:529) 
at org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:747) 
at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:524) 
at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:409) 
at org.apache.hadoop.fs.Globber.getFileStatus(Globber.java:57) 
at org.apache.hadoop.fs.Globber.glob(Globber.java:252) 
at org.apache.hadoop.fs.FileSystem.globStatus(FileSystem.java:1644) 
at org.apache.hadoop.mapred.FileInputFormat.singleThreadedListStatus(FileInputFormat.java:257) 
at org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:228) 
at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:313) 
at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:202) 
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239) 
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237) 
at scala.Option.getOrElse(Option.scala:120) 
at org.apache.spark.rdd.RDD.partitions(RDD.scala:237) 
at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35) 
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239) 
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237) 
at scala.Option.getOrElse(Option.scala:120) 
at org.apache.spark.rdd.RDD.partitions(RDD.scala:237) 
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1929) 
at org.apache.spark.rdd.RDD$$anonfun$foreach$1.apply(RDD.scala:912) 
at org.apache.spark.rdd.RDD$$anonfun$foreach$1.apply(RDD.scala:910) 
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150) 
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:111) 
at org.apache.spark.rdd.RDD.withScope(RDD.scala:316) 
at org.apache.spark.rdd.RDD.foreach(RDD.scala:910) 
at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:30) 
at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:35) 
at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:37) 
at $iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:39) 
at $iwC$$iwC$$iwC$$iwC.<init>(<console>:41) 
at $iwC$$iwC$$iwC.<init>(<console>:43) 
at $iwC$$iwC.<init>(<console>:45) 
at $iwC.<init>(<console>:47) 
at <init>(<console>:49) 
at .<init>(<console>:53) 
at .<clinit>(<console>) 
at .<init>(<console>:7) 
at .<clinit>(<console>) 
at $print(<console>) 
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 
at java.lang.reflect.Method.invoke(Method.java:498) 
at org.apache.spark.repl.SparkIMain$ReadEvalPrint.call(SparkIMain.scala:1065) 
at org.apache.spark.repl.SparkIMain$Request.loadAndRun(SparkIMain.scala:1346) 
at org.apache.spark.repl.SparkIMain.loadAndRunReq$1(SparkIMain.scala:840) 
at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:871) 
at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:819) 
at org.apache.spark.repl.SparkILoop.reallyInterpret$1(SparkILoop.scala:857) 
at org.apache.spark.repl.SparkILoop.interpretStartingWith(SparkILoop.scala:902) 
at org.apache.spark.repl.SparkILoop.command(SparkILoop.scala:814) 
at org.apache.spark.repl.SparkILoop.processLine$1(SparkILoop.scala:657) 
at org.apache.spark.repl.SparkILoop.innerLoop$1(SparkILoop.scala:665) 
at org.apache.spark.repl.SparkILoop.org$apache$spark$repl$SparkILoop$$loop(SparkILoop.scala:670) 
at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply$mcZ$sp(SparkILoop.scala:997) 
at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:945) 
at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:945) 
at scala.tools.nsc.util.ScalaClassLoader$.savingContextLoader(ScalaClassLoader.scala:135) 
at org.apache.spark.repl.SparkILoop.org$apache$spark$repl$SparkILoop$$process(SparkILoop.scala:945) 
at org.apache.spark.repl.SparkILoop.process(SparkILoop.scala:1059) 
at org.apache.spark.repl.Main$.main(Main.scala:31) 
at org.apache.spark.repl.Main.main(Main.scala) 
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 
at java.lang.reflect.Method.invoke(Method.java:498) 
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:731) 
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181) 
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206) 
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121) 
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala) 

Please suggest what the problem might be here.

Answers


I found the catch: the problem was with the "file://" URI. Instead of "file://", I needed to use the "file:///" URI, and everything worked fine.

That is, instead of:

val localLines = sc.textFile("file://Users/bng/Documents/hContent/input/ncdc/micro-tab/sample.txt"); 

I needed to use the following command:

val localLines = sc.textFile("file:///Users/bng/Documents/hContent/input/ncdc/micro-tab/sample.txt"); 

You can at the beginning …


I think just using

val localLines = sc.textFile("/Users/bng/Documents/hContent/input/ncdc/micro-tab/sample.txt"); 

without file:// would point to 'hdfs'. 'file://' is necessary in newer Spark versions (maybe 1.6+) – philantrovert


@philantrovert That is not correct. I point Spark 2.0+ at files using just an absolute path, with no problem –


Also check the [documentation](https://spark.apache.org/docs/latest/programming-guide.html#external-datasets) –
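
Putting the two comments together: whether a bare absolute path works depends on the fs.defaultFS in the Hadoop configuration on the classpath, not on the Spark version. A sketch of both forms, assuming the sample path from the question:

// explicit scheme: always the local file system, regardless of fs.defaultFS
val localLines = sc.textFile("file:///Users/bng/Documents/hContent/input/ncdc/micro-tab/sample.txt")

// no scheme: resolved against fs.defaultFS, so this hits HDFS with the
// HADOOP_CONF_DIR above, but the local disk on a plain Spark download
val defaultLines = sc.textFile("/Users/bng/Documents/hContent/input/ncdc/micro-tab/sample.txt")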