2017-08-02 83 views

I can access HDFS from the terminal via hdfs dfs -ls /, and I obtained the cluster's address and port via hdfs getconf -confKey fs.defaultFS (these are the address and port I refer to in the code below). Accessing HDFS from Java, however, throws an error.
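For reference, the value returned by hdfs getconf -confKey fs.defaultFS is a URI whose scheme decides which FileSystem implementation Hadoop selects. A pure-JDK sketch of splitting that value into its parts (the host name below is a hypothetical placeholder, not taken from the question):

```java
import java.net.URI;

public class ParseDefaultFs {
    public static void main(String[] args) {
        // Hypothetical value; substitute the actual output of
        // `hdfs getconf -confKey fs.defaultFS` on your cluster.
        URI u = URI.create("hdfs://namenode.example.com:8020");
        System.out.println(u.getScheme()); // hdfs
        System.out.println(u.getHost());   // namenode.example.com
        System.out.println(u.getPort());   // 8020
    }
}
```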

Trying to read a file on HDFS in Java gives me errors similar to the one described here (also discussed in this question). Below is the code I tried in Java.

    FileSystem fs;
    String line;
    Path path = new Path("hdfs://<address>:<port>/somedata.txt");
    try
    {
        /* --------------------------
         * Option 1: Gave 'Wrong FS: hdfs://..., Expected file:///' error
        Configuration configuration = new Configuration();
        configuration.addResource(new Path("/etc/hadoop/conf/core-site.xml"));
        configuration.addResource(new Path("/etc/hadoop/conf/hdfs-site.xml"));
        fs = FileSystem.get(configuration);
         * ---------------------------
         */

        // --------------------------
        // Option 2: Gives the error stated below
        Configuration configuration = new Configuration();
        fs = FileSystem.get(new URI("hdfs://<address>:<port>"), configuration);
        // --------------------------

        LOG.info(fs.getConf().toString());

        FSDataInputStream fsDataInputStream = fs.open(path);
        InputStreamReader inputStreamReader = new InputStreamReader(fsDataInputStream);
        BufferedReader bufferedReader = new BufferedReader(inputStreamReader);
        while ((line = bufferedReader.readLine()) != null) {
            // some file processing code here.
        }
        bufferedReader.close();
    }
    catch (Exception e)
    {
        fail();
    }

The error that Option 2 gives me is:

java.lang.NoSuchMethodError: org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(Ljava/lang/String;)Ljava/net/InetSocketAddress; 
at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:99) 
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1446) 
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:67) 
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1464) 
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:263) 
at fwt.gateway.Test_Runner.checkLocationMasterindicesOnHDFS(Test_Runner.java:76) 
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 
at java.lang.reflect.Method.invoke(Method.java:498) 
at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50) 
at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) 
at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47) 
at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) 
at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) 
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325) 
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78) 
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57) 
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290) 
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71) 
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288) 
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58) 
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268) 
at org.junit.runners.ParentRunner.run(ParentRunner.java:363) 
at org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:86) 
at org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38) 
at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:459) 
at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:678) 
at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:382) 
at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:192) 

The fact that I can access these files from the terminal suggests to me that core-site.xml and hdfs-site.xml must be correct.

Thanks for your help!

Edit 1: The Maven dependencies I use with the code above are the following:

    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-hdfs</artifactId>
        <version>3.0.0-alpha4</version>
    </dependency>

    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-core</artifactId>
        <version>1.2.1</version>
    </dependency>

    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-client</artifactId>
        <version>3.0.0-alpha4</version>
    </dependency>

It looks like you might be missing some dependencies. If you are using Maven or similar, could you share which Hadoop imports you are using? – StefanE


Thanks, updated. Why the downvote? – tenticon


Probably because you didn't look up what NoSuchMethodError means... it usually points to a **versioning** conflict of some kind. Library A wants to call a method in library B... but that method doesn't exist in B (any more). In that sense: did you do any prior research, such as searching for the exception message? – GhostCat
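Building on the versioning-conflict point above: one way to confirm which jar a class is actually loaded from (and so spot duplicate or mismatched Hadoop artifacts on the classpath) is to inspect its CodeSource. This is a minimal, pure-JDK sketch, not something from the question; substitute e.g. org.apache.hadoop.hdfs.DistributedFileSystem.class for the class of interest:

```java
import java.security.CodeSource;

public class WhichJar {
    // Returns the jar or directory a class was loaded from,
    // or a marker string for classes on the bootstrap class path.
    public static String locationOf(Class<?> cls) {
        CodeSource src = cls.getProtectionDomain().getCodeSource();
        return src == null ? "(bootstrap class path)" : src.getLocation().toString();
    }

    public static void main(String[] args) {
        // For the failing test, try org.apache.hadoop.hdfs.DistributedFileSystem.class
        // and org.apache.hadoop.hdfs.server.namenode.NameNode.class here.
        System.out.println(locationOf(WhichJar.class));
    }
}
```

If the two classes print locations in jars with different Hadoop versions, that mismatch is the source of the NoSuchMethodError.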

Answers

0

Update your POM to the following:

<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-client</artifactId>
    <version>2.8.1</version>
</dependency>
<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-core</artifactId>
    <version>2.6.0-mr1-cdh5.4.2.1</version>
    <type>pom</type>
</dependency>
<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-hdfs</artifactId>
    <version>2.8.1</version>
</dependency>

Never use alpha versions, as they are likely to contain bugs.
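One way to keep the versions from drifting apart again (a sketch, not part of the answer above) is to declare a single Maven property and reference it from every Hadoop artifact, so they are always upgraded together:

```xml
<properties>
    <hadoop.version>2.8.1</hadoop.version>
</properties>

<dependencies>
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-client</artifactId>
        <version>${hadoop.version}</version>
    </dependency>
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-hdfs</artifactId>
        <version>${hadoop.version}</version>
    </dependency>
</dependencies>
```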


Thanks a lot! I had to include the CDH 5 repository to be able to download the jars – tenticon

0

You can use this in your pom.xml file:

<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-common</artifactId>
    <version>2.6.0</version>
</dependency>

I have used version 2.6.0; you can try any newer version.