
FileNotFoundException when reading a file from the Hadoop distributed cache

I am having trouble running a Hadoop job: I get a FileNotFoundException when I try to retrieve a file from the distributed cache, even though the file exists. When I run it on the local file system, it works.

The cluster is hosted on Amazon Web Services, running Hadoop version 1.0.4 and Java version 1.7. I have no control over the cluster or over how it is set up.

In the main function I add the file to the distributed cache. This seems to work fine; at least it does not throw any exceptions.

.... 
JobConf conf = new JobConf(Driver.class); 
conf.setJobName("mean"); 
conf.set("lookupfile", args[2]); 
Job job = new Job(conf); 
DistributedCache.addCacheFile(new Path(args[2]).toUri(), conf); 
... 

In the setup function, which is called before map, I build the path to the file and call the function that loads the file into a hash map.

Configuration conf = context.getConfiguration(); 
String inputPath = conf.get("lookupfile");       
Path dataFile = new Path(inputPath); 
loadHashMap(dataFile, context); 

The exception occurs on the first line of the function that loads the hash map.

brReader = new BufferedReader(new FileReader(filePath.toString())); 

I start the job like this:

hadoop jar Driver.jar Driver /tmp/input output /tmp/DATA.csv 

I get the following error:

Error: Found class org.apache.hadoop.mapreduce.Counter, but interface was expected 
attempt_201410300715_0018_m_000000_0: java.io.FileNotFoundException: /tmp/DATA.csv (No such file or directory) 
attempt_201410300715_0018_m_000000_0: at java.io.FileInputStream.open(Native Method) 
attempt_201410300715_0018_m_000000_0: at java.io.FileInputStream.<init>(FileInputStream.java:146) 
attempt_201410300715_0018_m_000000_0: at java.io.FileInputStream.<init>(FileInputStream.java:101) 
attempt_201410300715_0018_m_000000_0: at java.io.FileReader.<init>(FileReader.java:58) 
attempt_201410300715_0018_m_000000_0: at Map.loadHashMap(Map.java:49) 
attempt_201410300715_0018_m_000000_0: at Map.setup(Map.java:98) 
attempt_201410300715_0018_m_000000_0: at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145) 
attempt_201410300715_0018_m_000000_0: at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:771) 
attempt_201410300715_0018_m_000000_0: at org.apache.hadoop.mapred.MapTask.run(MapTask.java:375) 
attempt_201410300715_0018_m_000000_0: at org.apache.hadoop.mapred.Child$4.run(Child.java:259) 
attempt_201410300715_0018_m_000000_0: at java.security.AccessController.doPrivileged(Native Method) 
attempt_201410300715_0018_m_000000_0: at javax.security.auth.Subject.doAs(Subject.java:415) 
attempt_201410300715_0018_m_000000_0: at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1140) 
attempt_201410300715_0018_m_000000_0: at org.apache.hadoop.mapred.Child.main(Child.java:253) 
14/11/01 02:12:49 INFO mapred.JobClient: Task Id : attempt_201410300715_0018_m_000001_0, Status : FAILED 

I have verified that the file exists, both on HDFS and on the local file system.

[email protected]:~$ hadoop fs -ls /tmp 
Found 2 items 
drwxr-xr-x - hadoop supergroup   0 2014-10-30 11:19 /tmp/input 
-rw-r--r-- 1 hadoop supergroup  428796 2014-10-30 11:19 /tmp/DATA.csv 

[email protected]:~$ ls -al /tmp/ 
-rw-r--r-- 1 hadoop hadoop 428796 Oct 30 11:30 DATA.csv 

I really don't understand what the problem is here. The exception lists the correct path for the file, and I have verified that the file exists on both HDFS and the local file system. Is there something I'm missing here?

Answers


The input to the BufferedReader should come from the path returned by DistributedCache.getLocalCacheFiles() in setup(), more like this:

Path[] localFiles = DistributedCache.getLocalCacheFiles(context.getConfiguration());
if (localFiles != null && localFiles.length > 0) {
    brReader = new BufferedReader(new FileReader(localFiles[0].toString()));
}
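
For context, here is a minimal sketch of what the full setup() override could look like with this approach. The class name Map and the brReader field come from the question's stack trace; the Mapper type parameters are assumptions.

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.filecache.DistributedCache;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class Map extends Mapper<LongWritable, Text, Text, Text> {

    private BufferedReader brReader;

    @Override
    protected void setup(Context context) throws IOException, InterruptedException {
        Configuration conf = context.getConfiguration();
        // getLocalCacheFiles returns the task-local copies that the framework
        // downloaded for this attempt, not the original HDFS paths, so a plain
        // FileReader can open them directly.
        Path[] localFiles = DistributedCache.getLocalCacheFiles(conf);
        if (localFiles != null && localFiles.length > 0) {
            brReader = new BufferedReader(new FileReader(localFiles[0].toString()));
        }
    }
}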

I faced the same problem, and the following code worked for me:

Configuration conf = context.getConfiguration(); 
URI[] uriList = DistributedCache.getCacheFiles(conf); 
BufferedReader br = new BufferedReader(new FileReader(uriList[0].getPath()));

As you can see, I use the getCacheFiles method here, then get the file path and read the file.
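
A minimal sketch of where that snippet could live, using the same assumed mapper class as above. Note that getCacheFiles(conf) returns the URIs exactly as they were registered with DistributedCache.addCacheFile(...) in the driver, whereas getLocalCacheFiles(conf) returns the local copies.

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.filecache.DistributedCache;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class Map extends Mapper<LongWritable, Text, Text, Text> {

    @Override
    protected void setup(Context context) throws IOException, InterruptedException {
        Configuration conf = context.getConfiguration();
        // getCacheFiles returns the URIs passed to DistributedCache.addCacheFile(...)
        // in the driver, e.g. /tmp/DATA.csv in the question.
        URI[] uriList = DistributedCache.getCacheFiles(conf);
        BufferedReader br = new BufferedReader(new FileReader(uriList[0].getPath()));
        // ... load the lookup data from br here ...
        br.close();
    }
}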