Error when running embedded Pig in Java on Hadoop

2012-04-17 · 62 views

Last week I started Hadoop MapReduce as user "root" and ran embedded Pig Java code, and it worked fine. This week I want to perform the same task as a non-root user: charlie. After changing the permission settings on a few directories, I can now start Hadoop MapReduce as user "charlie" without any errors. However, when I run the embedded Pig Java code as user "charlie", it keeps complaining about permissions on hadoop.tmp.dir, which I set to /opt/hdfs/tmp in core-site.xml:

java.io.FileNotFoundException: /opt/hdfs/tmp/mapred/local/localRunner/job_local_0001.xml (Permission denied)
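
For reference, a sketch of the setting described above as it would appear in core-site.xml (the property value is taken from the question itself):

```xml
<!-- core-site.xml: base directory for Hadoop's local temporary storage -->
<property>
  <name>hadoop.tmp.dir</name>
  <value>/opt/hdfs/tmp</value>
</property>
```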

I checked the permissions on the following directories, and they all look fine:

bash-3.2$ ls -lt /opt/hdfs/tmp 
    total 4 
    drwxr-xr-x 3 charlie comusers 4096 Apr 16 19:30 mapred 
    bash-3.2$ ls -lt /opt/hdfs/tmp/mapred 
    total 4 
    drwxr-xr-x 2 charlie comusers 4096 Apr 16 19:30 local 
    bash-3.2$ ls -lt /opt/hdfs/tmp/mapred/local 
    total 0 

I need some guidance on what I am doing wrong. I searched for these keywords but did not find anything. Any help would be appreciated!
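
One quick check (a sketch; the real path and user name are assumed from the question) is to reproduce the exact write that LocalJobRunner attempts, as the submitting user. If an earlier run as root left behind a root-owned localRunner/ subdirectory, the touch fails with the same "Permission denied":

```shell
# Probe the write that LocalJobRunner performs. DIR defaults to a scratch
# directory so this sketch is safe to run anywhere; on the cluster in the
# question it would be /opt/hdfs/tmp/mapred/local, run as user charlie.
DIR="${DIR:-$(mktemp -d)}"
mkdir -p "$DIR/localRunner"                # subdirectory the local runner uses
if touch "$DIR/localRunner/job_probe.xml"; then
  echo "write OK"
else
  echo "write denied - inspect ownership:" # same failure as in the Pig log
  ls -ld "$DIR" "$DIR/localRunner"
fi
```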

I have attached the Pig output below; hopefully the information will help.

12/04/16 19:31:28 INFO executionengine.HExecutionEngine: Connecting to hadoop file system at: hdfs://hadoop-namenode:9000 
12/04/16 19:31:29 INFO pigstats.ScriptState: Pig features used in the script: HASH_JOIN,GROUP_BY,FILTER,CROSS 
12/04/16 19:31:29 INFO jvm.JvmMetrics: Initializing JVM Metrics with processName=JobTracker, sessionId= 
12/04/16 19:31:29 INFO mapReduceLayer.MRCompiler: File concatenation threshold: 100 optimistic? false 
12/04/16 19:31:30 INFO mapReduceLayer.CombinerOptimizer: Choosing to move algebraic foreach to combiner 
12/04/16 19:31:30 INFO mapReduceLayer.MRCompiler$LastInputStreamingOptimizer: Rewrite: POPackage->POForEach to POJoinPackage 
12/04/16 19:31:30 INFO mapReduceLayer.MRCompiler$LastInputStreamingOptimizer: Rewrite: POPackage->POForEach to POJoinPackage 
12/04/16 19:31:30 INFO mapReduceLayer.MRCompiler$LastInputStreamingOptimizer: Rewrite: POPackage->POForEach to POJoinPackage 
12/04/16 19:31:30 INFO mapReduceLayer.MRCompiler$LastInputStreamingOptimizer: Rewrite: POPackage->POForEach to POJoinPackage 
12/04/16 19:31:30 INFO mapReduceLayer.MRCompiler$LastInputStreamingOptimizer: Rewrite: POPackage->POForEach to POJoinPackage 
12/04/16 19:31:30 INFO mapReduceLayer.MRCompiler$LastInputStreamingOptimizer: Rewrite: POPackage->POForEach to POJoinPackage 
12/04/16 19:31:30 INFO mapReduceLayer.MultiQueryOptimizer: MR plan size before optimization: 11 
12/04/16 19:31:30 INFO mapReduceLayer.MultiQueryOptimizer: Merged 0 out of total 3 MR operators. 
12/04/16 19:31:30 INFO mapReduceLayer.MultiQueryOptimizer: Merged 0 out of total 3 MR operators. 
12/04/16 19:31:30 INFO mapReduceLayer.MultiQueryOptimizer: Merged 0 map-reduce splittees. 
12/04/16 19:31:30 INFO mapReduceLayer.MultiQueryOptimizer: Merged 0 out of total 3 MR operators. 
12/04/16 19:31:30 INFO mapReduceLayer.MultiQueryOptimizer: Merged 0 out of total 2 MR operators. 
12/04/16 19:31:30 INFO mapReduceLayer.MultiQueryOptimizer: Merged 0 out of total 2 MR operators. 
12/04/16 19:31:30 INFO mapReduceLayer.MultiQueryOptimizer: Merged 1 map-reduce splittees. 
12/04/16 19:31:30 INFO mapReduceLayer.MultiQueryOptimizer: Merged 1 out of total 3 MR operators. 
12/04/16 19:31:30 INFO mapReduceLayer.MultiQueryOptimizer: MR plan size after optimization: 10 
12/04/16 19:31:30 INFO jvm.JvmMetrics: Cannot initialize JVM Metrics with processName=JobTracker, sessionId= - already initialized 
12/04/16 19:31:30 INFO jvm.JvmMetrics: Cannot initialize JVM Metrics with processName=JobTracker, sessionId= - already initialized 
12/04/16 19:31:30 INFO pigstats.ScriptState: Pig script settings are added to the job 
12/04/16 19:31:30 WARN pigstats.ScriptState: unable to read pigs manifest file 
12/04/16 19:31:30 INFO mapReduceLayer.JobControlCompiler: mapred.job.reduce.markreset.buffer.percent is not set, set to default 0.3 
12/04/16 19:31:35 INFO mapReduceLayer.JobControlCompiler: Setting up multi store job 
12/04/16 19:31:35 INFO mapReduceLayer.JobControlCompiler: BytesPerReducer=1000000000 maxReducers=999 totalInputFileSize=957600 
12/04/16 19:31:35 INFO mapReduceLayer.JobControlCompiler: Neither PARALLEL nor default parallelism is set for this job. Setting number of reducers to 1 
12/04/16 19:31:35 INFO jvm.JvmMetrics: Cannot initialize JVM Metrics with processName=JobTracker, sessionId= - already initialized 
12/04/16 19:31:35 INFO mapReduceLayer.MapReduceLauncher: 1 map-reduce job(s) waiting for submission. 
12/04/16 19:31:35 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same. 
12/04/16 19:31:35 INFO jvm.JvmMetrics: Cannot initialize JVM Metrics with processName=JobTracker, sessionId= - already initialized 
12/04/16 19:31:35 INFO jvm.JvmMetrics: Cannot initialize JVM Metrics with processName=JobTracker, sessionId= - already initialized 
12/04/16 19:31:35 INFO jvm.JvmMetrics: Cannot initialize JVM Metrics with processName=JobTracker, sessionId= - already initialized 
12/04/16 19:31:35 INFO input.FileInputFormat: Total input paths to process : 1 
12/04/16 19:31:35 INFO util.MapRedUtil: Total input paths to process : 1 
12/04/16 19:31:35 INFO util.MapRedUtil: Total input paths (combined) to process : 1 
12/04/16 19:31:36 INFO mapReduceLayer.MapReduceLauncher: 0% complete 
12/04/16 19:31:36 INFO mapReduceLayer.MapReduceLauncher: job null has failed! Stop running all dependent jobs 
12/04/16 19:31:36 INFO mapReduceLayer.MapReduceLauncher: 100% complete 
12/04/16 19:31:36 WARN mapReduceLayer.Launcher: There is no log file to write to. 
12/04/16 19:31:36 ERROR mapReduceLayer.Launcher: Backend error message during job submission 
java.io.FileNotFoundException: /opt/hdfs/tmp/mapred/local/localRunner/job_local_0001.xml (Permission denied) 
    at java.io.FileOutputStream.open(Native Method) 
    at java.io.FileOutputStream.<init>(FileOutputStream.java:194) 
    at org.apache.hadoop.fs.RawLocalFileSystem$LocalFSFileOutputStream.<init>(RawLocalFileSystem.java:180) 
    at org.apache.hadoop.fs.RawLocalFileSystem$LocalFSFileOutputStream.<init>(RawLocalFileSystem.java:176) 
    at org.apache.hadoop.fs.RawLocalFileSystem.create(RawLocalFileSystem.java:234) 
    at org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSOutputSummer.<init>(ChecksumFileSystem.java:335) 
    at org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:368) 
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:484) 
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:465) 
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:372) 
    at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:208) 
    at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:142) 
    at org.apache.hadoop.fs.FileSystem.copyToLocalFile(FileSystem.java:1216) 
    at org.apache.hadoop.fs.FileSystem.copyToLocalFile(FileSystem.java:1197) 
    at org.apache.hadoop.mapred.LocalJobRunner$Job.<init>(LocalJobRunner.java:92) 
    at org.apache.hadoop.mapred.LocalJobRunner.submitJob(LocalJobRunner.java:373) 
    at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:800) 
    at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:730) 
    at org.apache.hadoop.mapred.jobcontrol.Job.submit(Job.java:378) 
    at org.apache.hadoop.mapred.jobcontrol.JobControl.startReadyJobs(JobControl.java:247) 
    at org.apache.hadoop.mapred.jobcontrol.JobControl.run(JobControl.java:279) 
    at java.lang.Thread.run(Thread.java:662) 

12/04/16 19:31:36 ERROR pigstats.SimplePigStats: ERROR 2997: Unable to recreate exception from backend error: java.io.FileNotFoundException: /opt/hdfs/tmp/mapred/local/localRunner/job_local_0001.xml (Permission denied) 
12/04/16 19:31:36 ERROR pigstats.PigStatsUtil: 1 map reduce job(s) failed! 
12/04/16 19:31:36 WARN pigstats.ScriptState: unable to read pigs manifest file 
12/04/16 19:31:36 INFO pigstats.SimplePigStats: Script Statistics: 

HadoopVersion PigVersion UserId StartedAt FinishedAt Features 
0.20.2  charlie 2012-04-16 19:31:30 2012-04-16 19:31:36 HASH_JOIN,GROUP_BY,FILTER,CROSS 

Failed! 

Failed Jobs: 
JobId Alias Feature Message Outputs 
N/A events,events1,grouped MULTI_QUERY Message: java.io.FileNotFoundException: /opt/hdfs/tmp/mapred/local/localRunner/job_local_0001.xml (Permission denied) 
    at java.io.FileOutputStream.open(Native Method) 
    at java.io.FileOutputStream.<init>(FileOutputStream.java:194) 
    at org.apache.hadoop.fs.RawLocalFileSystem$LocalFSFileOutputStream.<init>(RawLocalFileSystem.java:180) 
    at org.apache.hadoop.fs.RawLocalFileSystem$LocalFSFileOutputStream.<init>(RawLocalFileSystem.java:176) 
    at org.apache.hadoop.fs.RawLocalFileSystem.create(RawLocalFileSystem.java:234) 
    at org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSOutputSummer.<init>(ChecksumFileSystem.java:335) 
    at org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:368) 
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:484) 
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:465) 
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:372) 
    at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:208) 
    at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:142) 
    at org.apache.hadoop.fs.FileSystem.copyToLocalFile(FileSystem.java:1216) 
    at org.apache.hadoop.fs.FileSystem.copyToLocalFile(FileSystem.java:1197) 
    at org.apache.hadoop.mapred.LocalJobRunner$Job.<init>(LocalJobRunner.java:92) 
    at org.apache.hadoop.mapred.LocalJobRunner.submitJob(LocalJobRunner.java:373) 
    at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:800) 
    at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:730) 
    at org.apache.hadoop.mapred.jobcontrol.Job.submit(Job.java:378) 
    at org.apache.hadoop.mapred.jobcontrol.JobControl.startReadyJobs(JobControl.java:247) 
    at org.apache.hadoop.mapred.jobcontrol.JobControl.run(JobControl.java:279) 
    at java.lang.Thread.run(Thread.java:662) 


Input(s): 
Failed to read data from "/grapevine/analysis/recommendation/input/article_based/all_grapevine_events.txt" 

Output(s): 

Counters: 
Total records written : 0 
Total bytes written : 0 
Spillable Memory Manager spill count : 0 
Total bags proactively spilled: 0 
Total records proactively spilled: 0 

Job DAG: 
null -> null,null, 
null -> null, 
null -> null, 
null -> null,null, 
null -> null, 
null -> null,null, 
null -> null, 
null -> null, 
null -> null, 
null 


12/04/16 19:31:36 INFO mapReduceLayer.MapReduceLauncher: Failed! 
+0

The file path may be on HDFS rather than your local file system. – root1982 2012-04-18 21:01:15

+0

Which user are the namenode/datanode/jobtracker/tasktracker services running as on the local machine? I don't think that is relevant here, since you are submitting the job locally and using the local file system. I think Pig is running as user charlie. What about creating the localRunner subdirectory? Change the owner of the folder recursively with chown -R youruser FOLDERNAME and it will not give the error. – 2012-04-19 23:54:17

+2

Change the owner. – Infinity 2012-05-15 13:41:56

Answer

0

The answer was already posted in the comments:

Change the owner of the folder recursively with chown -R youruser FOLDERNAME and it will not give the error.
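
A minimal demonstration of the failure mode and the fix, on a scratch tree standing in for /opt/hdfs/tmp (on the real cluster the equivalent step would be `chown -R charlie /opt/hdfs/tmp` run as root, since only root can change ownership):

```shell
# Scratch stand-in for /opt/hdfs/tmp/mapred/local
SCRATCH=$(mktemp -d)
mkdir -p "$SCRATCH/local/localRunner"
chmod 000 "$SCRATCH/local/localRunner"   # simulate a dir the user cannot write
                                         # (note: this does not block root)
touch "$SCRATCH/local/localRunner/job_local_0001.xml" 2>/dev/null \
  || echo "Permission denied, as in the Pig log"
chmod -R u+rwx "$SCRATCH/local"          # restore recursive write access
touch "$SCRATCH/local/localRunner/job_local_0001.xml" && echo "write succeeds"
```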