2017-03-06

I'm currently taking a big data class, and one of my projects is to run my Mapper/Reducer on a locally set-up Hadoop cluster. How do I run an MRJob on a local Hadoop cluster using Hadoop Streaming?

I've been using Python along with the MRJob library for the class.

Here is the Python code I'm currently using for my Mapper/Reducer:

from mrjob.job import MRJob
from mrjob.step import MRStep
import re
import os

WORD_RE = re.compile(r"[\w']+")

class MRPrepositionsFinder(MRJob):

    def steps(self):
        return [
            MRStep(mapper=self.mapper_get_words),
            MRStep(reducer=self.reducer_find_prep_word)
        ]

    def mapper_get_words(self, _, line):
        # build the indicator set: lowercase and strip each line of the file
        # (note: this reloads the file for every input line)
        word_list = set(l.lower().strip() for l in open("/hdfs/user/user/indicators.txt"))

        # path of the current input file, exposed by Hadoop Streaming
        fileName = os.environ['map_input_file']
        # iterate through each word in the line
        for word in WORD_RE.findall(line):
            # if the word is an indicator, yield a filename component as the key
            if word.lower() in word_list:
                choice = fileName.split('/')[5]
                yield (choice, 1)

    def reducer_find_prep_word(self, choice, counts):
        # each input is a key and an iterator of counts,
        # so yield (choice, total count)
        yield (choice, sum(counts))


if __name__ == '__main__':
    MRPrepositionsFinder.run()
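Aside from the cluster question, the mapper's core logic (tokenizing a line and counting matches against the indicator set) can be sanity-checked outside Hadoop with plain Python. The indicator words below are made up for illustration, standing in for indicators.txt:

```python
import re

WORD_RE = re.compile(r"[\w']+")

def count_indicator_hits(line, indicators):
    """Tokenize a line and count how many tokens appear in the indicator set."""
    return sum(1 for word in WORD_RE.findall(line) if word.lower() in indicators)

# Hypothetical indicator words for illustration only
indicators = {"against", "towards", "between"}
# e.g. count_indicator_hits("walked towards the door", indicators) -> 1
```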

When I try to run the code on my Hadoop cluster, I use the following command:

 
python hrc_discover.py /hdfs/user/user/HRCmail/* -r hadoop --hadoop-bin /usr/bin/hadoop > /hdfs/user/user/output 

Unfortunately, every time I run the command I get the following error:

 
No configs found; falling back on auto-configuration 
STDERR: Error: JAVA_HOME is not set and could not be found. 
Traceback (most recent call last): 
    File "hrc_discover.py", line 37, in 
    MRPrepositionsFinder.run() 
    File "/usr/lib/python3.5/site-packages/mrjob-0.6.0.dev0-py3.5.egg/mrjob/job.py", line 432, in run 
    mr_job.execute() 
    File "/usr/lib/python3.5/site-packages/mrjob-0.6.0.dev0-py3.5.egg/mrjob/job.py", line 453, in execute 
    super(MRJob, self).execute() 
    File "/usr/lib/python3.5/site-packages/mrjob-0.6.0.dev0-py3.5.egg/mrjob/launch.py", line 161, in execute 
    self.run_job() 
    File "/usr/lib/python3.5/site-packages/mrjob-0.6.0.dev0-py3.5.egg/mrjob/launch.py", line 231, in run_job 
    runner.run() 
    File "/usr/lib/python3.5/site-packages/mrjob-0.6.0.dev0-py3.5.egg/mrjob/runner.py", line 437, in run 
    self._run() 
    File "/usr/lib/python3.5/site-packages/mrjob-0.6.0.dev0-py3.5.egg/mrjob/hadoop.py", line 346, in _run 
    self._find_binaries_and_jars() 
    File "/usr/lib/python3.5/site-packages/mrjob-0.6.0.dev0-py3.5.egg/mrjob/hadoop.py", line 361, in _find_binaries_and_jars 
    self.get_hadoop_version() 
    File "/usr/lib/python3.5/site-packages/mrjob-0.6.0.dev0-py3.5.egg/mrjob/hadoop.py", line 198, in get_hadoop_version 
    return self.fs.get_hadoop_version() 
    File "/usr/lib/python3.5/site-packages/mrjob-0.6.0.dev0-py3.5.egg/mrjob/fs/hadoop.py", line 117, in get_hadoop_version 
    stdout = self.invoke_hadoop(['version'], return_stdout=True) 
    File "/usr/lib/python3.5/site-packages/mrjob-0.6.0.dev0-py3.5.egg/mrjob/fs/hadoop.py", line 172, in invoke_hadoop 
    raise CalledProcessError(proc.returncode, args) 
subprocess.CalledProcessError: Command '['/usr/bin/hadoop', 'version']' returned non-zero exit status 1 

I've looked around the internet and found that I need to export my JAVA_HOME variable - but I don't want to set anything that might break my setup.

Any help would be appreciated, thanks!

Answer

The problem turned out to be in the etc/hadoop/hadoop-env.sh script.

The JAVA_HOME environment variable was configured as:

export JAVA_HOME=$(JAVA_HOME) 

So I went ahead and changed it to the following:

export JAVA_HOME=/usr/lib/jvm/java-8-openjdk 
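For anyone hitting the same error: `$(JAVA_HOME)` is shell command substitution, so it tries to execute `JAVA_HOME` as a command and leaves the variable empty. The change in etc/hadoop/hadoop-env.sh looks like this (the JDK path is distro-specific, so adjust it to wherever your JDK actually lives):

```shell
# Before: $(JAVA_HOME) is command substitution, which fails and
# leaves JAVA_HOME unset when Hadoop starts
#export JAVA_HOME=$(JAVA_HOME)

# After: point at the actual JDK install (path varies by distribution)
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk
```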

I tried running the command again, hoping it would work:

python hrc_discover.py /hdfs/user/user/HRCmail/* -r hadoop --hadoop-bin /usr/bin/hadoop > /hdfs/user/user/output 

Thankfully, MRJob picked up the JAVA_HOME environment variable, resulting in the following output:

No configs found; falling back on auto-configuration 
Using Hadoop version 2.7.3 
Looking for Hadoop streaming jar in /home/hadoop/contrib... 
Looking for Hadoop streaming jar in /usr/lib/hadoop-mapreduce... 
Hadoop streaming jar not found. Use --hadoop-streaming-jar 
Creating temp directory /tmp/hrc_discover.user.20170306.022649.449218 
Copying local files to hdfs:///user/user/tmp/mrjob/hrc_discover.user.20170306.022649.449218/files/... 
.. 

To fix the issue with the Hadoop streaming jar, I added the following switch to the command:

--hadoop-streaming-jar /usr/lib/hadoop/share/hadoop/tools/lib/hadoop-streaming-2.7.3.jar 
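The streaming jar's location varies between Hadoop distributions and versions, so the hard-coded path above may not transfer to other setups. A small sketch of locating it programmatically, assuming the standard share/hadoop/tools/lib layout (the helper name is mine):

```python
import glob
import os

def find_streaming_jar(hadoop_home):
    """Return the first hadoop-streaming jar under a Hadoop install, or None."""
    pattern = os.path.join(hadoop_home, "share", "hadoop", "tools", "lib",
                           "hadoop-streaming-*.jar")
    matches = sorted(glob.glob(pattern))
    return matches[0] if matches else None
```

With /usr/lib/hadoop as hadoop_home this would return the 2.7.3 jar used in the command above; from the shell, `find / -name 'hadoop-streaming*.jar'` does the same job.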

The full command looks like this:

python hrc_discover.py /hdfs/user/user/HRCmail/* -r hadoop --hadoop-streaming-jar /usr/lib/hadoop/share/hadoop/tools/lib/hadoop-streaming-2.7.3.jar --hadoop-bin /usr/bin/hadoop > /hdfs/user/user/output 

It produced the following output:

No configs found; falling back on auto-configuration 
Using Hadoop version 2.7.3 
Creating temp directory /tmp/hrc_discover.user.20170306.022649.449218 
Copying local files to hdfs:///user/user/tmp/mrjob/hrc_discover.user.20170306.022649.449218/files/... 

It seems the problem has been resolved and Hadoop should now process my job.