Pattern matching input files for Amazon Elastic MapReduce
I want to run a MapReduce streaming job that takes its input files from directories in an S3 bucket matching a given pattern. The pattern is like bucket-name/[date]/product/logs/[hour]/[logfilename]. An example log would be found at bucket-name/2013-05-02/product/logs/05/log123456789.
I can get this to work by passing only the hour part of the path as a wildcard, for example: bucket-name/2013-05-02/product/logs/*/. This successfully picks up every log file from every hour and passes each of them to a mapper individually.
The problem arises when I try to make the date a wildcard as well, for example: bucket-name/*/product/logs/*/. When I do this, the job is created, but no tasks are created and it eventually fails. The following error is printed in the syslog:
2013-05-02 08:03:41,549 ERROR org.apache.hadoop.streaming.StreamJob (main): Job not successful. Error: Job initialization failed:
java.lang.OutOfMemoryError: Java heap space
at java.util.regex.Matcher.<init>(Matcher.java:207)
at java.util.regex.Pattern.matcher(Pattern.java:888)
at org.apache.hadoop.conf.Configuration.substituteVars(Configuration.java:378)
at org.apache.hadoop.conf.Configuration.get(Configuration.java:418)
at org.apache.hadoop.conf.Configuration.getLong(Configuration.java:523)
at org.apache.hadoop.mapred.SkipBadRecords.getMapperMaxSkipRecords(SkipBadRecords.java:247)
at org.apache.hadoop.mapred.TaskInProgress.<init>(TaskInProgress.java:146)
at org.apache.hadoop.mapred.JobInProgress.initTasks(JobInProgress.java:722)
at org.apache.hadoop.mapred.JobTracker.initJob(JobTracker.java:4238)
at org.apache.hadoop.mapred.EagerTaskInitializationListener$InitJob.run(EagerTaskInitializationListener.java:79)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:662)
2013-05-02 08:03:41,549 INFO org.apache.hadoop.streaming.StreamJob (main): killJob...
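For illustration, the two glob forms above differ only in where the wildcards sit. A minimal sketch of the intended matching using Python's fnmatch (purely to demonstrate which keys each pattern is meant to select — this is not Hadoop's actual glob implementation, and the keys are made-up examples):

```python
from fnmatch import fnmatch

# Hypothetical S3 keys following the bucket-name/[date]/product/logs/[hour]/[logfilename] layout.
keys = [
    "bucket-name/2013-05-02/product/logs/05/log123456789",
    "bucket-name/2013-05-02/product/logs/06/log987654321",
    "bucket-name/2013-05-03/product/logs/05/log555555555",
]

# Hour-only wildcard: selects every hour within a single date.
hour_glob = "bucket-name/2013-05-02/product/logs/*/*"
print([k for k in keys if fnmatch(k, hour_glob)])  # the two 2013-05-02 keys

# Date and hour both wildcarded: the form that makes the job fail.
full_glob = "bucket-name/*/product/logs/*/*"
print([k for k in keys if fnmatch(k, full_glob)])  # all three keys
```

Both patterns select exactly the files I expect when expanded by hand, so the failure appears to be in how the job handles the extra wildcard, not in the pattern itself.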
Yes, I'm using 1.8.7-p370 via rbenv, and I've had no luck with 1.9.x either – 2013-05-02 21:09:43