What do you make of the answers this site gives to the questions below, on Hadoop combiners, reducers, and the ecosystem?
That is, are the given answers right or wrong?
Question 4:
In the standard word count MapReduce algorithm, why might using a combiner reduce the overall job running time?
A. Because combiners perform local aggregation of word counts, thereby allowing the mappers to process input data faster.
B. Because combiners perform local aggregation of word counts, thereby reducing the number of mappers that need to run.
C. Because combiners perform local aggregation of word counts, and then transfer that data to reducers without writing the intermediate data to disk.
D. Because combiners perform local aggregation of word counts, thereby reducing the number of key-value pairs that need to be shuffled across the network to the reducers.
Answer: A
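For reference, here is a minimal sketch of the standard word count driver using Hadoop's `mapreduce` API. The `TokenizerMapper` and `IntSumReducer` classes are the usual ones from the Hadoop word count tutorial and are assumed to be defined elsewhere; the comment marks what the combiner actually changes:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {
    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);   // emits (word, 1) pairs
        // The combiner pre-sums counts on each mapper's local output, so far
        // fewer key-value pairs have to be shuffled across the network to the
        // reducers -- the effect described in option D, not option A.
        job.setCombinerClass(IntSumReducer.class);
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```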
And the second question:
Question 3:
What happens in a MapReduce job when you set the number of reducers to one?
A. A single reducer gathers and processes all the output from all the mappers. The output is written in as many separate files as there are mappers.
B. A single reducer gathers and processes all the output from all the mappers. The output is written to a single file in HDFS.
C. Setting the number of reducers to one creates a processing bottleneck, and since the number of reducers as specified by the programmer is used as a reference value only, the MapReduce runtime provides a default setting for the number of reducers.
D. Setting the number of reducers to one is invalid, and an exception is thrown.
Answer: A
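Continuing the driver sketch above, forcing a single reducer is one line, and the comment notes what it means for the output layout:

```java
// All map output is shuffled to a single reduce task, which writes one
// output file (part-r-00000) into the job's output directory in HDFS --
// one file total, not one file per mapper.
job.setNumReduceTasks(1);
```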
From my understanding, the answers to the above questions should be:
Question 4: D
Question 3: B
UPDATE
You have user profile records in your OLTP database that you want to join with weblogs you have already ingested into HDFS. How will you obtain these user records?
Options:
A. HDFS commands
B. Pig load
C. Sqoop import
D. Hive
Answer: B
As for the updated question, I am doubtful between B and C.
EDIT
Correct answer: Sqoop.
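For anyone landing here, a typical Sqoop invocation for pulling the profile table into HDFS might look like the sketch below; the JDBC URL, credentials, table name, and target directory are all hypothetical placeholders:

```sh
# Hypothetical connection details and table name; adjust to your database.
# -P prompts interactively for the password instead of putting it on the
# command line.
sqoop import \
  --connect jdbc:mysql://dbhost:3306/crm \
  --username dbuser -P \
  --table user_profiles \
  --target-dir /data/user_profiles
```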
+1 for pointing this out, for anyone thinking of investing there... – vefthym 2014-09-29 11:53:36
Please see the update – 2014-09-29 11:58:55