
As I understand Sqoop, it launches several mappers on different data nodes, each making a JDBC connection to the RDBMS. Once a connection is established, the data is transferred to HDFS. Does Sqoop spill temporary data to disk along the way?

Just trying to understand: do Sqoop mappers spill data temporarily to disk (on the data nodes)? I know spilling happens in MapReduce, but I'm not sure about Sqoop jobs.

Answer


It seems that a sqoop import runs as a map-only job and does not spill, while a sqoop merge runs as a full map-reduce job and does spill. You can watch this on the Job Tracker while the sqoop job is running.

Take a look at this portion of a sqoop import log: there is no spilling while it fetches rows and writes them to HDFS:

INFO [main] ... mapreduce.db.DataDrivenDBRecordReader: Using query: SELECT... 
[main] mapreduce.db.DBRecordReader: Executing query: SELECT... 
INFO [main] org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter: File Output Committer Algorithm version is 1 
INFO [main] org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter: FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false 
INFO [main] org.apache.hadoop.io.compress.CodecPool: Got brand-new compressor [.snappy] 
INFO [Thread-16] ...mapreduce.AutoProgressMapper: Auto-progress thread is finished. keepGoing=false 
INFO [main] org.apache.hadoop.mapred.Task: Task:attempt_1489705733959_2462784_m_000000_0 is done. And is in the process of committing 
INFO [main] org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter: Saved output of task 'attempt_1489705733959_2462784_m_000000_0' to hdfs:// 
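For reference, a map-only import like the one logged above is typically produced by a plain sqoop import. The following is only a hypothetical sketch: the JDBC URL, credentials, table name, and target directory are made-up placeholders, not the actual command behind this log:

    # Hypothetical map-only import; all connection details and paths are placeholders.
    # Snappy output compression matches the codec shown in the log above.
    sqoop import \
        --connect jdbc:mysql://dbhost/salesdb \
        --username etl_user -P \
        --table orders \
        --target-dir /user/etl/orders \
        --num-mappers 4 \
        --compress \
        --compression-codec org.apache.hadoop.io.compress.SnappyCodec

Each mapper issues its own bounded SELECT against the source table (the DataDrivenDBRecordReader lines above) and streams the rows straight into its HDFS output file, which is why a map-only job has no sort/spill phase.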

Now take a look at this sqoop merge log (some lines skipped): it does spill (note the "Spilling map output" lines):

INFO [main] org.apache.hadoop.mapred.MapTask: Processing split: hdfs://bla-bla/part-m-00000:0+48322717 
    ... 
    INFO [main] org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter: FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false 
    ... 
    INFO [main] org.apache.hadoop.mapred.MapTask: mapreduce.task.io.sort.mb: 1024 
    INFO [main] org.apache.hadoop.mapred.MapTask: soft limit at 751619264 
    INFO [main] org.apache.hadoop.mapred.MapTask: bufstart = 0; bufvoid = 1073741824 
    INFO [main] org.apache.hadoop.mapred.MapTask: kvstart = 268435452; length = 67108864 
    INFO [main] org.apache.hadoop.mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
    INFO [main] com.pepperdata.supervisor.agent.resource.r: Datanode bla-bla is LOCAL. 
    INFO [main] org.apache.hadoop.io.compress.CodecPool: Got brand-new decompressor [.snappy] 
    ... 
    INFO [main] org.apache.hadoop.mapred.MapTask: Starting flush of map output 
    INFO [main] org.apache.hadoop.mapred.MapTask: Spilling map output 
    INFO [main] org.apache.hadoop.mapred.MapTask: bufstart = 0; bufend = 184775274; bufvoid = 1073741824 
    INFO [main] org.apache.hadoop.mapred.MapTask: kvstart = 268435452(1073741808); kvend = 267347800(1069391200); length = 1087653/67108864 
    INFO [main] org.apache.hadoop.io.compress.CodecPool: Got brand-new compressor [.snappy] 
    [main] org.apache.hadoop.mapred.MapTask: Finished spill 0 
    ...Task:attempt_1489705733959_2479291_m_000000_0 is done. And is in the process of committing
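The spilling shown above is the kind of job that sqoop merge launches: it runs a full map-reduce pass to reconcile a newer import with an older one. A hypothetical invocation (the paths, generated record class, and merge key are placeholders):

    # Hypothetical merge of an incremental import onto a base dataset by key.
    # Paths, jar/class names, and the key column are placeholders.
    sqoop merge \
        --new-data /user/etl/orders_incremental \
        --onto /user/etl/orders_base \
        --target-dir /user/etl/orders_merged \
        --jar-file orders.jar \
        --class-name orders \
        --merge-key id

Because the merge has to group records by the merge key across both inputs, the map output is buffered in MapOutputBuffer and spilled to local disk on the data node when the buffer fills, which is exactly what the "Spilling map output" lines show.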