
I successfully uploaded data from MySQL to HDFS using the sqoop command. Running namenode -format deleted the HDFS files.
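The import looked roughly like this (a Sqoop 1 sketch; the MySQL host, database, credentials, table name, and target directory below are placeholders, not the real ones):

    # Hypothetical Sqoop import of a MySQL table into HDFS (Sqoop 1 syntax).
    # All connection details are placeholders.
    sqoop import \
        --connect jdbc:mysql://mysql-host/mydb \
        --username myuser --password mypass \
        --table mytable \
        --target-dir /user/hadoop/mytable \
        -m 1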

The running Hadoop cluster that held the MySQL data has

1 node as NameNode, 1 node as Secondary NameNode, 1 node as JobTracker, and 3 nodes as DataNode + TaskTracker.

After that, I stopped the Hadoop cluster

and then started Hadoop again

using the following commands (a shell sketch of these steps follows the list):

namenode -format (start NameNode) 

    place new VERSION number in all datanode VERSION FILE 

    now START DATANODE 
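Roughly, those three steps correspond to the following commands on a Hadoop 1.x cluster (NEW_NAMESPACE_ID is a placeholder; the DataNode data directory /app/hadoop/data/dn is taken from the log below):

    # 1. On the NameNode: reformat the namespace (this assigns a brand-new
    #    namespaceID and empties the filesystem metadata), then start it.
    hadoop namenode -format
    hadoop-daemon.sh start namenode

    # 2. On each DataNode: copy the new namespaceID from the NameNode's
    #    ${dfs.name.dir}/current/VERSION into the DataNode's VERSION file.
    sed -i 's/^namespaceID=.*/namespaceID=NEW_NAMESPACE_ID/' \
        /app/hadoop/data/dn/current/VERSION

    # 3. Start the DataNode daemon.
    hadoop-daemon.sh start datanode

Because the reformatted NameNode has no record of the old blocks, the DataNodes are told to delete them once they register, which is what the "Deleted blk_..." lines in the log below show.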

But once the DataNodes came up, the MySQL data I had uploaded to HDFS appeared to be lost.

Below is the output from the DataNode log.

2014-05-15 07:46:56,018 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Deleted blk_2445513848423894029_1337 at file /app/hadoop/data/dn/current/blk_2445513848423894029 
2014-05-15 07:46:56,018 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Deleted blk_3541234094053021888_1338 at file /app/hadoop/data/dn/current/blk_3541234094053021888 
2014-05-15 07:46:56,018 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Deleted blk_3862391472172526583_1347 at file /app/hadoop/data/dn/current/blk_3862391472172526583 
2014-05-15 07:46:56,018 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Deleted blk_4001223662527683746_1387 at file /app/hadoop/data/dn/current/blk_4001223662527683746 
2014-05-15 07:46:56,018 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Deleted blk_4143551839757190038_1410 at file /app/hadoop/data/dn/current/subdir14/blk_4143551839757190038 
2014-05-15 07:46:56,019 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Deleted blk_5292612097544035620_1384 at file /app/hadoop/data/dn/current/blk_5292612097544035620 
2014-05-15 07:46:56,019 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Deleted blk_5318982235915332439_1333 at file /app/hadoop/data/dn/current/blk_5318982235915332439 
2014-05-15 07:46:56,019 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Deleted blk_5806860765395122737_1388 at file /app/hadoop/data/dn/current/blk_5806860765395122737 
2014-05-15 07:46:56,019 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Deleted blk_6490571696460682483_1302 at file /app/hadoop/data/dn/current/blk_6490571696460682483 
2014-05-15 07:46:56,020 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Deleted blk_7721528058087862562_1336 at file /app/hadoop/data/dn/current/blk_7721528058087862562 
2014-05-15 07:46:56,020 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Deleted blk_7734832800955956873_1375 at file /app/hadoop/data/dn/current/blk_7734832800955956873 
2014-05-15 07:46:56,020 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Deleted blk_8691928504867292802_1297 at file /app/hadoop/data/dn/current/blk_8691928504867292802 
2014-05-15 07:46:56,020 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Deleted blk_8861743153245195509_1303 at file /app/hadoop/data/dn/current/blk_8861743153245195509 
2014-05-15 07:46:56,021 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Deleted blk_8921828525927242630_1300 at file /app/hadoop/data/dn/current/blk_8921828525927242630 
2014-05-15 07:46:56,021 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Deleted blk_8938258584084219299_1344 at file 

Please state a **specific question** so users know exactly what help you are looking for. – Daniel


What exactly do you want... please add more information about your error. –


What exactly is the third step, "place new VERSION number in all datanode VERSION FILE"? – vefthym

Answer


My problem is as follows:

When I put the NameNode's new version (namespaceID) into all of the DataNodes and then started the DataNodes, all of the data uploaded from the RDBMS was deleted, with log entries like: "Deleted blk_6490571696460682483_1302 at file /app/hadoop/data/dn/current/blk_6490571696460682483".

My question is: once everything has been formatted with the hadoop namenode -format command, is there any way to recover the data?


Please edit your question to provide more information, and do not use this section for comments. – Tariq


Add more information to the original question. – chintoo02018