I successfully uploaded data from MySQL to HDFS with a Sqoop command, but after I ran namenode -format, the MySQL data in HDFS was deleted.

My running Hadoop cluster has:

1 NameNode node, 1 Secondary NameNode node, 1 JobTracker node, and 3 DataNode + TaskTracker nodes.

Afterwards I stopped the Hadoop cluster, and then started Hadoop again
using the following steps:
使用以下命令
1. namenode -format (then start the NameNode)
2. place the new VERSION number in every DataNode's VERSION file
3. now start the DataNodes
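For anyone asking what step 2 means in practice: after a namenode -format, the NameNode gets a fresh namespaceID, and DataNodes refuse to join until their VERSION files carry the same ID. Below is a minimal sketch of that edit. The paths and the namespaceID value here are assumptions for illustration (a mock VERSION file under /tmp, not a real cluster directory); on a real cluster you would read the ID from the NameNode's current/VERSION and edit each DataNode's data directory (e.g. /app/hadoop/data/dn/current/VERSION in this question).

```shell
# Create a mock DataNode VERSION file to demonstrate the edit.
# (Real path would be something like /app/hadoop/data/dn/current/VERSION.)
mkdir -p /tmp/demo_dn/current
cat > /tmp/demo_dn/current/VERSION <<'EOF'
#Thu May 15 07:00:00 UTC 2014
namespaceID=123456789
storageID=DS-1-127.0.0.1-50010
cTime=0
storageType=DATA_NODE
layoutVersion=-32
EOF

# The new namespaceID would come from the NameNode's own current/VERSION
# after the format; this value is made up for the demo.
NEW_NSID=987654321

# Replace the namespaceID line in place.
sed -i "s/^namespaceID=.*/namespaceID=${NEW_NSID}/" /tmp/demo_dn/current/VERSION

grep '^namespaceID=' /tmp/demo_dn/current/VERSION
```

Note that syncing the namespaceID only lets the DataNodes register with the freshly formatted NameNode; it does not bring back the old file system metadata, which is why the blocks get deleted.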
When the DataNodes started up in HDFS, the MySQL data I had uploaded appeared to be lost.
Below is the output from the DataNode log:
2014-05-15 07:46:56,018 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Deleted blk_2445513848423894029_1337 at file /app/hadoop/data/dn/current/blk_2445513848423894029
2014-05-15 07:46:56,018 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Deleted blk_3541234094053021888_1338 at file /app/hadoop/data/dn/current/blk_3541234094053021888
2014-05-15 07:46:56,018 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Deleted blk_3862391472172526583_1347 at file /app/hadoop/data/dn/current/blk_3862391472172526583
2014-05-15 07:46:56,018 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Deleted blk_4001223662527683746_1387 at file /app/hadoop/data/dn/current/blk_4001223662527683746
2014-05-15 07:46:56,018 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Deleted blk_4143551839757190038_1410 at file /app/hadoop/data/dn/current/subdir14/blk_4143551839757190038
2014-05-15 07:46:56,019 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Deleted blk_5292612097544035620_1384 at file /app/hadoop/data/dn/current/blk_5292612097544035620
2014-05-15 07:46:56,019 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Deleted blk_5318982235915332439_1333 at file /app/hadoop/data/dn/current/blk_5318982235915332439
2014-05-15 07:46:56,019 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Deleted blk_5806860765395122737_1388 at file /app/hadoop/data/dn/current/blk_5806860765395122737
2014-05-15 07:46:56,019 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Deleted blk_6490571696460682483_1302 at file /app/hadoop/data/dn/current/blk_6490571696460682483
2014-05-15 07:46:56,020 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Deleted blk_7721528058087862562_1336 at file /app/hadoop/data/dn/current/blk_7721528058087862562
2014-05-15 07:46:56,020 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Deleted blk_7734832800955956873_1375 at file /app/hadoop/data/dn/current/blk_7734832800955956873
2014-05-15 07:46:56,020 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Deleted blk_8691928504867292802_1297 at file /app/hadoop/data/dn/current/blk_8691928504867292802
2014-05-15 07:46:56,020 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Deleted blk_8861743153245195509_1303 at file /app/hadoop/data/dn/current/blk_8861743153245195509
2014-05-15 07:46:56,021 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Deleted blk_8921828525927242630_1300 at file /app/hadoop/data/dn/current/blk_8921828525927242630
2014-05-15 07:46:56,021 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Deleted blk_8938258584084219299_1344 at file
Please point out a **specific question**, so people know exactly what help you want. – Daniel
What do you want... please add more information about your error –
What exactly is step 3, "place new VERSION number in all datanode VERSION FILE"? – vefthym