2014-09-29 70 views
0

I am running CentOS 6 with Cloudera CDH 4.7. When I try to browse the file system from a browser via the NameNode web UI at http://xxx.xxx.xxx:50070, I get the following error (Cloudera Hadoop 500 error):

HTTP ERROR 500 
Problem accessing /nn_browsedfscontent.jsp. Reason: 
    Cannot issue delegation token. Name node is in safe mode. 
Resources are low on NN. Please add or free up more resources then turn off safe mode manually. NOTE: If you turn off safe mode before adding resources, the NN will immediately return to safe mode.. 
Caused by: 
org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot issue delegation token. Name node is in safe mode. 
Resources are low on NN. Please add or free up more resources then turn off safe mode manually. NOTE: If you turn off safe mode before adding resources, the NN will immediately return to safe mode.. 
     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getDelegationToken(FSNamesystem.java:5450) 
     at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getDelegationToken(NameNodeRpcServer.java:392) 
     at org.apache.hadoop.hdfs.server.namenode.NamenodeJspHelper$1.run(NamenodeJspHelper.java:435) 
     at org.apache.hadoop.hdfs.server.namenode.NamenodeJspHelper$1.run(NamenodeJspHelper.java:432) 
     at java.security.AccessController.doPrivileged(Native Method) 
     at javax.security.auth.Subject.doAs(Subject.java:416) 
     at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1438) 
     at org.apache.hadoop.hdfs.server.namenode.NamenodeJspHelper.getDelegationToken(NamenodeJspHelper.java:431) 
     at org.apache.hadoop.hdfs.server.namenode.NamenodeJspHelper.redirectToRandomDataNode(NamenodeJspHelper.java:462) 
     at org.apache.hadoop.hdfs.server.namenode.nn_005fbrowsedfscontent_jsp._jspService(nn_005fbrowsedfscontent_jsp.java:70) 
     at org.apache.jasper.runtime.HttpJspBase.service(HttpJspBase.java:98) 
     at javax.servlet.http.HttpServlet.service(HttpServlet.java:820) 
     at org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:511) 
     at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1221) 
     at org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter.doFilter(StaticUserWebFilter.java:109) 
     at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212) 
     at org.apache.hadoop.http.HttpServer$QuotingInputFilter.doFilter(HttpServer.java:1069) 
     at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212) 
     at org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45) 
     at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212) 
     at org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45) 
     at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212) 
     at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:399) 
     at org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216) 
     at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182) 
     at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:766) 
     at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:450) 
     at org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:230) 
     at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152) 
     at org.mortbay.jetty.Server.handle(Server.java:326) 
     at org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:542) 
     at org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(HttpConnection.java:928) 
     at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:549) 
     at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:212) 
     at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404) 
     at org.mortbay.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java:410) 
     at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) 

I tried to leave safe mode with the command `sudo -u hdfs dfsadmin -safemode leave`, but it had no effect.

Please help me get past this obstacle.

+1

Check whether your NameNode has run out of main memory. – Shekhar 2014-09-29 05:41:38

+0

Try restarting your namenode and the other services. Then, if possible, format the namenode and try again. – 2014-09-29 05:55:27

Answers

0

Your NameNode is in safe mode.

You need to leave it:

hadoop dfsadmin -safemode leave 

Here is an explanation.
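A minimal sketch of the check-then-leave sequence, assuming you run it on the NameNode host with the `hdfs` client on the PATH (on CDH4 the deprecated `hadoop dfsadmin` form also works):

```shell
# Run on the NameNode host; requires a running cluster.
if command -v hdfs >/dev/null 2>&1; then
    hdfs dfsadmin -safemode get                 # prints "Safe mode is ON" or "... OFF"
    sudo -u hdfs hdfs dfsadmin -safemode leave  # ask the NameNode to exit safe mode
    state=$(hdfs dfsadmin -safemode get)        # confirm it now reports OFF
else
    state="hdfs client not found; run this on a cluster node"
fi
echo "$state"
```

Note that if the NameNode entered safe mode because its resources are low (as the stack trace above says), it will re-enter safe mode unless you also free up space first.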

+0

I performed the following steps: 1) restarted the datanode, namenode, and secondary namenode; 2) formatted the namenode; 3) started HDFS with `for x in \`cd /etc/init.d ; ls hadoop-hdfs-*\` ; do sudo service $x start ; done`. Then I ran `sudo -u hdfs hadoop fs -mkdir /tmp` and got the following error: "mkdir: Cannot create directory /tmp. Name node is in safe mode". – user3292373 2014-10-01 03:41:07

+0

Just run `hadoop dfsadmin -safemode leave`. There is no need to restart anything. – 2014-10-01 05:15:29

0

Make sure your storage is not full. Delete some files to free up space; otherwise the NameNode will go straight back into safe mode.
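The "Resources are low on NN" message is triggered when free space on a volume holding NameNode metadata drops below the reserved threshold (`dfs.namenode.resource.du.reserved`, 100 MB by default). A rough sketch of the check; `NN_DIR` is a placeholder you should point at your actual `dfs.name.dir` volume:

```shell
# NN_DIR is a hypothetical path; set it to the volume from dfs.name.dir
# in your hdfs-site.xml. Defaults to the current directory for illustration.
NN_DIR="${NN_DIR:-.}"
RESERVED_KB=102400   # 100 MB, the default dfs.namenode.resource.du.reserved

# POSIX df: column 4 of the second line is the available space in KB.
avail_kb=$(df -Pk "$NN_DIR" | awk 'NR==2 {print $4}')

if [ "$avail_kb" -lt "$RESERVED_KB" ]; then
    echo "LOW: only ${avail_kb} KB free on $NN_DIR; free space before leaving safe mode"
else
    echo "OK: ${avail_kb} KB free on $NN_DIR"
fi
```

Only once the volume is back above the threshold is it safe to run `-safemode leave`, which matches the note in the error message itself.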