Updated on: 2023-04-27 GMT+08:00

Why Are There Two Standby NameNodes After the Active NameNode Is Restarted?

Question

Why are there two standby NameNodes after the active NameNode is restarted?

When this problem occurs, check the ZooKeeper and ZKFC logs. You will find that the sessions used for communication between the ZooKeeper server and the client (ZKFC) are inconsistent: the session ID on the ZooKeeper server is 0x164cb2b3e4b36ae4, whereas the session ID on ZKFC is 0x144cb2b3e4b36ae4. This means that data interaction between the ZooKeeper server and the client (ZKFC) has failed.

The ZooKeeper log is as follows:

2015-04-15 21:24:54,257 | INFO | CommitProcessor:22 | Established session 0x164cb2b3e4b36ae4 with negotiated timeout 45000 for client /192.168.0.117:44586 | org.apache.zookeeper.server.ZooKeeperServer.finishSessionInit(ZooKeeperServer.java:623)
2015-04-15 21:24:54,261 | INFO | NIOServerCxn.Factory:192-168-0-114/192.168.0.114:2181 | Successfully authenticated client: authenticationID=hdfs/hadoop@<System domain name>; authorizationID=hdfs/hadoop@<System domain name>. | org.apache.zookeeper.server.auth.SaslServerCallbackHandler.handleAuthorizeCallback(SaslServerCallbackHandler.java:118)
2015-04-15 21:24:54,261 | INFO | NIOServerCxn.Factory:192-168-0-114/192.168.0.114:2181 | Setting authorizedID: hdfs/hadoop@<System domain name> | org.apache.zookeeper.server.auth.SaslServerCallbackHandler.handleAuthorizeCallback(SaslServerCallbackHandler.java:134)
2015-04-15 21:24:54,261 | INFO | NIOServerCxn.Factory:192-168-0-114/192.168.0.114:2181 | adding SASL authorization for authorizationID: hdfs/hadoop@<System domain name> | org.apache.zookeeper.server.ZooKeeperServer.processSasl(ZooKeeperServer.java:1009)
2015-04-15 21:24:54,262 | INFO | ProcessThread(sid:22 cport:-1): | Got user-level KeeperException when processing sessionid:0x164cb2b3e4b36ae4 type:create cxid:0x3 zxid:0x20009fafc txntype:-1 reqpath:n/a Error Path:/hadoop-ha/hacluster/ActiveStandbyElectorLock Error:KeeperErrorCode = NodeExists for /hadoop-ha/hacluster/ActiveStandbyElectorLock | org.apache.zookeeper.server.PrepRequestProcessor.pRequest(PrepRequestProcessor.java:648)

The ZKFC log is as follows:

2015-04-15 21:24:54,237 | INFO | main-SendThread(192-168-0-114:2181) | Socket connection established to 192-168-0-114/192.168.0.114:2181, initiating session | org.apache.zookeeper.ClientCnxn$SendThread.primeConnection(ClientCnxn.java:854)
2015-04-15 21:24:54,257 | INFO | main-SendThread(192-168-0-114:2181) | Session establishment complete on server 192-168-0-114/192.168.0.114:2181, sessionid = 0x144cb2b3e4b36ae4 , negotiated timeout = 45000 | org.apache.zookeeper.ClientCnxn$SendThread.onConnected(ClientCnxn.java:1259)
2015-04-15 21:24:54,260 | INFO | main-EventThread | EventThread shut down | org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:512)
2015-04-15 21:24:54,262 | INFO | main-EventThread | Session connected. | org.apache.hadoop.ha.ActiveStandbyElector.processWatchEvent(ActiveStandbyElector.java:547)
2015-04-15 21:24:54,264 | INFO | main-EventThread | Successfully authenticated to ZooKeeper using SASL. | org.apache.hadoop.ha.ActiveStandbyElector.processWatchEvent(ActiveStandbyElector.java:573)

Answer

  • Cause Analysis

    After the active NameNode restarts, the ephemeral node (/hadoop-ha/hacluster/ActiveStandbyElectorLock) it previously created on ZooKeeper is deleted. The standby NameNode detects this and competes to become active, so it creates a new active node /hadoop-ha/hacluster/ActiveStandbyElectorLock on ZooKeeper. However, when the standby NameNode connects to ZooKeeper through its client (ZKFC), the session on the client (ZKFC) side (0x144cb2b3e4b36ae4) becomes inconsistent with the session on the ZooKeeper server (0x164cb2b3e4b36ae4) because of network problems, high CPU usage, heavy cluster load, or similar reasons. As a result, the watcher of the standby NameNode does not learn that it has successfully created the ephemeral node, and the standby NameNode still considers itself standby. When the original active NameNode starts up again, it finds that an active node already exists under /hadoop-ha/hacluster, so it cannot become active either. Both NameNodes therefore remain in the standby state. (A small inspection sketch follows this list.)

  • Solution

    You are advised to restart the two ZKFC instances of HDFS on FusionInsight Manager to rectify this problem.
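
To confirm the cause analysis above, you can check on ZooKeeper which session currently owns the election lock. The following is a minimal sketch: it assumes a reachable ZooKeeper quorum (the address 192.168.0.114:2181 from the logs above is used as an example) and a client environment that is already configured for ZooKeeper authentication (for example, the same JAAS/Kerberos settings that ZKFC uses); the class name CheckElectorLock is chosen only for illustration. It reads the ActiveStandbyElectorLock znode and prints the session ID of its owner, which can be compared with the session IDs printed in the ZooKeeper and ZKFC logs.

import java.util.concurrent.CountDownLatch;

import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;
import org.apache.zookeeper.data.Stat;

public class CheckElectorLock {
    public static void main(String[] args) throws Exception {
        CountDownLatch connected = new CountDownLatch(1);
        // Example quorum address taken from the logs above; replace it with the actual ZooKeeper address.
        ZooKeeper zk = new ZooKeeper("192.168.0.114:2181", 45000, event -> {
            if (event.getState() == Watcher.Event.KeeperState.SyncConnected) {
                connected.countDown();
            }
        });
        connected.await();

        // The ephemeral node that the active NameNode's ZKFC creates during election.
        String lockPath = "/hadoop-ha/hacluster/ActiveStandbyElectorLock";
        Stat stat = zk.exists(lockPath, false);
        if (stat == null) {
            System.out.println("Elector lock does not exist: no NameNode currently holds the active role.");
        } else {
            // ephemeralOwner is the session ID of the ZKFC client that created the ephemeral node.
            // Compare it with the session IDs shown in the ZooKeeper and ZKFC logs above.
            System.out.printf("Elector lock is owned by session 0x%x%n", stat.getEphemeralOwner());
        }
        zk.close();
    }
}

If the owner session printed here is not the session that ZKFC reports in its own log, the mismatch described in the cause analysis is present. Restarting the two ZKFC instances re-establishes their ZooKeeper sessions, after which one NameNode can acquire the lock and become active.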
