Member since: 09-16-2015
Posts: 9
Kudos Received: 0
Solutions: 0
10-26-2015
12:57 AM
Also got this error from the log:
WARN [QuorumPeer:/0:0:0:0:0:0:0:0:2181:Follower@8] - Exception when following the leader
java.io.IOException: Error: Epoch of leader is lower
    at org.apache.zookeeper.server.quorum.Follower.followLeader(Follower.java:73)
    at org.apache.zookeeper.server.quorum.QuorumPeer.run(QuorumPeer.java:644)
INFO [QuorumPeer:/0:0:0:0:0:0:0:0:2181:Follower@16] - shutdown called
java.lang.Exception: shutdown Follower
    at org.apache.zookeeper.server.quorum.Follower.shutdown(Follower.java:166)
    at org.apache.zookeeper.server.quorum.QuorumPeer.run(QuorumPeer.java:648)
INFO [QuorumPeer:/0:0:0:0:0:0:0:0:2181:QuorumPeer@62] - LOOKING
INFO [QuorumPeer:/0:0:0:0:0:0:0:0:2181:FileSnap@82] - Reading snapshot
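For what it's worth, a hedged way to narrow down a repeating "Epoch of leader is lower" follower shutdown is to compare the epoch files each server has on disk (ZooKeeper 3.4.x keeps them under dataDir/version-2; the dataDir below is a placeholder, use the one from your zoo.cfg):
=======
# Run on each of the three ZooKeeper servers; the follower that keeps dropping out
# will typically show a currentEpoch/acceptedEpoch ahead of the current leader's.
ZK_DATADIR=/var/lib/zookeeper   # placeholder - use the dataDir from zoo.cfg
echo "myid:          $(cat $ZK_DATADIR/myid)"
echo "currentEpoch:  $(cat $ZK_DATADIR/version-2/currentEpoch)"
echo "acceptedEpoch: $(cat $ZK_DATADIR/version-2/acceptedEpoch)"
# Which role does each peer think it has right now?
echo srvr | nc localhost 2181 | grep Mode
=======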
10-26-2015
12:29 AM
Hello Guys, I am getting the below error in the ZooKeeper log on one of my three ZooKeeper servers. The service status is running, but on this server the telnet connection gets closed, while it works fine on the other servers. ------ Exception causing close of session 0x0 due to java.io.IOException: ZooKeeperServer not running ------ Please help.
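A quick, low-risk way to see what state each server is in over the same client port the telnet test uses is ZooKeeper's four-letter-word commands (just a sketch; the hostnames here are the quorum names from the hbase-site.xml further down and may differ from your zoo.cfg):
=======
# "ruok" answers "imok" if the process is up at all; "srvr" reports the Mode
# (leader / follower) and the last zxid for each peer.
for h in zookeeper1 zookeeper2 zookeeper3; do
  echo "== $h =="
  echo ruok | nc $h 2181; echo
  echo srvr | nc $h 2181 | grep -E 'Mode|Zxid'
done
=======
A peer that is still stuck in leader election typically answers srvr/stat with a note that it is not currently serving requests, which lines up with the "ZooKeeperServer not running" message above.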
Labels:
- Apache Zookeeper
10-01-2015
03:09 AM
Here is the output. It seems like the connection to the port is being refused, and there is no firewall restriction. What could be the possible reason? Thanks
========
root@ip-xxx.xxx.xxx.yyy:~# sudo -E -u hbase hbase hbck
15/10/01 09:42:06 DEBUG lib.MutableMetricsFactory: field org.apache.hadoop.metrics2.lib.MutableRate org.apache.hadoop.security.UserGroupInformation$UgiMetrics.loginSuccess with annotation @org.apache.hadoop.metrics2.annotation.Metric(valueName=Time, about=, value=[Rate of successful kerberos logins and latency (milliseconds)], always=false, type=DEFAULT, sampleName=Ops)
15/10/01 09:42:06 DEBUG lib.MutableMetricsFactory: field org.apache.hadoop.metrics2.lib.MutableRate org.apache.hadoop.security.UserGroupInformation$UgiMetrics.loginFailure with annotation @org.apache.hadoop.metrics2.annotation.Metric(valueName=Time, about=, value=[Rate of failed kerberos logins and latency (milliseconds)], always=false, type=DEFAULT, sampleName=Ops)
15/10/01 09:42:06 DEBUG lib.MutableMetricsFactory: field org.apache.hadoop.metrics2.lib.MutableRate org.apache.hadoop.security.UserGroupInformation$UgiMetrics.getGroups with annotation @org.apache.hadoop.metrics2.annotation.Metric(valueName=Time, about=, value=[GetGroups], always=false, type=DEFAULT, sampleName=Ops)
15/10/01 09:42:06 DEBUG impl.MetricsSystemImpl: UgiMetrics, User and group related metrics
15/10/01 09:42:06 DEBUG util.KerberosName: Kerberos krb5 configuration not found, setting default realm to empty
15/10/01 09:42:06 DEBUG security.Groups: Creating new Groups object
15/10/01 09:42:06 DEBUG util.NativeCodeLoader: Trying to load the custom-built native-hadoop library...
15/10/01 09:42:06 DEBUG util.NativeCodeLoader: Loaded the native-hadoop library
15/10/01 09:42:06 DEBUG security.JniBasedUnixGroupsMapping: Using JniBasedUnixGroupsMapping for Group resolution
15/10/01 09:42:06 DEBUG security.JniBasedUnixGroupsMappingWithFallback: Group mapping impl=org.apache.hadoop.security.JniBasedUnixGroupsMapping
15/10/01 09:42:06 DEBUG security.Groups: Group mapping impl=org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback; cacheTimeout=300000; warningDeltaMs=5000
15/10/01 09:42:06 DEBUG security.UserGroupInformation: hadoop login
15/10/01 09:42:06 DEBUG security.UserGroupInformation: hadoop login commit
15/10/01 09:42:06 DEBUG security.UserGroupInformation: using local user:UnixPrincipal: hbase
15/10/01 09:42:06 DEBUG security.UserGroupInformation: UGI loginUser:hbase (auth:SIMPLE)
15/10/01 09:42:08 DEBUG hdfs.NameNodeProxies: multipleLinearRandomRetry = null
15/10/01 09:42:08 DEBUG ipc.Server: rpcKind=RPC_PROTOCOL_BUFFER, rpcRequestWrapperClass=class org.apache.hadoop.ipc.ProtobufRpcEngine$RpcRequestWritable, rpcInvoker=org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker@43ebf1ca
15/10/01 09:42:10 WARN conf.Configuration: fs.default.name is deprecated. Instead, use fs.defaultFS
15/10/01 09:42:10 DEBUG ipc.Client: The ping interval is 60000 ms.
15/10/01 09:42:10 DEBUG ipc.Client: Use SIMPLE authentication for protocol ClientNamenodeProtocolPB
15/10/01 09:42:10 DEBUG ipc.Client: Connecting to ip-xxx.xxx.xxx.yyy.eu-west-1.compute.internal/xxx.xxx.xxx.yyy:8020
15/10/01 09:42:10 DEBUG ipc.Client: closing ipc connection to ip-xxx.xxx.xxx.yyy.eu-west-1.compute.internal/xxx.xxx.xxx.yyy:8020: Connection refused
java.net.ConnectException: Connection refused
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:739)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:207)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:528)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:492)
    at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:510)
    at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:604)
    at org.apache.hadoop.ipc.Client$Connection.access$2100(Client.java:252)
    at org.apache.hadoop.ipc.Client.getConnection(Client.java:1291)
    at org.apache.hadoop.ipc.Client.call(Client.java:1209)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202)
    at com.sun.proxy.$Proxy10.getListing(Unknown Source)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:164)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:83)
    at com.sun.proxy.$Proxy10.getListing(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getListing(ClientNamenodeProtocolTranslatorPB.java:441)
    at org.apache.hadoop.hdfs.DFSClient.listPaths(DFSClient.java:1526)
    at org.apache.hadoop.hdfs.DFSClient.listPaths(DFSClient.java:1509)
    at org.apache.hadoop.hdfs.DistributedFileSystem.listStatus(DistributedFileSystem.java:406)
    at org.apache.hadoop.hbase.util.HBaseFsck.preCheckPermission(HBaseFsck.java:1430)
    at org.apache.hadoop.hbase.util.HBaseFsck.exec(HBaseFsck.java:3653)
    at org.apache.hadoop.hbase.util.HBaseFsck.run(HBaseFsck.java:3502)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
    at org.apache.hadoop.hbase.util.HBaseFsck.main(HBaseFsck.java:3493)
15/10/01 09:42:10 DEBUG ipc.Client: IPC Client (1163306898) connection to ip-xxx.xxx.xxx.yyy.eu-west-1.compute.internal/xxx.xxx.xxx.yyy:8020 from hbase: closed
15/10/01 09:42:10 DEBUG ipc.Client: Stopping client
========
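Since the refusal is coming from the NameNode RPC endpoint (port 8020) rather than from HBase itself, a hedged first check (namenode-host below is a placeholder for whatever host fs.defaultFS points at) is whether that port is reachable from this node at all, and whether the NameNode is actually listening on it:
========
# On the node where hbck fails: which HDFS URI is the client configured to use?
grep -A1 'fs.default' /etc/hadoop/conf/core-site.xml

# Can this node reach the NameNode RPC port? (namenode-host is a placeholder)
nc -vz namenode-host 8020

# On the NameNode host itself: is anything listening on 8020, and on which interface?
# A NameNode bound only to 127.0.0.1 would explain "Connection refused" from other hosts.
sudo netstat -tlnp | grep ':8020'
========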
09-29-2015
02:16 AM
Here is the slave configuration. How can I add a gateway role? Thanks
======================
root@slave1:/etc/hbase/conf# cat hbase-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://slave1/hbase_cdh471</value>
  </property>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>zookeeper1,zookeeper2,zookeeper3</value>
  </property>
  <property>
    <name>zookeeper.znode.parent</name>
    <value>/hbase_cdh471</value>
  </property>
  <property>
    <name>hbase.regionserver.handler.count</name>
    <value>10</value>
  </property>
  <property>
    <name>hbase.hregion.max.filesize</name>
    <value>4294967296</value>
  </property>
  <property>
    <name>hbase.regionserver.ipc.address</name>
    <value>slave1</value>
  </property>
  <property>
    <name>hbase.regionserver.thread.compaction.small</name>
    <value>1</value>
  </property>
  <property>
    <name>hbase.regionserver.thread.compaction.large</name>
    <value>1</value>
  </property>
</configuration>
================================
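If this cluster is managed through Cloudera Manager, a Gateway role is normally added from the service's Instances page (Add Role Instances, pick Gateway on the host, then Deploy Client Configuration); on a hand-configured node like this one, the equivalent is making sure the client config above actually points at a reachable HDFS. As a hedged sanity check (the URI is copied from the hbase.rootdir above; compare it with the value the HBase master uses):
======================
# Can this slave list the HBase root directory named in its own hbase-site.xml?
# Note that the rootdir above points at slave1 rather than at a NameNode host;
# if slave1 is not running the NameNode, this listing is expected to fail.
sudo -u hbase hadoop fs -ls hdfs://slave1/hbase_cdh471
======================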
09-29-2015
01:27 AM
Unable to perform hbck from datanodes. The hbck works from the namenode. The HBase version is the same on all nodes: HBase 0.94.15-cdh4.7.1. Command used: # sudo -u hbase hbase hbck. Not getting any error messages.
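Since the same command works from the namenode host, one hedged way to narrow it down is to diff the client configurations between a host where hbck works and a datanode where it does not (namenode-host is a placeholder for the working host):
========
# Any difference in fs.defaultFS / fs.default.name, hbase.rootdir, or
# hbase.zookeeper.quorum between the two hosts is a likely culprit.
diff <(ssh namenode-host cat /etc/hbase/conf/hbase-site.xml) /etc/hbase/conf/hbase-site.xml
diff <(ssh namenode-host cat /etc/hadoop/conf/core-site.xml) /etc/hadoop/conf/core-site.xml
========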
Labels:
- Apache HBase
- Apache Zookeeper
- HDFS
09-17-2015
01:15 AM
Hello Harsh, The issue is that this server is under high load all the time. The configuration looks like a DN as you said, but the dfsadmin report is not showing this server. jps shows:
=======
# jps
8014 SecondaryNameNode
22290 Jps
=======
The SNN process is running:
hdfs 8014 7.6 3.8 1427044 149836 ? Sl 2013 90941:12 java -Dproc_secondarynamenode -Xmx1000m -Dhadoop.log.dir=/usr/lib/hadoop-0.20/logs -Dhadoop.log.file=hadoop-hadoop-secondarynamenode
=======
How can I confirm this? Thanks
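One thing worth noting is that jps lists only a SecondaryNameNode (no DataNode process), and dfsadmin -report only lists datanodes, so a host that runs just the SNN would never appear in that report. A hedged way to confirm what this box is actually doing (the checkpoint-dir path is a placeholder; take it from fs.checkpoint.dir in the effective config):
=======
# Which HDFS daemons are actually running on this host?
ps -ef | egrep -i 'secondarynamenode|datanode' | grep -v grep

# Where does the SNN write its checkpoints, and is the newest fsimage recent?
grep -A1 'fs.checkpoint.dir' /etc/hadoop/conf/*-site.xml
CKPT_DIR=/path/from/fs.checkpoint.dir   # placeholder
ls -lt $CKPT_DIR/current | head
=======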
09-16-2015
11:57 PM
Hello Guys, We are planning to reboot our secondary namenode. Below is our hdfs-site.xml file. Please let me know the best step-by-step procedure to reboot the secondary namenode. Do we have to run "hdfs secondarynamenode -checkpoint" after the reboot, or do we need to check for uncheckpointed transactions before the reboot? Thanks in advance for your help.
=========
<configuration>
  <property>
    <name>dfs.datanode.max.xcievers</name>
    <value>4096</value>
  </property>
  <property>
    <name>dfs.data.dir</name>
    <value>/mnt/scecondary/dfs-data</value>
  </property>
  <property>
    <name>dfs.datanode.socket.write.timeout</name>
    <value>0</value>
  </property>
  <property>
    <name>fs.checkpoint.period</name>
    <value>1800</value>
  </property>
</configuration>
=================
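As a hedged sketch of what that procedure might look like on a hadoop-0.20 style install (the init-script name and log path are assumptions based on the paths seen in the other post, not confirmed for this cluster): the SNN is not in the write path, so rebooting it mainly pauses checkpointing, and the main thing to verify afterwards is that a fresh checkpoint lands again.
=========
# Before the reboot: note the timestamp of the newest checkpoint image.
# fs.checkpoint.dir is not set in the hdfs-site.xml above, so the default applies;
# the path below is a placeholder -- check your effective configuration.
ls -lt /var/lib/hadoop-0.20/cache/hadoop/dfs/namesecondary/current | head

# Stop the daemon cleanly, then reboot (service name is a guess for this packaging).
sudo service hadoop-0.20-secondarynamenode stop
sudo reboot

# After the host is back: start the daemon and confirm a new checkpoint appears
# within roughly fs.checkpoint.period (1800 s per the config above).
sudo service hadoop-0.20-secondarynamenode start
tail -f /usr/lib/hadoop-0.20/logs/hadoop-hadoop-secondarynamenode*
=========
Forcing a checkpoint first (the -checkpoint option mentioned above) should not be strictly required, since the NameNode keeps accumulating edits while the SNN is down and the next periodic checkpoint will merge them; it may still be worth doing if the last checkpoint is very old.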
Labels:
- HDFS