Explorer
Posts: 13
Registered: ‎01-03-2014

HBase logs not getting updated.


Hi All,

 

I'm currently using Cloudera Standard 4.8.0 (#50 built by jenkins on 20131125-1015 git: 48c25adb872f94de22b61868e82700217853b60e) and CDH 4.5.0-1.cdh4.5.0.p0.30.

I'm facing a weird problem: the HBase RegionServer and Master logs are not getting updated. By default, the HBase logs are configured to go under /var/log/hbase. The current log file size is around 180 MB and the configured maximum is 500 MB; the RegionServer maximum number of log backups is 10.
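For reference, the rolling behaviour described above corresponds to log4j RollingFileAppender settings along these lines (a sketch only; Cloudera Manager generates the actual file, and the appender name `RFA` and the log file name are conventional placeholders, not taken from this cluster):

```properties
# Roll the RegionServer log when it reaches the configured maximum size,
# keeping up to 10 rotated backups.
log4j.appender.RFA=org.apache.log4j.RollingFileAppender
log4j.appender.RFA.File=/var/log/hbase/hbase-regionserver.log
log4j.appender.RFA.MaxFileSize=500MB
log4j.appender.RFA.MaxBackupIndex=10
```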

 

I tried restarting the HBase cluster using Cloudera Manager, but the logs are still not updating.

 

Can anyone suggest what configuration might be missing, or why the logs are not getting updated?

If this is not the right forum, could you kindly point me to the one where I should be asking?

 

 

Please let me know if you need any other information.

 

Thanks in advance.

 

Thanks & Regards,

Sandeep B A

 

Posts: 416
Topics: 51
Kudos: 86
Solutions: 49
Registered: ‎06-26-2013

Re: HBase logs not getting updated.

@sandeep_ba Can you confirm that your RegionServer and Master logs in /var/log/hbase were updating just fine until a certain date (what does "ls -lah /var/log/hbase" return?), and then stopped updating? I think the critical thing to investigate is "what changed on that date?" Do all the log files have the same timestamp (e.g. last-modified date)? Did you by any chance fill up your local filesystem? (What does "df -kh /var" report?)
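The two checks above can be scripted roughly as follows (a sketch: a temp directory with fake logs stands in for /var/log/hbase so the commands are safe to run anywhere; point `logdir` at the real path on an affected node):

```shell
# Stand-in for /var/log/hbase, with two fake logs frozen at the same time.
logdir=$(mktemp -d)
touch -t 202001011200 "$logdir/hbase-master.log" "$logdir/hbase-regionserver.log"

# Count distinct last-modified dates; 1 means all logs froze together.
distinct_dates=$(ls -l --time-style=+%Y-%m-%d "$logdir" | awk 'NR>1 {print $6}' | sort -u | wc -l | tr -d ' ')
echo "distinct modification dates: $distinct_dates"

# Check whether the filesystem holding the logs is full.
df -k "$logdir" | tail -1 | awk '{print "use%:", $5}'
```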


Re: HBase logs not getting updated.

Hi,
I checked all the RegionServer logs and found that all of them stopped updating on the same date and time.
In the existing logs, apart from a few warnings (connection refused exceptions), nothing major was logged.
Also, only 6% of the disk space is used. Any other pointers to check?

Thanks & Regards,
Sandeep B A

Re: HBase logs not getting updated.

Please find this WARN log below:
WARN org.apache.hadoop.hdfs.LeaseRenewer: Failed to renew lease for [DFSClient_NONMAPREDUCE_-372905061_27] for 452 seconds. Will retry shortly ...
java.net.ConnectException: Call From region_server_host_1 to name_node_host:8020 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
at sun.reflect.GeneratedConstructorAccessor22.newInstance(Unknown Source)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:782)
at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:729)
at org.apache.hadoop.ipc.Client.call(Client.java:1242)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202)
at $Proxy16.renewLease(Unknown Source)
at sun.reflect.GeneratedMethodAccessor51.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:164)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:83)
at $Proxy16.renewLease(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.renewLease(ClientNamenodeProtocolTranslatorPB.java:458)
at org.apache.hadoop.hdfs.DFSClient.renewLease(DFSClient.java:649)
at org.apache.hadoop.hdfs.LeaseRenewer.renew(LeaseRenewer.java:417)
at org.apache.hadoop.hdfs.LeaseRenewer.run(LeaseRenewer.java:442)
at org.apache.hadoop.hdfs.LeaseRenewer.access$700(LeaseRenewer.java:71)
at org.apache.hadoop.hdfs.LeaseRenewer$1.run(LeaseRenewer.java:298)
at java.lang.Thread.run(Thread.java:662)
Caused by: java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567)
at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:207)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:528)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:492)
at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:510)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:604)
at org.apache.hadoop.ipc.Client$Connection.access$2100(Client.java:252)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:1291)
at org.apache.hadoop.ipc.Client.call(Client.java:1209)
... 15 more


Also, if I change the log directory to a different location, the log file is not created or written to.

Any suggestions on how to debug this further would be helpful.
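One basic thing worth ruling out for the new log directory is permissions, along these lines (a sketch: `mktemp` stands in for the real directory, and "hbase" as the daemon user is an assumption based on how CM-managed daemons typically run):

```shell
# Stand-in for the alternative log directory.
newlogdir=$(mktemp -d)

# The daemon can only create and write log files here if its user
# has write access to the directory.
writable=$([ -w "$newlogdir" ] && echo yes || echo no)
echo "writable by current user: $writable"

# Ownership and mode of the directory (the hbase user needs w+x here).
stat -c '%U %a' "$newlogdir"
```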

Thanks & Regards,
Sandeep B A

Re: HBase logs not getting updated.

Hi,

I tried pointing the RegionServer logs to a different location and gave full permissions to the directory, but it didn't help. This is a blocker for me :( I'm unable to proceed since I can't see what is happening in the RegionServer logs.

Can anyone help me figure out how to proceed further?

Thanks & Regards,
Sandeep B A

Re: HBase logs not getting updated.

Hi,

While trying out coprocessors, we had added a few JARs; when I removed them, the RegionServer logs started getting updated again. One of them was a logging JAR that had been deployed as part of that work, and it was causing this issue.

Thanks all for your help. The issue is now resolved.
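For anyone hitting the same thing, a quick way to hunt for stray logging JARs among deployed coprocessor JARs is something like this (a sketch: the temp directory with fake JARs is a stand-in for the real lib/coprocessor directory, e.g. /usr/lib/hbase/lib on a CDH node; the path and JAR names are assumptions):

```shell
# Stand-in for the HBase lib/coprocessor directory.
libdir=$(mktemp -d)
touch "$libdir/my-coprocessor.jar" "$libdir/logback-classic-1.0.9.jar"

# A second log4j/logback/slf4j binding on the classpath can silently
# hijack or disable HBase's own log4j output.
suspects=$(ls "$libdir" | grep -icE 'log4j|logback|slf4j')
echo "suspect logging jars: $suspects"
```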

Thanks & Regards,
Sandeep B A

Re: HBase logs not getting updated.

Oh wow! Thanks for following up; I was scratching my head on this one, as the logging usually just works. Yes, you have to be careful when adding custom coprocessors, as they can affect HBase in many ways.
