Member since: 11-08-2018
Posts: 96
Kudos Received: 3
Solutions: 2
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 6050 | 09-12-2019 10:04 AM |
| | 5670 | 02-12-2019 06:56 AM |
02-12-2021 06:41 AM
Hi Team, I am facing an authentication error while downloading parcels from the archives.cloudera.com site. I tried to generate credentials but ran into issues there as well. Can someone please suggest how I can solve this? Best Regards, Vinod
12-08-2020 06:36 AM
Thank you so much, it is resolved. But why was it not replicating? When files are kept under the /oldWALs directory they should replicate, right? Can you please clarify? Best Regards, Vinod
12-08-2020 05:29 AM
Hello @smdas, thank you so much for your response. I followed the above steps and found the following:

ls /hbase/replication
[peers, rs]
ls /hbase/replication/peers
[]

I then deleted the replication znode, set hbase.replication to false in hbase-site.xml, and restarted HBase. After the restart I see:

ls /hbase/replication
[peers, rs]
ls /hbase/replication/peers
[]

And now the /hbase/oldWALs directory in HDFS is being cleared and is empty. But if you look at the attached screenshot, replication is not enabled there, right? Any difference? Regards, Vinod
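For reference, disabling replication in hbase-site.xml as described above would look roughly like the fragment below. This is a sketch: the `hbase.replication` switch is an older-release property, so verify the exact name against your HBase version before applying it.

```xml
<!-- Sketch: turn off cluster-wide replication (older HBase releases; check your version's docs) -->
<property>
  <name>hbase.replication</name>
  <value>false</value>
</property>
```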
12-07-2020 10:59 PM
Can someone please help me? That would be great. Thanks in advance!
12-07-2020 03:34 AM
Hello Team, I am facing an issue in my Cloudera cluster: HDFS used space keeps growing, and "/hbase/oldWALs" occupies more than 50% of it. I can confirm that HBase replication is disabled and the TTL is set to 1 minute:

hbase master logcleaner ttl = 1 min
hbase replication = false

In the HBase logs I can see the following warning:

WARN org.apache.hadoop.hbase.master.cleaner.CleanerChore: A file cleaner hostname 60000.oldLogCleaner is stopped, won't delete any more files in /nameservice1/hbase/oldWALs

I also checked the list of peers in HBase:

hbase(main):001:0> list_peers
PEER_ID CLUSTER_KEY STATE TABLE_CFS
0 row(s) in 0.2360 seconds

I don't see anything there. Please help me with your comments. Thanks & Regards, Vinod
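For context, the log-cleaner TTL mentioned above is normally configured in hbase-site.xml via `hbase.master.logcleaner.ttl`, whose value is in milliseconds. A sketch matching the 1-minute setting described in the post (verify the property against your HBase version):

```xml
<!-- Sketch: how long old WALs stay in /hbase/oldWALs before the cleaner may delete them -->
<property>
  <name>hbase.master.logcleaner.ttl</name>
  <value>60000</value> <!-- 60000 ms = 1 minute -->
</property>
```

Note that the TTL only matters once the cleaner chore is actually running; the WARN above says the oldLogCleaner is stopped, in which case no files are deleted regardless of the TTL.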
Labels: Apache HBase
11-20-2020 04:33 AM
Hello Team, can anyone please help me with your comments? Thanks, Vinod
10-29-2020 10:09 PM
Hello @Shelton, I added the above property in yarn-site.xml for the NodeManager and restarted it, but I still see the same issue and the same logs, and the NodeManager is still in an unknown state:

WARN org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: The Auxilurary Service named 'mapreduce_shuffle' in the configuration is for class class org.apache.hadoop.mapred.ShuffleHandler which has a name of 'httpshuffle'. Because these are not the same tools trying to send ServiceData and read Service Meta Data may have issues unless the refer to the name in the config

Please give me your valuable response. Thanks, Vinod
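For reference, the usual yarn-site.xml pairing for the shuffle aux-service registers it under the name `mapreduce_shuffle` and points the matching class property at the ShuffleHandler. A sketch of that configuration (verify the property names against your Hadoop version; on a Cloudera-managed cluster these are normally set through Cloudera Manager rather than edited by hand):

```xml
<!-- Sketch: register the MapReduce shuffle handler as a NodeManager aux-service -->
<property>
  <name>yarn.nodemanager.aux-services</name>
  <value>mapreduce_shuffle</value>
</property>
<property>
  <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
  <value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
```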
10-07-2020 09:50 PM
Hello @Shelton, any suggestions? My only doubt is: since the other NodeManagers in the cluster are running fine, why are these newly added NodeManagers going into an unknown state? Regards, Vinod
10-05-2020 07:41 AM
Hello @Shelton, yes, we are using YARN, and the other NodeManagers are up and running with the same configuration, but the newly added nodes show the above logs and go into an unknown state. I checked for the above parameter but could not find it; however, on the YARN configuration page I can see that "Enable Shuffle Auxiliary Service" is enabled. Please give me your suggestions. Thanks, Vinod