Member since: 01-19-2017
Posts: 3679
Kudos Received: 632
Solutions: 372
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 922 | 06-04-2025 11:36 PM |
| | 1522 | 03-23-2025 05:23 AM |
| | 751 | 03-17-2025 10:18 AM |
| | 2698 | 03-05-2025 01:34 PM |
| | 1800 | 03-03-2025 01:09 PM |
06-27-2018
10:58 AM
@Saravana V Can you check that the ports in the Java code and the Akka configuration match?
06-26-2018
08:19 PM
@Samant Thakur Did you remove the in_use.lock and restart the NameNodes? How many JournalNodes and ZooKeepers do you have in your cluster?
06-26-2018
04:20 PM
@Samant Thakur Please shut down the JournalNodes; there seems to be an in_use.lock file left behind. Remove it, restart, and retry the format.
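A minimal sketch of clearing the stale lock. The scratch directory below is a stand-in for the real JournalNode edits directory, which you should take from `dfs.journalnode.edits.dir` in hdfs-site.xml:

```shell
# Demo on a scratch directory; on a real cluster use the path from
# dfs.journalnode.edits.dir in hdfs-site.xml instead.
JN_DIR=$(mktemp -d)                              # stand-in for e.g. /hadoop/hdfs/journal
mkdir -p "$JN_DIR/mycluster/current"
touch "$JN_DIR/mycluster/current/in_use.lock"    # simulate the stale lock

# Locate stale lock files left behind by an unclean shutdown.
find "$JN_DIR" -name in_use.lock

# Only after stopping the JournalNode process, remove them:
find "$JN_DIR" -name in_use.lock -delete
```

Run this on each JournalNode host, then restart the JournalNodes before retrying the format.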
06-24-2018
07:56 PM
@Dassi Jean Fongang Unfortunately there is no FORCE command for decommissioning in Hadoop. Once you have the host in the excludes file and run the yarn rmadmin -refreshNodes command, that should trigger the decommissioning. It isn't recommended, and not good architecture, to have a NameNode and a DataNode on the same host (master and slave/worker respectively). With over 24 nodes you should have planned 3 to 5 master nodes and kept strictly the DataNode, NodeManager, and e.g. the ZK client on the slave (worker) nodes. Moving the NameNode to a new node and then running the decommissioning will make your work easier and isolate your master processes from the slaves; this is the only solution I see left for you. HTH
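The steps above can be sketched as follows. The exclude-file paths and hostname are examples only; the real paths come from `yarn.resourcemanager.nodes.exclude-path` (yarn-site.xml) and `dfs.hosts.exclude` (hdfs-site.xml) on your cluster:

```shell
# Stand-ins for the real exclude files, e.g. /etc/hadoop/conf/yarn.exclude
# and /etc/hadoop/conf/dfs.exclude (check your configs for the real paths).
YARN_EXCLUDE=$(mktemp)
HDFS_EXCLUDE=$(mktemp)

# 1. List the host to decommission in both exclude files
#    (worker05.example.com is a hypothetical hostname).
echo "worker05.example.com" >> "$YARN_EXCLUDE"
echo "worker05.example.com" >> "$HDFS_EXCLUDE"
cat "$YARN_EXCLUDE"

# 2. Tell the masters to re-read the include/exclude lists
#    (these need a running cluster, so they are shown commented out):
# yarn rmadmin -refreshNodes      # NodeManager side
# hdfs dfsadmin -refreshNodes     # DataNode side
```

The node then moves to DECOMMISSIONING and finally DECOMMISSIONED once its containers and block replicas have drained; there is no way to force-skip that drain.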
06-23-2018
05:00 PM
@naveen sangam You got a couple of responses to the issue you raised but never gave any feedback. You should realize HCC members go a long way to help, and it would not be fair to just keep quiet; that's not the open-source spirit. Answers members strive to find will also help others who encounter the same issues, so in that spirit your feedback is very important. Please don't forget to vote for helpful answers and accept the best answer.
06-23-2018
04:53 PM
@Mudit Kumar Can you share the error you are encountering? You could be hitting something different. Could you open a new thread? It will get more attention.
06-23-2018
10:55 AM
1 Kudo
@Victor Sarkar This is the culprit: "Could not obtain block: BP-1307428289-10.0.0.4-1528240888625:blk_1073741830_1006 file=/apps/hbase/data/data/hbase/meta/.tabledesc/.tableinfo.0000000001" How did you shut down the cluster?

List the corrupt HDFS blocks:
$ hdfs fsck -list-corruptfileblocks

Determine which files have problems (the lines of dots are healthy files):
$ hdfs fsck / | egrep -v '^\.+$'

Once you have found the corrupt files, inspect their block locations:
$ hdfs fsck /path/to/corrupt/file -locations -blocks -files

Repeat until all files are healthy or you have exhausted all alternatives looking for the blocks. Once you determine what happened and cannot recover any more blocks, delete the damaged files, either in bulk:
$ hdfs fsck / -delete

or individually:
$ hdfs dfs -rm /path/to/file/with/permanently/missing/blocks

HTH
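To make the egrep filter concrete, here is a sketch run against a hypothetical fragment of fsck output (on a real cluster the text comes from `hdfs fsck /` itself; the file and block names below just echo the ones from the error):

```shell
# Hypothetical fragment of 'hdfs fsck /' output, for illustration only.
fsck_output='............
/apps/hbase/data/data/hbase/meta/.tabledesc/.tableinfo.0000000001: CORRUPT blockpool BP-1307428289-10.0.0.4-1528240888625 block blk_1073741830
............
Status: CORRUPT'

# The runs of dots are progress markers for healthy files; dropping them
# leaves only the problematic entries, mirroring:
#   hdfs fsck / | egrep -v '^\.+$'
printf '%s\n' "$fsck_output" | egrep -v '^\.+$'
```

The surviving lines name the files whose blocks you then chase with `-locations -blocks -files`.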
06-23-2018
02:28 AM
@Abhinav Joshi Because MiNiFi isn't deployed using Ambari (to keep the footprint small), it's as simple as downloading, un-taring, and updating a config file. These are the only parameters you need to change: nifi.remote.input.socket.host=
nifi.remote.input.socket.port=
nifi.remote.input.secure=true Please have a look at this HCC support document; it will walk you through the setup. HTH
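A sketch of editing those three properties with sed, using a scratch copy of a properties-style file. The file location, hostname, and port are stand-ins; use the actual config file under your MiNiFi conf directory and the host/port of the receiving NiFi instance:

```shell
# Scratch stand-in for the real properties file under the MiNiFi conf dir.
PROPS=$(mktemp)
cat > "$PROPS" <<'EOF'
nifi.remote.input.socket.host=
nifi.remote.input.socket.port=
nifi.remote.input.secure=true
EOF

# Point the agent at the receiving NiFi instance
# (nifi01.example.com and 10443 are hypothetical values).
sed -i 's|^nifi.remote.input.socket.host=.*|nifi.remote.input.socket.host=nifi01.example.com|' "$PROPS"
sed -i 's|^nifi.remote.input.socket.port=.*|nifi.remote.input.socket.port=10443|' "$PROPS"

cat "$PROPS"
```

The host/port must match the site-to-site input socket that the NiFi side exposes, and `secure=true` requires the corresponding keystore/truststore setup on both ends.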