Member since
01-19-2017
3676
Posts
632
Kudos Received
372
Solutions
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 608 | 06-04-2025 11:36 PM |
| | 1166 | 03-23-2025 05:23 AM |
| | 575 | 03-17-2025 10:18 AM |
| | 2172 | 03-05-2025 01:34 PM |
| | 1369 | 03-03-2025 01:09 PM |
01-17-2018
10:19 AM
@Geoffrey Shelton Okot Thank you very much for the information.
06-24-2018
07:56 PM
@Dassi Jean Fongang Unfortunately, there is no FORCE command for decommissioning in Hadoop. Once the host is in the excludes file and you run the yarn rmadmin -refreshNodes command, that should trigger the decommissioning. It is neither recommended nor good architecture to have a NameNode and a DataNode on the same host (master and slave/worker respectively). With over 24 nodes you should have planned for 3 to 5 master nodes, and kept strictly the DataNode, NodeManager, and e.g. the ZooKeeper client on the slave (worker) nodes. Moving the NameNode to a new node and then running the decommissioning will make your work easier and isolate your master processes from the slaves; this is the only solution I see left for you. HTH
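A minimal sketch of the non-forced decommissioning path described above. The hostname and exclude-file paths are assumptions; the real paths come from yarn.resourcemanager.nodes.exclude-path and dfs.hosts.exclude in your cluster's configs, and the commands need a live cluster:

```shell
# 1. Add the host to the YARN exclude file (path assumed):
echo "worker05.example.com" >> /etc/hadoop/conf/yarn.exclude

# 2. Ask the ResourceManager to re-read the include/exclude lists,
#    which starts graceful NodeManager decommissioning:
yarn rmadmin -refreshNodes

# 3. For the HDFS DataNode, do the same with the dfs.hosts.exclude file:
echo "worker05.example.com" >> /etc/hadoop/conf/dfs.exclude
hdfs dfsadmin -refreshNodes
```

HDFS will then re-replicate the node's blocks elsewhere before marking it decommissioned, which is why there is no safe "force" shortcut.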
12-20-2017
07:33 PM
1 Kudo
@Michael Bronson, HDFS in this cluster is in safemode. That's why the Timeline Server is failing to start. Kindly check the HDFS NameNode log to see why the NameNode is in safemode. You can explicitly turn off safemode by running "hdfs dfsadmin -safemode leave".
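For reference, a short sketch of the safemode commands involved (these need a live cluster and are typically run as the hdfs superuser):

```shell
# Check the current safemode state; prints "Safe mode is ON" or "Safe mode is OFF":
hdfs dfsadmin -safemode get

# Force the NameNode out of safemode, as suggested above:
hdfs dfsadmin -safemode leave
```

Note that leaving safemode manually only makes sense after you have confirmed from the NameNode log that the cluster is not still waiting on missing blocks.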
04-06-2018
07:30 PM
Resolution found. Please follow the steps below:
1. Install the Ambari Agent.
2. Check the status of the Ambari Agent; start it if it is not up.
3. Try yum install ambari-server.
Thank you all for all the help. Thanks, Abhishek
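The steps above can be sketched as the following commands (CentOS/RHEL yum-based hosts assumed; they need Ambari repos already configured):

```shell
# 1. Install the Ambari Agent:
yum install -y ambari-agent

# 2. Check the agent's status, and start it if it is not running:
ambari-agent status || ambari-agent start

# 3. With the agent up, the server install should now succeed:
yum install -y ambari-server
```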
12-06-2017
10:04 AM
@Michael Bronson A brief on edits_inprogress_<start transaction ID>: this is the current edit log in progress. All transactions starting from that start transaction ID are in this file, and all new incoming transactions will get appended to it. HDFS pre-allocates space in this file in 1 MB chunks for efficiency, and then fills it with incoming transactions. You'll probably see this file's size as a multiple of 1 MB. When HDFS finalizes the log segment, it truncates the unused portion of the space that doesn't contain any transactions, so the finalized file's size will shrink. More details about these files and their functionality can be found at: https://hortonworks.com/blog/hdfs-metadata-directories-explained/
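To illustrate the 1 MB pre-allocation described above, here is a small simulation in a scratch directory (a dummy file only; the real file lives in the NameNode's current/ directory, whose location depends on dfs.namenode.name.dir):

```shell
# Create a dummy edits_inprogress file pre-allocated to exactly 1 MB,
# mimicking what HDFS does for its in-progress edit log segment:
mkdir -p /tmp/nn-current
truncate -s 1M /tmp/nn-current/edits_inprogress_0000000000000000001

# The listing shows a 1048576-byte file, the same multiple-of-1-MB
# size pattern you would see for a real in-progress segment:
ls -l /tmp/nn-current
```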
05-22-2018
01:59 PM
I can understand Lukas' issue with the "*-2"-named .repo files. My install is erroring out and giving me no clues, no breadcrumbs to follow. All my /var/lib/ambari-agent/data/errors* log files are either 0 bytes or 86 bytes long, the latter containing: "Server considered task failed and automatically aborted it." This is on CentOS 7.4 with Ambari 2.6.1.5. When I installed with an ambari-hdp.repo, Ambari complained and duplicated it as ambari-hdp-1.repo. Justin
12-05-2017
12:47 AM
@Michael Bronson Can you instead do a # grep namenodes /etc/hadoop/conf/hdfs-site.xml and then get the value of the parameter dfs.ha.namenodes.xxxx? Please let me know.
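A runnable sketch of that grep, using a hypothetical hdfs-site.xml snippet (the nameservice name "mycluster" and NameNode IDs are assumptions; yours will differ):

```shell
# Hypothetical fragment of hdfs-site.xml for an HA-enabled cluster:
cat > /tmp/hdfs-site-sample.xml <<'EOF'
<property>
  <name>dfs.ha.namenodes.mycluster</name>
  <value>nn1,nn2</value>
</property>
EOF

# The grep from the post, with -A1 to also show the value line:
grep -A1 'dfs.ha.namenodes' /tmp/hdfs-site-sample.xml
```

On a live cluster you can also read the value directly with hdfs getconf -confKey dfs.ha.namenodes.mycluster (substituting your own nameservice ID).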
08-27-2018
06:14 PM
@Daniel Muller, can you grep "Safe mode is" from the HDFS NameNode log? That will tell you directly why the NameNode does not exit safemode.
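A runnable sketch of that grep against a hypothetical log excerpt (the message text and the real log location, typically under /var/log/hadoop/, are assumptions that vary by install):

```shell
# Hypothetical NameNode log excerpt showing a safemode message:
cat > /tmp/nn-sample.log <<'EOF'
INFO hdfs.StateChange: STATE* Safe mode is ON.
The reported blocks 120 needs additional 5 blocks to reach the threshold.
EOF

# Pull the safemode lines; on a real host, point this at the NameNode log file:
grep "Safe mode is" /tmp/nn-sample.log
```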
11-15-2017
08:28 AM
@Geoffrey Shelton Okot Sorry for the delay. The HDP cluster is 2.6.1 and is kerberized. I've also installed Ranger and Ranger KMS on it. My OS is Ubuntu 14.04. I've explained my question in detail here: https://community.hortonworks.com/questions/147826/failed-to-access-filesystem-root-through-hue-ui.html
11-10-2017
04:53 PM
Yeah, I have tried that approach as well. The ODI doc mentions using its WebLogic Hive JDBC driver, but one can use other drivers as well. The question I have raised here is about the standard (Apache) JDBC driver.