Member since: 01-19-2017
Posts: 3679
Kudos Received: 632
Solutions: 372
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
|  | 710 | 06-04-2025 11:36 PM |
|  | 1281 | 03-23-2025 05:23 AM |
|  | 635 | 03-17-2025 10:18 AM |
|  | 2327 | 03-05-2025 01:34 PM |
|  | 1515 | 03-03-2025 01:09 PM |
03-30-2022
11:38 AM
@iamfromsky The path you are mentioning has a permissions issue. As the root user, can you run
# chmod 777 /run/cloudera-scm-agent/process/9392-yarn-NODEMANAGER/creds.localjceks
then retry? If that is successful, fine-tune the permissions afterwards. Hope that helps.
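A minimal sketch of the loosen-then-tighten sequence, assuming the process directory 9392-yarn-NODEMANAGER from the post and that the yarn user should own the credentials file (verify the actual owner on your node):

```bash
# Inspect current ownership and permissions on the credentials file
ls -l /run/cloudera-scm-agent/process/9392-yarn-NODEMANAGER/creds.localjceks

# Temporarily open the file up to confirm permissions are the root cause
chmod 777 /run/cloudera-scm-agent/process/9392-yarn-NODEMANAGER/creds.localjceks

# Once the NodeManager starts cleanly, tighten back to owner-only access
# (yarn:hadoop is an assumption; match whatever owner the agent normally sets)
chown yarn:hadoop /run/cloudera-scm-agent/process/9392-yarn-NODEMANAGER/creds.localjceks
chmod 600 /run/cloudera-scm-agent/process/9392-yarn-NODEMANAGER/creds.localjceks
```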
01-16-2022
09:53 AM
@Koffi This is typical of a rogue process that hasn't released the port: Caused by: java.net.BindException: Address already in use. You will need to run
# kill -9 5356
then restart the NN; that should resolve the issue.
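A quick sketch for confirming which process is actually holding the port before killing it; port 8020 (the default NameNode RPC port) and PID 5356 are assumptions taken from the context above:

```bash
# Show the process bound to the NameNode RPC port (default 8020)
netstat -tulpn | grep :8020    # alternatively: lsof -i :8020

# Kill the rogue process identified above, then restart the NameNode
kill -9 5356
```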
12-19-2021
08:51 AM
@Koffi Any updates on the commands?
12-19-2021
04:24 AM
@Koffi Yes, you obviously cannot enter safe mode when the NameNodes are down. I can see the JNs and ZKFCs are all up. Can you run the command below on the last known good NameNode, nn01, assuming you are running it as root:
su -l hdfs -c "/usr/hdp/current/hadoop-hdfs-namenode/../hadoop/sbin/hadoop-daemon.sh start namenode"
If nn01 starts without any issue, then run the same command on nn02; otherwise share the logs from nn01.
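A short sketch of the manual start followed by a health check; the HA service IDs nn1 and nn2 are assumptions, so use whatever dfs.ha.namenodes.&lt;nameservice&gt; lists in your hdfs-site.xml:

```bash
# Start the NameNode daemon manually as the hdfs user
su -l hdfs -c "/usr/hdp/current/hadoop-hdfs-namenode/../hadoop/sbin/hadoop-daemon.sh start namenode"

# Confirm the NameNode came up and check the active/standby state
su -l hdfs -c "hdfs haadmin -getServiceState nn1"
su -l hdfs -c "hdfs haadmin -getServiceState nn2"
```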
12-17-2021
08:56 PM
@Koffi This issue seems linked to your previous posting. Your last healthy NameNode was nn01, right? The assumption here is that you are logged in as root. Instructions to fix that one JournalNode:

1) Put both nn01 and nn02 in safe mode (NN HA):
$ sudo su - hdfs
[hdfs@host ~]$ hdfs dfsadmin -safemode enter
Safe mode is ON in nn01/<nn01_IP>:8020
Safe mode is ON in nn02/<nn02_IP>:8020

2) Save the namespace:
[hdfs@host ~]$ hdfs dfsadmin -saveNamespace
Save namespace successful for nn01/<nn01_IP>:8020
Save namespace successful for nn02/<nn02_IP>:8020

3) Back up (zip/tar) the journal dir from the working JN node (nn01) and copy it to the non-working JN node (nn02), to something like /hadoop/hdfs/journal/<Cluster_name>/current (see the sketch after these steps).

4) Leave safe mode:
[hdfs@host ~]$ hdfs dfsadmin -safemode leave
Safe mode is OFF in nn01/<nn01_IP>:8020
Safe mode is OFF in nn02/<nn02_IP>:8020

5) Restart HDFS: from Ambari you can now start nn01 first; when it comes up, start nn02.

Please let me know.
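A minimal sketch of step 3, assuming the journal directory is /hadoop/hdfs/journal/<Cluster_name>/current, that scp between the two JournalNode hosts works, and that the placeholder hostname jn-bad stands for the broken JournalNode:

```bash
# On the healthy JournalNode: archive the current edits directory
tar -czf /tmp/journal_backup.tar.gz -C "/hadoop/hdfs/journal/<Cluster_name>" current

# Copy the archive to the broken JournalNode host
scp /tmp/journal_backup.tar.gz jn-bad:/tmp/

# On the broken JournalNode: move the stale directory aside, restore, fix ownership
mv "/hadoop/hdfs/journal/<Cluster_name>/current" "/hadoop/hdfs/journal/<Cluster_name>/current.bad"
tar -xzf /tmp/journal_backup.tar.gz -C "/hadoop/hdfs/journal/<Cluster_name>"
chown -R hdfs:hadoop "/hadoop/hdfs/journal/<Cluster_name>/current"
```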
12-17-2021
08:45 PM
@Koffi From the Ambari UI, are you seeing any HDFS alerts? ZKFailoverController or JournalNodes? If so, please share the logs.
10-27-2021
12:08 PM
@Rish How much memory does your QuickStart VM have? Can you open the RM UI and check the logs using the application_id? They should give you an idea of what's happening. Geoffrey
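If the RM UI is awkward to reach from the QuickStart VM, a sketch of pulling the same information from the command line; application_1234567890_0001 is a placeholder id:

```bash
# Check total and free memory on the VM
free -m

# List recent applications to find the failing application id
yarn application -list -appStates ALL | tail -n 20

# Dump the aggregated container logs for that application
yarn logs -applicationId application_1234567890_0001 | less
```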
10-27-2021
11:16 AM
@Koffi There are a couple of things here. You first need to resolve the "too many open files" issue by checking the current limit:
$ ulimit -n
To increase it for the current session (depending on the above output):
ulimit -n 102400
Edit /etc/security/limits.conf to make the change permanent. Then restart the KDC and kadmin, either via systemctl or, depending on your Linux version:
# /etc/rc.d/init.d/krb5kdc start
# /etc/rc.d/init.d/kadmin start
Then restart Atlas from the Ambari UI. Please revert after these actions. Geoffrey
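A sketch of making the limit permanent and restarting the Kerberos services, assuming a systemd-based distro where the services are named krb5kdc and kadmin (as on RHEL/CentOS; adjust names for your distribution):

```bash
# Raise the open-files limit for all users in /etc/security/limits.conf
cat >> /etc/security/limits.conf <<'EOF'
*    soft    nofile    102400
*    hard    nofile    102400
EOF

# Restart the Kerberos KDC and admin services
systemctl restart krb5kdc kadmin

# Verify the new limit in a fresh login session
ulimit -n
```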
10-02-2021
11:11 AM
@Phanikondeti Please can you share how you installed your NiFi, the version, and the install documents you followed? The error logs would be good to share too.
09-21-2021
06:28 AM
@vciampa The solution is Replication Manager, which enables you to replicate data across data centers for disaster recovery scenarios. Replication Manager replicates HDFS, Hive, and Impala data, and supports Sentry to Ranger replication from CDH (version 5.10 and higher) clusters to CDP Private Cloud Base (version 7.0.3 and higher) clusters. https://docs.cloudera.com/cdp/latest/data-migration/topics/cdp-data-migration-replication-manager-to-cdp-data-center.html It's easy to use 🙂 Happy Hadooping