Member since: 01-19-2017
Posts: 3679
Kudos Received: 632
Solutions: 372
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 867 | 06-04-2025 11:36 PM |
| | 1440 | 03-23-2025 05:23 AM |
| | 720 | 03-17-2025 10:18 AM |
| | 2592 | 03-05-2025 01:34 PM |
| | 1718 | 03-03-2025 01:09 PM |
04-09-2018
02:33 PM
2 Kudos
@Vincent Hu Please do the following if the cluster is managed by Ambari. Add this property under Ambari > HDFS > Configs > Advanced core-site:

hadoop.http.staticuser.user=yarn

Restart any stale services and retry.
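For reference, this is a sketch of how the same property lands in core-site.xml; on an Ambari-managed cluster, set it through the UI as above so Ambari does not overwrite the change:

```xml
<!-- Sketch of the equivalent core-site.xml entry; Ambari manages this
     file, so make the change through the UI rather than by hand. -->
<property>
  <name>hadoop.http.staticuser.user</name>
  <value>yarn</value>
</property>
```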
04-09-2018
02:19 PM
@Liana Napalkova I am running a single test node with the settings below. Try my settings to validate, but read the info below for better sizing:

NameNode Java heap size = 3 GB
DataNode maximum Java heap size = 3 GB

The official documentation explains how to calculate your memory settings. First download yarn-utils.py (see the link above Table 1.5 on that page), unzip it in a temporary directory, then run it, e.g.:

python yarn-utils.py -c 16 -m 64 -d 4 -k True

Hope that helps
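For a rough sense of what yarn-utils.py computes, the sketch below applies the sizing formula from the HDP docs: containers = min(2 × cores, 1.8 × disks, available RAM / minimum container size). The reserved-memory and minimum-container values here are illustrative assumptions for a 16-core, 64 GB, 4-disk node; the script's built-in lookup tables (including the extra HBase reservation when -k True is passed) are authoritative:

```shell
# Sketch of the sizing formula behind yarn-utils.py (HDP docs).
# RESERVED_GB and MIN_CONTAINER_MB are assumed values for this node size.
CORES=16; RAM_GB=64; DISKS=4
RESERVED_GB=8                 # OS/daemon reservation (lookup table in the docs)
MIN_CONTAINER_MB=2048         # minimum container size for this RAM range
AVAIL_MB=$(( (RAM_GB - RESERVED_GB) * 1024 ))
# containers = min(2*cores, 1.8*disks, available/min_container)
C1=$(( 2 * CORES ))
C2=$(( 18 * DISKS / 10 ))     # 1.8 * disks, in integer arithmetic
C3=$(( AVAIL_MB / MIN_CONTAINER_MB ))
CONTAINERS=$C1
if [ "$C2" -lt "$CONTAINERS" ]; then CONTAINERS=$C2; fi
if [ "$C3" -lt "$CONTAINERS" ]; then CONTAINERS=$C3; fi
MEM_PER_CONTAINER=$(( AVAIL_MB / CONTAINERS ))
echo "containers=$CONTAINERS mem_per_container_mb=$MEM_PER_CONTAINER"
# prints: containers=7 mem_per_container_mb=8192
```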
04-09-2018
01:03 PM
@Liana Napalkova Your problem seems to be memory related; the log clearly indicates it:

ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:InitialHeapSize=1073741824 -XX:MaxHeapSize=1073741824 -XX:MaxNewSize=134217728 -XX:MaxTenuringThreshold=6 -XX:NewSize=134217728 -XX:OldPLABSize=16 -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node"

Can you share your memory settings? Ambari UI--->HDFS--->Configs, NameNode/DataNode Java heap sizes.
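The heap flags in that log are in bytes, so converting them makes the sizing obvious at a glance; the arithmetic below shows this NameNode was running with only a 1 GiB heap and a 128 MiB young generation:

```shell
# Convert the JVM flag values from the log into human-readable units.
echo "MaxHeapSize: $(( 1073741824 / 1024 / 1024 / 1024 )) GiB"   # prints: MaxHeapSize: 1 GiB
echo "MaxNewSize:  $(( 134217728 / 1024 / 1024 )) MiB"           # prints: MaxNewSize:  128 MiB
```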
04-09-2018
11:46 AM
@Dinesh Jadhav The command kadmin.local should be run as root on the KDC server:

# kadmin.local
Authenticating as principal root/admin@xxxxx with password.
kadmin.local: listprincs

You should then be able to see the principals.
04-09-2018
08:12 AM
@vivekananda chagam Then you can access the NiFi UI through Ambari--->Nifi--->Nifi UI, and the links I attached earlier (NiFi video, NiFi docs) are enough to get you started. Hope that helps
04-09-2018
07:56 AM
@Dinesh Jadhav Okay, let's first eliminate the Kerberos error. Can you attach your current files:

- krb5.conf
- kdc.conf
- kadm5.acl

Can you also run the following as root on the KDC server and see whether you get any output:

# kadmin.local

Then look at this Oozie config for Kerberos.
04-08-2018
03:59 PM
@Siddharth Mishra Good to know; it's always important to scrutinise the logs.
04-08-2018
09:10 AM
@Anwaar Siddiqui The Kerberos KDC listens on both TCP and UDP on port 88 (default). By default, the NameNode tries to connect to the KDC over UDP. To force the Kerberos library to use TCP:

1. Go to the Ambari UI, then Services > Kerberos > Configs.
2. In the 'Advanced krb5-conf' section, look for the 'krb5-conf Template' field. Under the [libdefaults] stanza, add 'udp_preference_limit = 1'.
3. Save the config and restart the affected components.
4. This will force Kerberos to use TCP.

Can you share the output of:

# iptables -nvL

If you don't see UDP port 88, add the following:

# iptables -I INPUT 5 -p udp --dport 88 -j ACCEPT

Rerun the first command; you should now see a line like this:

0822 2908K ACCEPT udp -- * * 0.0.0.0/0 0.0.0.0/0 udp dpt:88
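For reference, this is a sketch of how the setting ends up in the generated /etc/krb5.conf; since Ambari writes that file from the template, make the change in the 'krb5-conf Template' field, not in the file itself:

```ini
# Sketch of the generated /etc/krb5.conf after the change; edit the
# Ambari krb5-conf template, not this file directly.
[libdefaults]
  udp_preference_limit = 1   # only requests of 1 byte or less use UDP, i.e. always TCP
```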
04-07-2018
07:22 PM
@vivekananda chagam Can you give some information about your installation? Sandbox or an Ambari-managed installation, and which version? Here are links to HDF resources: the Learning the Ropes of HDF guide and the HDF Tutorials. Hope that helps
04-06-2018
11:06 PM
@Siddharth Mishra This typically indicates a missing filesystem:

[Alert][datanode_unmounted_data_dir] Failed with result CRITICAL: ['The following data dir(s) were not found: /hadoop/hdfs/data\n']

Can you check that the above mount point exists? See the attached screenshot FS.jpg. Note the mount points for the DataNode and NameNode; if the above filesystem is NOT mounted, update it, save, and retry. Please revert
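A quick way to check from the shell is sketched below; /hadoop/hdfs/data is the path taken from this alert, so adjust it if your dfs.datanode.data.dir differs:

```shell
# Check that the DataNode data dir flagged by the alert exists, and show
# which filesystem backs it. DATA_DIR is the path from the alert above.
DATA_DIR=/hadoop/hdfs/data
if [ -d "$DATA_DIR" ]; then
  # If this reports the root filesystem, the dedicated data mount is missing.
  df -h "$DATA_DIR"
else
  echo "Missing: $DATA_DIR -- check /etc/fstab, mount the filesystem, then retry"
fi
```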