Member since: 01-19-2017
Posts: 3652
Kudos Received: 623
Solutions: 364
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 176 | 12-22-2024 07:33 AM
 | 113 | 12-18-2024 12:21 PM
 | 442 | 12-17-2024 07:48 AM
 | 298 | 08-02-2024 08:15 AM
 | 3584 | 04-06-2023 12:49 PM
06-06-2024
05:41 AM
@rizalt Make a backup of your krb5.conf and modify it like below:

```
# Configuration snippets may be placed in this directory as well
includedir /etc/krb5.conf.d/

[logging]
 default = FILE:/var/log/krb5libs.log
 kdc = FILE:/var/log/krb5kdc.log
 admin_server = FILE:/var/log/kadmind.log

[libdefaults]
 dns_lookup_realm = false
 dns_lookup_kdc = false
 ticket_lifetime = 24h
 renew_lifetime = 7d
 forwardable = true
 default_ccache_name = /tmp/krb5cc_%{uid}
 #default_tgs_enctypes = aes des3-cbc-sha1 rc4 des-cbc-md5
 #default_tkt_enctypes = aes des3-cbc-sha1 rc4 des-cbc-md5

[realms]
 HADOOP.COM = {
  admin_server = master1.hadoop.com
  kdc = master1.hadoop.com
 }

[domain_realm]
 .master1.hadoop.com = HADOOP.COM
 master1.hadoop.com = HADOOP.COM
```

Then restart the KDC and retry.
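If it still fails after the restart, a quick sanity check from the KDC host can confirm the realm answers. A minimal sketch, assuming RHEL-style service names; the admin principal below is hypothetical, so substitute one that exists in your KDC:

```bash
# Restart the KDC and admin daemons (RHEL/CentOS service names assumed)
systemctl restart krb5kdc kadmin

# Request a ticket to confirm the KDC answers for the realm
# (admin/admin is a hypothetical principal; use your own)
kinit admin/admin@HADOOP.COM

# List the ticket cache to verify the TGT was issued
klist
```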
06-05-2024
12:51 AM
1 Kudo
@rizalt There are a couple of things to validate.

Step 1: Pre-requisites
- Kerberos Server: Ensure you have a Kerberos Key Distribution Center (KDC) and an administrative server set up.
- DNS: Proper DNS setup is required for both forward and reverse lookups.
- NTP: Time synchronization across all nodes using Network Time Protocol (NTP).
- HDP Cluster: A running Hortonworks Data Platform (HDP) cluster.

Step 2: Check your /etc/hosts file and ensure your KDC host is assigned a domain that matches your KDC credentials (HADOOP.COM):

```
# hostname -f
```

Step 3: Once that matches, edit the Kerberos configuration file (/etc/krb5.conf) on all nodes to point to your KDC. You can scramble the sensitive info and share it:

```
[libdefaults]
 default_realm = HADOOP.COM
 dns_lookup_realm = false
 dns_lookup_kdc = false

[realms]
 HADOOP.COM = {
  kdc = kdc.hadoop.com
  admin_server = admin.hadoop.com
 }

[domain_realm]
 .hadoop.com = HADOOP.COM
 hadoop.com = HADOOP.COM
```

Step 4: Locate your kadm5.acl file and ensure it looks like this:

```
*/admin@HADOOP.COM *
```

Step 5: Restart the KDC and admin servers as root or with sudo:

```
# systemctl restart krb5kdc
# systemctl restart kadmin
```

Step 6: Check the Kerberos ticket and ensure it is obtained correctly:

```
kinit -kt /etc/security/keytabs/hdfs.keytab hdfs/hostname@HADOOP.COM
klist
```

If your setup is correct you will see output like below:

```
Ticket cache: FILE:/tmp/krb5cc_1000
Default principal: hdfs/hostname@HADOOP.COM

Valid starting       Expires              Service principal
06/05/2024 09:50:21  06/06/2024 09:50:21  krbtgt/HADOOP.COM@HADOOP.COM
        renew until 06/05/2024 09:50:21
06/05/2024 09:50:22  06/06/2024 09:50:21  HTTP/hostname@HADOOP.COM
        renew until 06/05/2024 09:50:21
```

Hope that helps.
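If the kinit in Step 6 fails with a missing-principal or missing-keytab error, the principal and keytab may not exist yet. A sketch of creating them, assuming kadmin.local is run on the KDC host itself and substituting your real FQDN for hostname:

```bash
# Create the service principal with a random key (run on the KDC host)
kadmin.local -q "addprinc -randkey hdfs/hostname@HADOOP.COM"

# Export its key to the keytab path used in Step 6
kadmin.local -q "ktadd -k /etc/security/keytabs/hdfs.keytab hdfs/hostname@HADOOP.COM"

# Verify the keytab contents before retrying the kinit
klist -kt /etc/security/keytabs/hdfs.keytab
```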
01-09-2024
10:05 AM
@achemeleu Welcome! Thanks @DianaTorres for pinging me on this one. I can offer 2 solutions; see these threads about a similar case: Ambari stuck1 Ambari.stuck2 Can you check the above solutions and see if one works out for you too? In case it doesn't, please share your HDP version, database type/version, ambari-server logs, OS type/version, and a brief background of the steps you executed before getting stuck. Please let us know whether that resolved your issue. Geoffrey
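For gathering the requested details in one pass, a short sketch assuming default Ambari install and log locations:

```bash
# Ambari server state
ambari-server status

# OS type/version
cat /etc/os-release

# Most recent ambari-server log entries (default log path assumed)
tail -n 200 /var/log/ambari-server/ambari-server.log
```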
04-15-2023
02:13 PM
@harry_12 Assumption: non-kerberized sandbox. User creation in the Ambari UI should auto-create the user's home directory. Let's try out this recommended approach.

On your Ambari Server host, back up and edit the ambari.properties file:

```
# cp /etc/ambari-server/conf/ambari.properties /etc/ambari-server/conf/ambari.properties_<$date>
```

Edit using vi in this example:

```
# vi /etc/ambari-server/conf/ambari.properties
```

For consistency, group it alphabetically; add the line below as shown (see last line):

```
#Sat Apr 15 21:49:53 CEST 2023
agent.package.install.task.timeout=1800
agent.stack.retry.on_repo_unavailability=false
agent.stack.retry.tries=5
agent.task.timeout=900
agent.threadpool.size.max=25
ambari-server.user=root
ambari.python.wrap=ambari-python-wrap
ambari.post.user.creation.hook=/var/lib/ambari-server/resources/scripts/post-user-creation-hook.sh
```

Save the new ambari.properties and restart the Ambari server:

```
# ambari-server restart
```

Recreate a new user and see if the home dir is auto-created in /user/<New_user>. Please let me know if that helped.
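To confirm the hook fired, a quick check from any cluster node; testuser below is a hypothetical account created via the Ambari UI after the restart:

```bash
# The hook should have created the home directory for the new user
hdfs dfs -ls /user/testuser

# Print the directory owner explicitly; it should be the new user
hdfs dfs -stat "%u" /user/testuser
```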
04-15-2023
12:37 PM
@harry_12 Can you share the download link for the sandbox? I want to try it.
04-14-2023
10:39 AM
@harry_12 Can you share the configs, i.e. the memory/cores allocated to your Sandbox, and the download link? I will test that and document my process.
04-14-2023
03:34 AM
@harry_12 Sounds familiar. If this is the first time running VirtualBox, is virtualization enabled on the host? Or have you simply tried re-installing it? If you are the type who loves to deep dive, here is good documentation on Result Code E_FAIL 0x80004005. I am sure that should help out.
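To check whether hardware virtualization is actually exposed to the OS before digging into VirtualBox itself, a quick sketch for a Linux host (on Windows, the equivalent check is in Task Manager's CPU tab):

```bash
# A non-zero count means the CPU advertises VT-x (vmx) or AMD-V (svm)
egrep -c '(vmx|svm)' /proc/cpuinfo

# Check whether the kvm modules loaded; their absence often points to
# virtualization being disabled in the BIOS/UEFI firmware
lsmod | grep kvm
```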
04-09-2023
09:06 AM
@YasBHK "File /user/hdfs/data/file.xlsx could only be written to 0 of the 1 minReplication nodes. There are 1 datanode(s) running and 1 node(s) are excluded in this operation." That means your datanode is down. Can you restart the HDFS service and retry?
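A quick way to confirm the datanode state before and after the restart; hdfs dfsadmin -report is standard, and the file path below just mirrors the one in the error:

```bash
# Shows live/dead datanodes and any excluded hosts
hdfs dfsadmin -report

# After restarting HDFS (e.g., from the Ambari UI), retry the upload
hdfs dfs -put file.xlsx /user/hdfs/data/
```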
04-09-2023
03:26 AM
@YasBHK Please ensure both data nodes (2) are running. You definitely have an issue with one of the data nodes, and because of your replication factor, which I guess is 2 from the output, the file /user/hdfs/data/file.xlsx can't be persisted if it can't meet the min replication of 2. First, understand why the second data node has been excluded: either it's a space-related issue or it just isn't started. Please check the dfs.hosts.exclude location, usually /etc/hadoop/conf/dfs.exclude in HDP; remove the host from that file and run the below:

```
hdfs dfsadmin -refreshNodes
```

Or from the Ambari UI just run the refresh nodes. That should resolve the issue. Restart the faulty datanode and your HDFS put command will succeed.
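Putting the steps together, a minimal sketch assuming the HDP default exclude-file path mentioned above:

```bash
# Inspect which hosts are currently excluded
cat /etc/hadoop/conf/dfs.exclude

# Remove the datanode's hostname from that file (edit by hand),
# then tell the namenode to re-read its include/exclude lists
hdfs dfsadmin -refreshNodes

# Verify the node has rejoined before retrying the put
hdfs dfsadmin -report
```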
04-08-2023
03:21 PM
1 Kudo
@AbuSaiyeda Can you do the following and revert if you still get issues.

Backup the ambari server properties file:

```
cp /etc/ambari-server/conf/ambari.properties /etc/ambari-server/conf/ambari.properties.ORIG
```

Change the timeout of the ambari server:

```
echo 'server.startup.web.timeout=120' >> /etc/ambari-server/conf/ambari.properties
echo 'server.jdbc.connection-pool.acquisition-size=5' >> /etc/ambari-server/conf/ambari.properties
echo 'server.jdbc.connection-pool.max-age=0' >> /etc/ambari-server/conf/ambari.properties
echo 'server.jdbc.connection-pool.max-idle-time=14400' >> /etc/ambari-server/conf/ambari.properties
echo 'server.jdbc.connection-pool.max-idle-time-excess=0' >> /etc/ambari-server/conf/ambari.properties
echo 'server.jdbc.connection-pool.idle-test-interval=7200' >> /etc/ambari-server/conf/ambari.properties
```

Restart Ambari and monitor. Please let me know if you need further help.
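Before restarting, it is worth confirming each appended key landed exactly once (re-running the echos would duplicate them). A quick check, then the restart:

```bash
# Each of the six properties should appear exactly once
grep -E 'server\.(startup\.web\.timeout|jdbc\.connection-pool)' \
    /etc/ambari-server/conf/ambari.properties

# Apply the new settings
ambari-server restart
```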