Member since: 07-30-2019
Posts: 453
Kudos Received: 112
Solutions: 80
My Accepted Solutions
| Title | Views | Posted |
| --- | --- | --- |
|  | 1444 | 04-12-2023 08:58 PM |
|  | 3423 | 04-04-2023 11:48 PM |
|  | 1008 | 04-02-2023 10:24 PM |
|  | 2638 | 07-05-2019 08:38 AM |
|  | 2713 | 05-13-2019 06:21 AM |
07-29-2020
06:34 AM
@massoudm Did you find a solution? I get the same error after upgrading HDP 3.1.0 to 3.1.4. Thanks in advance.
06-11-2020
01:27 PM
Our installation had the password hash in another table:

update ambari.user_authentication set authentication_key='538916f8943ec225d97a9a86a2c6ec0818c1cd400e09e03b660fdaaec4af29ddbb6f2b1033b81b00' where user_id='1';

Note: user_id=1 was the admin in my case.
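For context, here is a minimal sketch of how that update might be applied, assuming Ambari's default embedded PostgreSQL database with database and user both named ambari; the connection details and the admin user_id may differ in your environment:

```
# Sketch only: apply the same update against the Ambari database.
# Assumptions: default embedded PostgreSQL, database and user both named
# "ambari", and user_id=1 being the admin account. Connection settings
# live in /etc/ambari-server/conf/ambari.properties.
ambari-server stop
psql -U ambari -d ambari <<'SQL'
UPDATE ambari.user_authentication
SET authentication_key = '538916f8943ec225d97a9a86a2c6ec0818c1cd400e09e03b660fdaaec4af29ddbb6f2b1033b81b00'
WHERE user_id = '1';
SQL
ambari-server start
```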
11-28-2019
04:18 AM
For me the solution to this issue was just reinstalling the ambari-agent. Maybe it was partially installed somehow.
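For reference, a rough sketch of what such a reinstall could look like on a yum-based node; the package manager, the repo being already configured, and the server hostname placeholder are assumptions, not steps from the original post:

```
# Hypothetical reinstall of the Ambari agent (assumes a yum-based OS and a
# configured Ambari repo; adapt to your distribution).
ambari-agent stop || true
yum -y remove ambari-agent
yum -y install ambari-agent
# Re-point the agent at the Ambari server (placeholder hostname below).
sed -i 's/^hostname=.*/hostname=AMBARI_SERVER_FQDN/' /etc/ambari-agent/conf/ambari-agent.ini
ambari-agent start
```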
10-22-2019
07:11 AM
In case you are already at Step 9 and cannot proceed with an ambari-server reset (as it involves adding lots of configs again), the steps below are for you.
Prerequisites: the cluster is at the Deployment step (Step 9) and you only have the Retry button to press.
- Follow the steps mentioned above.
- Don't restart ambari-server. Instead, refresh the Installation Wizard page. It will automatically pick up the HDP repo URLs; then click Deploy.
- It should go through smoothly.
09-17-2019
07:00 AM
Hi, do we have something similar for Cloudera (custom alerts)? Regards, Amjad
09-13-2019
12:15 PM
Thanks a lot for your answer @jsensharma, using your command makes everything work as expected! I didn't try this before because I was a bit confused about the impact of the --silent flag. I was wrongly assuming that using --silent implied all the parameters were set to their default values.
08-19-2019
01:25 AM
ambari-server restart
Using python /usr/bin/python
Restarting ambari-server
Ambari Server is not running
Ambari Server running with administrator privileges.
Organizing resource files at /var/lib/ambari-server/resources...
Ambari database consistency check started...
Server PID at: /var/run/ambari-server/ambari-server.pid
Server out at: /var/log/ambari-server/ambari-server.out
Server log at: /var/log/ambari-server/ambari-server.log
Waiting for server start...........................................ERROR: Exiting with exit code -1.
REASON: Ambari Server java process has stopped. Please check the logs for more information.
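Since the startup output points at the server logs, checking them is the natural next step; the paths below are the ones printed in the output above:

```
# Look for the reason the Ambari Server java process stopped
tail -n 200 /var/log/ambari-server/ambari-server.log
tail -n 200 /var/log/ambari-server/ambari-server.out
# Narrow down to errors and exceptions near the end of the log
grep -iE 'error|exception' /var/log/ambari-server/ambari-server.log | tail -n 50
```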
08-18-2019
09:42 PM
Hi @rushi_ns, yours might be a completely different issue. Please create a new question thread stating your issue.
08-18-2019
09:32 PM
Hi @khosrucse, removing these staging files is part of the YARN application's execution. But when the YARN application itself is killed, there is no way for it to remove these files. A new query on top of the same table also has no reference to these staging files (which would allow them to be removed by later runs), because the files were generated by the YARN application that has already been killed. For now, the only option is to remove the files manually. You can also refer to the link below for a different approach to removing these directories. Please note, this is a workaround from an end user, and thus should be implemented with your environment in mind. https://stackoverflow.com/questions/33844381/hive-overwrite-directory-move-process-as-distcp/35583367#35583367 (for your reference)
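To illustrate the manual cleanup, here is a hedged sketch; the warehouse path and table are placeholders, and the .hive-staging prefix assumes the default hive.exec.stagingdir, so verify nothing still references these directories before deleting:

```
# Sketch: locate and remove leftover Hive staging directories on HDFS.
# The path below is a placeholder; check hive.exec.stagingdir in your Hive
# config and make sure no running job still needs these files.
hdfs dfs -ls /warehouse/tablespace/managed/hive/mydb.db/mytable/ | grep '\.hive-staging'
hdfs dfs -rm -r -skipTrash '/warehouse/tablespace/managed/hive/mydb.db/mytable/.hive-staging_hive_*'
```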
08-18-2019
09:29 PM
Hi @Koffi, yarn.nodemanager.log-dirs is where user job containers write their stdout, stderr and syslog files. YARN has log aggregation, which copies the log files from the NodeManager local directory to HDFS and removes them. In yarn-site.xml, look for the property below in the YARN configs in Ambari (the value shown is from my setup):

<property>
  <name>yarn.nodemanager.log-dirs</name>
  <value>/var/log/hadoop/yarn/log</value>
</property>

Also make sure a dedicated disk is allocated. Another way the disks can fill up is when many parallel containers run on a NodeManager and write too much intermediate data. The most common cause of "log-dirs are bad" is the available disk space on the node exceeding YARN's max-disk-utilization-per-disk-percentage, whose default value is 90.0%. Either clean up the disk that the unhealthy node is running on, or increase the threshold in yarn-site.xml. If you are interested in more detail, please read: https://blog.cloudera.com/resource-localization-in-yarn-deep-dive/
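As a quick check of whether the disk-utilization threshold is what is marking the node unhealthy, a small sketch; the log-dirs path comes from the property above, while the local-dirs path is an assumption to verify against your own config:

```
# Check how full the YARN log and local directories are on the unhealthy node.
# If usage is above ~90%, free up space or raise
# yarn.nodemanager.disk-health-checker.max-disk-utilization-per-disk-percentage
# in yarn-site.xml.
df -h /var/log/hadoop/yarn/log
df -h /hadoop/yarn/local   # assumed yarn.nodemanager.local-dirs value; verify in Ambari
```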