Member since: 03-14-2016
Posts: 4721
Kudos Received: 1111
Solutions: 874
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 2730 | 04-27-2020 03:48 AM |
| | 5288 | 04-26-2020 06:18 PM |
| | 4458 | 04-26-2020 06:05 PM |
| | 3584 | 04-13-2020 08:53 PM |
| | 5386 | 03-31-2020 02:10 AM |
10-09-2017
05:59 AM
@dsun Please try the following:

1). For the missing Upgrade button issue, open Ambari in Experimental mode using the URL "http://$AMBARI_HOST:8080/#/experimental", check the "opsDuringRollingUpgrade" option (set it to true), and save it in the UI to see if it helps.

2). Regarding the DRUID upgrade issue, log in to the host "scregionm2.field.hortonworks.com" via SSH and then check the following:

a). Has the Druid package been upgraded?

b). Check whether all components are reflecting version "2.6.2.0-205" in the output of hdp-select. If all installed components are correctly showing "2.6.2.0-205" (except DRUID), then refer to step c).

# hdp-select

c). If you have installed DRUID from some third-party source, then run the following command to set its version to 2.6.2.0-205.

Syntax:

# hdp-select set <PACKAGE_NAME> 2.6.2.0-205

Here PACKAGE_NAME will be the druid package name.

(OR)

# hdp-select set all 2.6.2.0-205
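As a quick sketch of what steps b) and c) might look like on that host (this assumes Druid was installed through the HDP stack, so its components are registered with hdp-select; the exact component names depend on your install):

# hdp-select | grep -i druid

Every Druid entry in the output (e.g. druid-broker, druid-overlord, if those are the names on your system) should report "2.6.2.0-205"; any entry still showing the old version is the one to fix with "hdp-select set" as in step c).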
10-09-2017
04:12 AM
2 Kudos
@Nik Lam You can try using the following API call to set "enableIpa" to "true":

curl -u admin:admin -i -H 'X-Requested-By: ambari' -X POST -d '{"user-pref-admin-supports":"{\"disableHostCheckOnAddHostWizard\":false,\"preUpgradeCheck\":true,\"displayOlderVersions\":false,\"autoRollbackHA\":false,\"alwaysEnableManagedMySQLForHive\":false,\"preKerberizeCheck\":false,\"customizeAgentUserAccount\":false,\"installGanglia\":false,\"opsDuringRollingUpgrade\":false,\"customizedWidgetLayout\":false,\"showPageLoadTime\":false,\"skipComponentStartAfterInstall\":false,\"preInstallChecks\":false,\"serviceAutoStart\":true,\"logSearch\":true,\"redhatSatellite\":false,\"enableIpa\":true,\"addingNewRepository\":false,\"kerberosStackAdvisor\":true,\"logCountVizualization\":false,\"enabledWizardForHostOrderedUpgrade\":true,\"manageJournalNode\":true}"}' http://amb25101.example.com:8080/api/v1/persist

Please replace "amb25101.example.com" with your Ambari Server host. If everything is OK, then the HTTP response to the above curl call should be 202, as follows:

HTTP/1.1 202 Accepted
X-Frame-Options: DENY
X-XSS-Protection: 1; mode=block
X-Content-Type-Options: nosniff
Cache-Control: no-store
Pragma: no-cache
Set-Cookie: AMBARISESSIONID=1o05iiupqulu6o52uwtl9cmhz;Path=/;HttpOnly
Expires: Thu, 01 Jan 1970 00:00:00 GMT
User: admin
Content-Type: text/plain
Content-Length: 0
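As a follow-up sanity check, you may be able to read the value back from the same persist endpoint to confirm that "enableIpa" was stored (this GET call is a sketch based on the same API; adjust the credentials and host as before):

# curl -u admin:admin -H 'X-Requested-By: ambari' http://amb25101.example.com:8080/api/v1/persist/user-pref-admin-supports

The returned JSON should contain "enableIpa":true.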
10-08-2017
03:26 AM
@vishwanath pr If you are using the Sandbox and getting the message "No command 'hdfs' found", it means you might not have SSHed into the Sandbox on port 2222 (which is mandatory for these commands to work). A similar issue has been reported before.

# ssh root@127.0.0.1 -p 2222
# hdfs dfs -ls /tmp

If you are getting this message in a non-Sandbox environment, then you must make sure that the "HDFS_CLIENT" package/component is installed on that host. You can install the clients on the host from the Ambari UI:

Ambari UI --> Hosts (Tab) --> Click on the desired hostname --> Click on "Install Clients"
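Before reinstalling anything, a quick way to check whether the HDFS client binaries are already present on the host (the /usr/hdp path is the typical HDP layout and is an assumption here):

# which hdfs
# ls /usr/hdp/current/hadoop-client/bin/hdfs

If both commands come back empty or missing, install the clients from the Ambari UI as described above.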
10-06-2017
07:25 PM
@Winnie Philip Great to see that your issue is fixed. It would also be great if you could mark this HCC thread as answered by clicking the "Accept" button. That way other HCC users can quickly find the solution when they encounter the same issue.
10-06-2017
06:00 PM
@Winnie Philip After fixing the user:group ownership, is it working, or still failing?

drwxrwxrwx 4 yarn hadoop 30 Aug 24 14:35 /appl/hadoop/yarn
10-06-2017
04:17 PM
@Winnie Philip We see the warning as:

2017-10-06 10:28:10,119 WARN localizer.ResourceLocalizationService (ResourceLocalizationService.java:checkLocalDir(1445)) - Permissions incorrectly set for dir /usr/local/opt/hadoop/yarn/local/nmPrivate, should be rwx------, actual value = rwx-w----

We will need to check the permissions on the YARN local directories, which basically depend on the umask value set for the 'yarn' user at the system level. What umask is set for the yarn user in the following files?

/etc/bashrc
/etc/profile

The above is a WARNING message that can be seen when the ResourceLocalizationService checks the permission of nmPrivate: if it is not 700, it just logs a warning and changes the permission to 700. But in your case it is not able to change the permission:

WARN localizer.ResourceLocalizationService (ResourceLocalizationService.java:initializeLocalDir(1277)) - Could not set permissions for local dir /appl/hadoop/yarn/local/usercache
EPERM: Operation not permitted

Please check the NodeManagers' file/folder permissions, i.e. whether ownership is set to "yarn:hadoop" (for example on /appl/hadoop/yarn):

# ls -ld /appl/hadoop/yarn
# chown yarn:hadoop /appl/hadoop/yarn
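To see the umask the yarn user actually ends up with, a minimal check (assuming the yarn account exists; the -s flag forces a usable shell in case its login shell is nologin):

# su - yarn -s /bin/bash -c 'umask'

The usual value is 0022; an unusual value inherited from /etc/profile or /etc/bashrc can leave the local directories created with unexpected permissions like the ones in the warning above.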
10-06-2017
04:09 PM
@Karpagalakshmi Rajagopalan Good to know that the issue is resolved and the solution worked for you. It would also be great if you could mark this HCC thread as answered by clicking the "Accept" button. That way other HCC users can quickly find the solution when they encounter the same issue.
10-06-2017
04:04 PM
@David Robison On a connection failure the client retries against the alternate NameNode, up to a total of 15 attempts by default, with an exponentially increasing delay capped at 15 seconds. The "dfs.client.failover.max.attempts" property defines the number of failover attempts (set to 15 by default), and the maximum wait time between attempts (15 seconds by default) is controlled by the property "dfs.client.failover.sleep.max.millis", which specifies the maximum amount of time to wait between failovers.
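To confirm what your client is actually using, you can query both properties directly (same getconf approach as elsewhere in this thread):

# hdfs getconf -confKey dfs.client.failover.max.attempts
# hdfs getconf -confKey dfs.client.failover.sleep.max.millis

Unless overridden, these return the defaults of 15 and 15000 (milliseconds, i.e. 15 seconds) respectively.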
10-06-2017
03:56 PM
@David Robison What is the value of the following HDFS property, "dfs.client.retry.policy.enabled"?

# su - hdfs
# hdfs getconf -confKey dfs.client.retry.policy.enabled

The DFSClient should ignore "dfs.client.retry.policy.enabled" for HA proxies to ensure fast failover. Otherwise, the DFSClient retries the NameNode that is no longer active, which delays the failover.
10-06-2017
03:38 PM
@raouia The "Host Disk Usage" alert is actually a host-level alert that is triggered when the amount of disk space used on a host (not on DFS) goes above specific thresholds (default values: 50% WARNING, 80% CRITICAL).

"fields.png" --> Shows the HDFS usage (only the HDFS file system usage, not the disk usage). This image shows that a total of 3 GB of HDFS is being utilized so far at the cluster level.

"hosts.png" --> Shows the "Disk Usage" of the highlighted host (which is 88.53% used, with a total capacity of 19.62 GB).

"alerts.png" --> This alert performs advanced disk checks under Linux. It first attempts to check the HDP installation directories if they exist; if they do not exist, it defaults to checking /. For more details please see [1]. So if you look at the complete alert message, you will find something like the following (please check the path that it reports at the end; it might be "/usr/hdp", so it can differ from the hosts.png usage):

Example: Capacity Used: [84.15%, 17.7 GB], Capacity Total: [21.1 GB], path=/usr/hdp

[1] https://github.com/apache/ambari/blob/release-2.5.2/ambari-server/src/main/resources/host_scripts/alert_disk_space.py#L53-L57
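If you want to reproduce the number the alert reports, you can run df against the path shown at the end of the alert text (here /usr/hdp, per the example message above):

# df -h /usr/hdp

The "Use%" column should line up with the "Capacity Used" percentage in the alert.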