Member since: 08-08-2017
Posts: 1652
Kudos Received: 30
Solutions: 11

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 2001 | 06-15-2020 05:23 AM |
| | 16478 | 01-30-2020 08:04 PM |
| | 2149 | 07-07-2019 09:06 PM |
| | 8352 | 01-27-2018 10:17 PM |
| | 4739 | 12-31-2017 10:12 PM |
12-04-2018 02:03 PM
2018-12-04 06:01:04,253 - Repository[None] {'action': ['create']}
2018-12-04 06:01:04,254 - File['/tmp/tmp941BIk'] {'content': '[HDP-3.0-repo-101]\nname=HDP-3.0-repo-101\nbaseurl=http://public-repo-1.hortonworks.com/HDP/centos7/3.x/updates/3.0.1.0\n\npath=/\nenabled=1\ngpgcheck=0\n[HDP-UTILS-1.1.0.22-repo-101]\nname=HDP-UTILS-1.1.0.22-repo-101\nbaseurl=http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.22/repos/centos7\n\npath=/\nenabled=1\ngpgcheck=0'}
2018-12-04 06:01:04,254 - Writing File['/tmp/tmp941BIk'] because contents don't match
2018-12-04 06:01:04,254 - File['/tmp/tmpq8_Itc'] {'content': StaticFile('/etc/yum.repos.d/ambari-hdp-101.repo')}
2018-12-04 06:01:04,255 - Writing File['/tmp/tmpq8_Itc'] because contents don't match
2018-12-04 06:01:04,255 - Rewriting /etc/yum.repos.d/ambari-hdp-101.repo since it has changed.
2018-12-04 06:01:04,255 - File['/etc/yum.repos.d/ambari-hdp-101.repo'] {'content': StaticFile('/tmp/tmp941BIk')}
2018-12-04 06:01:04,256 - Writing File['/etc/yum.repos.d/ambari-hdp-101.repo'] because contents don't match

Hi, I am using a proxy to access the HDP repo, but as the log above shows, Ambari overrides /etc/yum.repos.d/ambari-hdp-101.repo and generates a new repo file every time, so my proxy is never used to reach the public repository and the installation fails. I have already set the proxy name and port in the Ambari environment. Please advise how to instruct Ambari to use the existing repo without generating/overriding it every time. Thank you in advance.
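One possible workaround, sketched here on the assumption that the hosts install packages with yum (the log shows CentOS 7 repos): a proxy set globally in /etc/yum.conf applies to every repo file, so it keeps working even when Ambari regenerates ambari-hdp-101.repo. The host and port below are placeholders:

# append to the [main] section of /etc/yum.conf on each host
# (proxyhost:3128 is a placeholder for your actual proxy)
proxy=http://proxyhost:3128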
01-13-2018 11:30 PM
2 Kudos
@Michael Bronson Whatever versions of Ambari and Hadoop you see in the Hortonworks documentation are certified (although some of them are declared End of Support because they are very old). If you want to see the support policy and end-of-life information for the various products, including the current HDP support matrix, please refer to: https://hortonworks.com/agreements/support-services-policy/

If you are planning to set up a new cluster, we suggest you go with the latest versions: Ambari 2.6.1 (https://docs.hortonworks.com/HDPDocuments/Ambari/Ambari-2.6.1.0/index.html) and HDP 2.6.4 or 2.6.3 (https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.6.4/index.html).
01-14-2018 03:13 PM
@Jay regarding HDP 2.6.4, can we also download this version with curl?
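As a sketch of what that could look like: HDP releases were published as repo tarballs on the public repo server, so a curl download might be the line below. The exact path is an assumption based on the usual public-repo layout and should be verified against the HDP 2.6.4 installation documentation.

# hypothetical URL; confirm the real tarball path in the HDP 2.6.4 docs
curl -O http://public-repo-1.hortonworks.com/HDP/centos7/2.x/updates/2.6.4.0/HDP-2.6.4.0-centos7-rpm.tar.gz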
01-10-2018 02:28 PM
Well done Jay, you are the best!
01-08-2018 09:26 PM
@Michael Bronson For "Metrics Monitor" status you can alter the API call as following: # curl -i -H "X-Requested-By: ambari" -u admin:admin -X GET http://amb25101.example.com:8080/api/v1/clusters/plain_ambari/services/AMBARI_METRICS/components/METRICS_MONITOR?fields=host_components/HostRoles/host_name,host_components/HostRoles/state | grep -A 1 host_name For Yarn Resources like "NODEMANAGER" you can do it like: (Same logic you can apply for RESOURCEMANAGER, APP_TIMELINE_SERVER) # curl -i -H "X-Requested-By: ambari" -u admin:admin -X GET http://amb25101.example.com:8080/api/v1/clusters/plain_ambari/services/YARN/components/NODEMANAGER?fields=host_components/HostRoles/host_name,host_components/HostRoles/state | grep -A 1 host_name ..
01-04-2018 06:31 PM
1 Kudo
@Michael Bronson, You can use these curl calls to run all the service checks and check their status.

To run service checks:

curl -ivk -H "X-Requested-By: ambari" -u {ambari-username}:{ambari-password} -X POST -d @payload.txt http://{ambari-server}:{ambari-port}/api/v1/clusters/{cluster-name}/request_schedules
Sample response:
{
"resources": [
{
"href": "http://<ambari-server>:8080/api/v1/clusters/<clustername>/request_schedules/68",
"RequestSchedule": {
"id": 68 // This is the request-schedule-id to be used for second call
}
}
]
}
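The id 68 in this sample is the request-schedule-id that the status call described below needs. Plugged into the template, that second call would look like:

curl -ivk -H "X-Requested-By: ambari" -u {ambari-username}:{ambari-password} -X GET http://{ambari-server}:{ambari-port}/api/v1/clusters/{cluster-name}/request_schedules/68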
<br> Note: Download the attached payload.txt to some folder and run the above command from the same folder. To get status of service checks curl -ivk -H "X-Requested-By: ambari" -u {ambari-username}:{ambari-password} -X GET http://{ambari-server}:{ambari-port}/api/v1/clusters/{cluster-name}/request_schedules/{request-schedule-id} To get the status of each service, iterate through batch_requests array in the response and look for 'request_status' inside each object. COMPLETED is for passed, FAILED for failed, ABORTED if service check is aborted.payload.txt Note: request-schedule-id for the second curl call is obtained from the response of 1st call. Thanks, Aditya
01-04-2018 03:32 AM
1 Kudo
@Michael Bronson, Yes. You can use the second way to achieve your task. You can also use the script below to check whether the NameNode is in safe mode and leave it conditionally:

#!/bin/bash
# grep exits 0 only when "hdfs dfsadmin -safemode get" reports that safe mode is ON
su - hdfs -c "hdfs dfsadmin -safemode get" | grep -q ON
if [ $? -eq 0 ]
then
    # safe mode is ON, so leave it
    su - hdfs -c "hdfs dfsadmin -safemode leave"
fi

To run the above script, put the content in a file, say xyz.sh, then:

chmod +x xyz.sh
./xyz.sh

Thanks, Aditya
01-03-2018 09:55 PM
2 Kudos
@Michael Bronson curl -u {ambari_username}:{ambari_password} -H "X-Requested-By: ambari" -i -X GET http://localhost:8080/api/v1/clusters/cl1/components?fields=ServiceComponentInfo/state
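The response is a JSON list of every component with its state; a trimmed illustration of the shape (the component and state values here are hypothetical):

{
  "items" : [
    {
      "href" : "http://localhost:8080/api/v1/clusters/cl1/components/NAMENODE?fields=ServiceComponentInfo/state",
      "ServiceComponentInfo" : {
        "cluster_name" : "cl1",
        "component_name" : "NAMENODE",
        "state" : "STARTED"
      }
    }
  ]
}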
05-16-2018 06:48 PM
This can happen if Spark1 and Spark2 are both running on the same node. Try to kill the process, then delete the service and add it to a separate node. That should work.
12-21-2018 10:07 AM
Thanks for the quick reply. I meant to call a script to shut down the Ambari components after the server is issued a shutdown command, but before it actually shuts down. I found a solution to the issue anyway: I just needed to add an "ExecStop=" command to the systemd service files, and all seems to work fine now. Thanks again for your quick reply.
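For illustration, a minimal sketch of what such a unit file could look like; the unit name, paths, and stop command are assumptions rather than details from this thread:

# /etc/systemd/system/ambari-agent.service (hypothetical example)
[Unit]
Description=Ambari Agent

[Service]
Type=forking
ExecStart=/usr/sbin/ambari-agent start
# systemd runs ExecStop before stopping the service, including at shutdown
ExecStop=/usr/sbin/ambari-agent stop

[Install]
WantedBy=multi-user.target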