Member since: 08-08-2017
Posts: 1652
Kudos Received: 30
Solutions: 11
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 2001 | 06-15-2020 05:23 AM |
| | 16478 | 01-30-2020 08:04 PM |
| | 2149 | 07-07-2019 09:06 PM |
| | 8354 | 01-27-2018 10:17 PM |
| | 4739 | 12-31-2017 10:12 PM |
02-22-2018
11:04 AM
1 Kudo
@Michael Bronson One solution can be as follows:

```shell
# export HOST=amb25101.example.com
# export USER=admin
# export PASSWD=admin
# export ACTION=INSTALLED
# export CLUSTER_NAME=plain_ambari
# echo "{\"RequestInfo\": {\"context\": \"Stop AMBARI_METRICS via REST\"}, \"Body\": {\"ServiceInfo\": {\"state\": \"$ACTION\"}}}" > /tmp/postData.txt
# curl -u $USER:$PASSWD -i -H 'X-Requested-By: ambari' -X PUT -d @/tmp/postData.txt http://$HOST:8080/api/v1/clusters/$CLUSTER_NAME/services/AMBARI_METRICS
```

So basically the thing we are doing differently is writing the JSON request body to /tmp/postData.txt first and then passing that file to curl with `-d @/tmp/postData.txt`.
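A quick way to catch quoting mistakes in a payload like this (my own addition, assuming `python3` is available on the host) is to validate the file before sending it:

```shell
# Build the payload exactly as in the commands above.
export ACTION=INSTALLED
echo "{\"RequestInfo\": {\"context\": \"Stop AMBARI_METRICS via REST\"}, \"Body\": {\"ServiceInfo\": {\"state\": \"$ACTION\"}}}" > /tmp/postData.txt

# Fail fast if it is not valid JSON; a stray or missing quote here would
# otherwise only surface as an HTTP 400 from Ambari.
python3 -m json.tool < /tmp/postData.txt
```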
12-04-2018
10:10 AM
1 Kudo
You could make use of the "-s" (or "--silent") option, as in: `ambari-server upgrade -s`
02-23-2018
04:41 PM
2.6.x docs updated, e.g., https://docs.hortonworks.com/HDPDocuments/Ambari-2.6.1.3/bk_ambari-upgrade/content/preparing_to_upgrade_ambari_and_hdp.htm
02-18-2018
10:58 AM
@Michael Bronson What you are seeing in the output of the API call is actually the HTTP response which Ambari has sent to the client:

```json
{
  "href": "http://localhost:8080/api/v1/clusters/sys15626/requests/128",
  "Requests": {
    "id": 128,
    "status": "Accepted"
  }
}
```

The above is a standard API response from Ambari when a request is accepted. The only option to get just the ID part is to parse the response with the standard text-processing tools the OS provides, such as grep (or a custom parsing utility / third-party APIs). Example:

```shell
# cat /tmp/response.txt | grep id | awk '{print $NF}' | cut -d , -f1
```
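To illustrate, here is that same extraction run against a saved copy of the response (the file path is my own example):

```shell
# Save the sample response shown above to a file (illustrative path).
cat > /tmp/response.txt <<'EOF'
{
  "href": "http://localhost:8080/api/v1/clusters/sys15626/requests/128",
  "Requests": {
    "id": 128,
    "status": "Accepted"
  }
}
EOF

# Take the last field of the "id" line and strip the trailing comma.
cat /tmp/response.txt | grep id | awk '{print $NF}' | cut -d , -f1
# prints: 128
```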
02-19-2018
11:13 AM
@Jay, it's working now after I set the file under the root home directory instead of the postgres home.
02-06-2018
03:50 PM
3 Kudos
@Michael Bronson, You can use the below API:

```shell
curl -u {ambari-username}:{ambari-password} -H "X-Requested-By: ambari" -X PUT -d '{"RequestInfo":{"context":"_PARSE_.START.ALL_SERVICES","operation_level":{"level":"CLUSTER","cluster_name":"hdp"}},"Body":{"ServiceInfo":{"state":"STARTED"}}}' http://{ambari-host}:{ambari-port}/api/v1/clusters/{cluster-name}/services
```

Replace ambari-username, ambari-password, ambari-host, ambari-port, and cluster-name with their respective values. Note: the cluster name must be replaced in two places, once in the URL ({cluster-name}) and once in the JSON body, where I put "hdp". Thanks, Aditya
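As a sketch (the variable names and values here are my own), putting the placeholders in shell variables makes sure the cluster name is substituted in both places consistently:

```shell
# Hypothetical values; replace with your own.
AMBARI_USER=admin
AMBARI_PASS=admin
AMBARI_HOST=ambari.example.com
AMBARI_PORT=8080
CLUSTER_NAME=hdp

# The cluster name appears both in the JSON body and in the URL.
BODY="{\"RequestInfo\":{\"context\":\"_PARSE_.START.ALL_SERVICES\",\"operation_level\":{\"level\":\"CLUSTER\",\"cluster_name\":\"$CLUSTER_NAME\"}},\"Body\":{\"ServiceInfo\":{\"state\":\"STARTED\"}}}"
URL="http://$AMBARI_HOST:$AMBARI_PORT/api/v1/clusters/$CLUSTER_NAME/services"

# Printed as a dry run; remove the leading echo to actually send it.
echo curl -u "$AMBARI_USER:$AMBARI_PASS" -H "X-Requested-By: ambari" -X PUT -d "$BODY" "$URL"
```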
02-06-2018
11:17 AM
1 Kudo
@Michael Bronson, Safe mode applies only to the NameNode, not to any other services/components. The above command gives you the safe mode status of the two NameNodes (the active and the standby). You can read more about safe mode at the links below:
http://hadooptutorial.info/safe-mode-in-hadoop/
http://data-flair.training/forums/topic/what-is-safemode-in-hadoop
Thanks, Aditya
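For reference, `hdfs dfsadmin -safemode get` prints one status line per NameNode in an HA setup, so a small check like the following can flag whether either NameNode is still in safe mode (the sample output and hostnames here are fabricated for illustration):

```shell
# Fabricated sample of the HA output (two NameNodes); in real usage you
# would run: hdfs dfsadmin -safemode get > /tmp/safemode.txt
cat > /tmp/safemode.txt <<'EOF'
Safe mode is OFF in nn1.example.com/10.0.0.1:8020
Safe mode is ON in nn2.example.com/10.0.0.2:8020
EOF

# Any "ON" line means at least one NameNode has not left safe mode yet.
if grep -q 'Safe mode is ON' /tmp/safemode.txt; then
  echo "at least one NameNode is in safe mode"
else
  echo "both NameNodes are out of safe mode"
fi
```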
02-06-2018
05:22 PM
It's the memory size for a Spark executor (worker), and there is additional overhead on top of it in a Spark executor. You need to set a proper value yourself. In a YARN environment, the executor memory plus the overhead must be smaller than the YARN container limit; when it is not, Spark shows you this error message. It's an application property. For normal Spark jobs, users are responsible, because each application can set its own `spark.executor.memory` with `spark-submit`. For the Spark Thrift Server, admins should manage it properly when they adjust the YARN configuration. For more information, please see http://spark.apache.org/docs/latest/configuration.html#application-properties
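As a rough sanity check (the 10% / 384 MB figures are Spark's documented default for executor memory overhead on YARN; the executor size here is illustrative), you can estimate the total container size an executor will request:

```shell
# Executor heap in MB (illustrative value for spark.executor.memory=4g).
EXEC_MEM_MB=4096

# Default overhead: max(10% of executor memory, 384 MB).
OVERHEAD_MB=$((EXEC_MEM_MB / 10))
if [ "$OVERHEAD_MB" -lt 384 ]; then OVERHEAD_MB=384; fi

# This total is what YARN must be able to allocate in one container;
# if it exceeds yarn.scheduler.maximum-allocation-mb, the request fails.
TOTAL_MB=$((EXEC_MEM_MB + OVERHEAD_MB))
echo "executor=$EXEC_MEM_MB MB overhead=$OVERHEAD_MB MB total=$TOTAL_MB MB"
```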
02-05-2018
07:36 PM
Refer to the document https://cwiki.apache.org/confluence/display/AMBARI/Modify+configurations on how to modify configurations. You have to update the configuration of type `hive-log4j` and set the following properties: "hive_log_maxbackupindex": "30", "hive_log_maxfilesize": "256"
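One way to apply this (a sketch, assuming Ambari's bundled `configs.sh` helper at its usual install path, with hypothetical host and cluster names) is:

```shell
# Hypothetical values; adjust for your environment.
AMBARI_HOST=ambari.example.com
CLUSTER_NAME=hdp

# configs.sh normally ships under /var/lib/ambari-server/resources/scripts
# on the Ambari server host. Printed here as a dry run; remove the leading
# echo on each line to actually apply the change.
echo /var/lib/ambari-server/resources/scripts/configs.sh set "$AMBARI_HOST" "$CLUSTER_NAME" hive-log4j hive_log_maxbackupindex 30
echo /var/lib/ambari-server/resources/scripts/configs.sh set "$AMBARI_HOST" "$CLUSTER_NAME" hive-log4j hive_log_maxfilesize 256
```

Restart the affected Hive components afterwards so the new log4j settings take effect.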