Member since: 08-08-2017
Posts: 1652
Kudos Received: 30
Solutions: 11

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 1986 | 06-15-2020 05:23 AM |
| | 16296 | 01-30-2020 08:04 PM |
| | 2136 | 07-07-2019 09:06 PM |
| | 8320 | 01-27-2018 10:17 PM |
| | 4719 | 12-31-2017 10:12 PM |
08-22-2017
06:53 PM
You can get the JSON response from the Ambari hosts API:
https://github.com/apache/ambari/blob/trunk/ambari-server/docs/api/v1/hosts.md
http://ambari-server:8080/api/v1/clusters/:clusterName/hosts
To extract the hostnames more easily, you could try a JSONPath expression:
$.items[*].Hosts.host_name
Or Python with the Requests library:
r = requests.get('...')
hosts = ','.join(x['Hosts']['host_name'] for x in r.json()['items'])
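As a hedged sketch of the same idea without the network call, here is how the `items` array could be reduced to a comma-separated host list (the sample payload and host names below are illustrative, not real cluster output):

```python
import json

# Parse a (sample) Ambari /api/v1/clusters/:clusterName/hosts response
# and join all host names with commas. With Requests, the payload would
# come from r.json() instead of this embedded sample.
sample = json.loads("""
{"items": [
  {"Hosts": {"host_name": "node1.example.com"}},
  {"Hosts": {"host_name": "node2.example.com"}}
]}
""")

def host_names(payload):
    """Return a comma-separated list of host_name values from the items array."""
    return ','.join(item['Hosts']['host_name'] for item in payload['items'])

print(host_names(sample))  # node1.example.com,node2.example.com
```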
08-20-2017
09:44 AM
@uri ben-ari Just updated the other thread.
08-20-2017
06:35 PM
Jay - some of our Ambari clusters are old (they were installed 6-8 months ago). What is the best approach to align those old Ambari systems (I mean the parameters and their values) so that they carry the same full set of parameters as our current Ambari clusters?
08-18-2017
11:06 AM
Now I see the request ID; all is right now.
08-18-2017
10:22 AM
1 Kudo
@uri ben-ari One approach would be to use a shell script that makes an Ambari API call and then greps out the config types.

Example (create a file such as "/tmp/get_all_config_types.sh" with the following content):

for CONFIG_TYPE in `curl -s -u admin:admin http://amb25101.example.com:8080/api/v1/clusters/plain_ambari?fields=Clusters/desired_configs | grep '" : {' | grep -v Clusters | grep -v desired_configs | cut -d'"' -f2`; do
  echo "Config_type: $CONFIG_TYPE"
done

Replace the following values in the script to match your environment: "amb25101.example.com" with your Ambari server hostname, "plain_ambari" with your Ambari cluster name, and 8080 with the port of your Ambari server.

Output:
# chmod 755 /tmp/get_all_config_types.sh
# /tmp/get_all_config_types.sh
Config_type: admin-log4j
Config_type: admin-properties
Config_type: ams-env
Config_type: ams-grafana-env
Config_type: ams-grafana-ini
Config_type: ams-hbase-env
Config_type: ams-hbase-log4j
Config_type: ams-hbase-policy
Config_type: ams-hbase-security-site
Config_type: ams-hbase-site
Config_type: ams-log4j
Config_type: ams-logsearch-conf
Config_type: ams-site
Config_type: ams-ssl-client
Config_type: ams-ssl-server
Config_type: atlas-tagsync-ssl
Config_type: beeline-log4j2
Config_type: capacity-scheduler
Config_type: cluster-env
Config_type: core-site
Config_type: hadoop-env
Config_type: hadoop-metrics2.properties
Config_type: hadoop-policy
Config_type: hbase-env
Config_type: hbase-log4j
Config_type: hbase-logsearch-conf
Config_type: hbase-policy
Config_type: hbase-site
Config_type: hcat-env
Config_type: hdfs-log4j
Config_type: hdfs-logsearch-conf
Config_type: hdfs-site
Config_type: hive-atlas-application.properties
Config_type: hive-env
Config_type: hive-exec-log4j
Config_type: hive-exec-log4j2
Config_type: hive-interactive-env
Config_type: hive-interactive-site
Config_type: hive-log4j
Config_type: hive-log4j2
Config_type: hive-logsearch-conf
Config_type: hive-site
Config_type: hivemetastore-site
Config_type: hiveserver2-interactive-site
Config_type: hiveserver2-site
Config_type: kafka-broker
Config_type: kafka-env
Config_type: kafka-log4j
Config_type: kafka-logsearch-conf
Config_type: kafka_client_jaas_conf
Config_type: kafka_jaas_conf
Config_type: livy2-conf
Config_type: livy2-env
Config_type: livy2-log4j-properties
Config_type: livy2-spark-blacklist
Config_type: llap-cli-log4j2
Config_type: llap-daemon-log4j
Config_type: mapred-env
Config_type: mapred-logsearch-conf
Config_type: mapred-site
Config_type: pig-env
Config_type: pig-log4j
Config_type: pig-properties
Config_type: ranger-admin-site
Config_type: ranger-env
Config_type: ranger-hbase-audit
Config_type: ranger-hbase-plugin-properties
Config_type: ranger-hbase-policymgr-ssl
Config_type: ranger-hbase-security
Config_type: ranger-hdfs-audit
Config_type: ranger-hdfs-plugin-properties
Config_type: ranger-hdfs-policymgr-ssl
Config_type: ranger-hdfs-security
Config_type: ranger-hive-audit
Config_type: ranger-hive-plugin-properties
Config_type: ranger-hive-policymgr-ssl
Config_type: ranger-hive-security
Config_type: ranger-kafka-audit
Config_type: ranger-kafka-plugin-properties
Config_type: ranger-kafka-policymgr-ssl
Config_type: ranger-kafka-security
Config_type: ranger-logsearch-conf
Config_type: ranger-site
Config_type: ranger-solr-configuration
Config_type: ranger-storm-audit
Config_type: ranger-storm-plugin-properties
Config_type: ranger-storm-policymgr-ssl
Config_type: ranger-storm-security
Config_type: ranger-tagsync-policymgr-ssl
Config_type: ranger-tagsync-site
Config_type: ranger-ugsync-site
Config_type: ranger-yarn-audit
Config_type: ranger-yarn-plugin-properties
Config_type: ranger-yarn-policymgr-ssl
Config_type: ranger-yarn-security
Config_type: slider-client
Config_type: slider-env
Config_type: slider-log4j
Config_type: spark2-defaults
Config_type: spark2-env
Config_type: spark2-hive-site-override
Config_type: spark2-log4j-properties
Config_type: spark2-logsearch-conf
Config_type: spark2-metrics-properties
Config_type: spark2-thrift-fairscheduler
Config_type: spark2-thrift-sparkconf
Config_type: sqoop-atlas-application.properties
Config_type: sqoop-env
Config_type: sqoop-site
Config_type: ssl-client
Config_type: ssl-server
Config_type: storm-atlas-application.properties
Config_type: storm-cluster-log4j
Config_type: storm-env
Config_type: storm-logsearch-conf
Config_type: storm-site
Config_type: storm-worker-log4j
Config_type: tagsync-application-properties
Config_type: tagsync-log4j
Config_type: tez-env
Config_type: tez-interactive-site
Config_type: tez-site
Config_type: usersync-log4j
Config_type: usersync-properties
Config_type: webhcat-env
Config_type: webhcat-log4j
Config_type: webhcat-site
Config_type: yarn-env
Config_type: yarn-log4j
Config_type: yarn-logsearch-conf
Config_type: yarn-site
Config_type: zeppelin-config
Config_type: zeppelin-env
Config_type: zeppelin-log4j-properties
Config_type: zeppelin-logsearch-conf
Config_type: zeppelin-shiro-ini
Config_type: zoo.cfg
Config_type: zookeeper-env
Config_type: zookeeper-log4j
Config_type: zookeeper-logsearch-conf
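If the grep pipeline feels fragile, the same `?fields=Clusters/desired_configs` response could be parsed as JSON instead. A minimal sketch, assuming the response shape implied by the script above (the sample payload is illustrative, not real Ambari output):

```python
import json

# Parse a (sample) desired_configs response and list the config types,
# i.e. the keys of Clusters.desired_configs, without any grep/cut.
sample = json.loads("""
{"Clusters": {"desired_configs": {
  "core-site":  {"tag": "version1"},
  "hdfs-site":  {"tag": "version1"}
}}}
""")

config_types = sorted(sample['Clusters']['desired_configs'])
for config_type in config_types:
    print("Config_type:", config_type)
```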
08-17-2017
06:43 PM
Everything is OK now, thanks.
09-11-2017
12:10 PM
Hi @uri ben-ari, you can find the display-names and the property names in the XML configs such as hdfs-site.xml, hadoop-env.xml, core-site.xml, etc.:
https://github.com/apache/ambari/blob/cc412e66156d5a887a725015537dcb75b0caf986/ambari-server/src/main/resources/common-services/HDFS/2.1.0.2.0/configuration/hdfs-site.xml
https://github.com/apache/ambari/blob/cc412e66156d5a887a725015537dcb75b0caf986/ambari-server/src/main/resources/common-services/HDFS/2.1.0.2.0/configuration/hadoop-env.xml
For example:
<property>
<name>dfs.datanode.max.transfer.threads</name>
<value>1024</value>
<description>
Specifies the maximum number of threads to use for transferring data in and out of the datanode.
</description>
<display-name>DataNode max data transfer threads</display-name>
<value-attributes>
<type>int</type>
<minimum>0</minimum>
<maximum>48000</maximum>
</value-attributes>
<on-ambari-upgrade add="true"/>
</property>
If you can't find something under common-services, then go to the appropriate stack directory: ambari-server/src/main/resources/stacks/<STACK>/<STACK_VERSION>/services/<SERVICE_NAME>/configuration
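To pull the name/display-name pairs out of such a configuration XML programmatically, here is a small sketch (the embedded XML mirrors the hdfs-site example above; a real run would read the file from the common-services or stack tree):

```python
import xml.etree.ElementTree as ET

# A trimmed-down copy of the hdfs-site property shown above; a real
# configuration file would be loaded with ET.parse(path) instead.
xml_text = """
<configuration>
  <property>
    <name>dfs.datanode.max.transfer.threads</name>
    <value>1024</value>
    <display-name>DataNode max data transfer threads</display-name>
  </property>
</configuration>
"""

root = ET.fromstring(xml_text)
# Collect (property name, display-name) pairs from every <property> element.
pairs = [(p.findtext('name'), p.findtext('display-name'))
         for p in root.findall('property')]
for name, display in pairs:
    print(f"{name} -> {display}")
```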
08-15-2017
10:08 PM
These attributes are available in the blueprint JSON:
- DataNode failed disk tolerance, described as dfs.datanode.failed.volumes.tolerated
- DataNode maximum Java heap size, described as dtnode_heapsize
- DataNode max data transfer threads, described as dfs.datanode.max.transfer.threads

I retrieved the full blueprint by using the following cURL statement:
curl -H "X-Requested-By: ambari" -X GET -u admin:admin <URL>:8080/api/v1/clusters/<CLUSTER_NAME>?format=blueprint > out.blueprint
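A hedged sketch of reading those three settings back out of a saved blueprint. The minimal payload below is illustrative (a real out.blueprint contains many more configuration sections, and the values shown are placeholders):

```python
import json

# A tiny stand-in for out.blueprint: blueprints carry a "configurations"
# list where each entry maps a config type to its properties.
blueprint = json.loads("""
{"configurations": [
  {"hdfs-site": {"properties": {
     "dfs.datanode.failed.volumes.tolerated": "0",
     "dfs.datanode.max.transfer.threads": "4096"}}},
  {"hadoop-env": {"properties": {"dtnode_heapsize": "1024m"}}}
]}
""")

wanted = {"dfs.datanode.failed.volumes.tolerated",
          "dtnode_heapsize",
          "dfs.datanode.max.transfer.threads"}

# Walk every config section and keep the settings we care about.
found = {}
for section in blueprint["configurations"]:
    for config_type, body in section.items():
        for key, value in body.get("properties", {}).items():
            if key in wanted:
                found[key] = (config_type, value)

for key, (config_type, value) in sorted(found.items()):
    print(f"{config_type}: {key} = {value}")
```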
08-17-2017
07:10 PM
Could you give me an example? It sounds like you just want to be able to read the values of a property inside the JSON file. Could you explain why you specifically need it in CSV format? For nested data like this, JSON is usually the better fit.
08-17-2017
06:01 PM
Example: if I want to add the parameter "hadoop.security.auth_to_local" to some config, then I need to know which config it belongs to. I found in the following link that it can be applied to "core-site", so I need to add it to that config in Ambari. https://hadoop.apache.org/docs/r2.8.0/hadoop-project-dist/hadoop-common/core-default.xml
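At the JSON level, adding that property to core-site just means extending its properties map. A minimal illustration (the fs.defaultFS value and the auth_to_local rule here are placeholders, and a real change would go through the Ambari UI or the configuration API rather than this snippet):

```python
import json

# A (placeholder) snapshot of the current core-site properties.
core_site = {"fs.defaultFS": "hdfs://mycluster"}

# Add the new parameter; the rule string is only an example value.
core_site["hadoop.security.auth_to_local"] = (
    "RULE:[1:$1@$0](.*@EXAMPLE.COM)s/@.*//\nDEFAULT")

# Shape the result the way a config update is usually expressed:
# a config type plus its full properties map.
payload = {"type": "core-site", "properties": core_site}
print(json.dumps(payload, indent=2))
```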