Member since: 03-14-2016
Posts: 4721
Kudos Received: 1111
Solutions: 874

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
|  | 2725 | 04-27-2020 03:48 AM |
|  | 5285 | 04-26-2020 06:18 PM |
|  | 4450 | 04-26-2020 06:05 PM |
|  | 3576 | 04-13-2020 08:53 PM |
|  | 5380 | 03-31-2020 02:10 AM |
08-18-2017
10:49 AM
@Kishore Kumar Please try creating a new instance of the File View and then test again. (Log in to the Ambari UI as an Ambari admin and then follow the path below.) Ambari UI --> admin (drop-down) --> Manage Ambari --> Views --> FILES --> Create Instance (button). Specify the "Instance Name*", "Display Name*" and "Description*", save it, and then try accessing that view to see if it works.
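If the UI route is not convenient, the same thing can in principle be done through the Ambari Views REST API. This is only a hedged sketch, not part of the original reply: the instance name "MyFiles", the labels, the "webhdfs.url" value and the $AMBARI_SERVER placeholder would need to be adapted, and the exact properties required depend on your Files view version and cluster setup.

# curl -u admin:admin -H "X-Requested-By: ambari" -X POST http://$AMBARI_SERVER:8080/api/v1/views/FILES/versions/1.0.0/instances/MyFiles -d '{"ViewInstanceInfo": {"label": "My Files", "description": "Files view instance created via the API", "properties": {"webhdfs.url": "webhdfs://namenode.example.com:50070"}}}'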
08-18-2017
10:32 AM
@uri ben-ari The following API call checks which services are down and starts them in the correct order:

# curl -u admin:admin -H "X-Requested-By: ambari" -X PUT -d '{"RequestInfo":{"context":"_PARSE_.START.ALL_SERVICES","operation_level":{"level":"CLUSTER","cluster_name":"plain_ambari"}},"Body":{"ServiceInfo":{"state":"STARTED"}}}' http://amb25101.example.com:8080/api/v1/clusters/plain_ambari/services?

Please replace the following:
- "amb25101.example.com" with your Ambari server hostname
- "plain_ambari" with your Ambari cluster name
- "8080" with the port of your Ambari server

Example output:

# curl -u admin:admin -H "X-Requested-By: ambari" -X PUT -d '{"RequestInfo":{"context":"_PARSE_.START.ALL_SERVICES","operation_level":{"level":"CLUSTER","cluster_name":"plain_ambari"}},"Body":{"ServiceInfo":{"state":"STARTED"}}}' http://amb25101.example.com:8080/api/v1/clusters/plain_ambari/services?
{
"href" : "http://amb25101.example.com:8080/api/v1/clusters/plain_ambari/requests/134",
"Requests" : {
"id" : 134,
"status" : "Accepted"
}
}

The progress can then be tracked by looking at the request ID returned in the API response:
http://amb25101.example.com:8080/api/v1/clusters/plain_ambari/requests/134
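If you want to wait on that request from a script instead of refreshing the URL, a minimal polling sketch (not part of the original reply, using the same placeholder hostname, cluster name, credentials and request ID as above) could look like this:

REQUEST_URL="http://amb25101.example.com:8080/api/v1/clusters/plain_ambari/requests/134"
while true; do
  # Ask Ambari only for the request_status field of this request
  STATUS=$(curl -s -u admin:admin "${REQUEST_URL}?fields=Requests/request_status" | grep '"request_status"' | cut -d'"' -f4)
  echo "Request status: ${STATUS}"
  case "${STATUS}" in
    COMPLETED|FAILED|ABORTED|TIMEDOUT) break ;;   # terminal request states
  esac
  sleep 10
done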
08-18-2017
10:22 AM
1 Kudo
@uri ben-ari One approach is to use a shell script that makes an Ambari API call and then greps out the config types. Example: create a file such as "/tmp/get_all_config_types.sh" with the following content:

for CONFIG_TYPE in `curl -s -u admin:admin http://amb25101.example.com:8080/api/v1/clusters/plain_ambari?fields=Clusters/desired_configs | grep '" : {' | grep -v Clusters | grep -v desired_configs | cut -d'"' -f2`; do
echo "Config_type: $CONFIG_TYPE"
done

Replace the following values in the above script based on your environment:
- "amb25101.example.com" with your Ambari server hostname
- "plain_ambari" with your Ambari cluster name
- "8080" with the port of your Ambari server

Output (a jq-based sketch follows after this listing):

# chmod 755 /tmp/get_all_config_types.sh
# /tmp/get_all_config_types.sh
Config_type: admin-log4j
Config_type: admin-properties
Config_type: ams-env
Config_type: ams-grafana-env
Config_type: ams-grafana-ini
Config_type: ams-hbase-env
Config_type: ams-hbase-log4j
Config_type: ams-hbase-policy
Config_type: ams-hbase-security-site
Config_type: ams-hbase-site
Config_type: ams-log4j
Config_type: ams-logsearch-conf
Config_type: ams-site
Config_type: ams-ssl-client
Config_type: ams-ssl-server
Config_type: atlas-tagsync-ssl
Config_type: beeline-log4j2
Config_type: capacity-scheduler
Config_type: cluster-env
Config_type: core-site
Config_type: hadoop-env
Config_type: hadoop-metrics2.properties
Config_type: hadoop-policy
Config_type: hbase-env
Config_type: hbase-log4j
Config_type: hbase-logsearch-conf
Config_type: hbase-policy
Config_type: hbase-site
Config_type: hcat-env
Config_type: hdfs-log4j
Config_type: hdfs-logsearch-conf
Config_type: hdfs-site
Config_type: hive-atlas-application.properties
Config_type: hive-env
Config_type: hive-exec-log4j
Config_type: hive-exec-log4j2
Config_type: hive-interactive-env
Config_type: hive-interactive-site
Config_type: hive-log4j
Config_type: hive-log4j2
Config_type: hive-logsearch-conf
Config_type: hive-site
Config_type: hivemetastore-site
Config_type: hiveserver2-interactive-site
Config_type: hiveserver2-site
Config_type: kafka-broker
Config_type: kafka-env
Config_type: kafka-log4j
Config_type: kafka-logsearch-conf
Config_type: kafka_client_jaas_conf
Config_type: kafka_jaas_conf
Config_type: livy2-conf
Config_type: livy2-env
Config_type: livy2-log4j-properties
Config_type: livy2-spark-blacklist
Config_type: llap-cli-log4j2
Config_type: llap-daemon-log4j
Config_type: mapred-env
Config_type: mapred-logsearch-conf
Config_type: mapred-site
Config_type: pig-env
Config_type: pig-log4j
Config_type: pig-properties
Config_type: ranger-admin-site
Config_type: ranger-env
Config_type: ranger-hbase-audit
Config_type: ranger-hbase-plugin-properties
Config_type: ranger-hbase-policymgr-ssl
Config_type: ranger-hbase-security
Config_type: ranger-hdfs-audit
Config_type: ranger-hdfs-plugin-properties
Config_type: ranger-hdfs-policymgr-ssl
Config_type: ranger-hdfs-security
Config_type: ranger-hive-audit
Config_type: ranger-hive-plugin-properties
Config_type: ranger-hive-policymgr-ssl
Config_type: ranger-hive-security
Config_type: ranger-kafka-audit
Config_type: ranger-kafka-plugin-properties
Config_type: ranger-kafka-policymgr-ssl
Config_type: ranger-kafka-security
Config_type: ranger-logsearch-conf
Config_type: ranger-site
Config_type: ranger-solr-configuration
Config_type: ranger-storm-audit
Config_type: ranger-storm-plugin-properties
Config_type: ranger-storm-policymgr-ssl
Config_type: ranger-storm-security
Config_type: ranger-tagsync-policymgr-ssl
Config_type: ranger-tagsync-site
Config_type: ranger-ugsync-site
Config_type: ranger-yarn-audit
Config_type: ranger-yarn-plugin-properties
Config_type: ranger-yarn-policymgr-ssl
Config_type: ranger-yarn-security
Config_type: slider-client
Config_type: slider-env
Config_type: slider-log4j
Config_type: spark2-defaults
Config_type: spark2-env
Config_type: spark2-hive-site-override
Config_type: spark2-log4j-properties
Config_type: spark2-logsearch-conf
Config_type: spark2-metrics-properties
Config_type: spark2-thrift-fairscheduler
Config_type: spark2-thrift-sparkconf
Config_type: sqoop-atlas-application.properties
Config_type: sqoop-env
Config_type: sqoop-site
Config_type: ssl-client
Config_type: ssl-server
Config_type: storm-atlas-application.properties
Config_type: storm-cluster-log4j
Config_type: storm-env
Config_type: storm-logsearch-conf
Config_type: storm-site
Config_type: storm-worker-log4j
Config_type: tagsync-application-properties
Config_type: tagsync-log4j
Config_type: tez-env
Config_type: tez-interactive-site
Config_type: tez-site
Config_type: usersync-log4j
Config_type: usersync-properties
Config_type: webhcat-env
Config_type: webhcat-log4j
Config_type: webhcat-site
Config_type: yarn-env
Config_type: yarn-log4j
Config_type: yarn-logsearch-conf
Config_type: yarn-site
Config_type: zeppelin-config
Config_type: zeppelin-env
Config_type: zeppelin-log4j-properties
Config_type: zeppelin-logsearch-conf
Config_type: zeppelin-shiro-ini
Config_type: zoo.cfg
Config_type: zookeeper-env
Config_type: zookeeper-log4j
Config_type: zookeeper-logsearch-conf
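As noted above, here is an alternative sketch that avoids the grep/cut chain, assuming the jq utility is installed on the host (same placeholder hostname, cluster name and credentials):

# curl -s -u admin:admin "http://amb25101.example.com:8080/api/v1/clusters/plain_ambari?fields=Clusters/desired_configs" | jq -r '.Clusters.desired_configs | keys[]'

This prints the same config type names, one per line.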
08-18-2017
10:07 AM
@Kishore Kumar If it still does not work, then try the following:

1. Remove the File View "FILES{1.0.0}" work directory on the Ambari server (it will be recreated on the next Ambari restart):

# rm -rf /var/lib/ambari-server/resources/views/work/FILES\{1.0.0\}/

2. Restart the Ambari server:

# ambari-server restart

3. Try accessing the view again. (A variation of step 1 that keeps a backup is sketched below.)
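If you prefer not to delete anything outright, a hedged variation of step 1 (assuming /tmp has enough free space) is to move the work directory aside instead; Ambari still recreates it on the next restart, and the copy can be removed once the view works again:

# mv /var/lib/ambari-server/resources/views/work/FILES\{1.0.0\} /tmp/FILES_1.0.0.bak
# ambari-server restart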
08-18-2017
10:01 AM
@Kishore Kumar "root" need to be replaced with the user who is running the ambari server process. For example if you are running ambari server as user "abcd" then the property needs to be set in the Ambari UI-->HDFS --> Configs--> Advanced hadoop.proxyuser.abcd.groups=*
hadoop.proxyuser.abcd.hosts=* . - Regarding your query: "the node where my Ambari sever is running is not a part of cluster . Will that lead this issue ?" >>> This is nornal scenario, where Ambari is not part of cluster. So it should not cause issue. File View uses "webhdfs" APIs to query hdfs. . Is your environment Kerberized? If yes, then you should refer to : https://docs.hortonworks.com/HDPDocuments/Ambari-2.5.1.0/bk_ambari-views/content/Troubleshooting.html - Have you tried creating a new File View instance as mentioned in the previous comment ?
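As mentioned above, the same proxyuser properties can also be set from the command line. This is only a sketch: it assumes your Ambari version ships the configs.sh helper script, and $AMBARI_SERVER, $CLUSTER_NAME, the admin credentials and the user "abcd" are placeholders for your environment. The affected services still need a restart afterwards.

# /var/lib/ambari-server/resources/scripts/configs.sh -u admin -p admin set $AMBARI_SERVER $CLUSTER_NAME core-site "hadoop.proxyuser.abcd.groups" "*"
# /var/lib/ambari-server/resources/scripts/configs.sh -u admin -p admin set $AMBARI_SERVER $CLUSTER_NAME core-site "hadoop.proxyuser.abcd.hosts" "*"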
08-18-2017
09:42 AM
@Kishore Kumar The following error does not look good:

18 Aug 2017 06:03:58,495 ERROR [ambari-client-thread-28] ContainerResponse:419 - The RuntimeException could not be mapped to a response, re-throwing to the HTTP container
org.apache.ambari.server.view.IllegalClusterException: Failed to get cluster information associated with this view instance

I suggest you try creating a new instance of the File View and then test again: Ambari UI --> admin (drop-down) --> Manage Ambari --> Views --> FILES --> Create Instance (button). Specify the "Instance Name*", "Display Name*" and "Description*", and then try accessing that view to see if it works.
08-18-2017
05:55 AM
@Kishore Kumar Have you tried restarting Ambari once and then accessing the view again? If not, please try restarting Ambari once.

If that does not fix the issue, then please provide the following information:
1. Which version of Ambari is it?
2. Is this the default File View instance, or did you create one? If it is a custom File View instance, please share the details of the view configuration.
3. Do you notice any additional WARN / ERROR messages in the Ambari server log?
4. Are your HDFS service checks running fine? (A sketch for triggering one via the API follows below.)
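For point 4, a service check can also be triggered through the Ambari request API. The following is only a sketch, with placeholder credentials and $AMBARI_SERVER / $CLUSTER_NAME standing in for your Ambari server hostname and cluster name; track the returned request ID the same way as any other Ambari request.

# curl -u admin:admin -H "X-Requested-By: ambari" -X POST -d '{"RequestInfo":{"context":"HDFS Service Check","command":"HDFS_SERVICE_CHECK"},"Requests/resource_filters":[{"service_name":"HDFS"}]}' http://$AMBARI_SERVER:8080/api/v1/clusters/$CLUSTER_NAME/requests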
08-17-2017
06:32 PM
@uri ben-ari Please check the progress of the request ID ... to see whether it is completed or stuck. Example:

curl -u admin:admin -H "X-Requested-By: ambari" -X GET http://amb25101.example.com:8080/api/v1/clusters/plain_ambari/requests/132
08-17-2017
06:28 PM
1 Kudo
@uri ben-ari Please pardon me 😞 The URL should end with "requests" instead of "request". Example:

# curl -u admin:admin -H "X-Requested-By: ambari" -X POST -d '{"RequestInfo":{"command":"RESTART","context":"Restart all required services","operation_level":"host_component"},"Requests/resource_filters":[{"hosts_predicate":"HostRoles/stale_configs=true"}]}' http://amb25101.example.com:8080/api/v1/clusters/plain_ambari/requests
{
"href" : "http://amb25101.example.com:8080/api/v1/clusters/plain_ambari/requests/132",
"Requests" : {
"id" : 132,
"status" : "Accepted"
}
}
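Before (or after) firing that restart, it can also be useful to see exactly which host components Ambari flags with stale configs. A hedged sketch with the same placeholder hostname, cluster name and credentials:

# curl -s -u admin:admin -H "X-Requested-By: ambari" -X GET "http://amb25101.example.com:8080/api/v1/clusters/plain_ambari/host_components?HostRoles/stale_configs=true&fields=HostRoles/host_name,HostRoles/component_name"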
08-17-2017
06:09 PM
@uri ben-ari The "Restart all required services" feature is available from ambari 2.5 onwards only. If you are getting 404 in Ambari 2.5 then you might be doing something wrong in the URL. In my case i missed "t" at the end of th URL. The "request" became "reques" Please correct it as following: # curl -iv -u admin:admin -H "X-Requested-By: ambari" -X POST -d '{"RequestInfo":{"command":"RESTART","context":"Restart all required services","operation_level":"host_component"},"Requests/resource_filters":[{"hosts_predicate":"HostRoles/stale_configs=true"}]}' http://$AMBARI_SERVER:8080/api/v1/clusters/$CLUSTER_NAME/request .