Member since: 03-14-2016
Posts: 4721
Kudos Received: 1111
Solutions: 874
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 2438 | 04-27-2020 03:48 AM
 | 4862 | 04-26-2020 06:18 PM
 | 3967 | 04-26-2020 06:05 PM
 | 3205 | 04-13-2020 08:53 PM
 | 4902 | 03-31-2020 02:10 AM
01-06-2017
09:18 AM
@Baruch AMOUSSOU DJANGBAN Are you looking for the following kind of information from the Ambari APIs (what you see on the Ambari YARN summary page)?

Submitted Apps:
http://erie1.example.com:8080/api/v1/clusters/ErieCluster/host_components?HostRoles/component_name=RESOURCEMANAGER&HostRoles/ha_state=ACTIVE&fields=metrics/yarn/Queue/root/AppsSubmitted

Pending Apps:
http://erie1.example.com:8080/api/v1/clusters/ErieCluster/host_components?HostRoles/component_name=RESOURCEMANAGER&HostRoles/ha_state=ACTIVE&fields=metrics/yarn/Queue/root/AppsPending

Running Apps:
http://erie1.example.com:8080/api/v1/clusters/ErieCluster/host_components?HostRoles/component_name=RESOURCEMANAGER&HostRoles/ha_state=ACTIVE&fields=metrics/yarn/Queue/root/AppsRunning

Here "erie1.example.com" is the Ambari server hostname. You can get the Completed/Killed/Failed application information the same way.
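The query pattern above can be sketched in Python. `build_yarn_metric_url` and `fetch_metric` are hypothetical helper names (not Ambari API functions); the host, port, and cluster values are the example ones from the URLs above, so adjust them, plus the credentials, for your environment:

```python
# Sketch of the Ambari REST query pattern shown above.
# build_yarn_metric_url / fetch_metric are hypothetical helpers.
import base64
import json
import urllib.request


def build_yarn_metric_url(ambari_host, cluster, metric, port=8080):
    """Build the host_components query against the active ResourceManager."""
    return (
        f"http://{ambari_host}:{port}/api/v1/clusters/{cluster}/host_components"
        "?HostRoles/component_name=RESOURCEMANAGER"
        "&HostRoles/ha_state=ACTIVE"
        f"&fields=metrics/yarn/Queue/root/{metric}"
    )


def fetch_metric(url, user, password):
    """Fetch the metric JSON using HTTP basic auth (Ambari's default)."""
    req = urllib.request.Request(url)
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    req.add_header("Authorization", f"Basic {token}")
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


# Example: URL for the submitted-apps count.
url = build_yarn_metric_url("erie1.example.com", "ErieCluster", "AppsSubmitted")
```

Swap `AppsSubmitted` for `AppsPending`, `AppsRunning`, etc. to cover the other counters.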
01-06-2017
06:50 AM
1 Kudo
@chitrartha sur As the error you attached in "files.txt" shows: "500 SIMPLE authentication is not enabled. Available:[TOKEN, KERBEROS]". Please refer to the last point in https://docs.hortonworks.com/HDPDocuments/Ambari-2.2.0.0/bk_ambari_views_guide/content/_Troubleshooting.html: if your cluster is configured for Kerberos, you cannot use the Local Cluster Configuration option. You must use the Custom Cluster Configuration option and enter the WebHDFS FileSystem URI, for example: webhdfs://namenode:50070

As per your screenshot you are using "Local Cluster". The following link describes configuring a "Custom Cluster Configuration": https://docs.hortonworks.com/HDPDocuments/Ambari-2.2.0.0/bk_ambari_views_guide/content/_Cluster_Configuration_Custom.html
01-06-2017
06:20 AM
@Priyansh Saxena Is it an embedded or distributed Metrics Collector? Did you try cleaning up the ZooKeeper state and restarting? Sometimes an improper shutdown in embedded mode leaves the state corrupted. Find the value of "hbase.tmp.dir" in the AMS configs (default: /var/lib/ambari-metrics-collector/hbase-tmp/), then try the following:

rm -rf /var/lib/ambari-metrics-collector/hbase-tmp/
OR
mv /var/lib/ambari-metrics-collector/hbase-tmp /Backup_Dir

- Also remove the AMS ZooKeeper data by backing up and removing the contents of '<hbase.tmp.dir>/zookeeper', and remove any Phoenix spool files from '<hbase.tmp.dir>/phoenix-spool'.
- Then try restarting AMS.
- If the issue still persists, please share the complete stack trace of the error.

Reference: "Cleaning up Ambari Metrics System Data" https://cwiki.apache.org/confluence/display/AMBARI/Cleaning+up+Ambari+Metrics+System+Data
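The backup-then-remove steps above can be sketched like this; `clean_ams_state` is a hypothetical helper name, the default path is the one quoted above, and you should stop the Metrics Collector before running anything like it:

```python
# Sketch of the AMS cleanup steps above: move the zookeeper and
# phoenix-spool directories under hbase.tmp.dir aside (backup) rather
# than deleting them outright. clean_ams_state is a hypothetical helper.
import os
import shutil


def clean_ams_state(hbase_tmp_dir, backup_dir):
    """Move AMS zookeeper/phoenix-spool state into backup_dir."""
    os.makedirs(backup_dir, exist_ok=True)
    for name in ("zookeeper", "phoenix-spool"):
        src = os.path.join(hbase_tmp_dir, name)
        if os.path.isdir(src):
            shutil.move(src, os.path.join(backup_dir, name))


# Example with the default location mentioned above:
# clean_ams_state("/var/lib/ambari-metrics-collector/hbase-tmp", "/tmp/ams-backup")
```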
01-05-2017
02:09 PM
@Uvaraj Seerangan By any chance did you try deleting the directory '/data/hadoop/oozie/data/oozie-db' and then starting it again? It is better to move '/data/hadoop/oozie/data/oozie-db' to some other directory and then try starting it.
01-05-2017
01:13 PM
@chitrartha sur Also, can you please share a screenshot of the Hive/File view configuration here? Which version of Ambari are you using?
01-05-2017
11:20 AM
@Ralph Adekoya By any chance have you edited or modified the PATH variable? Your PATH should include "/usr/bin" because the "hadoop" and "hdfs" commands live there:

/usr/bin/hadoop
/usr/bin/hdfs

- Also please double-check whether the HDFS_CLIENTS / services were removed by mistake.
- My (default) PATH setting is as follows:

# echo $PATH
/usr/lib64/qt-3.3/bin:/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/root/bin
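The PATH check above can be sketched as a small script; `missing_from_path` is a hypothetical helper, and it just asks whether each command resolves through the current PATH:

```python
# Sketch of the PATH check above: verify that the hadoop/hdfs client
# binaries resolve through the current PATH (they normally sit in /usr/bin).
# missing_from_path is a hypothetical helper.
import shutil


def missing_from_path(commands):
    """Return the commands that do NOT resolve via the current PATH."""
    return [cmd for cmd in commands if shutil.which(cmd) is None]


# Example: on a node with the HDFS client installed this returns [].
# missing_from_path(["hadoop", "hdfs"])
```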
01-05-2017
06:04 AM
@ARUN Here the instruction is to disable (exclude) HBase per-region metrics to avoid data flooding. That can be done by explicitly adding the following lines to the end of the file:

*.source.filter.class=org.apache.hadoop.metrics2.filter.GlobFilter
hbase.*.source.filter.exclude=*Regions*
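The `*Regions*` value in the exclude property is a glob pattern. As a quick illustration of what such a pattern excludes (this just mirrors glob matching, it is not AMS or GlobFilter code, and the sample source names are made up):

```python
# Illustration of glob matching like the *Regions* exclude above.
# fnmatchcase does case-sensitive glob matching; the source names
# below are illustrative, not actual HBase metric source names.
from fnmatch import fnmatchcase

sources = [
    "RegionServer,sub=Regions",          # per-region metrics -> excluded
    "RegionServer,sub=Server",           # kept
    "Master,sub=AssignmentManager",      # kept
]
excluded = [s for s in sources if fnmatchcase(s, "*Regions*")]
```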
01-04-2017
10:45 AM
@bob human The default Ambari DB password should be "bigdata". Did you try that? Your error clearly indicates DB corruption (it needs a DB fix). If this is a fresh installation, we suggest reinstalling the Ambari Server, since "ambari-server setup" will freshly set up an embedded Postgres database (with all the required configuration). Alternatively, you can choose an external, non-default database if you want: http://docs.hortonworks.com/HDPDocuments/Ambari-2.2.2.0/bk_ambari_reference_guide/content/ch_amb_ref_using_non_default_databases.html
01-04-2017
06:42 AM
@Ali Nazemian You are using 2.4.1, where this can happen when the encrypted password is null/empty:

if (result != null) {
    password = new String(result);
} else {
    LOG.error("Cannot read password for alias = " + aliasStr);
}

https://github.com/apache/ambari/blob/release-2.4.1/ambari-server/src/main/java/org/apache/ambari/server/configuration/Configuration.java#L1847-L1850

So I suspect that your password encryption was not done successfully. Try the encryption again, or remove the encryption and then freshly enable password encryption (old doc, but it should still work): https://ambari.apache.org/current/installing-hadoop-using-ambari/content/ch02s06s01s02.html

And do not forget to restart the Ambari server after enabling the encryption.
01-04-2017
05:16 AM
@Subodh Chettri It indicates an internal server error, so can you please check the Zeppelin logs? If you find any error/warning there, please share it:

/var/log/zeppelin/zeppelin-zeppelin-HOSTNAME.log

- Ambari might be showing green because the Zeppelin port is open, so the port check passes. However, some of the Zeppelin Server's components might not have started successfully, so checking the logs is a good idea.
- Also please make sure that the Zeppelin directories have proper permissions.
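A quick sanity check on the directory permissions can be sketched like this; `inaccessible` is a hypothetical helper, and which directories to pass in depends on your Zeppelin install (the log directory above is one candidate):

```python
# Sketch of a permissions sanity check for the Zeppelin directories
# mentioned above. inaccessible is a hypothetical helper; it reports
# directories that are missing or not readable+writable by this user.
import os


def inaccessible(paths):
    """Return the paths that are missing or not readable+writable."""
    return [
        p for p in paths
        if not (os.path.isdir(p) and os.access(p, os.R_OK | os.W_OK))
    ]


# Example (run as the zeppelin user):
# inaccessible(["/var/log/zeppelin", "/var/run/zeppelin"])
```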