Member since: 03-14-2016
Posts: 4721
Kudos Received: 1111
Solutions: 874

My Accepted Solutions
| Views | Posted |
|---|---|
| 2724 | 04-27-2020 03:48 AM |
| 5283 | 04-26-2020 06:18 PM |
| 4448 | 04-26-2020 06:05 PM |
| 3576 | 04-13-2020 08:53 PM |
| 5377 | 03-31-2020 02:10 AM |
06-22-2018
05:58 AM
@Anurag Mishra Ambari does not maintain this information, which means there is no database/table/column that records the last login time for a user, and hence no API is available for it. However, you can parse the "ambari-audit.log" to extract similar information:

# grep 'User login' /var/log/ambari-server/ambari-audit.log
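If it helps, here is a minimal sketch that reduces the audit log to one last-login line per user. It assumes the default audit line format (a leading timestamp before the first comma, followed by fields like "User(admin)"), which may differ across Ambari versions:

# grep 'User login' /var/log/ambari-server/ambari-audit.log \
#   | sed -n 's/^\([^,]*\),.*User(\([^)]*\)).*/\2 \1/p' \
#   | awk '{last[$1]=$2} END {for (u in last) print u, last[u]}'

Since the audit log is appended chronologically, keeping the last occurrence per user gives the latest login time.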
06-21-2018
11:13 PM
1 Kudo
@Utkarsh Jadhav If you keep getting this "NameNode Last Checkpoint" alert, it is worth checking whether the NameNode is healthy. Do you see long GC pause messages or other warnings in the NameNode logs? Was there heavy load on the system when the scheduled checkpoint was supposed to happen? Ambari normally reports alerts when the underlying system is not healthy, so it is better to check the NameNode logs, heap usage, GC pauses, etc. Is the heap size for your NameNode set according to the calculation listed here: https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.6.3/bk_command-line-installation/content/configuring-namenode-heap-size.html ("NameNode heap size depends on many factors, such as the number of files, the number of blocks, and the load on the system.")

Do you see any warning/error while running the following commands manually?

# su - hdfs
# hdfs dfsadmin -safemode enter
# hdfs dfsadmin -saveNamespace
# hdfs dfsadmin -safemode leave

The above commands can be run as a cron job, but it is better to first check whether you get any warning/error when running them manually. You can also try reducing the "dfs.namenode.checkpoint.txns" value to something lower, like 100000, and check whether that fixes the alerts. The Secondary NameNode or CheckpointNode will create a checkpoint of the namespace every 'dfs.namenode.checkpoint.txns' transactions, regardless of whether 'dfs.namenode.checkpoint.period' has expired. However, such tuning depends on your use case: https://hadoop.apache.org/docs/r2.7.3/hadoop-project-dist/hadoop-hdfs/hdfs-default.xml
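For reference, a minimal sketch of wrapping the sequence in a script so it can be scheduled; the script name and cron schedule are hypothetical:

#!/bin/bash
# force_checkpoint.sh -- hypothetical helper that forces an HDFS checkpoint.
# Run as the hdfs user, e.g. from cron: 0 2 * * * /usr/local/bin/force_checkpoint.sh
set -e
trap 'hdfs dfsadmin -safemode leave' EXIT   # always leave safemode, even on failure
hdfs dfsadmin -safemode enter               # pause namespace modifications
hdfs dfsadmin -saveNamespace                # write a fresh fsimage and roll the edits

The trap ensures the NameNode does not get stuck in safemode if saveNamespace fails mid-run.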
06-20-2018
01:22 AM
@Mahender S This post looks like a duplicate of: https://community.hortonworks.com/questions/198223/how-to-address-ambari-alert-namenode-heap-usage-da.html?childToView=198230#answer-198230 Can we close one thread and continue on the existing HCC thread?
06-17-2018
04:12 AM
@Alex Witte You should be using Python 2.7.x instead of Python 3. Please find the Python version certified with HDP here: https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.6.5/bk_support-matrices/content/ch_matrices-ambari.html#ambari_software

"hdp-select" is a script that ships with the HDP installation, and it uses the Python 2 "print" statement, which is not valid in Python 3. That is why you are getting: SyntaxError: Missing parentheses in call to 'print'.

Can you please try setting the "HDP_VERSION" in spark-env.sh and then try again?
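A minimal sketch of that setting; the version-build string below is hypothetical and must match the directory name under /usr/hdp on your node (check with: ls /usr/hdp):

# In spark-env.sh:
export HDP_VERSION=2.6.5.0-292   # hypothetical version-build string

With HDP_VERSION exported, the Spark launch scripts no longer need to invoke hdp-select, so the Python 3 SyntaxError is avoided.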
06-16-2018
12:05 AM
1 Kudo
@Michael Bronson As the AMS collector seems to be running, it should collect and show the data; however, we need to check whether the AMS collector actually has the data. To check that, please capture the following information:

1. Make an API call to the AMS collector to list the number and names of metrics it is collecting, and from which hosts. The following URLs return JSON output (see the curl sketch after this list); can you please share the output here?
http://<ams-host>:6188/ws/v1/timeline/metrics/metadata
http://<ams-host>:6188/ws/v1/timeline/metrics/hosts
2. Check the binaries of the ambari collector and sink (so that we can see whether they match the Ambari version):
# rpm -qa | grep ambari
3. Can you please check whether the metrics-monitor log shows any kind of connectivity error while connecting to the AMS collector?
# /var/log/ambari-metrics-monitor/ambari-metrics-monitor.out
4. Is hostname resolution working, and is the collector hostname correctly defined inside the "/etc/ambari-metrics-monitor/conf/metric_monitor.ini" file?
5. Please check whether the AMS collector log shows any errors.
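A minimal sketch of the API calls from item 1 (replace <ams-host> with your collector host; python -m json.tool is only used to pretty-print the JSON):

# curl -s "http://<ams-host>:6188/ws/v1/timeline/metrics/metadata" | python -m json.tool | head -n 40
# curl -s "http://<ams-host>:6188/ws/v1/timeline/metrics/hosts" | python -m json.tool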
06-13-2018
09:44 AM
@Manish Roy If you have enough system resources (RAM/CPUs/disk space), then there should be no issue; you can install NiFi and Kafka on the same host. For example, see the HDF Sandbox, where both are installed on a single host: https://hortonworks.com/tutorial/getting-started-with-hdf-sandbox/ Additionally, the following two articles can help in tuning NiFi + Kafka:
https://community.hortonworks.com/articles/80813/kafka-best-practices-1.html
https://community.hortonworks.com/articles/7882/hdfnifi-best-practices-for-setting-up-a-high-perfo.html
06-13-2018
09:33 AM
@panitta suksaran From the error, it looks like you have not set up the MySQL JDBC driver for Hive properly. Please see: https://docs.hortonworks.com/HDPDocuments/Ambari-2.6.1.0/bk_ambari-administration/content/using_hive_with_mysql.html

raise Fail("Failed to download file from {0} due to HTTP error: {1}".format(self.url, str(ex)))
resource_management.core.exceptions.Fail: Failed to download file from http://xxxxxx:8080/resources//mysql-connector-java.jar due to HTTP error: HTTP Error 404

So please make sure that you have already done the following steps on the Ambari Server host:

# yum install mysql-connector-java -y

(OR) if you are downloading the mysql-connector-java JAR from some tar.gz archive, then please make sure to check the following location and create a symlink pointing to your JAR. You should then find a symlink like the following:

# ls -l /usr/share/java/mysql-connector-java.jar
lrwxrwxrwx 1 root root 31 Apr 19 2017 /usr/share/java/mysql-connector-java.jar -> mysql-connector-java-5.1.17.jar

Then run the following so that Ambari knows how to find this JAR:

# ambari-server setup --jdbc-db=mysql --jdbc-driver=/usr/share/java/mysql-connector-java.jar

After that, the JAR can be found here:

# ls -l /var/lib/ambari-server/resources/mysql-connector-java.jar
-rw-r--r-- 1 root root 819803 Sep 28 19:52 /var/lib/ambari-server/resources/mysql-connector-java.jar

Reference HCC thread: https://community.hortonworks.com/questions/186090/mysql-connector-javajar-due-to-http-error-http-err.html
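To verify the end-to-end fix, a minimal sketch (the symlink target version and <ambari-host> are hypothetical; adjust to your environment):

# ln -sf /usr/share/java/mysql-connector-java-5.1.17.jar /usr/share/java/mysql-connector-java.jar   # hypothetical JAR version
# ambari-server setup --jdbc-db=mysql --jdbc-driver=/usr/share/java/mysql-connector-java.jar
# curl -I http://<ambari-host>:8080/resources/mysql-connector-java.jar   # should now return HTTP 200 instead of 404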
06-13-2018
06:54 AM
@bharat sharma We see the following error, which indicates that you have not placed the hadoop-aws JARs in the classpath:

py4j.protocol.Py4JJavaError: An error occurred while calling o32.load.: java.lang.RuntimeException: java.lang.ClassNotFoundException: Class org.apache.hadoop.fs.s3a.S3AFileSystem not found

So can you please download the AWS SDK for Java (https://aws.amazon.com/sdk-for-java/) and upload it to the Hadoop directory. Then please check whether your "spark.driver.extraClassPath" contains the "hadoop-aws*.jar" and "aws-java-sdk*.jar". For more details, please refer to:
https://community.hortonworks.com/articles/25523/hdp-240-and-spark-160-connecting-to-aws-s3-buckets.html
https://community.hortonworks.com/articles/36339/spark-s3a-filesystem-client-from-hdp-to-access-s3.html
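A minimal sketch of passing the JARs at submit time (the paths and versions below are hypothetical; use the JARs that ship with your Hadoop distribution):

spark-submit \
  --conf spark.driver.extraClassPath=/path/to/hadoop-aws-2.7.3.jar:/path/to/aws-java-sdk-1.7.4.jar \
  --conf spark.executor.extraClassPath=/path/to/hadoop-aws-2.7.3.jar:/path/to/aws-java-sdk-1.7.4.jar \
  your_app.py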
06-12-2018
10:29 PM
@Jakub Igla While setting up Knox for Ambari, we must define both of the following roles inside the Knox topology: <service>
<role>AMBARI</role>
<url>http://$AMBARI_HOST:8080</url>
</service>
<service>
<role>AMBARIUI</role>
<url>http://$AMBARI_HOST:8080</url>
</service>
After making the above changes, please restart Knox and then try again.
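To verify after the restart, a minimal sketch of calling the Ambari API through the gateway (the topology name "default", the gateway port 8443, and the credentials are hypothetical; adjust to your deployment):

# curl -iku admin:admin-password 'https://<knox-host>:8443/gateway/default/ambari/api/v1/clusters'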
06-12-2018
10:15 AM
@Basil Paul It looks like the firewall may be enabled on the NameNode host "env40-node1". Can you please check whether the firewall (iptables) is stopped on the NameNode host?

# service iptables stop (CentOS 6)
# systemctl stop firewalld (CentOS 7)

Once you have confirmed that the firewall is not running on the NameNode host, then from the client host (for example "env7-head-proxy4", as in the prompts below) try to connect to the NameNode on port 8020 using netcat or telnet to verify connectivity and host name resolution:

[root@env7-head-proxy4 ~]# nc -v 192.168.150.95 8020
(OR)
[root@env7-head-proxy4 ~]# telnet 192.168.150.95 8020

Also, please check whether the env7 host is able to resolve the hostname of the NameNode.
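A minimal sketch of that host-resolution check, run from the env7 host ("env40-node1" is the NameNode hostname mentioned above; use its FQDN if your cluster is configured with one):

# getent hosts env40-node1   # checks /etc/hosts as well as DNS
# nslookup env40-node1       # checks DNS only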