Member since: 03-14-2016
Posts: 4721
Kudos Received: 1111
Solutions: 874

My Accepted Solutions
Title | Views | Posted
---|---|---
 | 2448 | 04-27-2020 03:48 AM
 | 4885 | 04-26-2020 06:18 PM
 | 3976 | 04-26-2020 06:05 PM
 | 3220 | 04-13-2020 08:53 PM
 | 4926 | 03-31-2020 02:10 AM
06-08-2019
10:33 PM
@Nani Bigdata The Spark2 History Server allows us to review Spark application metrics after an application has completed. Without the History Server, the only way to obtain performance metrics is through the Spark UI while the application is running. Regarding the alert "SPARK2_JOBHISTORYSERVER_PROCESS": it is a host-level alert that is triggered when the Spark2 History Server cannot be determined to be up. The alert checks the accessibility of the port defined in "spark2-defaults/spark.history.ui.port" to determine whether the Spark2 Job History Server is up and running; if the port is not reachable, the alert is triggered.
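You can run the same kind of check manually from the History Server host. A minimal sketch, assuming the default HDP Spark2 History Server UI port 18081 (confirm the actual value of "spark2-defaults/spark.history.ui.port" in your cluster, and replace $SPARK2_HISTORY_HOST with your History Server hostname):
# netstat -tnlpa | grep 18081
# curl -s -o /dev/null -w "%{http_code}\n" http://$SPARK2_HISTORY_HOST:18081
If netstat shows nothing listening, or curl does not return an HTTP 200, the alert will fire.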
06-07-2019
11:29 PM
@Adil BAKKOURI As we see this error: Call From node4.rh.bigdata.cluster/172.16.138.113 to node4.rh.bigdata.cluster:8020 failed on connection exception: java.net.ConnectException: Connection refused; please verify the following first (and ideally share the outputs here):
1. Log in to your NameNode host "node4.rh.bigdata.cluster" and verify that port 8020 is listening:
# netstat -tnlpa | grep 8020
# hostname -f
# cat /etc/hosts
# ifconfig
NOTE: If the above netstat output does not show port 8020 listening, then you must check the NameNode logs "/var/log/hadoop/hdfs/hadoop-hdfs-namenode-*.log" and share any errors they show.
2. Run the following command from both NameNodes, one by one, to see if both return correct results:
# /usr/hdp/current/hadoop-hdfs-namenode/bin/hdfs dfsadmin -fs hdfs://node4.rh.bigdata.cluster:8020 -safemode get
#### AND from the other NameNode also
# /usr/hdp/current/hadoop-hdfs-namenode/bin/hdfs dfsadmin -fs hdfs://$OTHER_NN_HOSTNAME:8020 -safemode get
3. Now log in to the host that is trying to connect to the NameNode. In your log it appears as "node4.rh.bigdata.cluster" (172.16.138.113), i.e. the same hostname; if you see a different hostname in your log, log in to that host instead. Verify that you can connect to the NameNode host on port 8020 and that the NameNode hostname is resolvable:
# telnet node4.rh.bigdata.cluster 8020
(OR)
# nc -v node4.rh.bigdata.cluster 8020
# cat /etc/hosts
# nslookup node4.rh.bigdata.cluster
# nslookup 172.16.138.113
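It can also help to confirm which RPC address HDFS is actually configured with; run this on the NameNode host (in an HA setup the second key is suffixed with the nameservice and NameNode ID, e.g. dfs.namenode.rpc-address.<nameservice>.<nn-id>):
# hdfs getconf -confKey fs.defaultFS
# hdfs getconf -confKey dfs.namenode.rpc-address
If the returned address does not match "node4.rh.bigdata.cluster:8020", the client and the NameNode are not agreeing on the endpoint.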
06-07-2019
05:54 AM
@Aishwarya Dixit By any chance, do you have Ranger and Ambari Server installed on the same host? If Ranger and Ambari are installed on the same host, then Ambari might be causing URLs of components like Ranger to be redirected from HTTP to HTTPS.

What does strict-transport-security do? When using SSL, this property is used to set the Strict-Transport-Security response header. HTTP Strict Transport Security (HSTS) is a security policy that protects secure HTTPS websites against downgrade attacks and also helps protect against cookie hijacking. It allows web servers to declare that web browsers should only interact with them using secure HTTPS connections, and never via the insecure HTTP protocol.

The browser only knows the hostname; it does not know whether Ambari or Ranger is running on that host. Ambari might be setting that header to some max-age (because Ambari is set up for SSL), but Ranger does not want it, as it is not set up for SSL. So when you hit Ranger from the same browser, the browser may apply the cached policy because of the shared hostname. Ambari 2.7 had an issue with setting those parameters to 0 (https://issues.apache.org/jira/browse/AMBARI-25159), but if you are using an older version of Ambari then it should work fine. Hence I suggested you try setting them to 0 in ambari.properties, followed by an Ambari Server restart, and then hit the browser in incognito mode; it works fine in 2.6.x versions. (By the way, what is your Ambari version?)
http.strict-transport-security=max-age=0
views.http.strict-transport-security=max-age=0
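To see whether the HSTS header is actually being returned, you can inspect the response headers directly; a quick check, assuming Ambari is on HTTPS port 8443 (replace AMBARI_HOST and the port with your own values):
# curl -skI https://AMBARI_HOST:8443 | grep -i strict-transport-security
A Strict-Transport-Security line with a non-zero max-age means the browser will cache that policy for the hostname, which then also affects Ranger on the same host.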
06-06-2019
12:59 PM
@Aishwarya Dixit Can you please try this, if you are accessing the Ranger UI link via the Ambari Quicklinks and it is getting redirected to the HTTPS Ranger URL:
1. Stop Ambari Server.
# ambari-server stop
2. Edit the "/etc/ambari-server/conf/ambari.properties" file and update the values of these two properties to 0:
http.strict-transport-security=max-age=0
views.http.strict-transport-security=max-age=0
3. Restart Ambari Server.
# ambari-server start
4. Open a fresh incognito-mode browser window (to avoid any browser caching issue), then try to access the Ranger links.
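Before restarting, it may be worth double-checking that the edit took effect:
# grep strict-transport-security /etc/ambari-server/conf/ambari.properties
Both properties should show max-age=0; after the restart Ambari should send that value in the header, which instructs browsers to drop any cached HSTS policy for the hostname.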
06-06-2019
10:43 AM
@forest lin It looks like your Ambari agent might not be running.
1. Please try this:
# ambari-agent restart
2. Can you please check and share the Ambari agent log "/var/log/ambari-agent/ambari-agent.log"?
3. Please check which processes are already running on that Sandbox. It is possible that a few components are already running, but the agent is not able to report their proper status to the Ambari Server due to a communication issue or the agent being down.
4. Please check the free memory available on the Sandbox host:
# free -m
5. Also, after restarting the Ambari Server, do you see any errors in the ambari-server.log?
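A quick way to check the agent's state and its most recent errors (standard ambari-agent commands and the default log path):
# ambari-agent status
# tail -n 50 /var/log/ambari-agent/ambari-agent.log
# ps -ef | grep -i ambari_agent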
06-06-2019
12:11 AM
@Rahul Borkar Maybe you can try with the following kind of template (please see the attached file): EDITED_hadoop-env_Template.txt
06-06-2019
12:09 AM
@Rahul Borkar Ambari uses Jinja templates to create these files. Jinja has specific rules for variable substitution and is very strict about missing quotes. Your template has a few issues. It has incorrectly inserted newline characters in many places: where {{VARIABLE}} occurs in your template, it has been converted to {
{VARIABLE}}
Example (in your case):
export JAVA_HOME={
{java_home}}
Ideally it should be:
export JAVA_HOME={{java_home}}
Similarly, your hadoop-env template has many newlines added in between the {{ brackets. The other main issue is that your last line is missing a closing quotation mark:
#This is Rahul
export HADOOP_NAMENODE_OPTS="${HADOOP_NAMENODE_OPTS} -javaagent:/home/sshhdfsuser/jmx_prometheus_javaagent-0.11.0.jar=19850:/home/sshhdfsuser/namenode.yml
Ideally it should be:
#This is Rahul
export HADOOP_NAMENODE_OPTS="${HADOOP_NAMENODE_OPTS} -javaagent:/home/sshhdfsuser/jmx_prometheus_javaagent-0.11.0.jar=19850:/home/sshhdfsuser/namenode.yml"
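A quick way to spot such split braces before pasting the template back into Ambari; a sketch, assuming you saved the template locally as hadoop-env-template.txt (the filename is just an example):
# grep -n '{$' hadoop-env-template.txt
# grep -o '{{' hadoop-env-template.txt | wc -l
# grep -o '}}' hadoop-env-template.txt | wc -l
The first command lists lines that end with a lone "{" (the symptom shown above), and the two counts should be equal if every {{ has its matching }}.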
06-05-2019
11:07 PM
@Chris Parrinello It seems to be working fine for me; I just tested it a couple of times. If you are still facing the issue, can you please share the output of the following curl command? There may be some proxy issue:
# curl -iv https://repo.hortonworks.com/content/repositories/releases/org/apache/hive/
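To rule out a proxy, you can also check whether any proxy environment variables are set in the shell you are testing from:
# env | grep -i proxy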
06-05-2019
12:28 PM
@Rahul Borkar In an Ambari-managed cluster, any changes made manually inside scripts like "hadoop-env.sh" will be reverted as soon as we restart those components from the Ambari UI, because Ambari pushes the configs stored in the Ambari DB for those script templates to the host. Hence, for making such changes you must use the Ambari UI / APIs, such as "Advanced hadoop-env" in Ambari. After you made those changes from the Ambari UI, do you actually see the mentioned Java arguments when you run the following commands?
# ps -ef | grep -i NameNode
# ps -ef | grep -i DataNode
A. If you do not see the "javaagent" options in the output of the above commands, then the changes were not applied properly. In that case, please share the full "Advanced hadoop-env" template from the Ambari UI so that we can check whether it was applied properly.
B. If you are able to see the "javaagent" options in the above process list output, then something may be wrong with either the file permissions of "/home/hduser/jmx_prometheus_javaagent-0.11.0.jar" and "/home/hduser_/datanode.yml", or the YAML file content might not be valid. As the DataNode and NameNode processes normally run as the "hdfs" user, please check the file permissions and ownership. Can you please share the /home/hduser_/datanode.yml file content as well?
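To check the permissions and confirm that the "hdfs" user can actually read those files (paths as mentioned above):
# ls -l /home/hduser/jmx_prometheus_javaagent-0.11.0.jar /home/hduser_/datanode.yml
# sudo -u hdfs head /home/hduser_/datanode.yml
If the sudo command fails with "Permission denied", fix the ownership or mode so that the "hdfs" user can read both files.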
06-05-2019
12:15 PM
@Rupak Dum Can you please try the steps below and then see if it works for you?
1. Log in to the Ambari UI, then click HDFS --> Actions (drop-down) --> Run Service Check.
2. Once the HDFS service check runs successfully, try to create a new File View instance from: Ambari UI --> Admin (drop-down at top right corner) --> Manage Ambari --> Views --> File View (click the button to create a new instance, create a new File View instance, then try to access it and check whether you are still getting the same error).