Member since: 03-14-2016
Posts: 4721
Kudos Received: 1111
Solutions: 874
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
|  | 2826 | 04-27-2020 03:48 AM |
|  | 5497 | 04-26-2020 06:18 PM |
|  | 4678 | 04-26-2020 06:05 PM |
|  | 3709 | 04-13-2020 08:53 PM |
|  | 5615 | 03-31-2020 02:10 AM |
06-10-2019
01:36 PM
1 Kudo
@Matas Mockus Ambari provides an Auto Start feature for components that go down abnormally: https://docs.hortonworks.com/HDPDocuments/Ambari-2.7.3.0/managing-and-monitoring-ambari/content/amb_enable_service_auto_start_from_ambari_web.html You can also configure the ambari-agent as a service so that it is restarted on system reboot. Once the Ambari Agent is up and running, it keeps reporting the current state of the other components to the Ambari Server. If those components went down abruptly, the desired state recorded for them in the Ambari DB is still "Started" while the components are actually down, so Auto Start will work in that case. Auto Start of a component is based on comparing its current state against its "desired state". If you stop a service/component manually, however, Auto Start will not trigger: the ambari-agent compares the current state of the component against the desired state stored in the Ambari DB (now "Stopped") to determine whether the component should be installed, started, restarted, or stopped.
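The current-vs-desired comparison described above can be sketched roughly as follows. This is a simplified illustration, not Ambari's actual recovery code; the state names mirror the ones Ambari uses internally ("INSTALLED" means stopped):

```python
def auto_start_action(current_state, desired_state):
    """Decide what the agent should do for one component.

    Simplified sketch of the current-vs-desired comparison; Ambari's
    real recovery logic in the agent is more involved.
    """
    if current_state == desired_state:
        return "NONE"           # states agree, nothing to do
    if desired_state == "STARTED":
        return "START"          # crashed component: desired state is still STARTED
    if desired_state == "INSTALLED":
        return "STOP"           # user asked for it to be stopped
    return "NONE"

# A crashed component (desired state still STARTED) gets restarted:
print(auto_start_action("INSTALLED", "STARTED"))    # START
# A manually stopped component (desired state INSTALLED) stays down:
print(auto_start_action("INSTALLED", "INSTALLED"))  # NONE
```

This is why manual stops are respected: stopping a component through Ambari also updates its desired state, so the agent sees no mismatch to repair.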
06-10-2019
08:51 AM
1 Kudo
@Narendra Neerukonda On Ambari 2.7 you can try an API call like the following:
# curl -k -H "X-Requested-By: ambari" -u admin:admin -X POST -d '{"RequestInfo":{"context":"Refresh YARN Capacity Scheduler","command":"REFRESHQUEUES","parameters/forceRefreshConfigTags":"capacity-scheduler"},"Requests/resource_filters":[{"service_name":"YARN","component_name":"RESOURCEMANAGER","hosts":"kerlatest2.example.com,kerlatest4.example.com"}]}' http://kerlatest1.example.com:8080/api/v1/clusters/KerLatest/requests
Please replace "kerlatest2.example.com,kerlatest4.example.com" with your YARN ResourceManager hostnames, "kerlatest1.example.com:8080" with your Ambari Server hostname and port, and "KerLatest" with your own cluster name. The following article might give some ideas for Kerberized environments: https://community.hortonworks.com/content/supportkb/151093/how-do-i-refresh-yarn-capacity-scheduler-outside-o.html
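The same request can also be built from Python with the standard library; a sketch under the same assumptions (the hostnames and cluster name are the placeholder examples from the curl command above and must be replaced with your own):

```python
import json
import urllib.request

ambari = "http://kerlatest1.example.com:8080"   # your Ambari Server host:port
cluster = "KerLatest"                           # your cluster name
rm_hosts = "kerlatest2.example.com,kerlatest4.example.com"  # your RM hosts

# Same JSON body the curl command sends.
payload = {
    "RequestInfo": {
        "context": "Refresh YARN Capacity Scheduler",
        "command": "REFRESHQUEUES",
        "parameters/forceRefreshConfigTags": "capacity-scheduler",
    },
    "Requests/resource_filters": [{
        "service_name": "YARN",
        "component_name": "RESOURCEMANAGER",
        "hosts": rm_hosts,
    }],
}

req = urllib.request.Request(
    f"{ambari}/api/v1/clusters/{cluster}/requests",
    data=json.dumps(payload).encode(),
    headers={"X-Requested-By": "ambari"},
    method="POST",
)
# urllib.request.urlopen(req)  # uncomment, and add basic auth, to actually send it
```

The call is left commented out because it would hit a live cluster; curl's `-u admin:admin` would correspond to adding an Authorization header here.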
06-10-2019
05:58 AM
1 Kudo
@Ankit Singhal Your JDBC URL is not in the correct form (note the missing colon after "thin"): jdbc:oracle:thin@//hostname:1521/Databasenae
Please try the following instead: jdbc:oracle:thin:@hostname:1521:Databasename
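A quick way to sanity-check the URL shape is a regular expression; this is an illustrative sketch for the SID-style thin URL (service-name URLs use the `jdbc:oracle:thin:@//host:port/service` form instead):

```python
import re

# Matches the SID-style Oracle thin URL: jdbc:oracle:thin:@host:port:SID
SID_URL = re.compile(r"^jdbc:oracle:thin:@([\w.-]+):(\d+):(\w+)$")

def check_thin_url(url):
    """Return (host, port, sid) if the URL is well-formed, else None."""
    m = SID_URL.match(url)
    return m.groups() if m else None

print(check_thin_url("jdbc:oracle:thin:@hostname:1521:Databasename"))
# ('hostname', '1521', 'Databasename')
print(check_thin_url("jdbc:oracle:thin@//hostname:1521/Databasename"))
# None -- the missing colon after "thin" makes the driver reject it
```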
06-09-2019
12:59 AM
@Vishal Bohra Have you recently placed a new hive-exec JAR on your file system, or upgraded HDP by any chance? What is your HDP version? Can you please share the output of the following command from the Spark2 Thrift Server host?
# hdp-select | grep -e "hive\|spark"
We see the following error:
java.lang.NoSuchMethodError: org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server.startDelegationTokenSecretManager(Lorg/apache/hadoop/conf/Configuration;Ljava/lang/Object;Lorg/apache/hadoop/hive/thrift/HadoopThriftAuthBridge$Server$ServerMode;)V
This error indicates that you may have an incorrect version of a JAR on the classpath (which can happen when some of your JARs were not upgraded, or JARs of the wrong version were mistakenly copied into the Spark2 Thrift Server lib directory). Based on the error, it looks like a slightly conflicting version of "hive-exec*.jar" exists on the host where you are running the Spark2 Thrift Server. Can you please scan your file system to find all the places where this JAR exists and which versions are present? You can use the following approach to locate the "hive-exec" JARs:
# yum install mlocate -y
# updatedb
# locate hive-exec | grep jar
Once you find the JARs, check whether the version is correct (it should match your HDP version). For example, if you are using HDP 2.6.5.0-292, the hive-exec JAR should look like "hive-exec-1.21.2.2.6.5.0-292.jar". You can run the following command to inspect the signature of the method listed in the error above. For example, in HDP 2.6.5, checking the signature of the "startDelegationTokenSecretManager" method:
# /usr/jdk64/jdk1.8.0_112/bin/javap -cp /usr/hdp/current/spark2-thriftserver/jars/hive-exec-1.21.2.2.6.5.0-292.jar "org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge\$Server" | grep startDelegationTokenSecretManager
public void startDelegationTokenSecretManager(org.apache.hadoop.conf.Configuration, java.lang.Object, org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server$ServerMode) throws java.io.IOException;
Similarly, check which "hive-exec-*.jar" on your filesystem has a slightly different signature, then remove the conflicting JAR from the classpath and try again.
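The filesystem scan can also be done with a short script; a sketch that flags any hive-exec JAR whose name does not carry the expected HDP build string (the directory and version are placeholders you supply):

```python
from pathlib import Path

def find_hive_exec_jars(root, hdp_version):
    """Walk `root` for hive-exec JARs and return those whose filename does
    not contain the expected HDP version string.

    Illustrative sketch only; `locate hive-exec | grep jar` plus a visual
    version check does the same job.
    """
    mismatches = []
    for jar in Path(root).rglob("hive-exec*.jar"):
        if hdp_version not in jar.name:
            mismatches.append(str(jar))
    return sorted(mismatches)

# Hypothetical usage, assuming HDP 2.6.5.0-292:
# find_hive_exec_jars("/usr/hdp", "2.6.5.0-292")
```

Any path this returns is a candidate for the conflicting JAR to remove from the Thrift Server classpath.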
06-08-2019
10:33 PM
@Nani Bigdata The Spark2 History Server allows us to review Spark application metrics after an application has completed. Without the History Server, the only way to obtain performance metrics is through the Spark UI while the application is running. The "SPARK2_JOBHISTORYSERVER_PROCESS" alert is a host-level alert that is triggered when the Spark2 History Server cannot be determined to be up. It checks whether the port configured in "spark2-defaults/spark.history.ui.port" is reachable in order to determine whether the Spark2 History Server is up and running; if it is not, the alert is triggered.
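At its core the alert boils down to a TCP connect check against that port; a minimal sketch (the real Ambari alert script is more involved, and the port number is whatever your spark2-defaults configures):

```python
import socket

def port_is_open(host, port, timeout=2.0):
    """TCP connect check, roughly what a process/port alert does.
    Sketch only; Ambari's alert framework adds thresholds and retries."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# e.g. port_is_open("historyserver.example.com", 18081)
# 18081 is a common spark.history.ui.port value in HDP; check your config.
```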
06-07-2019
11:29 PM
@Adil BAKKOURI As we see this error:
Call From node4.rh.bigdata.cluster/172.16.138.113 to node4.rh.bigdata.cluster:8020 failed on connection exception: java.net.ConnectException: Connection refused;
Please verify the following first (and share the outputs here if possible):
1. Log in to your NameNode host "node4.rh.bigdata.cluster" and verify whether port 8020 is listening:
# netstat -tnlpa | grep 8020
# hostname -f
# cat /etc/hosts
# ifconfig
NOTE: If the above netstat output does not show port 8020 listening, then check and share the NameNode logs ("/var/log/hadoop/hdfs/hadoop-hdfs-namenode-*.log") to see whether they show any errors.
2. Run the following commands from both NameNodes, one by one, to see whether both return correct results:
# /usr/hdp/current/hadoop-hdfs-namenode/bin/hdfs dfsadmin -fs hdfs://node4.rh.bigdata.cluster:8020 -safemode get
#### AND from the other NameNode
# /usr/hdp/current/hadoop-hdfs-namenode/bin/hdfs dfsadmin -fs hdfs://$OTHER_NN_HOSTNAME:8020 -safemode get
3. Now log in to the client host (172.16.138.113, which appears to have the same hostname; if your log shows a different hostname, log in to that host instead) that is trying to connect to the NameNode, and verify that you can reach the NameNode host on port 8020 and that its hostname is resolvable:
# telnet node4.rh.bigdata.cluster 8020
(OR)
# nc -v node4.rh.bigdata.cluster 8020
# cat /etc/hosts
# nslookup node4.rh.bigdata.cluster
# nslookup 172.16.138.113
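The step-3 checks (name resolution plus a TCP connect) can also be run from Python where telnet/nc are unavailable; a minimal sketch, with the hostname taken from the error above:

```python
import socket

def check_namenode(host, port=8020, timeout=3.0):
    """DNS resolution plus a TCP connect attempt.
    Returns (resolved_ip_or_None, connect_ok). Sketch only."""
    try:
        ip = socket.gethostbyname(host)
    except socket.gaierror:
        return None, False          # hostname not resolvable at all
    try:
        with socket.create_connection((ip, port), timeout=timeout):
            return ip, True
    except OSError:
        # "Connection refused" lands here, matching the HDFS error above.
        return ip, False

# e.g. check_namenode("node4.rh.bigdata.cluster")
```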
06-07-2019
05:54 AM
@Aishwarya Dixit By any chance do you have Ranger and Ambari Server installed on the same host? If they are on the same host, the browser might be redirecting component URLs like Ranger's from HTTP to HTTPS. What does strict-transport-security do? When SSL is in use, this property sets the Strict-Transport-Security response header. HTTP Strict Transport Security (HSTS) is a security policy that protects HTTPS websites against downgrade attacks and also aids protection against cookie hijacking: it allows web servers to declare that browsers should only interact with them over secure HTTPS connections, never via the insecure HTTP protocol. The browser keys this policy on the hostname (it does not know whether Ambari or Ranger is running on that host). Ambari, being set up for SSL, may be setting that header with some max-age, but Ranger does not want it because it is not set up for SSL. So when you hit Ranger from the same browser, the browser applies the cached HSTS policy because the hostname is the same. Ambari 2.7 had an issue with setting those parameters to 0 (https://issues.apache.org/jira/browse/AMBARI-25159), but on older Ambari versions it should work fine. Hence I suggested setting them to 0 in ambari.properties, followed by an Ambari Server restart (and hitting the browser in incognito mode); it works fine on 2.6.x versions. (By the way, what is your Ambari version?)
http.strict-transport-security=max-age=0
views.http.strict-transport-security=max-age=0
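To confirm what the server is actually sending, you can inspect the Strict-Transport-Security header and read its max-age; a small parsing sketch (the header values below are examples, not output from your cluster):

```python
def hsts_max_age(header_value):
    """Extract max-age (seconds) from a Strict-Transport-Security header
    value. Returns None if the directive is absent. Inspection sketch only."""
    for directive in header_value.split(";"):
        name, _, value = directive.strip().partition("=")
        if name.lower() == "max-age":
            return int(value)
    return None

print(hsts_max_age("max-age=31536000; includeSubDomains"))  # 31536000
print(hsts_max_age("max-age=0"))  # 0 -- tells the browser to drop the cached policy
```

A max-age of 0 is exactly what the ambari.properties settings above produce, which is why the browser stops forcing HTTPS for that hostname.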
06-06-2019
12:59 PM
@Aishwarya Dixit Can you please try the following if you are accessing the Ranger UI link via the Ambari Quick Links and it is getting redirected to the HTTPS Ranger URL?
1. Stop Ambari Server:
# ambari-server stop
2. Edit the "/etc/ambari-server/conf/ambari.properties" file and set these two properties to 0:
http.strict-transport-security=max-age=0
views.http.strict-transport-security=max-age=0
3. Restart Ambari Server:
# ambari-server start
4. Open a fresh incognito-mode browser window (to avoid any browser caching issue), then try to access the Ranger links.
06-06-2019
10:43 AM
@forest lin It looks like your ambari-agent might not be running.
1. Please try this:
# ambari-agent restart
2. Can you please check and share the ambari-agent log ("/var/log/ambari-agent/ambari-agent.log")?
3. Please check which processes are already running on that Sandbox. It is possible that a few components are already running but the agent cannot report their proper status to the Ambari Server due to broken communication or because the agent is down.
4. Please check the free memory available on the sandbox host:
# free -m
5. Also, after restarting the Ambari Server, do you see any errors in ambari-server.log?
06-05-2019
11:07 PM
@Chris Parrinello It seems to be working fine for me; I just tested it a couple of times. If you are still facing the issue, can you please share the output of the following curl command? There may be a proxy issue:
# curl -iv https://repo.hortonworks.com/content/repositories/releases/org/apache/hive/