Member since: 03-14-2016
Posts: 4721
Kudos Received: 1111
Solutions: 874
06-10-2019
02:30 PM
1 Kudo
@Matas Mockus Alert notification is not related to the Service Auto Start feature; the two features work independently. So if you have configured any notification, such as an Email Alert Notification, then once the component goes down the alert will still be triggered normally, based on the alert trigger interval set for that individual alert, telling you that the component went down (the Service Auto Start feature will act independently).
06-10-2019
02:26 PM
@sugata kar Based on the error you posted, it looks like you are using an Oracle DB and the Oracle JDBC driver instead of MySQL:

ERROR manager.SqlManager: Error executing statement: java.sql.SQLRecoverableException: IO Error: Invalid connection string format, a valid format is: "host:port:sid"
java.sql.SQLRecoverableException: IO Error: Invalid connection string format, a valid format is: "host:port:sid"
        at oracle.jdbc.driver.T4CConnection.logon(T4CConnection.java:489)

So you might want to try something like the following. Please make sure that the Oracle JDBC URL is in the correct format, for example:

jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS=(PROTOCOL=tcps)(HOST=serverne)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=servicename)))

Example Sqoop command:

# sqoop import -Dmapreduce.map.java.opts="-Djavax.net.debug=all -Djavax.net.ssl.keyStore='/PATH/TO/YOUR_TRUST_STORE.jks' -Djavax.net.ssl.keyStorePassword=ZZZZZZZZ -Djavax.net.ssl.keyStoreType=JKS" --connect "jdbc:oracle:thin:@ORACLE_DB_HOST:1521:TESTDB" --username "AAAAAAAAAA" --password "BBBBBBBBB" --table "customer" --verbose

For more information regarding "Oracle JDBC Connectivity Over SSL using Thin Driver", please refer to the following link: https://www.oracle.com/technetwork/topics/wp-oracle-jdbc-thin-ssl-130128.pdf
06-10-2019
02:12 PM
1 Kudo
@Matas Mockus Auto Start of components should work if the following conditions are met:

1. The components were not gracefully stopped (via Ambari API calls or via the UI), meaning the components were abruptly killed/terminated due to a host reboot or due to some error in those components.

2. As soon as the host comes up, at least the Ambari Agent should be running as a service, so that it starts automatically after the host reboots. This is needed because the Ambari Agent sends the current state of the components present on that host to the Ambari Server, and the server then checks the "desired state" of the components against the "actual state" reported by the agent. If the states do not match (suppose in the Ambari DB the desired state is "STARTED" but the agent reports that those components are actually down), then the Ambari Server will send a recovery instruction to the agent because of the Auto Start setting.
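The desired-vs-actual comparison described in point 2 can be sketched as a tiny shell function. This is a simplified illustration only; the real comparison happens inside the Ambari Server. "STARTED" and "INSTALLED" are the state names Ambari uses for a running vs. stopped component:

```shell
# Simplified sketch of the per-component recovery decision Ambari makes.
# "STARTED" = desired running; "INSTALLED" = component is stopped/down.
should_recover() {
  local desired="$1" current="$2"
  if [ "$desired" = "STARTED" ] && [ "$current" = "INSTALLED" ]; then
    echo "recover"   # went down abruptly -> Auto Start kicks in
  else
    echo "no-op"     # states agree, or it was stopped on purpose
  fi
}

should_recover STARTED INSTALLED     # component crashed
should_recover INSTALLED INSTALLED   # gracefully stopped
```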
06-10-2019
01:50 PM
@Rohit Sharma Your recent error indicates that there is some inconsistency in your Ambari DB; because of it, the Ambari UI Alerts page is not showing any alert details:

Error Processing URI: /api/v1/clusters/cluster-name/alerts - (java.lang.NullPointerException) null

You can follow the article below to get this issue fixed. Please make sure to take a fresh Ambari DB dump for backup purposes before following it: https://community.hortonworks.com/content/supportkb/174817/ambari-alert-page-is-invisible-on-ambari-screen.html

After performing the steps mentioned in the article, please do not forget to restart the Ambari Server.
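Before touching the DB, take the dump and confirm it is non-empty. The sketch below assumes the default embedded PostgreSQL database named "ambari" owned by user "ambari" (adjust the dump command for MySQL/Oracle backends); `verify_backup` is a hypothetical convenience helper, not an Ambari tool:

```shell
# Sanity-check helper: is the dump file present and non-empty?
verify_backup() {
  if [ -s "$1" ]; then echo "ok"; else echo "empty-or-missing"; fi
}

# Taking the dump (illustrative; default embedded PostgreSQL setup assumed):
#   pg_dump -U ambari ambari > /tmp/ambari_db_backup.sql
#   verify_backup /tmp/ambari_db_backup.sql

# Demonstration with a stand-in file:
printf 'dump-contents' > /tmp/ambari_db_backup_demo.sql
verify_backup /tmp/ambari_db_backup_demo.sql   # ok
```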
06-10-2019
01:36 PM
1 Kudo
@Matas Mockus Ambari provides an Auto Start feature for components that go down abnormally: https://docs.hortonworks.com/HDPDocuments/Ambari-2.7.3.0/managing-and-monitoring-ambari/content/amb_enable_service_auto_start_from_ambari_web.html

You can configure the Ambari Agent as a service so that it gets restarted upon a system reboot. Once the Ambari Agent is up and running, it will keep sending the current state of the other components to the Ambari Server. If those components went down abruptly, then in the Ambari DB the desired state for them will be "STARTED" while the components are actually down, so Auto Start will work in this case. Auto Start of a component is based on its current state and its "desired state". But if you manually stop the services/components, then Auto Start will not act, because the Ambari Agent compares the current state of these components against the desired state stored in the Ambari DB to determine whether they are to be installed, started, restarted, or stopped.
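You can also inspect the two states Ambari compares by querying a host component directly. The cluster, host, component, and credentials below are placeholders for illustration; `HostRoles/state` and `HostRoles/desired_state` are standard Ambari API fields:

```shell
# Ask Ambari for the actual and desired state of one host component
# (cluster/host/component names and credentials are placeholders):
#   curl -s -u admin:admin -H "X-Requested-By: ambari" \
#     "http://ambari-host:8080/api/v1/clusters/MyCluster/hosts/worker1.example.com/host_components/DATANODE?fields=HostRoles/state,HostRoles/desired_state"

# Abbreviated sample of the JSON such a call returns, plus a quick way to
# pull out the two fields without extra tooling:
RESPONSE='{"HostRoles":{"state":"INSTALLED","desired_state":"STARTED"}}'
echo "$RESPONSE" | grep -o '"state":"[A-Z]*"'           # "state":"INSTALLED"
echo "$RESPONSE" | grep -o '"desired_state":"[A-Z]*"'   # "desired_state":"STARTED"
```

When `state` is INSTALLED but `desired_state` is STARTED, that mismatch is exactly what triggers the Auto Start recovery described above.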
06-10-2019
08:51 AM
1 Kudo
@Narendra Neerukonda On Ambari 2.7 you can try the following kind of API call:

# curl -k -H "X-Requested-By: ambari" -u admin:admin -X POST -d '{"RequestInfo":{"context":"Refresh YARN Capacity Scheduler","command":"REFRESHQUEUES","parameters/forceRefreshConfigTags":"capacity-scheduler"},"Requests/resource_filters":[{"service_name":"YARN","component_name":"RESOURCEMANAGER","hosts":"kerlatest2.example.com,kerlatest4.example.com"}]}' http://kerlatest1.example.com:8080/api/v1/clusters/KerLatest/requests

Please replace "kerlatest2.example.com,kerlatest4.example.com" with your YARN ResourceManager hostnames, replace "kerlatest1.example.com:8080" with your Ambari Server hostname and port, and replace "KerLatest" with your own cluster name. The following article might give some ideas for Kerberized environments: https://community.hortonworks.com/content/supportkb/151093/how-do-i-refresh-yarn-capacity-scheduler-outside-o.html
06-10-2019
06:00 AM
@Nani Bigdata As you mentioned, the "netstat" command itself is not showing that port 10001 is open, so you definitely cannot connect to it using the "beeline" or "telnet" utilities. Please go through the following checks and then provide as much of the information requested in the previous post as possible (all the command outputs and the configs we asked you to check).

Check-1). Check your HiveServer2 log first to find out what might be wrong. Please restart HS2 and then collect a fresh log from "/var/log/hive/hiveserver2.log".

Check-2). What are the values of the following Hive properties? "hive.server2.transport.mode" and "hive.server2.thrift.http.path"

Check-3). Please share the output of the following (or a screenshot of the exact error): Ambari UI --> Hive --> Summary (Tab) --> "HiveServer2 JDBC URL" --> click the icon at the right side of the URL to copy it, then try that URL with beeline once. As you mentioned that you got "no such file or directory found.", please share the complete output of the error.

Check-4). Maybe your "hive.server2.transport.mode" is set to "binary" (the default value); in that case, please check whether the following approach works on port 10000 (instead of http mode on 10001):

# beeline -u "jdbc:hive2://headnodehost:10000/default"
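For Check-2, the two properties can be read straight out of hive-site.xml. A minimal sketch: the sample file below is a stand-in for the real /etc/hive/conf/hive-site.xml, and `get_prop` is a hypothetical helper that assumes each property sits on one line:

```shell
# Stand-in for /etc/hive/conf/hive-site.xml (values here are examples):
cat > /tmp/hive-site-sample.xml <<'EOF'
<configuration>
  <property><name>hive.server2.transport.mode</name><value>http</value></property>
  <property><name>hive.server2.thrift.http.path</name><value>cliservice</value></property>
</configuration>
EOF

# Extract a single property value (assumes <name>/<value> on the same line):
get_prop() {
  grep -o "<name>$1</name><value>[^<]*</value>" /tmp/hive-site-sample.xml \
    | sed -e 's#.*<value>##' -e 's#</value>##'
}

get_prop hive.server2.transport.mode     # http
get_prop hive.server2.thrift.http.path   # cliservice
```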
06-10-2019
05:58 AM
1 Kudo
@Ankit Singhal Your JDBC URL is not in the correct form (note the missing colon after "thin"):

jdbc:oracle:thin@//hostname:1521/Databasenae

Please try the following instead:

jdbc:oracle:thin:@hostname:1521:Databasename
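As a quick illustration (not part of any Oracle tooling), here is a small shell check that distinguishes the host:port:sid thin form the driver expects from a URL that is missing the colon after "thin":

```shell
# Rough classifier for Oracle thin JDBC URLs (illustrative only):
check_oracle_url() {
  case "$1" in
    jdbc:oracle:thin:@\(DESCRIPTION=*) echo "TNS descriptor form" ;;
    jdbc:oracle:thin:@*:*:*)           echo "host:port:sid form" ;;
    jdbc:oracle:thin@*)                echo "missing colon after thin" ;;
    *)                                 echo "unrecognized" ;;
  esac
}

check_oracle_url "jdbc:oracle:thin:@hostname:1521:Databasename"   # host:port:sid form
check_oracle_url "jdbc:oracle:thin@//hostname:1521/Databasename"  # missing colon after thin
```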
06-09-2019
12:59 AM
@Vishal Bohra Have you recently placed any new hive-exec JAR in your file system, or upgraded HDP by any chance? What is your HDP version? Can you please share the output of the following command from the Spark2 Thrift Server host?

# hdp-select | grep -e "hive\|spark"

We see the following error:

java.lang.NoSuchMethodError: org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server.startDelegationTokenSecretManager(Lorg/apache/hadoop/conf/Configuration;Ljava/lang/Object;Lorg/apache/hadoop/hive/thrift/HadoopThriftAuthBridge$Server$ServerMode;)V

The above error indicates that you might have an incorrect version of some JARs on the classpath (which can happen when some of your JARs were not upgraded, or JARs of an incorrect version were mistakenly copied into the Spark2 Thrift Server jars lib). Based on the error, it looks like you might have a slightly conflicting version of the "hive-exec*.jar" JAR on the host where you are running the Spark2 Thrift Server. Can you please scan your file system to find out in which places you have this JAR and what its version is? You can use the following approach to locate the "hive-exec" JARs:

# yum install mlocate -y
# updatedb
# locate hive-exec | grep jar

Once you find the JAR, check whether its version is correct (it should match your HDP version). For example, if you are using the HDP 2.6.5.0-292 version, then the hive-exec JAR should look like "hive-exec-1.21.2.2.6.5.0-292.jar". You can run the following command to find the signature of the method listed in the above error (example in HDP 2.6.5, checking the signature of the "startDelegationTokenSecretManager" method):

# /usr/jdk64/jdk1.8.0_112/bin/javap -cp /usr/hdp/current/spark2-thriftserver/jars/hive-exec-1.21.2.2.6.5.0-292.jar "org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge\$Server" | grep startDelegationTokenSecretManager
public void startDelegationTokenSecretManager(org.apache.hadoop.conf.Configuration, java.lang.Object, org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server$ServerMode) throws java.io.IOException;

Similarly, check which "hive-exec-*.jar" on your filesystem has a slightly different signature, then remove the conflicting JAR from the classpath and try again.
06-08-2019
11:30 PM
@Nani Bigdata A few additional checks:

1. On the HiveServer2 host, can you check which ports it has opened and is listening on? You can run the following command to find out the ports used:

# netstat -tnlpa | grep `cat /var/run/hive/hive-server.pid`

1(A). Also, from any of the cluster nodes, try to check whether you are able to reach HS2 on port 10001 using telnet or netcat, to verify any port-blocking/network issue:

# telnet 10.0.0.14 10001
# telnet headnodehost 10001
(OR)
# nc -v 10.0.0.14 10001
# nc -v headnodehost 10001

2. If you are not able to see port "10001", then you will need to check the Hive configs and the Hive logs to find out whether there are any errors.

3. Also, can you double-check your Hive config to find out whether it is actually set to "http" mode or "binary" mode in the following property: "hive.server2.transport.mode"

4. In your HS2 connection URL I see that you are missing the "httpPath". Can you please try adding the "httpPath" there, something like the following, and then see if that works? You can find the httpPath value in your HS2 configuration by looking at the property "hive.server2.thrift.http.path". Example:

# beeline -u "jdbc:hive2://headnodehost:10001/default;transportMode=http;httpPath=cliservice"

5. For troubleshooting purposes, try connecting with beeline in interactive mode as suggested by @Geoffrey Shelton Okot, as an alternate way to verify whether you are able to connect to HS2.

Side note: You can also try to connect to HS2 using the dynamic discovery mode described in this doc: https://docs.hortonworks.com/HDPDocuments/HDP3/HDP-3.1.0/fault-tolerance/content/dynamic_service_discovery_through_zookeeper.html Ambari UI --> Hive --> Summary (Tab) --> "HiveServer2 JDBC URL" --> click the icon at the right side of the URL to copy it, then try that URL with beeline once.
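If neither telnet nor nc is installed on the node, bash's built-in /dev/tcp pseudo-device can serve as a fallback probe. A sketch; the host and port in the commented usage lines echo the thread's examples and are placeholders:

```shell
# Probe a TCP port using bash's /dev/tcp (no telnet/nc needed).
# Prints "open" if a connection succeeds, "closed" otherwise.
port_open() {
  if (exec 3<>"/dev/tcp/$1/$2") 2>/dev/null; then
    echo "open"
  else
    echo "closed"
  fi
}

# Usage (placeholders from the thread; substitute your HS2 host):
#   port_open headnodehost 10001
#   port_open 10.0.0.14 10001
port_open 127.0.0.1 1   # port 1 is almost never open locally
```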