Member since: 03-14-2016
Posts: 4721
Kudos Received: 1111
Solutions: 874
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 2530 | 04-27-2020 03:48 AM |
| | 4999 | 04-26-2020 06:18 PM |
| | 4091 | 04-26-2020 06:05 PM |
| | 3300 | 04-13-2020 08:53 PM |
| | 5040 | 03-31-2020 02:10 AM |
03-28-2017
04:50 PM
@zkfs
You are doing telnet on the default port. Can you check which port your MySQL server is actually using, and then verify that Ambari is able to connect to MySQL on that port? Also disable any firewall on the MySQL port (the default MySQL port is 3306). For example, if "centos2" is your MySQL host and MySQL listens on port 3306, you can check connectivity like this:
# telnet centos2 3306
If that fails, check whether MySQL is using a different port, or fix the network/firewall issue to allow access to that MySQL port from the Ambari host.
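A minimal sketch of both checks, assuming "centos2" is the MySQL host and the standard mysql client is installed:

```bash
# On the MySQL host: confirm which port mysqld is actually listening on
mysql -u root -p -e "SHOW VARIABLES LIKE 'port';"

# From the Ambari host: test raw TCP connectivity to that port
telnet centos2 3306
```

If telnet connects but Ambari still cannot, the problem is more likely credentials or grants than the network.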
03-28-2017
04:33 PM
@san ch It is hardcoded to return "-1" and works as designed so far: https://github.com/apache/hive/blob/master/jdbc/src/java/org/apache/hive/jdbc/HiveStatement.java#L681-L690

public int getUpdateCount() throws SQLException {
  checkConnection("getUpdateCount");
  /**
   * Poll on the operation status, till the operation is complete. We want to ensure that since a
   * client might end up using executeAsync and then call this to check if the query run is
   * finished.
   */
  waitForOperationToComplete();
  return -1;
}
03-28-2017
04:26 PM
@zkfs The Ambari log information indicates that you are using Postgres for the Ambari server, which is OK: Ambari uses Postgres for its own DB connectivity, hence "Detected POSTGRES as the database type from the JDBC URL". The issue you are facing is with Hive (MySQL database) connectivity. So please check that you have followed all of the information in the link below to verify that the MySQL username and password are correct, that the user has remote connection permission, and that there are no firewall issues: http://docs.hortonworks.com/HDPDocuments/Ambari-2.4.2.0/bk_ambari-reference/content/using_hive_with_mysql.html
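A quick way to test the credential and grant conditions; "hive" as the metastore user and "centos2" as the MySQL host are assumptions:

```bash
# From the Ambari host: can the hive user log in remotely with the configured password?
mysql -u hive -p -h centos2 -e "SELECT 1;"

# On the MySQL host: from which hosts is the hive user allowed to connect?
mysql -u root -p -e "SELECT host, user FROM mysql.user WHERE user = 'hive';"
```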
03-28-2017
04:05 PM
1 Kudo
@zkfs
1. Do you see any error in ambari-server.log?
2. Do you have the mysql-connector-java.jar JDBC driver installed on the Ambari host, with the symlink present in the "/usr/share/java" directory? If not, run the following command on the Ambari Server host to make the JDBC driver available and to enable testing the database connection.
ambari-server setup --jdbc-db=mysql --jdbc-driver=/path/to/mysql/mysql-connector-java.jar
3. From the Ambari host, are you able to telnet to the MySQL host and port? (Just to isolate any firewall/network issue; see the sketch after this list.)
4. Verify that the MySQL username and password are correct and that the user has remote connection permission, as mentioned in: http://docs.hortonworks.com/HDPDocuments/Ambari-2.4.2.0/bk_ambari-reference/content/using_hive_with_mysql.html
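A combined sketch of checks 2 and 3, assuming a standard install layout and "centos2" as the MySQL host:

```bash
# Is the connector jar in place, with the expected symlink?
ls -l /usr/share/java/mysql-connector-java.jar

# Register the driver with Ambari so it can test the DB connection
ambari-server setup --jdbc-db=mysql --jdbc-driver=/usr/share/java/mysql-connector-java.jar

# Rule out firewall/network problems on the MySQL port
telnet centos2 3306
```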
03-28-2017
03:45 PM
1 Kudo
@PJ The following doc covers most of what you are looking for: https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.5.3/bk_command-line-installation/content/before_you_begin.html In order to install Hue you need a host that has:
1. A supported OS (listed in the above doc).
2. Access to the Hadoop cluster (only one Hadoop cluster).
3. A supported database (mentioned in the doc link).
4. Python 2.6.6 or higher installed.
5. Core Hadoop on the system.
6. The HDP repositories available to the host where you are planning to install Hue (remote repo).
7. You can deploy Hue on any host within your cluster. If your corporate firewall policies allow, you can also use a remote host machine as your Hue server. For evaluation or small cluster sizes, use the master install machine for HDP as your Hue server.
8. There is no single hardware requirement for installing HDP, but there are some basic guidelines: a complete installation of HDP 2.5.3 consumes about 6.5 GB of disk space, so Hue should not take much space.
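A few of these prerequisites can be checked quickly from the shell; a rough sketch, assuming a yum-based OS:

```bash
python --version             # item 4: needs Python 2.6.6 or higher
yum repolist | grep -i HDP   # item 6: are the HDP repositories reachable?
df -h /                      # item 8: a full HDP 2.5.3 install uses about 6.5 GB
```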
03-28-2017
03:06 PM
@n c You can dump the Hive database on the old cluster as mentioned in the "Hive Metastore Database Backup and Restore" section of the following doc: https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.5.3/bk_command-line-upgrade/content/getting-ready-24.html Then copy the DB dump file to the new DB host machine and manually edit the dump file to point to the new cluster's NameNode location. Now create a new Hive database on the new DB host, import the DB dump, and use the same privileges schema as on the previous database.
Before starting the Hive services, upgrade the Hive database using schematool. You can use metatool to update the HDFS locations to the new cluster. Then start the Hive services.
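A sketch of that flow for a MySQL-backed metastore; the database name "hive" and the NameNode host names are assumptions:

```bash
# 1. On the old DB host: dump the metastore
mysqldump hive > /tmp/hive_backup.sql

# 2. Copy the dump to the new DB host, edit the old NameNode references,
#    then create the database and restore the dump
mysql -e "CREATE DATABASE hive;"
mysql hive < /tmp/hive_backup.sql

# 3. Upgrade the schema before starting Hive services
/usr/hdp/current/hive-metastore/bin/schematool -dbType mysql -upgradeSchema

# 4. Point stored HDFS locations at the new NameNode
hive --service metatool -updateLocation hdfs://new-nn:8020 hdfs://old-nn:8020
```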
03-28-2017
02:00 PM
1 Kudo
@Sanjib Behera From your screenshot it looks like you are using Kafka in a Kerberized environment, so can you please check whether you are able to get a valid Kerberos ticket?
# kinit -kt /etc/security/keytabs/kafka.service.keytab kafka/something@EXAMPLE.COM
# klist
Also, please try running the producer again after setting the following property:
# export KAFKA_CLIENT_KERBEROS_PARAMS="-Djava.security.auth.login.config=/usr/hdp/current/kafka-broker/config/kafka_client_jaas.conf"
Additionally, check if the following is set properly, as described in https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.4.2/bk_storm-user-guide/content/stormkafka-secure-config.html :
spoutConfig.securityProtocol=PLAINTEXTSASL
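For the console producer itself, the full sequence might look like this sketch; the broker host/port and topic name are assumptions:

```bash
# Obtain a ticket, point the client at the JAAS config, then produce
kinit -kt /etc/security/keytabs/kafka.service.keytab kafka/something@EXAMPLE.COM
export KAFKA_CLIENT_KERBEROS_PARAMS="-Djava.security.auth.login.config=/usr/hdp/current/kafka-broker/config/kafka_client_jaas.conf"
/usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh \
    --broker-list broker1.example.com:6667 \
    --topic test \
    --security-protocol PLAINTEXTSASL
```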
03-28-2017
07:27 AM
@Josh Persinger Sometimes this happens when Oozie has trouble referencing the proper oozie-sharelib-hive-<HDP Version>.jar. You should try to regenerate the Oozie sharelib:
1. Get a listing of the hive sharelib content:
# oozie admin -oozie http://${OOZIE_HOST}:11000/oozie -shareliblist hive* > /tmp/hive_shareliblist_OLD.txt 2>&1
2. Recreate the sharelib:
# /usr/hdp/<HDP Version>/oozie/bin/oozie-setup.sh sharelib create -fs hdfs://${NAMENODE}
3. List the hive sharelib contents again:
# oozie admin -oozie http://${OOZIE_HOST}:11000/oozie -shareliblist hive* > /tmp/hive_shareliblist_NEW.txt 2>&1
4. For double verification, list all hive sharelib directories in HDFS:
# hdfs dfs -ls -R /user/oozie/share/lib/*/hive/* > /tmp/hive_libs_on_hdfs.txt 2>&1
5. Edit the "workflow.xml" and remove the lines:
<property>
  <name>oozie.libpath</name>
  <value>${nameNodeHost:8020}/user/oozie/share/lib/lib_20170116233431</value>
</property>
6. Modify the "job.properties" file, set the following property to true, and then restart the Oozie job (a resubmission sketch follows):
oozie.use.system.libpath=true
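Resubmitting with the updated properties might look like this; the Oozie URL and config path follow the commands above:

```bash
oozie job -oozie http://${OOZIE_HOST}:11000/oozie -config job.properties -run
```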
03-28-2017
06:56 AM
@ARUN
Please see the above screenshot showing how to use only the "HDFS" option for audits.
03-28-2017
06:49 AM
@ARUN
Are you looking for these:
1. https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.3.0/bk_Ranger_Install_Guide/content/save_audits_to_hdfs.html
2. https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.3.2/bk_Ranger_Install_Guide/content/audit_to_solr.html
It is recommended that Ranger audits be written to both Solr and HDFS. Audits to Solr are primarily used to enable queries from the Ranger Admin UI. HDFS is a long-term destination for audits; audits stored in HDFS can be exported to any SIEM system, or to another audit store.
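The audit destinations themselves are controlled by Ranger plugin properties, as covered in the linked docs; a representative snippet, where the host names are assumptions:

```properties
xasecure.audit.destination.hdfs=true
xasecure.audit.destination.hdfs.dir=hdfs://namenode.example.com:8020/ranger/audit
xasecure.audit.destination.solr=true
xasecure.audit.destination.solr.urls=http://solr.example.com:6083/solr/ranger_audits
```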