Member since: 04-11-2016
Posts: 535
Kudos Received: 147
Solutions: 77
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 4104 | 09-17-2018 06:33 AM
 | 896 | 08-29-2018 07:48 AM
 | 1492 | 08-28-2018 12:38 PM
 | 945 | 08-03-2018 05:42 AM
 | 983 | 07-27-2018 04:00 PM
07-19-2018
06:48 AM
@Avinash Kumar Can you check /var/log/hive/hiveserver2Interactive.log for any issues? Also, verify the query run from the Hive Interactive UI to check the query progression.
07-19-2018
06:38 AM
@Harry Li Is ZooKeeper running on the same host as the Beeline client (localhost:2181)? If not, try substituting localhost with the ZooKeeper host as below:
jdbc:hive2://<zookeeper_host>:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2?tez.queue.name=default
Also, to use ZooKeeper service discovery, HiveServer2 must have service discovery enabled. Verify that the property below is set to true:
hive.server2.support.dynamic.service.discovery=true
07-19-2018
06:30 AM
1 Kudo
@rganeshbabu Inserting data into an ACID table or a bucketed table from Pig is not supported, hence the error. Workaround:
1. Load the data into a non-transactional table.
2. From a Hive client, load the data from the non-transactional table into the transactional table: insert into acid_table select * from non_acid_table;
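A minimal sketch of the workaround, assuming a hypothetical staging table stage_table with illustrative columns, a hypothetical HDFS path, and the existing transactional table acid_table from the statement above:
-- Step 1: load the source data into a plain (non-transactional, non-bucketed) staging table.
create table stage_table (id int, name string) row format delimited fields terminated by ',';
load data inpath '/tmp/pig_output' into table stage_table;  -- path is hypothetical
-- Step 2: from a Hive client, copy the staged rows into the ACID table.
insert into acid_table select * from stage_table;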
07-17-2018
05:59 AM
@Anjali Shevadkar It seems like you are hitting "Dag submit failed due to Invalid TaskLaunchCmdOpts defined for Vertex Map 1 : Invalid/conflicting GC options found". Please check and share the values of the parameters "tez.am.launch.cmd-opts" and "hive.tez.java.opts"; they should not conflict, especially the GC options. Remove +UseParallelGC from one of the properties to address the issue, because -XX:+UseG1GC and -XX:+UseParallelGC should never be used together.
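A hedged sketch of non-conflicting values set at the session level (the JVM options shown are illustrative, not recommendations); the same properties can also be managed in tez-site.xml and hive-site.xml through Ambari:
-- Both properties pick the same collector (G1 here), so the Tez DAG no longer sees conflicting GC options.
set tez.am.launch.cmd-opts=-XX:+PrintGCDetails -XX:+UseG1GC;
set hive.tez.java.opts=-Xmx4096m -XX:+UseG1GC;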
07-16-2018
03:18 PM
1 Kudo
@Sambasivam Subramanian Please refer to the links below for adding nodes to an existing cluster:
https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.6.2/bk_administration/content/ref-4303e343-9aee-4e70-b38a-2837ae976e73.1.html
https://www.youtube.com/watch?v=jiCI4yIkUlc
07-06-2018
07:25 AM
@Mudit Kumar You can refer to the Upgrade and Best Practices guides for the steps. However, it is best to involve Hortonworks Support for the upgrade checklists.
07-05-2018
10:28 AM
@Anjali Shevadkar I checked internally and below policy works: beeline> !connect jdbc:hive2://:2181,:2181,:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2
Connecting to jdbc:hive2://:2181,:2181,:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2
Enter username for jdbc:hive2://:2181,:2181,:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2: hive
Enter password for jdbc:hive2://:2181,:2181,:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2: ****
Connected to: Apache Hive (version 1.2.1000.2.6.5.0-292)
Driver: Hive JDBC (version 1.2.1000.2.6.5.0-292)
0: jdbc:hive2://xxx:> show databases;
+----------------+--+
| database_name  |
+----------------+--+
| default        |
| testdb         |
+----------------+--+
2 rows selected (0.362 seconds)
07-05-2018
09:49 AM
@Rajendra Manjunath You can use Metron 0.4.2 with HDP 2.6.5 including Kafka 0.10.1 and Storm 1.1.0.
07-05-2018
09:33 AM
@Anjali Shevadkar Can you share a screenshot of the Ranger policy and also an error snippet from the HiveServer2 log?
07-05-2018
09:19 AM
@Anjali Shevadkar Do you have Ranger policies defined for Hive? If yes, check whether the login user has use/select access on the databases. Also, you could check 'show databases' from the Hive CLI, as it bypasses Ranger.
07-05-2018
09:12 AM
@Rajendra Manjunath Can you share more details on where this error is observed?
07-03-2018
04:29 PM
@zkfs Can you check whether your Falcon server is running? Also, check for errors in the falcon-server logs.
07-03-2018
02:50 PM
@Doron Veeder When the cluster is upgraded, the metadata and data are upgraded as well. If the concern is about the table DDLs, they are retained and upgraded to the current Hive version.
07-03-2018
02:28 PM
@Doron Veeder Adding to Geoffrey's reply, when the HDP version is upgraded, the Hive metadata gets upgraded to version 1.2.1.0000. Hence, it is possible to maintain two different versions of Hive services connecting to the same database.
07-03-2018
08:42 AM
2 Kudos
@Siddarth Wardhan If you are using Tez as the execution engine, then you need to set the properties below (a usage sketch follows them): set hive.merge.tezfiles=true;
set hive.merge.smallfiles.avgsize=128000000;
set hive.merge.size.per.task=128000000;
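A usage sketch assuming the three merge settings above are applied in the same session; target_table and source_table are hypothetical names:
-- With the merge settings enabled, Hive adds a final merge stage so the output is written as fewer, larger files (roughly 128 MB on average, as configured above).
insert overwrite table target_table select * from source_table;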
07-03-2018
06:45 AM
Sure @Abhilash Chandrasekharan
07-03-2018
06:42 AM
1 Kudo
@Anjali Shevadkar Adding to Sunil's reply: when using the Hive CLI, Ranger authorization is bypassed and only storage-level authorization is checked for the user against the Hive warehouse locations. When accessing through Beeline, HiveServer2 honours the Ranger authorization set in the Hive policies, hence only databases with select/use access for that user are displayed. For more details, refer to the link below:
https://community.hortonworks.com/articles/2497/hive-cli-security.html
07-02-2018
07:41 AM
@Robert Lake It seems like after enabling LLAP, the YARN queue configuration was modified, because of which the root user is not able to submit queries to the default queue. Check the YARN queue configuration and give the default queue enough resources to accommodate the submitted applications. Alternatively, you could try running the query after setting "set tez.queue.name=<llap_queue_name>;", which submits the query to the LLAP queue instead of the default queue. Similar HCC link.
07-02-2018
07:35 AM
@Asom Alimdjanov Can you verify whether the httpclient*.jar is the same under the <installation>/hive/lib location?
07-02-2018
07:31 AM
1 Kudo
@Abhilash Chandrasekharan For now, MariaDB HA is not officially supported for Hive Metastore. You could try specifying the JDBC connection string as "jdbc:mysql://<uri1>,<uri2>/hive".
07-02-2018
07:29 AM
@heta desai When a Hive external table is created on top of an existing Druid data source, the Hive table points to that Druid data source, and the data can then be viewed and queried from Superset through the same Druid data source.
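A minimal sketch, assuming an existing Druid datasource named wikipedia and a Druid broker already configured for Hive (hive.druid.broker.address.default); the Hive table name is hypothetical:
-- Map an existing Druid datasource as a Hive external table (no column list needed; the schema comes from Druid).
create external table druid_wikipedia
stored by 'org.apache.hadoop.hive.druid.DruidStorageHandler'
tblproperties ("druid.datasource" = "wikipedia");
-- The Druid-backed table can now be queried from Hive, while Superset sees the same data through the Druid datasource.
select count(*) from druid_wikipedia;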
06-25-2018
08:37 AM
@Shrikant BM You can connect to Hive services on a different cluster as specified above. However, it is generally not recommended, because there are other configurations related to HDFS and other dependent components.
06-25-2018
07:12 AM
@Snehal Shelgaonkar Can you check if hadoop.proxyuser.knox.groups=* and hadoop.proxyuser.knox.hosts=* are set?
06-25-2018
06:06 AM
@Nagendra Sharma I verified internally in the code, and the addBatch method is not supported even in the latest version of HDP. Below is the snippet from the code:
public void addBatch() throws SQLException {
// TODO Auto-generated method stub
throw new SQLException("Method not supported");
}
04-18-2018
07:05 AM
2 Kudos
@Yishai Bouganim The Microsoft JDBC Driver 4.1 for SQL Server is not compatible with Java 1.8. Starting with the Microsoft JDBC Driver 4.2 for SQL Server, only Sun Java SE Development Kit (JDK) 1.8 and Java Runtime Environment (JRE) 1.8 are supported. Use the version of the MS SQL Server JDBC driver that is compatible with your Java version: for Java 1.7, use sqljdbc41.jar; for Java 1.8, use sqljdbc42.jar. Reference: MSDN article ms378422, "System Requirements for the JDBC Driver".
04-06-2018
07:20 AM
@Mohd Azhar What is the version of Ambari in use?
04-06-2018
07:17 AM
@Alexander Schätzle There is no known workaround for YARN-6625. The issue is fixed in HDP 2.6.3 (fixed issue doc). If you are using a lower version, you could consider upgrading to the latest version that includes the fix.
04-04-2018
06:30 PM
@Chethana K For Hive data, you can use the ORC format with Snappy compression for better performance. Refer to the link.
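A minimal sketch with hypothetical table and column names:
-- Store the table as ORC with Snappy compression.
create table events_orc (id int, payload string)
stored as orc
tblproperties ("orc.compress" = "SNAPPY");
-- Populate it from an existing (hypothetical) text-format table; rows are rewritten as Snappy-compressed ORC.
insert into events_orc select id, payload from events_text;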