Member since: 01-04-2016
Posts: 409
Kudos Received: 313
Solutions: 35

My Accepted Solutions
Title | Views | Posted
---|---|---
 | 5741 | 01-16-2018 07:00 AM
 | 1901 | 09-13-2017 06:17 PM
 | 3811 | 09-13-2017 05:58 AM
 | 2400 | 08-28-2017 07:16 AM
 | 4170 | 05-11-2017 11:30 AM
10-03-2017
03:12 PM
@Ashnee Sharma You have to install this driver on the client side and use it to connect to Hive with all the connection details. Also check this link: https://community.hortonworks.com/questions/15667/windows-hive-connection-issue-through-odbc-using-h.html
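If it helps, a quick way to sanity-check the HiveServer2 host, port, and credentials you plan to enter in the ODBC DSN is Beeline from any node with the Hive client installed; the host name below is a placeholder, not something from this thread.

```bash
# Placeholder host; use the same host, port (10000 by default) and user
# that you will enter in the ODBC driver's DSN settings.
beeline -u "jdbc:hive2://hiveserver2.example.com:10000/default" -n hive_user
```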
10-03-2017
12:47 PM
@Geoffrey Shelton Okot My issue is resolved. I configured the KDC server on a different machine. Thanks for the help!
09-13-2017
06:18 PM
Until PHOENIX-3288 is resolved, please set phoenix.schema.isNamespaceMappingEnabled to true. hbase-site.xml should be on the classpath.
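For reference, a quick check on a client node; the config path below is the usual HDP default and is an assumption, not taken from this thread.

```bash
# Assumed default HDP config path; confirm the property is set to true
# and make sure this config directory is on the client classpath.
grep -A1 "phoenix.schema.isNamespaceMappingEnabled" /etc/hbase/conf/hbase-site.xml
export HADOOP_CLASSPATH=/etc/hbase/conf:${HADOOP_CLASSPATH}
```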
09-13-2017
05:58 AM
The issue was with folder permissions. I granted the appropriate permissions to the /tmp/ and /apps/hive folders.
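A minimal sketch of that kind of permission fix, run as the HDFS superuser; the exact mode and ownership values below are assumptions, not the poster's actual commands, so adjust them to your security policy.

```bash
# Sticky-bit, world-writable scratch dir (common default for /tmp on HDFS).
hdfs dfs -chmod 1777 /tmp
# Assumed ownership/mode for the Hive warehouse area under /apps/hive.
hdfs dfs -chown -R hive:hadoop /apps/hive
hdfs dfs -chmod -R 775 /apps/hive
```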
08-28-2017
07:16 AM
Hi, the issue is resolved. The issue was on Azure cloud: the Azure Storage JAR had been updated to hadoop/lib/azure-storage-4.2.0.jar.
The supported version of the JAR is 2.2.0, so I reverted to the old version and it works.
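A rough sketch of that rollback; the lib directory and the source location of the 2.2.0 JAR are assumptions, not details from this post.

```bash
# Assumed location of hadoop/lib on an HDP node.
cd /usr/hdp/current/hadoop-client/lib
# Move the newer, unsupported JAR out of the way.
mv azure-storage-4.2.0.jar /tmp/azure-storage-4.2.0.jar.bak
# Put the supported 2.2.0 JAR back (placeholder source path).
cp /path/to/azure-storage-2.2.0.jar .
# Restart the affected HDFS/YARN services afterwards.
```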
05-11-2017
12:23 PM
@Ashnee Sharma Good article. Thanks for sharing!
03-14-2017
12:25 PM
2 Kudos
@Ashnee Sharma You can set priorities for your MapReduce jobs using "mapred job -set-priority <job-id> <priority>". More information in the link below: https://hadoop.apache.org/docs/r2.7.2/hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapredCommands.html#job You can also set up preemption to make sure high-priority jobs get the desired resources: https://community.hortonworks.com/questions/8725/capacityscheduler-job-priority-preemption.html
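For example (the job ID below is a placeholder):

```bash
# Raise the priority of a running job; allowed values are
# VERY_HIGH, HIGH, NORMAL, LOW and VERY_LOW.
mapred job -set-priority job_1488456789012_0042 HIGH
```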
02-01-2017
02:08 PM
@Ashnee Sharma Here you go: http://docs.hortonworks.com/HDPDocuments/SS1/SmartSense-1.3.1/bk_user-guide/content/activity_explorer.html Go through each page; it specifically describes the level of detail per component. In the next release, we're going to add cluster capacity projections and more extensive analysis of components. If this answers your question, please accept the answer as best.
09-13-2017
06:12 PM
@Roni I was facing the same kind of issue. I resolved it with the following steps:

1) In Ambari -> Hive -> Configs -> Advanced -> Custom hive-site -> Add Property..., add the following properties based on your HBase configuration (you can look up the values under Ambari -> HBase -> Configs):
hbase.zookeeper.quorum=xyz (find this property value in HBase)
zookeeper.znode.parent=/hbase-unsecure (find this property value in HBase)
phoenix.schema.mapSystemTablesToNamespace=true
phoenix.schema.isNamespaceMappingEnabled=true

2) Copy these JARs to /usr/hdp/current/hive-server2/auxlib:
/usr/hdp/2.5.6.0-40/phoenix/phoenix-4.7.0.2.5.6.0-40-hive.jar
/usr/hdp/2.5.6.0-40/phoenix/phoenix-hive-4.7.0.2.5.6.0-40-sources.jar
If those JARs do not work for you, try phoenix-hive-4.7.0.2.5.3.0-37.jar instead and copy it to /usr/hdp/current/hive-server2/auxlib.

3) Add this property to custom hive-env:
HIVE_AUX_JARS_PATH=/usr/hdp/current/hive-server2/auxlib/

4) Add the following properties to custom hbase-site.xml:
phoenix.schema.mapSystemTablesToNamespace=true
phoenix.schema.isNamespaceMappingEnabled=true

5) Also run the following commands (see the consolidated sketch below):
jar uf /usr/hdp/current/hive-server2/auxlib/phoenix-4.7.0.2.5.6.0-40-client.jar /etc/hive/conf/hive-site.xml
jar uf /usr/hdp/current/hive-server2/auxlib/phoenix-4.7.0.2.5.6.0-40-client.jar /etc/hbase/conf/hbase-site.xml

I hope my solution works for you 🙂
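For reference, a consolidated shell sketch of steps 2 and 5 above, using the HDP 2.5.6.0-40 paths from this post; adjust the version strings and paths to match your own stack.

```bash
# Step 2: copy the Phoenix-Hive JARs into the HiveServer2 aux lib directory
# (HDP 2.5.6.0-40 paths from this post; change them for your HDP version).
cp /usr/hdp/2.5.6.0-40/phoenix/phoenix-4.7.0.2.5.6.0-40-hive.jar /usr/hdp/current/hive-server2/auxlib/
cp /usr/hdp/2.5.6.0-40/phoenix/phoenix-hive-4.7.0.2.5.6.0-40-sources.jar /usr/hdp/current/hive-server2/auxlib/

# Step 5: bundle the Hive and HBase site configs into the Phoenix client JAR.
jar uf /usr/hdp/current/hive-server2/auxlib/phoenix-4.7.0.2.5.6.0-40-client.jar /etc/hive/conf/hive-site.xml
jar uf /usr/hdp/current/hive-server2/auxlib/phoenix-4.7.0.2.5.6.0-40-client.jar /etc/hbase/conf/hbase-site.xml

# Restart HiveServer2 afterwards so the aux JARs and new properties are picked up.
```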
12-28-2016
08:50 AM
@Sagar Shimpi Thanks. This resolved the issue