Member since: 09-25-2015
Posts: 356
Kudos Received: 382
Solutions: 62
My Accepted Solutions
Title | Views | Posted |
---|---|---|
 | 1032 | 11-03-2017 09:16 PM |
 | 894 | 10-17-2017 09:48 PM |
 | 1383 | 09-18-2017 08:33 PM |
 | 1540 | 08-04-2017 04:14 PM |
 | 1683 | 05-19-2017 06:53 AM |
05-09-2018
06:03 AM
Can you verify that the View setup is correct? Check the following doc: https://docs.hortonworks.com/HDPDocuments/Ambari-2.6.1.5/bk_ambari-views/content/settings_and_cluster_configuration.html Pay close attention to the "HiveServer2 JDBC URL" parameter: it should point to the JDBC URL for HSI, and that URL will typically contain the string "zooKeeperNamespace=hiveserver2-hive2".
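For reference, an HSI JDBC URL usually has the following shape (the ZooKeeper hostnames here are placeholders for your own quorum):
jdbc:hive2://<zk-host1>:2181,<zk-host2>:2181,<zk-host3>:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2-hive2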
05-09-2018
05:55 AM
When you enabled Interactive Query, a new service was started: HiveServer2 Interactive. It has its own JDBC URL, and that is what you will use when connecting through beeline. You can use the -f argument in beeline to run a query from a file.
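A minimal invocation, assuming a query file named my_query.sql and with the HSI JDBC URL as a placeholder:
beeline -u "<HSI JDBC URL>" -f my_query.sql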
03-27-2018
09:04 PM
1 Kudo
The process owner of the Hive LLAP daemon process has to be the same as the query-executing user. When HiveServer2 Interactive (aka HSI) is brought up by Ambari, the LLAP daemons are owned by the user hive. The executing user will also need to be hive, which is what happens when hive.server2.enable.doAs is false. The intended usage of HSI is with impersonation disabled (hive.server2.enable.doAs=false); end-user authorization is assumed to be handled by Ranger or SQL Standard Authorization.
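For reference, the setting lives in hive-site.xml; a minimal snippet:
<property>
  <name>hive.server2.enable.doAs</name>
  <value>false</value>
</property>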
12-22-2017
07:32 PM
Can you check the logs on the HS2 server that your JDBC client is pointing at?
12-22-2017
07:19 PM
1 Kudo
Can you look at the YARN RM UI to see whether there is any capacity to launch applications? That may be what is blocking your CLI from initializing and offering you the prompt. You can also use the beeline CLI, as suggested by @Sonu Sahi, if your queries don't interact with local resources on the client host.
11-03-2017
09:16 PM
2 Kudos
Hive 2 is not exposed as a service that you add during install; it gets deployed and configured from the Hive config screen. To enable it, go to the Hive config screen and flip the "Enable Interactive Query" slider. A popup will then guide you through the configuration.
10-23-2017
11:09 PM
1 Kudo
By design, an encryption zone cannot be created on a non-empty directory. You will have to empty the directory, create the encryption zone, and then copy the files back.
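A minimal sketch of that sequence, assuming an existing encryption key named mykey and placeholder paths:
hdfs dfs -mv /data/mydir /data/mydir_staging        # move the existing files aside
hdfs dfs -mkdir /data/mydir                         # recreate the (now empty) directory
hdfs crypto -createZone -keyName mykey -path /data/mydir
hdfs dfs -cp /data/mydir_staging/* /data/mydir/     # copy the files back into the zone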
10-20-2017
10:30 PM
1 Kudo
This may be due to the HDFS default replication factor of 3. The total space requirement would then be 800 GB * 3 = 2.4 TB, which exceeds the 2.3 TB available.
10-17-2017
09:48 PM
1 Kudo
The support matrix lists the OSes that are officially supported, meaning they have gone through extensive deployment and certification testing. Choosing an OS for official support depends on criteria including user/customer adoption; expect CentOS 6.9 support in an upcoming HDP 2.6.x update release. You can still go ahead and use the centos6 repo to install on CentOS 6.9, but Hortonworks will not officially support it.
10-06-2017
07:25 PM
1 Kudo
You could use Ambari Configuration groups to achieve that. See this thread.
09-29-2017
08:18 PM
1 Kudo
HiveServer2 Interactive deployment is a multi-step process: step one is deploying the LLAP daemons on the YARN cluster through Slider, and step two is starting the HiveServer2 instance that talks to those LLAP daemons. Ambari currently exposes this as a one-click process and handles both steps internally. HIVE-9883 added support for deploying the LLAP daemons using Slider; the rationale is that Slider manages the lifecycle of the LLAP daemons, handling installation on the nodes and the rest of the deployment complexity. I don't see any way to skip Slider for the LLAP daemon deployment.
09-29-2017
07:57 PM
1 Kudo
In HDP the recommendation is to use the embedded metastore for HiveServer2. That is why the Ambari HiveServer2 startup script explicitly passes the following config during startup:
/usr/hdp/current/hive-server2/bin/hiveserver2 -hiveconf hive.metastore.uris=" "
To achieve what you are doing, you will need to start HiveServer2 manually, either removing that parameter from the startup command so that it uses the value you set in hive-site.xml, or overriding hive.metastore.uris to point at your external metastore in the startup command. Check out the instructions here.
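As an illustration, overriding the URI in the startup command might look like this (the host is a placeholder; 9083 is the usual metastore port):
/usr/hdp/current/hive-server2/bin/hiveserver2 -hiveconf hive.metastore.uris=thrift://<metastore-host>:9083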
09-29-2017
07:45 PM
3 Kudos
In the case of access through HiveServer2 Interactive, the Hive LLAP daemons are not running; see the following error snippet in your posted log:
ERROR : Error reported by TaskScheduler [[2:LLAP]][SERVICE_UNAVAILABLE] No LLAP Daemons are running
I would suggest you restart your HiveServer2 Interactive and then try again. For Spark access, there have been some issues with reading data in transactional tables; can you run a MAJOR compaction from the Hive CLI (make sure the Hive Metastore has ACID enabled) before querying through the Spark shell?
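A major compaction can be triggered like this (table and partition names are placeholders; drop the PARTITION clause for unpartitioned tables):
ALTER TABLE <table_name> PARTITION (<partition_spec>) COMPACT 'major';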
09-28-2017
05:24 PM
1 Kudo
Can you check the state of the YARN application (in the YARN RM UI) corresponding to the query? Perhaps YARN doesn't have the capacity to run the application.
09-28-2017
05:16 PM
1 Kudo
You can do the following to change the location of the data for the table:
ALTER TABLE <table_name> SET LOCATION '<location_in_cluster_2>';
09-27-2017
10:35 PM
1 Kudo
HDP 2.6.1 ships with two Hive binaries: 1.x and 2.x. To use Hive 2.x you will need to enable Interactive Query through the Hive configs; this will enable a new Hive service, HiveServer2 Interactive. Note that the Hive CLI is only available on Hive 1. To use Hive 2 you will need to connect using beeline to the HiveServer2 Interactive JDBC URL, which will be shown on the Hive Config Summary tab once you enable Interactive Query. You can find detailed instructions here.
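Connecting would then look something like this, with the URL copied verbatim from the Summary tab (the user name is a placeholder):
beeline -u "<HiveServer2 Interactive JDBC URL>" -n <user>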
09-26-2017
09:12 PM
1 Kudo
Have you tried setting hive.metastore.warehouse.dir in the hive-site.xml under the Spark conf directory?
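A minimal hive-site.xml entry would look like this (the path shown is just the common HDP default; swap in your own):
<property>
  <name>hive.metastore.warehouse.dir</name>
  <value>/apps/hive/warehouse</value>
</property>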
09-26-2017
05:30 PM
1 Kudo
The property you want to change is "hive.metastore.warehouse.dir".
09-26-2017
05:27 PM
1 Kudo
Can you post the application log from the failed application application_1506422992590_0008? You can collect it by running:
yarn logs -applicationId application_1506422992590_0008 > app_logs.txt
09-22-2017
04:48 PM
2 Kudos
Reading Hive ACID ORC data has some issues; the data is not visible unless you compact the table at least once, see SPARK-16996. ORC data operations on Hive ACID tables are currently not supported from Spark, see SPARK-15348.
09-22-2017
04:37 PM
If Flume was used to populate Hive (through HiveSink), then the data should have been written properly. Can you check the Flume log to see whether the data was being written to Hive successfully? Note that ACID needs to be enabled on the Hive Metastore before streaming data from Flume into Hive.
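For reference, enabling ACID on the metastore side typically involves settings along these lines in hive-site.xml (values are illustrative; see the Hive Transactions wiki for the authoritative list):
hive.txn.manager=org.apache.hadoop.hive.ql.lockmgr.DbTxnManager
hive.compactor.initiator.on=true
hive.compactor.worker.threads=1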
09-22-2017
05:08 AM
1 Kudo
One way this issue can happen is if the table schema mismatches the schema of the ORC data. You can get more detail on the ORC data by running the following command:
hive --orcfiledump <location-of-orc-file>
Can you verify that a schema mismatch is not the case?
09-20-2017
05:08 PM
1 Kudo
How did you populate the data in the simple4 table? The fact that you can query the table without TBLPROPERTIES ("transactional"="true") tells me that the ORC data currently in the table is non-transactional. Note that reading/writing an ACID table from a non-ACID session is not allowed, so just adding "transactional"="true" to the table properties does not change a non-transactional table into a transactional one. Please refer to the Hive Transactions wiki for more information. If you want to convert an existing non-transactional table to a transactional (aka ACID) table, you can do something like the following in an ACID session:
set ...;
insert into table simple4 partition (updatetime) select id, value, updatetime from <simple4_nontransactional_table>;
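As an illustration of what the elided settings typically cover, an ACID session is usually established with something like the following before the insert (again, see the wiki for the full list):
set hive.support.concurrency=true;
set hive.txn.manager=org.apache.hadoop.hive.ql.lockmgr.DbTxnManager;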
09-20-2017
04:21 PM
2 Kudos
The latest update release on any HDP line will buy you the most mileage, so HDP 2.6.2 would be your best bet. Data will not be changed; however, metadata may be impacted as it gets upgraded. As with any HDP upgrade, you should back up your metadata. Also look at the HDP 2.6.2 release notes for more details on the changes.
09-18-2017
08:33 PM
1 Kudo
Typically BigSQL should be installed on a minimum of 3 nodes, with the BigSQL master on one node and BigSQL workers on the other two. Also establish passwordless ssh for the root user from the BigSQL master node to the worker nodes. You can find detailed instructions in the IBM BigSQL install document. Make sure the prerequisites are met before installing BigSQL.
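A common way to set up that passwordless ssh, run as root on the master node (worker hostnames are placeholders):
ssh-keygen -t rsa                 # accept the defaults, empty passphrase
ssh-copy-id root@<worker-node-1>
ssh-copy-id root@<worker-node-2>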
09-18-2017
08:18 PM
2 Kudos
How many rows are you expecting from the output of this query? The beeline client in its default setting is prone to memory issues when a large number of rows is fetched, because it buffers all rows in memory. The workaround is to use the following argument when starting beeline:
--incremental=true
HIVE-7224, which was recently fixed, will make that setting the default, but for now you will need to pass the above argument to the beeline client. More documentation on that option and others is on the Hive wiki.
08-09-2017
03:34 AM
1 Kudo
Can you post the detailed stack trace?
08-04-2017
06:34 PM
1 Kudo
So it was indeed a corrupted client; in that case, please accept my first answer.
08-04-2017
06:22 PM
1 Kudo
I am assuming this is an unsecured cluster, is that right? On that machine, can you post the complete console log from the terminal? I would suggest just starting beeline and then issuing the connect statement: $ beeline
...
> !connect jdbc:hive2://<hivehost>:10000 <user> <pwd>
08-04-2017
04:26 PM
1 Kudo
The error indicates that the WebHDFS service (the REST API for HDFS) is unavailable. Can you search for "webhdfs" in the HDFS config and make sure that it's enabled? Also restart your NameNode service and then try again.
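The property to look for is dfs.webhdfs.enabled in hdfs-site.xml; it should be set to true:
<property>
  <name>dfs.webhdfs.enabled</name>
  <value>true</value>
</property>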