Member since: 11-11-2019
Posts: 634
Kudos Received: 33
Solutions: 27
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 260 | 10-09-2025 12:29 AM |
| | 4771 | 02-19-2025 09:43 PM |
| | 2124 | 02-28-2023 09:32 PM |
| | 4002 | 02-27-2023 03:33 AM |
| | 26008 | 12-24-2022 05:56 AM |
09-22-2021
01:38 AM
You can find the number of connections to HMS using: lsof -p <HMS PID> | grep -i "ESTABLISHED" | wc -l. For checking the number of accesses per DB/table, you have to rely on the HiveServer2 logs; we don't have any stats that track how many times a particular DB/table is accessed.
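A minimal sketch of that check (the "HiveMetaStore" process pattern is an assumption; adjust it for your deployment):

```bash
#!/usr/bin/env bash
# Hypothetical sketch: count established TCP connections held by the Hive Metastore.
# The "HiveMetaStore" pgrep pattern is an assumption; adjust it for your deployment.
HMS_PID=$(pgrep -f HiveMetaStore | head -n 1)

if [ -z "$HMS_PID" ]; then
    echo "Hive Metastore process not found" >&2
    exit 1
fi

# Count open files/sockets in the ESTABLISHED state for that process.
lsof -p "$HMS_PID" | grep -ci "ESTABLISHED"
```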
09-22-2021
01:14 AM
You have to use the com.cloudera.hive.jdbc.HS2Driver class for the driver. I don't have expertise in WebSphere, so I can't help with the JNDI configuration.
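If the driver class can't be resolved, one quick sanity check is to confirm the class is actually inside the JAR on the classpath (the JAR file name below is an assumption; substitute the file you downloaded from the Cloudera site):

```bash
# Hypothetical sketch: confirm the Cloudera Hive JDBC JAR actually contains HS2Driver.
# "HiveJDBC41.jar" is an assumed file name; substitute the JAR you downloaded.
unzip -l HiveJDBC41.jar | grep "com/cloudera/hive/jdbc/HS2Driver.class"
```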
09-17-2021
09:58 AM
We don't have steps specific to IBM WebSphere, so I am going to provide generic steps. As IBM WebSphere is a Java application, you need a JDBC driver to connect to Hive:

1. Download the JDBC driver from the Cloudera website: https://www.cloudera.com/downloads/connectors/hive/jdbc/2-6-15.html
2. Add the driver to the WebSphere classpath and confirm it is loaded using lsof -p <websphere pid> | grep <driver name>.
3. In the WebSphere datasource URL, use the HiveServer2 hostname and port (probably 10000 if using the binary transport). The URL looks like: jdbc:hive2://<hiveserver host>:10000/default
4. Test the datasource (I was handling WebLogic, and there is an option there to test the datasource URL); a beeline sanity check is sketched below.
5. If everything works, test the application.

Note: Please check your WebSphere documentation to confirm Hive is a supported database before implementing the above action plan. If your setup is SSL and Kerberos enabled, the JDBC URL would change; I would suggest testing with a non-SSL setup first.
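As a sanity check independent of WebSphere (a sketch with placeholder host and user values), you can verify that HiveServer2 accepts connections on the URL from step 3 using beeline from a cluster node, since beeline uses the same jdbc:hive2:// URL format:

```bash
#!/usr/bin/env bash
# Hypothetical sketch: test the HiveServer2 JDBC URL outside WebSphere with beeline.
# HS2_HOST and HIVE_USER are placeholder values; adjust them to your environment.
HS2_HOST="hiveserver2.example.com"
HIVE_USER="hive"

beeline -u "jdbc:hive2://${HS2_HOST}:10000/default" -n "${HIVE_USER}" -e "SHOW DATABASES;"
```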
09-16-2021
10:06 AM
1 Kudo
@Sayed016 This error occurs if the HDFS files are under-replicated. Did you see that HDFS was under-replicated when the issue occurred? Wait for HDFS to finish replicating, and then you can query.
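As a quick check (a sketch assuming you can run HDFS admin commands, typically as the hdfs user), you can confirm whether under-replicated blocks remain before retrying the query:

```bash
# Hypothetical sketch: check whether HDFS still reports under-replicated blocks.
hdfs fsck / | grep -i "Under-replicated blocks"

# Alternatively, the NameNode summary report shows the same counter:
hdfs dfsadmin -report | grep -i "Under replicated blocks"
```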
09-13-2021
11:48 PM
Could you please attach the Hive on Tez logs? Please start Hive on Tez, wait until it crashes completely, and then provide the logs. Please attach the logs from the locations below:
a. /var/log/hive
b. /var/run/cloudera-scm-agent/process/<hive on tez>/logs
Please also try setting hive.server2.tez.initialize.default.sessions=false in CM and check.
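If it helps, a sketch of bundling those logs from the host running the Hive on Tez role (the process directory glob is an assumption about Cloudera Manager's naming; list /var/run/cloudera-scm-agent/process/ to find the exact directory):

```bash
# Hypothetical sketch: bundle the Hive on Tez logs once the role has crashed.
# The *hive_on_tez* glob is an assumption; pick the most recent matching directory.
tar czf hive-on-tez-logs.tar.gz \
    /var/log/hive \
    /var/run/cloudera-scm-agent/process/*hive_on_tez*/logs
```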
09-09-2021
09:34 AM
Is the query failing, or are you just noticing the errors in the logs? It seems to be an ACID managed table. Please provide the HiveServer2 logs and the exact query that was fired.
09-09-2021
09:07 AM
DistCp or import/export is not supported for ACID tables. You have 2 approaches:

Approach 1
=============
1. Assuming that you have ACID tables in the source and target clusters.
2. Create an external table in the source and target clusters.
3. Copy the data from the ACID table to the external table in the SOURCE cluster: INSERT INTO external SELECT * FROM acid.
4. Perform DistCp of the external table's data from source to target.
5. Copy the data from the external table to the ACID table in the TARGET cluster: INSERT INTO acid SELECT * FROM external.
(A rough shell sketch of this approach follows after the reference below.)

Approach 2
=========
Use DLM.

Reference: https://community.cloudera.com/t5/Support-Questions/HIVE-ACID-table-Not-enough-history-available-for-0-x-Oldest/td-p/204551
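A rough sketch of Approach 1 (table names, column definitions, hosts, and HDFS paths below are hypothetical placeholders, not objects from this thread; adapt them to your tables and cluster):

```bash
#!/usr/bin/env bash
# Hypothetical sketch of Approach 1. All names and paths are placeholders.

SRC_HS2="jdbc:hive2://source-hs2.example.com:10000/default"
TGT_HS2="jdbc:hive2://target-hs2.example.com:10000/default"

# Steps 2-3 on the SOURCE cluster: stage the ACID data into an external table.
beeline -u "$SRC_HS2" -e "CREATE EXTERNAL TABLE my_table_ext (id INT, name STRING) STORED AS ORC LOCATION '/warehouse/staging/my_table_ext';"
beeline -u "$SRC_HS2" -e "INSERT INTO my_table_ext SELECT * FROM my_table_acid;"

# Step 4: copy the staged files from the source to the target cluster.
hadoop distcp \
  hdfs://source-nn.example.com:8020/warehouse/staging/my_table_ext \
  hdfs://target-nn.example.com:8020/warehouse/staging/my_table_ext

# Step 5 on the TARGET cluster (where my_table_ext was created with the same DDL
# over the copied location): load the data into the ACID table.
beeline -u "$TGT_HS2" -e "INSERT INTO my_table_acid SELECT * FROM my_table_ext;"
```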
09-09-2021
08:06 AM
I see you are getting the below issue: java.io.IOException: Previous writer likely failed to write hdfs://sunny/tmp/hive/hive/_tez_session_dir/1b689cf2-9a2e-4afc-96a7-bdeef34ed887/hive. Failing because I am unlikely to write too. Have you copied managed table data using DistCp or import/export from another cluster?
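If you want to see what is sitting in that scratch location, a quick look (path taken from the exception message) would be:

```bash
# Hypothetical sketch: inspect the Tez session scratch directory from the exception
# to see whether stale session files are left behind.
hdfs dfs -ls hdfs://sunny/tmp/hive/hive/_tez_session_dir/
```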
08-31-2021
11:52 PM
What is your CDH version? Please provide the application logs. Please review whether the prerequisites have been completed as per https://docs.cloudera.com/documentation/enterprise/6/6.3/topics/admin_hos_oview.html and please also review the FAQ.
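To answer the version question, a quick way to check from a cluster host (the parcel location assumes a default Cloudera Manager parcel install):

```bash
# Hypothetical sketch: report the Hadoop/CDH version on this host.
hadoop version

# On a Cloudera Manager managed host with parcels, the active CDH parcel also shows it:
ls /opt/cloudera/parcels/ | grep -i "CDH"
```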
08-25-2021
01:07 AM
You need to grant ALL access to the user in Ranger. Please check the permission settings in the Ranger UI.