Member since: 09-21-2022
Posts: 9
Kudos Received: 0
Solutions: 0
01-05-2024
07:56 AM
Is there any alternative way to do this in Impala, or has a function been introduced to support it in a newer version of Impala? Thanks
10-23-2023
04:27 PM
jdbc:hive2://<host>:10000 Error: Could not open client transport with JDBC Uri: .... Invalid status 21

If I use just the 'beeline' command, I am able to connect to Hive. However, if I use the command below, I get an exception:

beeline -u jdbc:hive2://<host>:10000/default -n user -p pwd

23/10/23 23:22:31 [main]: WARN transport.TSaslTransport: Could not send failure response
org.apache.thrift.transport.TTransportException: java.net.SocketException: Connection reset
23/10/23 23:22:31 [main]: WARN jdbc.HiveConnection: Failed to connect to host:10000
log4j:ERROR Could not create an Appender. Reported error follows.
java.lang.ClassCastException: org.apache.log4j.ConsoleAppender cannot be cast to com.cloudera.hive.jdbc42.internal.apache.log4j.Appender
    at com.cloudera.hive.jdbc42.internal.apache.log4j.xml.DOMConfigurator.parseAppender(DOMConfigurator.java:248)
    at com.cloudera.hive.jdbc42.internal.apache.log4j.xml.DOMConfigurator.findAppenderByName(DOMConfigurator.java:176)
log4j:WARN No appenders could be found for logger (com.cloudera.hive.jdbc42.internal.apache.thrift.transport.TSaslTransport).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
Unknown HS2 problem when communicating with Thrift server.
Error: Could not open client transport with JDBC Uri: jdbc:hive2://host:10000/default: Invalid status 21
Also, could not send response: org.apache.thrift.transport.TTransportException: java.net.SocketException: Connection reset (state=08S01,code=0)
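One cause sometimes reported for "Invalid status 21" is a transport mismatch, for example HiveServer2 expecting TLS while the client connects in plain mode. As a rough aid only (the host name below is a placeholder, not from the cluster above), a small Python probe can show whether the port answers a TLS handshake at all:

import socket, ssl

host, port = "hs2-host.example.com", 10000  # placeholder HiveServer2 host

# Attempt a bare TLS handshake against the HS2 port; if it succeeds, the
# server side is SSL-enabled and a non-SSL client would see protocol errors.
ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE
try:
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            print("TLS handshake succeeded:", tls.version())
except (ssl.SSLError, OSError) as exc:
    print("Not a TLS endpoint (or handshake rejected):", exc)

If the handshake succeeds, the JDBC URL would also need ssl=true and a truststore; if it fails, the mismatch likely lies elsewhere (for example SASL/Kerberos settings).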
10-18-2023
02:30 PM
We are working on the platform configuration below.

Cloudera Version: CDP 7.1.2
Kerberos 5 version 1.15.1
Hive 3.1.3000.7.1.7.1000-141
Phoenix 5.1.5

After Kerberos was enabled on the platform, we are not able to connect to Hive or Phoenix from DBeaver or a SQL client.
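Once Kerberos is enabled, clients generally have to present a ticket and name the HiveServer2 service principal; DBeaver does this through the driver's Kerberos settings, and a JDBC URL does it through a principal parameter. As a minimal sketch for checking connectivity outside DBeaver (the host below is a placeholder, and a valid kinit ticket plus the sasl and thrift-sasl packages are assumed), PyHive can be pointed at HiveServer2 like this:

from pyhive import hive  # requires the 'sasl' and 'thrift-sasl' packages

# Minimal Kerberos connectivity check against a placeholder HiveServer2 host;
# 'hive' is the usual first component of the service principal on CDP clusters.
conn = hive.Connection(
    host="hs2-host.example.com",
    port=10000,
    auth="KERBEROS",
    kerberos_service_name="hive",
)
cur = conn.cursor()
cur.execute("SHOW DATABASES")
print(cur.fetchall())

If this works but DBeaver does not, the gap is usually in the client-side Kerberos configuration rather than on the cluster.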
Labels:
- Apache Hive
- Kerberos
02-13-2023
10:19 AM
Thanks @rki_ for the confirmation. Do you have any information on whether Phoenix will remain a packaged component, as it is today, or whether it will have to be purchased/licensed externally and installed as a separate component? Thanks, Sagar
02-10-2023
10:25 AM
Any help on this is much appreciated! Thanks, Sagar
02-10-2023
10:25 AM
It would be helpful if anyone in the community has any clue on this. Thanks, Sagar
02-06-2023
01:42 PM
Hi, we are currently using CDP 7.1.7 with Phoenix version 5.0. We were told by our platform admins that Cloudera is dropping support for Phoenix from the next CDP version onwards. Is this true? If so, what alternatives to Phoenix are suggested? What is the strategy for bundling Phoenix with the CDP package? Thanks, Sagar
02-06-2023
01:25 PM
Hello Experts,

My current platform is built on the components below.

CDP 7.1.7
Hive 3
Spark 2.4.7
Delta library
Hadoop 3.1

All our data tables are Hive external tables in Parquet format. At present we have a PySpark streaming solution that reads JSON data from a GoldenGate feed and processes it through PySpark and the Delta library in a cyclic fashion. In each cycle, once the streaming data is processed on the delta path, we merge the data from the delta path into the Hive external table path. During this update on the Hive path, anyone who queries the table gets an HDFS I/O "file not found" error. To solve this, we want to perform an atomic update on the Hive external table so that read queries do not fail while updates are happening. We have tried the approaches below, but none of them worked:

1) spark.sql
2) hive.sql
3) df.write (using a JDBC connection)
4) Regular df.write with overwrite mode and overwrite = True

However, an "insert into overwrite .." query works fine via DBeaver -> Hive or the Impala editor, and the read query does not fail. Please help with your suggestions on how to solve this programmatically, the PySpark way. Do external Hive tables support atomic operations?

Thanks, Sagar
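One pattern sometimes used for this kind of requirement is to write each new snapshot to a fresh directory and then repoint the external table at it with ALTER TABLE ... SET LOCATION, so readers never scan a directory that is being rewritten. The sketch below illustrates that location-swap idea under assumptions (db.target_table, the snapshot base path, and the merged input are placeholders); it is not a confirmed fix for the setup above.

import time
from pyspark.sql import SparkSession

# Sketch of a location-swap update for a Hive external table.
spark = (SparkSession.builder
         .appName("external-table-location-swap")
         .enableHiveSupport()
         .getOrCreate())

merged_df = spark.read.parquet("/data/delta_path/current")  # placeholder input

# 1) Write the new snapshot to a fresh directory, never into the live one.
snapshot_dir = "/warehouse/ext/target_table/snapshot_%d" % int(time.time())
merged_df.write.mode("overwrite").parquet(snapshot_dir)

# 2) Repoint the external table in a single metadata operation, so readers
#    see either the old snapshot or the new one, not a half-written mix.
spark.sql("ALTER TABLE db.target_table SET LOCATION '%s'" % snapshot_dir)

# 3) Remove old snapshot directories later, once in-flight queries are done.

Old snapshot directories should only be cleaned up after in-flight queries have finished, since a reader that started against the previous location still needs its files.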
12-19-2022
10:47 AM
We have a CDP 7.1.7 private cluster with Phoenix set up. We created some Phoenix tables a long time ago with certain primary keys; however, when we now try to check those primary keys on a table via the Phoenix shell, we get empty results. I have tried the commands below and get no information.

!primarykeys <tablename>
!column <tablename>
!describe <tablename>
select column_name from system.catalog where table_name='tablename'

Phoenix version is 5.0.0. Please help.
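For reference, primary-key columns are recorded in SYSTEM.CATALOG with a non-null KEY_SEQ, and Phoenix stores unquoted identifiers in upper case, so a lower-case table_name literal can come back empty. The sketch below (hypothetical Phoenix Query Server URL, phoenixdb assumed to be installed) queries the catalog that way:

import phoenixdb  # assumes a Phoenix Query Server is reachable

conn = phoenixdb.connect("http://pqs-host.example.com:8765/", autocommit=True)
cur = conn.cursor()

# Primary-key columns carry a non-null KEY_SEQ; match on the upper-cased
# table name because unquoted identifiers are stored in upper case.
cur.execute(
    "SELECT COLUMN_NAME, KEY_SEQ "
    "FROM SYSTEM.CATALOG "
    "WHERE TABLE_NAME = UPPER('tablename') AND KEY_SEQ IS NOT NULL "
    "ORDER BY KEY_SEQ"
)
for column_name, key_seq in cur.fetchall():
    print(key_seq, column_name)

The same SELECT can also be run directly from the Phoenix shell if the Query Server is not available.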
Labels:
- Apache Hadoop
- Apache HBase
- Apache Phoenix