Member since: 05-09-2024
Posts: 27
Kudos Received: 17
Solutions: 3
My Accepted Solutions
| Title | Views | Posted |
| --- | --- | --- |
| | 1470 | 08-29-2024 06:58 PM |
| | 1038 | 05-24-2024 04:05 AM |
| | 2282 | 05-24-2024 04:04 AM |
11-20-2024
08:19 AM
@hadoopranger Consider tuning parameters such as idle_session_timeout and idle_query_timeout, which are likely what is closing the session, given that you don't have a load balancer (LB) in place. You can also set them to 0 so that a session never expires until it is closed manually. Additionally, consider increasing the value of fe_service_threads to allow more concurrent client connections, which should help you avoid similar client connection issues in the future. For more info, refer to: https://docs.cloudera.com/cdp-private-cloud-base/7.1.9/impala-reference/topics/impala-recommended-configs.html
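For reference, a minimal sketch of how these flags might be set on each Impala daemon (typically via the Impala Daemon command-line argument advanced configuration snippet in Cloudera Manager); the values below are illustrative assumptions, not recommendations:

```bash
# Illustrative impalad startup flags; values are placeholders to tune per workload.
--idle_session_timeout=3600   # seconds a session may sit idle before expiring; 0 disables
--idle_query_timeout=600      # seconds a query may sit idle before cancellation; 0 disables
--fe_service_threads=256      # threads available to serve concurrent client connections
```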
10-25-2024
06:57 AM
We're attempting to run a basic Spark job to read/write data from Solr, using the following versions:

CDP version: 7.1.9
Spark: Spark3
Solr: 8.11
Spark-Solr Connector: /opt/cloudera/parcels/SPARK3/lib/spark3/spark-solr/spark-solr-3.9.3000.3.3.7191000.0-78-shaded.jar

When we attempt to interact with Solr through Spark, the execution stalls indefinitely without any errors or results (similar to the issue @hadoopranger mentioned). Other components, such as Hive and HBase, integrate smoothly with Spark, and we are using a valid Kerberos ticket that successfully connects to other Hadoop components. Additionally, testing REST API calls via both curl and Python's requests library confirms we can access Solr and retrieve data using the Kerberos ticket. The issue seems isolated to Solr's connection with Spark, as we have had no problems with other systems. Has anyone encountered a similar issue or have suggestions for potential solutions? @RangaReddy @hadoopranger
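One possibility worth checking, as an assumption based on the symptom (a silent hang during Kerberos negotiation is a common failure mode with the spark-solr connector), is whether a JAAS configuration is being passed to both the driver and the executors. A minimal sketch of such a launch; the jaas.conf path is a hypothetical placeholder:

```bash
# Sketch: launching spark-shell with the spark-solr connector and a JAAS config
# so the SolrJ client can authenticate with the existing Kerberos ticket cache.
cat > /tmp/jaas.conf <<'EOF'
Client {
  com.sun.security.auth.module.Krb5LoginModule required
  useTicketCache=true
  doNotPrompt=true;
};
EOF

spark-shell \
  --jars /opt/cloudera/parcels/SPARK3/lib/spark3/spark-solr/spark-solr-3.9.3000.3.3.7191000.0-78-shaded.jar \
  --driver-java-options "-Djava.security.auth.login.config=/tmp/jaas.conf" \
  --conf "spark.executor.extraJavaOptions=-Djava.security.auth.login.config=/tmp/jaas.conf"
```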
09-27-2024
05:56 AM
1 Kudo
Hey, thanks for reaching out, @hadoopranger. After kinit I am trying to connect via beeline. Here are the logs of the Hive Metastore:

2024-09-27 12:51:13,989 INFO org.apache.hadoop.fs.TrashPolicyDefault: [pool-5-thread-62]: Moved: 'hdfs://ip-172-31-13-77.ap-south-1.compute.internal:8020/user/hue/.cloudera_manager_hive_metastore_canary/cloudera_manager_metastore_canary_test_catalog_hive_HIVEMETASTORE_ac9f056b57b5af07c85fc6a689cb47ce' to trash at: hdfs://ip-172-31-13-77.ap-south-1.compute.internal:8020/user/hue/.Trash/Current/user/hue/.cloudera_manager_hive_metastore_canary/cloudera_manager_metastore_canary_test_catalog_hive_HIVEMETASTORE_ac9f056b57b5af07c85fc6a689cb47ce1727441473987
2024-09-27 12:51:13,993 WARN org.apache.hadoop.hive.metastore.utils.FileUtils: [pool-5-thread-62]: File does not exist: hdfs://ip-172-31-13-77.ap-south-1.compute.internal:8020/user/hue/.cloudera_manager_hive_metastore_canary/cloudera_manager_metastore_canary_test_catalog_hive_HIVEMETASTORE_ac9f056b57b5af07c85fc6a689cb47ce; Force to delete it.
2024-09-27 12:51:13,994 ERROR org.apache.hadoop.hive.metastore.utils.FileUtils: [pool-5-thread-62]: Failed to delete hdfs://ip-172-31-13-77.ap-south-1.compute.internal:8020/user/hue/.cloudera_manager_hive_metastore_canary/cloudera_manager_metastore_canary_test_catalog_hive_HIVEMETASTORE_ac9f056b57b5af07c85fc6a689cb47ce
2024-09-27 12:51:13,995 INFO org.apache.hadoop.hive.metastore.HiveMetaStore: [pool-5-thread-62]: 62: Cleaning up thread local RawStore...
2024-09-27 12:51:13,995 INFO org.apache.hadoop.hive.metastore.HiveMetaStore.audit: [pool-5-thread-62]: ugi=hue/ip-172-31-13-77.ap-south-1.compute.internal@AP-SOUTH-1.COMPUTE.AMAZONAWS.COM ip=172.31.13.77 cmd=Cleaning up thread local RawStore...
2024-09-27 12:51:13,995 INFO org.apache.hadoop.hive.metastore.ObjectStore: [pool-5-thread-62]: RawStore: org.apache.hadoop.hive.metastore.ObjectStore@b6195da, with PersistenceManager: org.datanucleus.api.jdo.JDOPersistenceManager@142daedd will be shutdown
2024-09-27 12:51:13,995 INFO org.apache.hadoop.hive.metastore.HiveMetaStore: [pool-5-thread-62]: 62: Done cleaning up thread local RawStore
2024-09-27 12:51:13,995 INFO org.apache.hadoop.hive.metastore.HiveMetaStore.audit: [pool-5-thread-62]: ugi=hue/ip-172-31-13-77.ap-south-1.compute.internal@AP-SOUTH-1.COMPUTE.AMAZONAWS.COM ip=172.31.13.77 cmd=Done cleaning up thread local RawStore
2024-09-27 12:51:13,995 INFO org.apache.hadoop.hive.metastore.HiveMetaStore: [pool-5-thread-62]: 62: Done cleaning up thread local RawStore
2024-09-27 12:51:13,995 INFO org.apache.hadoop.hive.metastore.HiveMetaStore.audit: [pool-5-thread-62]: ugi=hive/ip-172-31-13-77.ap-south-1.compute.internal@AP-SOUTH-1.COMPUTE.AMAZONAWS.COM ip=172.31.13.77 cmd=Done cleaning up thread local RawStore

Output of the klist command:

root@ip-172-31-13-77:/var/log/hive# klist
Ticket cache: FILE:/tmp/krb5cc_0
Default principal: hive/ec2-52-66-58-15.ap-south-1.compute.amazonaws.com@AP-SOUTH-1.COMPUTE.AMAZONAWS.COM
Valid starting       Expires              Service principal
09/27/24 06:50:48    09/27/24 16:50:48    krbtgt/AP-SOUTH-1.COMPUTE.AMAZONAWS.COM@AP-SOUTH-1.COMPUTE.AMAZONAWS.COM
        renew until 10/04/24 06:50:39

Also, here are the logs of HiveServer2:

root@ip-172-31-13-77:/var/log/hive# tail -f hadoop-cmf-hive-HIVESERVER2-ip-172-31-13-77.ap-south-1.compute.internal.log.out
at java.io.BufferedInputStream.read(BufferedInputStream.java:345) ~[?:1.8.0_232]
at org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:127) ~[hive-exec-3.1.3000.7.1.7.0-551.jar:3.1.3000.7.1.7.0-551]
at org.apache.thrift.transport.TTransport.readAll(TTransport.java:86) ~[hive-exec-3.1.3000.7.1.7.0-551.jar:3.1.3000.7.1.7.0-551]
at org.apache.thrift.transport.TSaslTransport.receiveSaslMessage(TSaslTransport.java:178) ~[hive-exec-3.1.3000.7.1.7.0-551.jar:3.1.3000.7.1.7.0-551]
at org.apache.thrift.transport.TSaslServerTransport.handleSaslStartMessage(TSaslServerTransport.java:125) ~[hive-exec-3.1.3000.7.1.7.0-551.jar:3.1.3000.7.1.7.0-551]
at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:271) ~[hive-exec-3.1.3000.7.1.7.0-551.jar:3.1.3000.7.1.7.0-551]
at org.apache.thrift.transport.TSaslServerTransport.open(TSaslServerTransport.java:41) ~[hive-exec-3.1.3000.7.1.7.0-551.jar:3.1.3000.7.1.7.0-551]
at org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:216) ~[hive-exec-3.1.3000.7.1.7.0-551.jar:3.1.3000.7.1.7.0-551]
... 10 more
2024-09-27 12:31:54,081 INFO org.apache.hadoop.hive.ql.metadata.HiveMaterializedViewsRegistry: [HiveMaterializedViewsRegistry-0]: Materialized views registry has been refreshed

Please let me know if anything needs to be checked or configured. Thanks, Ankit
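For reference, a Kerberized beeline connection typically looks like the sketch below; the host and realm mirror the ones in the logs above, while the port (10000, the HiveServer2 default) and database name are assumptions:

```bash
# Sketch: connecting with beeline after kinit, using the HiveServer2 Kerberos principal.
beeline -u "jdbc:hive2://ip-172-31-13-77.ap-south-1.compute.internal:10000/default;principal=hive/_HOST@AP-SOUTH-1.COMPUTE.AMAZONAWS.COM"
```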
09-24-2024
06:52 PM
1 Kudo
I do not see any orange alerts, but the Ranger audit logs are still being written into the spool directory.
08-29-2024
06:58 PM
2 Kudos
The HBase Master was stuck in the initializing state, since there were many server crash procedures running due to the HBase Master failure. After clearing the /hbase znode from zkCli, the issue was resolved.
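For anyone hitting the same state, a sketch of clearing the znode with the ZooKeeper CLI; the zk-host below is a placeholder, and since this wipes HBase's ZooKeeper state, stop the HBase service first:

```bash
# Sketch: removing the /hbase znode so the HBase Master rebuilds its state on restart.
zookeeper-client -server zk-host:2181
# Inside the zkCli shell:
#   deleteall /hbase    # newer ZooKeeper CLIs
#   rmr /hbase          # older ZooKeeper CLIs (deprecated alias)
```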
08-28-2024
04:30 AM
I am unable to locate the /hbase-secure znode; which one should I delete? I have the same issue, and I only have the /hbase znode.
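If it helps, you can list the top-level znodes to confirm which parent znode your HBase installation uses; a sketch, with zk-host as a placeholder (on CDP clusters the znode is typically /hbase, while /hbase-secure appears mainly on HDP-style secured setups):

```bash
# Sketch: listing root znodes to find the HBase parent znode for this cluster.
zookeeper-client -server zk-host:2181
# Inside the zkCli shell:
#   ls /
# If only /hbase is present, that is the znode in use.
```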
08-19-2024
07:34 AM
1 Kudo
@hadoopranger I am circling back on this. Did you try the steps shared in my previous reply? If yes, let us know how it went. If you find my reply helpful, you may mark it as the accepted solution. You can also say thanks by clicking on the thumbs-up button. V
08-08-2024
05:10 AM
1 Kudo
Hi @hadoopranger, unfortunately there is no solution for this specific error; I had to switch to a different connection method using another JDBC driver.
08-02-2024
03:04 AM
1 Kudo
Hello @hadoopranger, if you are using Cloudera Manager to manage your services, you can add new services by following this document: https://docs.cloudera.com/cdp-private-cloud-base/7.1.9/managing-clusters/topics/cm-adding-a-service.html You can also use the CM API to add services to the cluster: https://docs.cloudera.com/cdp-private-cloud-base/7.1.9/configuring-clusters/topics/cm-api-cluster-automation.html Hope this helps! Cheers!
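A minimal sketch of the CM API call for adding a service; the host, credentials, API version, cluster name, and service name/type below are all assumptions to adapt:

```bash
# Sketch: adding a service to a cluster via the Cloudera Manager REST API.
curl -u admin:admin -X POST \
  -H "Content-Type: application/json" \
  -d '{"items": [{"name": "solr1", "type": "SOLR"}]}' \
  "https://cm-host:7183/api/v54/clusters/Cluster1/services"
```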
06-01-2024
12:49 AM
1 Kudo
There has been an update to the external database Hive is using, and the new version instance is running on a new port; this config change was made while the whole cluster was stopped. @smruti Can you please help with how the data is loaded from the Hive Metastore into PostgreSQL? Is it through psycopg, and where is it stored on the database server? We need to recover the missing data from the older version of the DB but cannot locate the source of this corrupted data on the DB server.
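For context, the Hive Metastore persists its data directly into relational tables in the configured backend database through DataNucleus/JDBC (psycopg is not involved), so the metadata should be visible with plain SQL. A sketch of inspecting those tables; the host, credentials, and database name are assumptions:

```bash
# Sketch: inspecting Hive Metastore tables in the backing PostgreSQL database.
psql -h db-host -p 5432 -U hive -d metastore \
     -c 'SELECT "DB_ID", "NAME" FROM "DBS" LIMIT 10;'
psql -h db-host -p 5432 -U hive -d metastore \
     -c 'SELECT "TBL_ID", "TBL_NAME", "DB_ID" FROM "TBLS" LIMIT 10;'
```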