Member since: 11-01-2019
Posts: 146
Kudos Received: 5
Solutions: 4
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 127 | 09-26-2025 08:06 AM |
| | 1481 | 03-04-2025 08:16 AM |
| | 2527 | 03-23-2023 03:30 PM |
| | 2952 | 02-01-2023 12:44 PM |
10-13-2025
10:41 AM
Hi @ishashrestha , Yes, the user needs to have permission to write to that directory. You can test by running the job as another user that already has those permissions (a minimal check is sketched below). Let me know if it works. Best Regards
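A minimal way to test this, assuming the target is an HDFS directory; the path and user name are placeholders, not values from the thread:

```bash
# Inspect the directory's owner and permissions (placeholder path).
hdfs dfs -ls -d /data/target_dir

# Try a write as a user known to have access (requires sudo rights on the client host).
sudo -u other_user hdfs dfs -touchz /data/target_dir/_perm_test
sudo -u other_user hdfs dfs -rm /data/target_dir/_perm_test
```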
09-26-2025
08:06 AM
Hi @champa , If you are connecting to a Public Cloud Virtual Warehouse, you should use port 443 and the HiveServer2 endpoint. Check this article: https://lighthouse.cloudera.com/s/article/How-to-solve-the-ClouderaDriverSupport-1170-Unexpected-response-received-from-server-ODBC-connection-error-with-Datahub-cluster

If you are not using public cloud and are trying to connect to an on-premises cluster, there are 3 common causes for this:

1. SSL/TLS Configuration Mismatch (Most Likely). The error explicitly mentions SSL, making this the primary area to check. Knox Gateway typically runs on HTTPS (port 8443), so your client connection string must be configured for SSL/TLS. Action: ensure your JDBC/ODBC connection string includes the necessary parameters: ssl=true (or equivalent) and, if a self-signed certificate is used, potentially a trustedcerts or trustStore path.

2. Missing Certificate or Wrong Transport Mode. The client often needs to trust the Knox certificate. Action 1 (Trust the certificate): ensure the Knox public certificate is correctly imported into your client machine's Java TrustStore or configured in the ODBC/JDBC driver settings; see the keytool sketch below. Action 2 (Check Hive): verify the Hive configuration in your cluster manager (Cloudera Manager or Ambari); the following properties should be set: hive.server2.transport.mode = http and hive.server2.thrift.http.path = /cliservice (this is the default path). Action 3 (Check Connection String): ensure your connection URL uses HTTP transport and specifies the path, for example: jdbc:hive2://<knox_host>:8443/;ssl=true;transportMode=http;httpPath=gateway/default/hive

3. Incorrect Host, Port, or Path (Knox Topology). The connection string must correctly route through the Knox topology:

| Parameter | What to Check |
|---|---|
| Host/Port | The Knox Gateway hostname and port (usually 8443), not the HiveServer2 hostname and port. |
| Gateway Path | The path defined in your Knox topology, typically /gateway/<topology_name>/hive (where <topology_name> is usually default). |
| Knox Topology | On the Knox server, verify that the Hive service role is correctly defined in the active hive.xml or default.xml topology file, pointing to the correct HS2 host and port. |

Let me know if this helps. Best Regards
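As a concrete illustration of the certificate step, here is a minimal sketch for importing the Knox public certificate into a Java truststore; the host name, file paths, and truststore password are placeholders, not values from the original question:

```bash
# Fetch the certificate Knox presents on its gateway port (host/port are placeholders).
openssl s_client -connect knox_host.example.com:8443 -showcerts </dev/null \
  | openssl x509 -outform PEM > knox.pem

# Import it into the truststore the JDBC/ODBC client is configured to use.
# "changeit" is the default JKS password; adjust paths to your environment.
keytool -importcert -alias knox-gateway -file knox.pem \
  -keystore /path/to/truststore.jks -storepass changeit -noprompt
```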
09-26-2025
07:45 AM
Hi @ishashrestha , Another point: the path user/myname/.cm/distcp-staging/... suggests this DistCp job was initiated or managed by Cloudera Manager (CM). That process uses a separate staging directory, but it still relies on the NodeManager's local directories for intermediate sorting, which is where the error is occurring (SequenceFile$Sorter.sort). Confirm that the user myname is correctly mapped and has the necessary permissions across the cluster; a quick check is sketched below. While you dismissed permissions, an intermittent Kerberos ticket issue or a transient user mapping problem on one specific node could cause this weekly failure. The next steps should focus on reviewing the NodeManager health and logs for the specific nodes that failed, checking the status of the local scratch directories on those nodes, and correlating the failure time with any scheduled system maintenance. Best Regards
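A quick sketch of the user-mapping check on a suspect node; the user name comes from the staging path in the post, and the commands assume a Kerberized HDFS client is installed:

```bash
# Is there a valid Kerberos ticket in the user's credential cache?
klist

# How does HDFS resolve this user's group membership?
hdfs groups myname

# Is the CM staging area owned/writable by that user?
# (The deeper path was truncated in the post, so only the prefix is checked.)
hdfs dfs -ls -d /user/myname/.cm/distcp-staging
```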
09-26-2025
07:31 AM
Hi @ishashrestha , Since the issue happens only intermittently, it is most likely that one of the worker nodes has a local disk issue. Because the job is executed as a MapReduce job, and YARN creates its containers with local scratch directories, one of the nodes probably has this problem. Please check yarn.nodemanager.local-dirs and mapreduce.cluster.local.dir to find the location of the scratch dirs, then confirm that each worker node has enough disk space and the correct permissions (a quick check is sketched below). Let me know if this is the case. Best Regards
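A minimal per-node sketch of that check; the directory list is a placeholder for whatever yarn.nodemanager.local-dirs and mapreduce.cluster.local.dir resolve to in your cluster:

```bash
# Substitute the actual directories from yarn.nodemanager.local-dirs.
for d in /data1/yarn/nm /data2/yarn/nm; do
  echo "== $d =="
  df -h "$d"    # enough free space on the backing filesystem?
  ls -ld "$d"   # owned by the yarn user and writable?
done
```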
09-23-2025
09:40 AM
Hi @Jaguar , This issue seems to be related to Knox. Service mismatch: the URL in the error (.../gateway/dt/knoxtoken/api/v1/token) suggests a conflict between the Hive Metastore and the Knox Token Service. This often happens after a cluster upgrade where a new token service is implemented but the clients (in this case, the Metastore) are still configured for the old one. Solution: check the Knox configuration in your cluster management tool (e.g., Cloudera Manager). Verify that the HiveServer2 and Hive Metastore services are using the correct Knox topology and that the token service settings match the Knox server; a quick endpoint check is sketched below. Let me know if this helps.
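A hedged way to confirm that the endpoint from the error is actually served by Knox; the host and credentials are placeholders, while the /gateway/dt/knoxtoken/api/v1/token path is taken from the error itself:

```bash
# Request a token from the same endpoint the Metastore is calling.
# -k skips certificate validation for a quick test only; use a proper truststore afterwards.
curl -k -u your_user https://knox_host.example.com:8443/gateway/dt/knoxtoken/api/v1/token
```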
07-01-2025
05:54 PM
Hi @Hadoop16 , This stack error usually happens when there is an inconsistency in the JDK versions. Check whether HDFS and Hive are running on different JDK versions. You can also try exporting your JAVA_HOME (see the sketch below). Reference: https://community.cloudera.com/t5/Internal/ERROR-quot-Failed-on-local-exception-java-io-IOException-org/ta-p/332526
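A minimal sketch of that check, assuming shell access on the affected hosts; the JDK path is a placeholder:

```bash
# Compare the JVM each host/service actually resolves.
java -version
echo "$JAVA_HOME"

# Point the session at one consistent JDK (placeholder path -- adjust to your install).
export JAVA_HOME=/usr/java/jdk-11
export PATH="$JAVA_HOME/bin:$PATH"
```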
07-01-2025
05:44 PM
Hi @AdrianMSA , could you please follow the steps below and check if it works?
1. Verify that a Hue Load Balancer (LB) is configured for the Hue service in Cloudera Manager. If not present, add a Hue LB instance.
2. Apply the hue_load_balancer_safety_valve configuration to the Hue LB instance with the following parameter, to ensure correct proxy header handling: SetEnv proxy-sendcl 1
3. Restart the Hue Load Balancer instance to apply the new configuration, then restart both the Hue and Knox services to ensure the new settings are picked up and active.
4. Verify that the errors disappear and Hive databases are listed correctly in the Hue UI (a quick way to confirm the directive took effect is sketched below).
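As a hedged follow-up to step 4: the Hue Load Balancer role is an Apache httpd instance managed by Cloudera Manager, so you can check that the safety-valve directive reached the generated config; the process-directory glob below is an assumption about a typical CM agent layout:

```bash
# On the Hue LB host, look for the directive in the generated httpd configuration.
# The path pattern is an assumption -- adjust to your CM agent's process directory.
grep -R "proxy-sendcl" /var/run/cloudera-scm-agent/process/*HUE_LOAD_BALANCER*/ 2>/dev/null
```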
07-01-2025
05:23 PM
Hi @Jackallboy , Could you please provide more details about your cluster?
- Version
- Which client are you connecting from?
- The entire stack error
Best, Cristian
04-10-2025
05:16 PM
1 Kudo
Hi @JediBrooker , The query you could use should be something like this:

```sql
WITH org_hierarchy_anchor AS (
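-- Anchor member: direct supervisor -> subordinate pairs.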
SELECT
sup.organisation_id AS sup_id,
sup.organisation_name AS sup_name,
sub.organisation_id AS sub_id,
sub.organisation_name AS sub_name,
CAST(sub.organisation_name AS STRING) AS hierarchy_path
FROM organisation sup
JOIN relationship r ON sup.organisation_id = r.relationship_id
JOIN organisation sub ON r.relationship_orgid = sub.organisation_id
),
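-- Recursive member: walk one level further down the hierarchy.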
org_hierarchy_recursive AS (
SELECT
sup_id,
sup_name,
sub_id,
sub_name,
hierarchy_path
FROM org_hierarchy_anchor
UNION ALL
SELECT
oh.sup_id,
oh.sup_name,
s.organisation_id AS sub_id,
s.organisation_name AS sub_name,
CONCAT(oh.hierarchy_path, '->', s.organisation_name) AS hierarchy_path
FROM org_hierarchy_recursive oh
JOIN relationship r ON oh.sub_id = r.relationship_id
JOIN organisation s ON r.relationship_orgid = s.organisation_id
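-- Cycle guard: skip an organisation that already appears in the path.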
WHERE INSTR(oh.hierarchy_path, CAST(s.organisation_name AS STRING)) = 0
)
SELECT sup_id, sup_name, sub_id, sub_name, hierarchy_path
FROM org_hierarchy_recursive
ORDER BY sup_id, sub_id;
```

Please check the column names and table names; I just used what you provided here. Let me know if it works. Best, Cristian Barrueco
03-04-2025
08:16 AM
Hi @rosejo , the IllegalStateException accompanied by a MalformedURLException related to nested JAR URLs indicates a problem with handling specific URL formats, typically within a Java environment. This issue often arises when an application classloader does not directly support the nested structure of JAR files referenced within a JAR URL, which is relatively common in environments using frameworks or libraries that dynamically load classes from JAR files.

Could you tell us if you are using the Cloudera JDBC Hive driver connector? You can download the driver from here: https://www.cloudera.com/downloads/connectors/hive/jdbc/2-6-26.html

If you are using it, please check that it is the latest version, and review the points below to fix the issue:
- Ensure the classloader being used supports the JAR URL format required by the application. Some environments or containers may need specific configuration or a different classloader to handle nested JARs.
- Check if unpacking the nested JAR files into a single, flat structure helps resolve the issue (a sketch follows below), as some environments work better with flat directory structures than with nested JAR files.
- Verify whether there is an update or configuration option available in the framework or library being used that can handle nested JARs properly.

Please let us know if this answers your question. Regards, Cristian Barrueco
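A minimal sketch of the "unpack into a single structure" suggestion; the archive name, paths, and application entry point are placeholders, not values from this thread:

```bash
# Extract every JAR shipped inside the driver archive into one flat directory
# (the archive name is an assumption -- use the file you actually downloaded).
mkdir -p hive-jdbc-flat
unzip -j ClouderaHiveJDBC-2.6.26.zip "*.jar" -d hive-jdbc-flat

# Run the client with a flat classpath instead of nested jar: URLs (placeholders).
java -cp "hive-jdbc-flat/*:/path/to/your-app.jar" com.example.YourApp
```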