Member since: 08-10-2022
Posts: 185
Kudos Received: 23
Solutions: 9

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 1279 | 03-26-2024 05:06 AM |
| | 1384 | 03-21-2024 03:01 AM |
| | 1778 | 08-08-2023 11:33 PM |
| | 1886 | 07-17-2023 10:26 PM |
| | 1051 | 07-17-2023 02:26 AM |
06-14-2023
03:28 AM
@MohammedMustaq Make sure you are using the latest ODBC driver and that it is set up as described in the install guide (a placeholder DSN sketch is included below): https://docs.cloudera.com/documentation/other/connectors/hive-odbc/2-6-16/Cloudera-ODBC-Driver-for-Apache-Hive-Install-Guide.pdf
Could you please share your configuration?
Cheers,
Tarun
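For reference, a minimal odbc.ini DSN for the Cloudera Hive ODBC driver might look like the sketch below. The driver path, host, credentials, and auth settings are placeholders, so adjust them to your install and cluster, and double-check the key names against the guide above.

```ini
# Hypothetical DSN; all values are placeholders.
[CLDR_Hive_DSN]
# Adjust the driver path to where the driver is installed on your system
Driver=/opt/cloudera/hiveodbc/lib/64/libclouderahiveodbc64.so
Description=Cloudera ODBC Driver for Apache Hive
HOST=hiveserver2.example.com
PORT=10000
HiveServerType=2
# AuthMech 3 = user name and password; pick the mechanism your cluster uses
AuthMech=3
UID=your_user
PWD=your_password
# ThriftTransport 1 = SASL; 2 = HTTP
ThriftTransport=1
```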
06-01-2023
10:24 PM
Hi @hanumanth, could you please elaborate on the issue? Are you getting any errors? If so, please share the error message.
Regards,
Tarun
05-08-2023
02:27 AM
Hello @jijy, could you please share your CREATE TABLE statement and some sample data?
Regards
04-17-2023
09:33 PM
Hi @Abdul_ ,
It looks like the data contains a newline character (\n) within a field value, which splits that record into two rows and causes the problem. Can you modify the data to remove the "\n"? In that case, the CREATE statement you are using is correct.
If modifying the data is not possible, you may use LazySimpleSerDe instead (a rough sketch follows at the end of this reply). However, it may not be as performant as the OpenCSVSerde for large datasets.
Hope this helps,
Tarun
Was your question answered? Make sure to mark the answer as the accepted solution. If you find a reply useful, say thanks by clicking on the thumbs-up button.
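As a rough illustration of the LazySimpleSerDe option, a table definition could look like this. The table name, columns, delimiter, and location below are hypothetical, not taken from your data.

```sql
-- Hypothetical example; replace the table name, columns, delimiter, and location with your own.
CREATE EXTERNAL TABLE sample_csv (
  id INT,
  name STRING,
  comments STRING
)
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe'
WITH SERDEPROPERTIES (
  'field.delim' = ',',
  'serialization.format' = ','
)
STORED AS TEXTFILE
LOCATION '/data/sample_csv';
```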
04-14-2023
02:36 AM
1 Kudo
@ygbaek ,
This is a known issue and is resolved in releases 7.1.8 and 7.2.16.0. Which CDP version are you using?
It looks like use_start_tls is set to true by default (a workaround sketch follows below): https://github.com/cloudera/hue/blob/master/desktop/conf.dist/hue.ini#L477
Hope this helps,
Tarun
Was your question answered? Make sure to mark the answer as the accepted solution. If you find a reply useful, say thanks by clicking on the thumbs-up button.
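If your release still defaults it to true and your LDAP server does not support StartTLS, one workaround (a sketch; verify the exact section and default for your Hue version) is to set it explicitly in hue.ini:

```ini
[desktop]
  [[ldap]]
    # Explicitly disable StartTLS if your LDAP server does not support it
    use_start_tls=false
```

On a Cloudera Manager managed cluster this would typically go into the Hue Service Advanced Configuration Snippet (Safety Valve) for hue_safety_valve.ini rather than being edited in place.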
04-14-2023
02:28 AM
@salman1214 There is no direct way; however, you can get the details from the backend Hue database by querying the following tables: auth_user, auth_user_groups, useradmin_grouppermission, and useradmin_huepermission (a sample query is sketched below).
Hope this helps,
Tarun
Was your question answered? Make sure to mark the answer as the accepted solution. If you find a reply useful, say thanks by clicking on the thumbs-up button.
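For example, a query along these lines lists users with their groups and the Hue permissions granted to those groups. This is only a sketch: auth_group (which holds the group names) is added to join the tables listed above, and exact column names can differ between Hue versions, so verify against your schema first.

```sql
-- Sketch only: verify table and column names against your Hue backend schema.
SELECT u.username,
       g.name AS group_name,
       p.app,
       p.action
FROM auth_user u
JOIN auth_user_groups ug ON ug.user_id = u.id
JOIN auth_group g ON g.id = ug.group_id
JOIN useradmin_grouppermission gp ON gp.group_id = g.id
JOIN useradmin_huepermission p ON p.id = gp.hue_permission_id
ORDER BY u.username, g.name;
```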
03-09-2023
09:01 PM
Hello @Tushar17,
Have you checked the log at /var/log/cloudera-manager-installer/8.start-cloudera-scm-server.log? Could you please share any stack trace you see there?
01-18-2023
08:14 PM
Hello @snm1523,
Exit code 50 corresponds to the LDAP error 'insufficientAccessRights'.
Cloudera Manager Server must be configured with a Kerberos principal that has privileges to create other accounts in Active Directory. Make sure the Cloudera Manager Server account can create and delete accounts in Active Directory and that it belongs to a Global group.
Ref: https://docs.cloudera.com/cdp-private-cloud-base/7.1.6/security-kerberos-authentication/topics/cm-security-kerberos-enabling-step3-cm-principal.html
Hope this helps,
Tarun
Was your question answered? Make sure to mark the answer as the accepted solution. If you find a reply useful, say thanks by clicking on the thumbs-up button.
01-15-2023
11:36 PM
Hello @prakodi,
For CDH 6.3, you can review this article: https://docs.cloudera.com/documentation/enterprise/6/6.3/topics/cm_bdr_hive_replication.html
Hope this helps,
Tarun
Was your question answered? Make sure to mark the answer as the accepted solution. If you find a reply useful, say thanks by clicking on the thumbs-up button.
01-10-2023
10:46 PM
Hello @baabdullah ,
The error indicates that your DataNode is down. Could you please confirm whether that is the case?
Error: could only be written to 0 of the 1 minReplication nodes. There are 0 datanode(s) running and 0 node(s) are excluded in this operation.
Could you please check the DataNode logs to identify the exact issue? You may also need to check the heap size (one of many possible reasons). A quick way to check for live DataNodes is sketched below.
Hope this helps,
Tarun
Was your question answered? Make sure to mark the answer as the accepted solution. If you find a reply useful, say thanks by clicking on the thumbs-up button.
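To quickly confirm whether any DataNodes are live and registered with the NameNode, you could run something like the following from a cluster host with HDFS access:

```shell
# Full report of DataNode status as seen by the NameNode
hdfs dfsadmin -report

# Or just the live/dead roll-up lines
hdfs dfsadmin -report | grep -E 'Live datanodes|Dead datanodes'
```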