Member since: 09-29-2015
Posts: 5226
Kudos Received: 22
Solutions: 34
My Accepted Solutions
| Title | Views | Posted |
| --- | --- | --- |
|  | 1394 | 07-13-2022 07:05 AM |
|  | 3585 | 08-11-2021 05:29 AM |
|  | 2328 | 07-07-2021 01:04 AM |
|  | 1575 | 07-06-2021 03:24 AM |
|  | 3546 | 06-07-2021 11:12 PM |
09-15-2020
03:39 AM
Hello @iEason8 , thank you for reaching out to the Community! I've noticed that you are using the v13 API with CM 6.3; however, CM 6.3 serves the v33 API. Please see this reference doc for the API. If you adjust your request to the new API version, do you still see the issue? Kind regards: Ferenc
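To illustrate the point above, here is a minimal sketch of how only the version segment of the request URL changes; the host name, port, and credentials are placeholders, not values from the original thread:

```shell
# Hypothetical CM host -- substitute your own values.
CM_HOST="cm-host.example.com"
API_VERSION="v33"   # CM 6.3 serves the v33 API (v13 is much older)
BASE_URL="http://${CM_HOST}:7180/api/${API_VERSION}"

# The endpoint path stays the same; only the version segment changes:
echo "${BASE_URL}/clusters"

# Against a live Cloudera Manager you would fetch it with curl, e.g.:
# curl -u admin:admin "${BASE_URL}/clusters"
```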
09-14-2020
08:43 AM
Hello @Yuriy_but , thank you for reaching out to the Community. Which CDH version are you using, please? For CDH 6.3, please find here the related documentation on how to manually configure TLS encryption for CM. Have you followed the steps from the documentation? Thank you: Ferenc
09-10-2020
01:11 AM
Hello @AJM , thank you for reaching out regarding what happens after your subscription license expires. Please specify the exact version you plan to use so we can give you a more accurate answer. In this thread we discussed that, for example, in CDH6 with a trial license you can install Cloudera Express, which has a 100-node limitation. The documentation here describes what happens once a license expires:

- Cloudera Enterprise Trial: enterprise features are disabled.
- Cloudera Enterprise: most enterprise features, such as Navigator, reports, Auto-TLS, and fine-grained permissions, are disabled. Key Trustee KMS and the Key Trustee Server continue to function on the cluster, but you cannot change any configurations or add any services. Navigator stops collecting audits, but existing audits remain on disk in the Cloudera Manager audit table. On license expiration, the license expiration banner displays which enterprise features have been disabled.

With our latest product, CDP, you can find the licensing information here. Please let us know if you need more information on this topic. Thank you: Ferenc
09-04-2020
03:37 AM
Hello @Atradius , thank you for reaching out to the Community with your issue of having both NameNodes down. Do you see entries in your NN log from JvmPauseMonitor saying "Detected pause in JVM or host machine" with a value larger than 1000ms, please? That can be an indication that your service is running out of heap. If it is the NameNode, the short-term solution is to increase the heap and restart the service. A long-term solution is to identify why you ran out of heap, e.g. are you facing a small-files issue? Please read article [1] on how to tackle this. Losing quorum might be caused by a ZooKeeper service issue, when ZK is not in quorum. Please check the ZK logs as well. Please let us know if you need more input to progress with your investigation. Best regards: Ferenc [1] https://blog.cloudera.com/small-files-big-foils-addressing-the-associated-metadata-and-application-challenges/
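As a quick way to check for the pauses mentioned above, here is a sketch that extracts the reported pause durations; the sample log line and its path are assumed for illustration, and on a real node you would point this at the NameNode log (typically under /var/log/hadoop-hdfs/) instead:

```shell
# A sample NameNode log line (format assumed from typical JvmPauseMonitor
# output), written to a temp file so the extraction step is self-contained.
cat > /tmp/nn_sample.log <<'EOF'
2020-09-04 03:10:12,345 INFO org.apache.hadoop.util.JvmPauseMonitor: Detected pause in JVM or host machine (eg GC): pause of approximately 4012ms
EOF

# Pull out the reported pause durations; values well above 1000ms suggest
# the process is struggling, often due to heap pressure.
grep "Detected pause in JVM or host machine" /tmp/nn_sample.log \
  | grep -oE "approximately [0-9]+ms"
```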
08-31-2020
01:03 AM
Hello @Aswinnxp , thank you for letting us know that after registering on the Support Portal you have difficulties raising a case. Do you have a valid Cloudera Subscription, please? It is required to be able to raise a Support Case. Once you have a Cloudera Subscription, the steps to register on the portal are: 1. Register at: https://sso.cloudera.com/register.html You will be sent an email with a validation link. 2. Please click this link to complete your registration. 3. Set your password. 4. Log in at https://sso.cloudera.com. I understand that you've done these steps. Until someone reaches out to you following your registration, we can do one more thing: do you have a colleague with working portal access who can raise a case, please? If so, please have them raise a case to get further assistance on your access. Thank you: Ferenc
08-27-2020
05:37 AM
Hello @tresk , thank you for reaching out to the Community. Wondering if this is the doc you were looking for: "The JDBC connection string for connecting to a remote Hive client requires a host, port, and Hive database name. You can optionally specify a transport type and authentication." jdbc:hive2://<host>:<port>/<dbName>;<sessionConfs>?<hiveConfs>#<hiveVars> Please let us know! Thank you: Ferenc
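To make the quoted connection-string format concrete, here is a minimal sketch that assembles a URL from its required pieces; the host, port, and database name are placeholder values, and the optional session confs, Hive confs, and Hive vars are omitted:

```shell
# Hypothetical HiveServer2 coordinates -- substitute your own.
HIVE_HOST="hive-host.example.com"
HIVE_PORT=10000
DB_NAME="default"

# Assemble the JDBC URL from the required parts of the documented format:
# jdbc:hive2://<host>:<port>/<dbName>;<sessionConfs>?<hiveConfs>#<hiveVars>
JDBC_URL="jdbc:hive2://${HIVE_HOST}:${HIVE_PORT}/${DB_NAME}"
echo "${JDBC_URL}"

# With a running HiveServer2 you could then connect via beeline, e.g.:
# beeline -u "${JDBC_URL}" -n <user> -p <password>
```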
08-27-2020
05:08 AM
Hello @Suyog1981 , thank you for reaching out to the Community. I understand that your issue is: after upgrading from CDH 5.12 to CDH 6.3.3, an MR2 job connecting to HBase is failing, and it seems the runtime is still pointing to CDH 5.12. Can you please check whether any of the links under /etc/alternatives or entries under /var/lib/alternatives still point to CDH 5.12 paths on the node where the container of the MR job is failing? E.g. use: ls -l /etc/alternatives | grep CDH-5.12 to list symlinks whose targets still reference CDH 5.12 (note that grep on the links themselves would search the target files' contents, not their paths), and grep -l CDH-5.12 /var/lib/alternatives/* to find alternative definitions that still do. Thank you: Ferenc
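The check above can be sketched as follows; the symlink and parcel path are hypothetical and are created here only so the listing step is self-contained, whereas on the affected node you would inspect /etc/alternatives itself:

```shell
# Build a sample alternatives-style symlink to illustrate (paths are
# hypothetical, mimicking a stale CDH 5.12 parcel reference).
mkdir -p /tmp/alt_demo
ln -sfn /opt/cloudera/parcels/CDH-5.12.0/bin/hadoop /tmp/alt_demo/hadoop

# List link names whose targets still reference CDH 5.12; in "ls -l" output
# the link name is the third-from-last field ("name -> target").
ls -l /tmp/alt_demo | grep "CDH-5.12" | awk '{print $(NF-2)}'
```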
08-27-2020
02:05 AM
Hello @Mohsenhs , thank you for showing interest in the CCA159. Based on the description of the exam: "CCA159 is a hands-on, practical exam using Cloudera technologies. Each user is given their own CDH6 (currently 6.1.1) cluster pre-loaded with Impala, HiveServer1 and HiveServer2." You can download the required Cloudera product following the instructions from the documentation: "A 60-day trial can be enabled to provide access to the full set of Cloudera Enterprise features." Please let us know if this answers your inquiry! Thank you: Ferenc
08-26-2020
02:17 AM
Hello @AmroSaleh , thank you for reaching out on the Community and raising your enquiry on Sentry-HDFS. Have you seen the "Authorization with Apache Sentry" documentation, please? For HDFS-Sentry synchronization to work, you must use the Sentry service, not policy file authorization. See Synchronizing HDFS ACLs and Sentry Permissions for more details. Let us know if you went through these docs and still need any additional information. Thank you: Ferenc
08-26-2020
01:22 AM
Hello @KSKR , thank you for raising the question on "how to fetch the CPU utilization for a Spark job programmatically". One way to do this is via the Spark REST API. You should consider whether you need live data or an analysis after the application has finished running. While the application is running, you can connect to the driver and fetch the live data. Once the application has finished, you can either parse the event log files (JSON) for the CPU time, or use the Spark REST API and let the Spark History Server serve you the data. What is your exact requirement? What would you like to achieve? Thank you: Ferenc
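As a sketch of the History Server route mentioned above, the per-stage CPU time can be pulled out of the stages endpoint's JSON; the sample payload below is abbreviated and assumed (only `executorCpuTime`, reported in nanoseconds, is a real field of the Spark monitoring REST API), and the server URL in the comment uses placeholder host and application-id values:

```shell
# Abbreviated sample of what GET /api/v1/applications/<app-id>/stages on the
# Spark History Server returns, saved to a file so the extraction step below
# is self-contained.
cat > /tmp/stages.json <<'EOF'
[ { "stageId": 0, "name": "count at Demo.scala:10", "executorCpuTime": 1250000000 } ]
EOF

# Pull the executor CPU time (nanoseconds) per stage; against a live server
# you would replace the file with the output of e.g.:
#   curl http://<history-server>:18080/api/v1/applications/<app-id>/stages
grep -oE '"executorCpuTime": [0-9]+' /tmp/stages.json | awk '{print $2}'
```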