Member since: 09-29-2015
Posts: 4611
Kudos Received: 21
Solutions: 33
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 732 | 08-11-2021 05:29 AM
 | 585 | 07-07-2021 01:04 AM
 | 343 | 07-06-2021 03:24 AM
 | 975 | 06-07-2021 11:12 PM
 | 313 | 06-02-2021 04:06 AM
09-21-2020
05:20 AM
Hello @Mondi, thank you for your interest in trialing CDP. Based on this documentation: "You can try the CDP Private Cloud Base Edition of Cloudera Data Platform for 60 days without obtaining a license key file. To download CDP Private Cloud Base without obtaining a license key file, visit the CDP Private Cloud Base Trial Download page, click Try Now, and follow the download instructions." Hope this helps! Best regards: Ferenc
09-21-2020
05:17 AM
Hello @aryan_dear, thank you for reporting your observations on Impala UNION ALL returning wrong results up to CDH6. The issue you are facing is most likely https://issues.apache.org/jira/browse/IMPALA-7957. The fix for this issue is included from Impala 3.3 onwards, for example in CDP. Please let us know if you need more information on this topic! Kind regards: Ferenc
09-17-2020
05:38 AM
Hello @Yuriy_but , it is good to hear you found the solution and it works for you now! Best regards: Ferenc
09-17-2020
04:21 AM
Hello @Yuriy_but, thank you for the screenshots. Based on the log, I would expect that if the agent were able to read the new configs, the "Agent config" section would reflect your TLS configuration; however, it shows neither the verify cert file nor the enabled TLS setting. I assume you have already tried restarting the agent. Would you mind attempting a hard restart of the agent, in case it has transitioned into a bad state and a normal restart did not take effect? "Warning: The hard_stop and hard_restart commands kill all running managed service processes on the host(s) where the command is run." Please let us know if the agent is able to read the updated configurations after a hard restart. Thank you: Ferenc
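A hedged check on the agent host itself, assuming a package-based install with the default config path, to confirm what the agent actually reads from disk:

# Confirm the TLS-related settings in the agent's own config file
grep -E 'use_tls|verify_cert_file' /etc/cloudera-scm-agent/config.ini
# The exact hard_stop/hard_restart invocation depends on your OS and init system; see the documentation quoted above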
09-17-2020
03:42 AM
Hello @Ananya_Misra, I researched your issue further and found a case in which the user applied the following workaround after hitting the same exception: "unregister the version, then register the version again to fix the issue". I also found the AMBARI-23058 jira, which seems to be related and describes: after cleaning up with 'yum-complete-transaction --cleanup-only', Ambari still could not proceed further with the same error. When the user ran yum install manually, it went through successfully. Finally, /var/lib/yum/transaction* had to be deleted manually for Ambari to proceed with the installation. These transaction files were very old and had nothing to do with any recent installation, yet Ambari still would not proceed. You can check whether you have old transaction files on your node with: ls -lart /var/lib/yum/transaction* Let us know, please, if this helped you to overcome the obstacle! Kind regards: Ferenc
09-17-2020
02:42 AM
Hello @iEason8, you've got a good point. The main page for the doc (which I cited earlier) says you need authentication for API calls in CM6; however, the section you found explicitly says it does not require authentication. So I did some research in our records and found that the desired behaviour for CM6 is to enforce authentication for API calls (internal jira reference OPSAPS-44459) and that there is a bug in our doc (internal jira reference DOCS-4659) that explains "Doc wrongly suggests that clientConfig API call does not require any authentication"; this jira is unresolved at the moment. Sorry for the inconvenience caused. You need to be authenticated to be able to use the API in CM6. Kind regards: Ferenc
09-17-2020
01:47 AM
Hi @iEason8, I've managed to get both the config and the clientConfig using curl. The cluster name in my case was "Cluster 1" and the service name was "KAFKA-1", so I had to URL-encode the space. You will need to change the credentials, the CM address, the cluster name, and the service name for your environment. Make sure the user you use has the necessary privileges to get the configs: "The Cloudera Manager API uses HTTP basic access authentication. It accepts the same user credentials as the web interface. Different users may have different levels of access, as defined by their roles. (See the user management API calls for more.) With every authenticated request, the server returns a session cookie, which can be subsequently used for authentication."

curl -X GET -H "Content-Type:application/json" -u [username]:[password] \
  -d '{ "items": [ {"name": "enable_config_alerts", "value": "true"} ] }' \
  'http://[cm_address]:7180/api/v33/clusters/Cluster%201/services/KAFKA-1/clientConfig'

Please let me know if it works for you too! Thank you: Ferenc
09-16-2020
01:23 AM
Hello @Yuriy_but, thank you for this information. Did you enable "Use TLS Encryption for Agents" in CM, please? Did you restart both CM and the agent on the host after making these changes? To verify that the configuration change worked, the documentation describes: "In the Cloudera Manager Admin Console, go to Hosts > All Hosts. If you see successful heartbeats reported in the Last Heartbeat column after restarting the agents, TLS encryption is working properly." Kind regards: Ferenc
09-16-2020
12:36 AM
Hello @Ananya_Misra, thank you for reaching out to the Community! Please check whether any packages from the old version are left on the host where the installation fails. Please remove all packages belonging to the 2.3.4 version and attempt the installation on this node again. Hope this helps! Kind regards: Ferenc
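A minimal sketch, assuming an RPM-based OS and that the leftover packages carry the 2.3.4 version string from the post in their names (the removal command is only an example):

# List installed packages whose name or version still contains 2.3.4
rpm -qa | grep '2\.3\.4'
# Once identified, remove them, e.g.:
# yum remove [leftover-package-name]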
09-15-2020
03:39 AM
Hello @iEason8, thank you for reaching out to the Community! I've noticed that you are using the v13 API with CM6.3; however, CM6.3 uses the v33 API. Please see this reference doc for the API. If you adjust your request to the new API version, do you still have the issue? Kind regards: Ferenc
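As a quick check before adjusting the request, Cloudera Manager reports the highest API version it supports at the /api/version endpoint (the port and credentials below are placeholders):

# Ask Cloudera Manager which API version it supports
curl -u [username]:[password] 'http://[cm_address]:7180/api/version'
# The response (e.g. v33) is the version to use in subsequent API paths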
09-14-2020
08:43 AM
Hello @Yuriy_but , thank you for reaching out to the Community. What is the CDH version you are using, please? For CDH6.3 please find here the related documentation on how to manually configure TLS Encryption for CM. Did you follow the steps from the documentation, please? Thank you: Ferenc
09-14-2020
08:30 AM
Hello @Anto, thank you for reporting that you have issues replacing the Spark2 version on CDH6. Unfortunately, this is by design. Please see this thread that explains it, and this documentation on the compatibility matrix. In short: Spark2 comes bundled with CDH6, so you cannot replace it with a different version or install it separately. Hope this helps! Kind regards: Ferenc
09-10-2020
01:11 AM
Hello @AJM, thank you for reaching out regarding what happens after your subscription license expires. Please specify the exact version you are planning to use so we can provide a more accurate answer. In this thread, we discussed that e.g. in CDH6 with a trial license you can install Cloudera Express, which has a 100-node limitation. Once a license expires, the documentation describes here what happens:
- Cloudera Enterprise Trial - Enterprise features are disabled.
- Cloudera Enterprise - Most enterprise features such as Navigator, reports, Auto-TLS, and fine-grained permissions are disabled. Key Trustee KMS and the Key Trustee server will continue to function on the cluster, but you cannot change any configurations or add any services. Navigator will not continue to collect audits, but existing audits will remain on disk in the Cloudera Manager audit table. On license expiration, the license expiration banner displays which enterprise features have been disabled.
With our latest product, CDP, you can find the licensing information here. Please let us know if you need more information on this topic. Thank you: Ferenc
09-04-2020
03:37 AM
Hello @Atradius, thank you for reaching out to the Community with your issue of having both NameNodes down. Do you see entries in your NameNode log from JvmPauseMonitor saying "Detected pause in JVM or host machine" with a value larger than 1000ms, please? That can be an indication that your service is running out of heap. If it is the NameNode, the short-term solution is to increase the heap and restart the service. A long-term solution is to identify why you ran out of heap; for example, are you facing a small-files issue? Please read article [1] about how to tackle this. Losing quorum might be caused by a ZooKeeper service issue, when ZK is not in quorum, so please check the ZK logs as well. Please let us know if you need more input to progress with your investigation. Best regards: Ferenc [1] https://blog.cloudera.com/small-files-big-foils-addressing-the-associated-metadata-and-application-challenges/
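A minimal sketch for spotting such pauses, assuming a Cloudera Manager managed NameNode with the default log location (adjust the path to your deployment):

# Look for JVM pauses reported by the NameNode
grep 'Detected pause in JVM or host machine' /var/log/hadoop-hdfs/*NAMENODE*.log*
# Pauses well above 1000ms, especially in bursts, often point to heap pressure or long GC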
08-31-2020
01:05 AM
Hi @Aswinnxp, I replied to your enquiry in this thread. Best regards: Ferenc
08-31-2020
01:03 AM
Hello @Aswinnxp, thank you for letting us know that after registering for the Support Portal you have difficulties raising a case. Do you have a valid Cloudera Subscription, please? It is required to be able to raise a Support Case. Once you have a Cloudera Subscription, the steps to register for the portal are: 1. Register at https://sso.cloudera.com/register.html - you will be sent an email with a validation link. 2. Click this link to complete your registration. 3. Set your password. 4. Log in at https://sso.cloudera.com . I understand that you've done these steps. Until someone reaches out to you following your registration, we can do one more thing: do you have a colleague who has working portal access and can raise a case, please? In that case, please have a case raised to get further assistance with your access. Thank you: Ferenc
08-27-2020
05:37 AM
Hello @tresk , thank you for reaching out to the Community. Wondering if this is the doc you were looking for: "The JDBC connection string for connecting to a remote Hive client requires a host, port, and Hive database name. You can optionally specify a transport type and authentication." jdbc:hive2://<host>:<port>/<dbName>;<sessionConfs>?<hiveConfs>#<hiveVars> Please let us know! Thank you: Ferenc
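A hedged, filled-in example of that template, with a placeholder host, the default HiveServer2 port (10000), and ssl=true as a sample session configuration:

# Connect with beeline using the JDBC URL pattern above
beeline -u "jdbc:hive2://hs2-host.example.com:10000/default;ssl=true" -n [username] -p [password]
# <sessionConfs> such as ssl=true follow the database name; <hiveConfs> would come after a '?' separator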
08-27-2020
05:08 AM
Hello @Suyog1981, thank you for reaching out to the Community. I understand that your issue is: after upgrading from CDH 5.12 to CDH 6.3.3, an MR2 job that connects to HBase is failing, and it seems that the runtime is still pointing to CDH 5.12. Can you please check whether any of the links under /etc/alternatives or /var/lib/alternatives are still pointing to CDH-5.12 paths on the node where the container of the MR job is failing? E.g. use: grep CDH-5.12 * | awk -F ':' '{print $1}' Thank you: Ferenc
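Since the alternatives entries are symlinks, a hedged complementary check is to list the link targets directly (paths assume a parcel-based install):

# List alternatives symlinks whose target still points at a CDH-5.12 parcel path
ls -l /etc/alternatives | grep 'CDH-5.12'
# Any hits show which binaries or configs still resolve to the old release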
08-27-2020
02:05 AM
Hello @Mohsenhs, thank you for showing interest in the CCA159 exam. Based on the description of the exam: "CCA159 is a hands-on, practical exam using Cloudera technologies. Each user is given their own CDH6 (currently 6.1.1) cluster pre-loaded with Impala, HiveServer1 and HiveServer2." You can download the required Cloudera product following the instructions from the documentation: "A 60-day trial can be enabled to provide access to the full set of Cloudera Enterprise features." Please let us know if it answers your inquiry! Thank you: Ferenc
08-26-2020
02:17 AM
Hello @AmroSaleh, thank you for reaching out on the Community and raising your enquiry on Sentry-HDFS. Have you seen the "Authorization with Apache Sentry" documentation, please? For HDFS-Sentry synchronization to work, you must use the Sentry service, not policy file authorization. See "Synchronizing HDFS ACLs and Sentry Permissions" for more details. Let us know if you went through these docs and still need any additional information. Thank you: Ferenc
08-26-2020
01:22 AM
Hello @KSKR, thank you for raising the question of "how to fetch the CPU utilization for a Spark job programmatically". One way to do this is via the Spark REST API. You should first consider whether you need live data or an analysis after the application has finished running. While the application is running, you can connect to the driver and fetch the live data. Once the application has finished, you can either parse the event log files (JSON) for the CPU time, or use the Spark REST API and let the Spark History Server serve the data. What is your exact requirement? What would you like to achieve? Thank you: Ferenc
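A minimal sketch of the REST approach, assuming a finished application served by the Spark History Server on its default port 18080 (host and application ID are placeholders):

# Query per-executor metrics for a finished application via the Spark History Server REST API
curl 'http://[history_server_host]:18080/api/v1/applications/[application_id]/executors'
# The returned JSON includes fields such as totalDuration and totalGCTime per executor,
# which can be aggregated to estimate the CPU/task time spent by the job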
08-26-2020
01:14 AM
Hello @KSKR, thank you for your inquiry about how to profile Spark workloads. Apart from the Spark Web UI (which parses the event log file), you can use the same event log file along with the Spark application log (container logs) to compare the same or similar application runs with each other. This can reveal tuning opportunities, like: "my application was running much faster a month ago, what changed?" Cloudera has a product that does this for you, along with other goodies, called Workload XM. Please let me know if you need more information on this topic. Thank you: Ferenc
08-25-2020
07:24 AM
Hello @xinfengz, thank you for your interest in the installation of CM 6.3.3. Based on the documentation under "Managing Licenses", you have the below options: when you install Cloudera Manager, you can select among the following editions: Cloudera Express (no license required), a 60-day Cloudera Enterprise trial license, or Cloudera Enterprise (which requires a license). To obtain a Cloudera Enterprise license, fill in this form or call 866-843-7207. Please let us know if it addresses your inquiry. Thank you: Ferenc
08-25-2020
05:52 AM
Hello @Love-Nifi and @vchhipa, thank you for posting your inquiry about timeouts. Without the full log, I can provide only some "if you see this, do that" kind of instructions.

If you see an ERROR message with org.apache.nifi.controller.UninheritableFlowException: Failed to connect node to cluster because local flow is different than cluster flow, then follow the steps below to resolve the issue:
1. Go to NiFi UI > Global Menu > Cluster.
2. Check which host is the coordinator and log in to that host on the shell.
3. Go to the flow.xml.gz file location. [default location is /var/lib/nifi/conf/]
4. Copy flow.xml.gz to the disconnected node and replace the original flow.xml.gz with the copied flow.xml.gz file.
5. Check permissions and ownership of the newly copied flow.xml.gz file and then restart NiFi on the disconnected node only.

If you are suspecting purely timeout issues, please attempt to tweak the below values in nifi.properties and restart the service:
- nifi.cluster.node.protocol.threads=50 (Default 10)
- nifi.cluster.node.connection.timeout=30 sec (Default 5 sec)
- nifi.cluster.node.read.timeout=30 sec (Default 5 sec)

Please find below a set of configurations that are worth tuning on larger clusters, based on https://nifi.apache.org/docs/nifi-docs/html/administration-guide.html The below are some example values for larger clusters (you need to tune them based on your unique setup):
nifi.cluster.node.protocol.threads=70
nifi.cluster.node.protocol.max.threads=100
nifi.zookeeper.session.timeout=30 sec
nifi.zookeeper.connect.timeout=30 sec
nifi.cluster.node.connection.timeout=60 sec
nifi.cluster.node.read.timeout=60 sec
nifi.ui.autorefresh.interval=900 sec
nifi.cluster.protocol.heartbeat.interval=20 sec
nifi.components.status.repository.buffer.size=300
nifi.components.status.snapshot.frequency=5 mins
nifi.cluster.node.protocol.max.threads=120
nifi.cluster.node.protocol.threads=80
nifi.cluster.node.read.timeout=90 sec
nifi.cluster.node.connection.timeout=90 sec

Please check if you notice any certificate-related exception, like: WARN [Clustering Tasks Thread-2] o.apache.nifi.controller.FlowController Failed to send heartbeat due to: org.apache.nifi.cluster.protocol.ProtocolException: Failed marshalling 'HEARTBEAT' protocol message due to: javax.net.ssl.SSLHandshakeException: Received fatal alert: bad_certificate In this case, create a new keystore and truststore and add client auth in the keystore. Best regards: Ferenc
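A hedged sketch of steps 3-5 above, assuming the default flow.xml.gz location and a "nifi" service user (the host name and paths are placeholders; adjust them to your install):

# Run on the disconnected node: back up its flow, pull the coordinator's copy, fix ownership
cp /var/lib/nifi/conf/flow.xml.gz /var/lib/nifi/conf/flow.xml.gz.bak
scp [coordinator_host]:/var/lib/nifi/conf/flow.xml.gz /var/lib/nifi/conf/flow.xml.gz
chown nifi:nifi /var/lib/nifi/conf/flow.xml.gz
# Then restart NiFi on this node only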
08-25-2020
01:25 AM
Hello @unkedeuxke, thank you for your enquiry about upgrading Hive 3.1.0 to 3.1.2 on HDP 3.1.4 without upgrading other components. I would rephrase the question as: why is it only supported to upgrade the entire HDP stack instead of the services individually? HDP is a distribution of various components bundled into a product. These components are tested against each other so that the distribution works reliably. Should you upgrade a single component only, you lose that guarantee that your HDP distribution will work; think of API changes, new features, or bugs being introduced. In short: you might be able to upgrade a specific component; however, it is not recommended because it might break things. Hope this helps! Best regards: Ferenc
08-25-2020
12:45 AM
1 Kudo
Hello @P_Rat98, thank you for raising the question about the red dot. Please see this thread that might answer your inquiry. In short: it is a non-breaking space character in the file that displays as a red dot. Please let us know if it helped by pressing the "Accept as Solution" button. Should you need further information, please do not hesitate to reach out to the Community. Best regards: Ferenc
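If it helps, a hedged way to locate such characters, assuming a UTF-8 encoded file and GNU grep (the file name is a placeholder):

# Show line numbers containing a UTF-8 non-breaking space (bytes 0xC2 0xA0)
grep -nP '\xC2\xA0' [your_file]
# To replace them with regular spaces (GNU sed):
# sed -i 's/\xc2\xa0/ /g' [your_file]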
08-06-2020
11:52 PM
Hello @ameya , thank you for your question about how to upgrade your cluster from HDP 2.6.5 to CDP 7.1.1. This quick start guide details the steps to follow. If you’re a Cloudera Subscription Support customer, we can connect you with your Account team to explore a possible Services engagement for this request. Let us know if you’re interested in this path, we’ll private message you to collect more information. Please let us know if you need any further assistance! Best regards: Ferenc
08-06-2020
01:45 AM
Hello @emeric, the "kinit: KDC has no support for encryption type while getting initial credentials" error usually occurs after configuring encryption types that do not match the ones present in the tgt principal (such as krbtgt/CLOUDERA@CLOUDERA) in the KDC. It can also happen while starting a service where the enctypes in the krbtgt principal do not match those used in the service keytab. From an earlier Community post: please compare the Kerberos server and client configurations and reconfigure krb5.conf on all your nodes to explicitly use the supported encryption types. The documentation describes: "Kerberos client OS-specific packages must be installed on all cluster hosts and client hosts that will authenticate using Kerberos." Wondering if some missing packages might be the issue? Kind regards: Ferenc
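A hedged way to compare the two sides, assuming an MIT KDC reachable with kadmin.local and the realm name from the post (the keytab path is a placeholder):

# On the KDC: list the encryption types of the TGT principal
kadmin.local -q 'getprinc krbtgt/CLOUDERA@CLOUDERA'
# On a client: list the enctypes stored in a service keytab and those configured in krb5.conf
klist -e -k -t /path/to/service.keytab
grep 'enctypes' /etc/krb5.conf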
08-05-2020
08:30 AM
Hello @meiravR, thank you for confirming that you ensured your AWS account is configured according to our documentation. Should you have a Cloudera Support Subscription, please file a support case with us to assist you further, as we have reached the limit of what can be addressed efficiently via the Community. Thank you: Ferenc
08-05-2020
05:38 AM
Hello @md186036, the error message you pointed out [1] seems to be a known issue tracked by the internal JIRA ticket below: NAV-7272 - NPE in getEpIdentitiesForMissingRelations. As per the JIRA ticket: "An NPE is being caused by getEpIdentitiesForMissingRelations() during Spark extraction. The condition that causes it is rare, however, once the condition exists, because of the NPE, it will continue forever. The code is trying to detect ep2Ids for linked relations that are missing so they can be added. However, the code fails to check for null in the case that this is true." The fix is not available yet in any currently released CDH distribution. The fix might be available in CDH 6.4.0, 5.16.3, 6.2.2, 6.3.4, 7.1.1, 5.17.0. My understanding is that this can cause no new metadata to be produced. Should you have a Cloudera Support Subscription, please kindly file a support ticket with us to assist you further, as there is no workaround identified for this bug. Thank you: Ferenc
[1] ERROR SparkPushExtractor [qtp1810923540-17908]: com.cloudera.nav.pushextractor.spark.SparkPushExtractor Error extracting Spark operation. java.lang.NullPointerException