Member since 09-29-2015
5226 Posts
22 Kudos Received
34 Solutions
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 1398 | 07-13-2022 07:05 AM
 | 3595 | 08-11-2021 05:29 AM
 | 2330 | 07-07-2021 01:04 AM
 | 1577 | 07-06-2021 03:24 AM
 | 3553 | 06-07-2021 11:12 PM
08-25-2020
07:24 AM
Hello @xinfengz , thank you for your interest in the installation of CM 6.3.3. Based on the documentation under "Managing Licenses", you have the following options. When you install Cloudera Manager, you can select among these editions: Cloudera Express (no license required), a 60-day Cloudera Enterprise trial license, or Cloudera Enterprise (which requires a license). To obtain a Cloudera Enterprise license, fill in this form or call 866-843-7207. Please let us know if this addresses your inquiry. Thank you: Ferenc
08-25-2020
05:52 AM
Hello @Love-Nifi and @vchhipa , thank you for posting your inquiry about timeouts. Without the full log, I can provide only some "if you see this, do that" style instructions.

If you see an ERROR message with: org.apache.nifi.controller.UninheritableFlowException: Failed to connect node to cluster because local flow is different than cluster flow, then follow the steps below to resolve the issue:
1. Go to NiFi UI > Global Menu > Cluster.
2. Check which host is the coordinator and log in to that host on the shell.
3. Go to the flow.xml.gz file location (the default is /var/lib/nifi/conf/).
4. Copy flow.xml.gz to the disconnected node and replace the original flow.xml.gz with the copied file.
5. Check the permissions and ownership of the newly copied flow.xml.gz file, then restart NiFi on the disconnected node only.

If you suspect purely timeout issues, please try tweaking the values below in nifi.properties and restart the service:
- nifi.cluster.node.protocol.threads=50 (default 10)
- nifi.cluster.node.connection.timeout=30 sec (default 5 sec)
- nifi.cluster.node.read.timeout=30 sec (default 5 sec)

Please find below a set of configurations that are worth tuning on larger clusters, based on https://nifi.apache.org/docs/nifi-docs/html/administration-guide.html . These are example values for larger clusters (you need to tune them based on your unique setup):
nifi.cluster.node.protocol.threads=70
nifi.cluster.node.protocol.max.threads=100
nifi.zookeeper.session.timeout=30 sec
nifi.zookeeper.connect.timeout=30 sec
nifi.cluster.node.connection.timeout=60 sec
nifi.cluster.node.read.timeout=60 sec
nifi.ui.autorefresh.interval=900 sec
nifi.cluster.protocol.heartbeat.interval=20 sec
nifi.components.status.repository.buffer.size=300
nifi.components.status.snapshot.frequency=5 mins

Please also check whether you notice any certificate-related exception, such as: WARN [Clustering Tasks Thread-2] o.apache.nifi.controller.FlowController Failed to send heartbeat due to: org.apache.nifi.cluster.protocol.ProtocolException: Failed marshalling 'HEARTBEAT' protocol message due to: javax.net.ssl.SSLHandshakeException: Received fatal alert: bad_certificate. In this case, create a new keystore and truststore and add client auth in the keystore. Best regards: Ferenc
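The flow.xml.gz replacement in steps 1-5 can be sketched as below. This is a local simulation only: the directory paths stand in for the default /var/lib/nifi/conf/ location, and on a real cluster you would copy between the coordinator and the disconnected node (e.g. with scp) and also restore `nifi:nifi` ownership:

```shell
# Simulate the coordinator's conf dir and the disconnected node's conf dir
COORD_CONF=/tmp/coordinator/conf   # stands in for /var/lib/nifi/conf on the coordinator
NODE_CONF=/tmp/disconnected/conf   # stands in for /var/lib/nifi/conf on the disconnected node
mkdir -p "$COORD_CONF" "$NODE_CONF"

# The coordinator holds the authoritative flow definition
echo '<flowController/>' | gzip > "$COORD_CONF/flow.xml.gz"

# Step 4: replace the disconnected node's flow.xml.gz with the coordinator's copy
cp "$COORD_CONF/flow.xml.gz" "$NODE_CONF/flow.xml.gz"

# Step 5: restore restrictive permissions (on a real node also: chown nifi:nifi)
chmod 640 "$NODE_CONF/flow.xml.gz"

# Verify the copy is identical before restarting NiFi on the disconnected node only
cmp "$COORD_CONF/flow.xml.gz" "$NODE_CONF/flow.xml.gz" && echo "flows match"
```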
08-25-2020
01:25 AM
Hello @unkedeuxke , thank you for your enquiry about upgrading Hive 3.1.0 to 3.1.2 on HDP 3.1.4 without upgrading other components. I would personally re-phrase the question: why is it only supported to upgrade the entire HDP stack instead of the services individually? HDP is a distribution of various components bundled into a product. These components are tested against each other, so the distribution works reliably. Should you upgrade a single component only, you lose the guarantee that your HDP distribution will work; for instance, think about API changes, new features, or bugs being introduced. In short: you might be able to upgrade a specific component, however it is not recommended because it might break things. Hope this helps! Best regards: Ferenc
08-25-2020
12:45 AM
1 Kudo
Hello @P_Rat98 , thank you for raising the question about the red dot. Please see this thread that might answer your inquiry. In short: it is a "non-breaking space character to the file that displays as a red dot". Please let us know if it helped by pressing the "Accept as Solution" button. Should you need further information, please do not hesitate to reach out to the Community. Best regards: Ferenc
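Since the red dot renders a non-breaking space, here is one quick way to locate such characters in a file. This is an illustrative sketch, assuming GNU grep with PCRE support (-P); the sample file and its path are fabricated for the demonstration:

```shell
# Create a sample file whose second line contains a UTF-8 non-breaking space (0xC2 0xA0)
printf 'plain line\nbad\xc2\xa0line\n' > /tmp/nbsp_sample.txt

# Report the line numbers of any lines containing a non-breaking space
grep -nP '\xc2\xa0' /tmp/nbsp_sample.txt
```

Removing the offending byte pair (e.g. with `sed -i 's/\xc2\xa0/ /g'` on GNU sed) replaces each non-breaking space with a regular space.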
08-06-2020
11:52 PM
Hello @ameya , thank you for your question about how to upgrade your cluster from HDP 2.6.5 to CDP 7.1.1. This quick start guide details the steps to follow. If you're a Cloudera Subscription Support customer, we can connect you with your Account team to explore a possible Services engagement for this request. Let us know if you're interested in this path and we'll private message you to collect more information. Please let us know if you need any further assistance! Best regards: Ferenc
08-06-2020
01:45 AM
1 Kudo
Hello @emeric , the "kinit: KDC has no support for encryption type while getting initial credentials" error usually occurs after configuring encryption types that do not match the ones present on the TGT principal (such as krbtgt/CLOUDERA@CLOUDERA) in the KDC. It can also happen while starting a service when the enctypes on the krbtgt principal do not match those used in the service keytab. From an earlier Community post: please compare the Kerberos server and client configurations and reconfigure krb5.conf on all your nodes to explicitly use a supported encryption type. The documentation states: "Kerberos client OS-specific packages must be installed on all cluster hosts and client hosts that will authenticate using Kerberos." I wonder whether some missing packages might be the issue? Kind regards: Ferenc
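To compare encryption types, on a real host you would run `klist -ekt` against the service keytab and `kadmin -q 'getprinc krbtgt/CLOUDERA@CLOUDERA'` against the KDC, then check krb5.conf. The sketch below only illustrates the krb5.conf side with a fabricated file; the realm and enctype values are assumptions, not your actual configuration:

```shell
# Fabricated krb5.conf for illustration; on a real node inspect /etc/krb5.conf instead
cat > /tmp/krb5.conf <<'EOF'
[libdefaults]
  default_realm = CLOUDERA
  permitted_enctypes = aes256-cts aes128-cts
  default_tkt_enctypes = aes256-cts aes128-cts
  default_tgs_enctypes = aes256-cts aes128-cts
EOF

# List the encryption-type settings; these must agree with the enctypes
# the KDC reports for the krbtgt principal and with the service keytab
grep -E '_enctypes' /tmp/krb5.conf
```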
08-05-2020
08:30 AM
Hello @meiravR , thank you for confirming that you ensured your AWS account is configured according to our documentation. Should you have a Cloudera Support Subscription, please file a support case with us to assist you further, as we have reached the limit of what can be addressed efficiently via the Community. Thank you: Ferenc
08-05-2020
05:38 AM
Hello @md186036 , the error message you pointed out [1] seems to be a known issue tracked by the internal JIRA ticket NAV-7272 - NPE in getEpIdentitiesForMissingRelations. As per the JIRA ticket: "An NPE is being caused by getEpIdentitiesForMissingRelations() during Spark extraction. The condition that causes it is rare, however, once the condition exists, because of the NPE, it will continue forever. The code is trying to detect ep2Ids for linked relations that are missing so they can be added. However, the code fails to check for null in the case that this is true." The fix is not available yet in any currently released CDH distribution; it might be available in CDH 6.4.0, 5.16.3, 6.2.2, 6.3.4, 7.1.1, and 5.17.0. My understanding is that this can result in no new metadata being produced. Should you have a Cloudera Support Subscription, please kindly file a support ticket with us to assist you further, as there is no workaround identified for this bug. Thank you: Ferenc
[1] ERROR SparkPushExtractor
[qtp1810923540-17908]: com.cloudera.nav.pushextractor.spark.SparkPushExtractor Error extracting Spark operation.
java.lang.NullPointerException
08-05-2020
03:45 AM
Hello @meiravR , thank you for your feedback. I have checked the credentials page and I can see a tooltip on the UI that explains what "Enable Permission Verification" is for, although I cannot see the question-mark tooltip in your screenshot. Does your AWS environment satisfy the prerequisites detailed here, please? Thank you: Ferenc
08-04-2020
08:03 AM
Hello @meiravR , I have discussed your question internally and it seems that the regions are not listed when the credential you're using doesn't have permissions to list the regions. When you create a credential, there's a checkbox to verify permissions. When enabled, it checks the permissions prior to creation and throws errors if permissions are missing from the CB policy. The 504 Gateway Timeout is displayed e.g. when the creation of an environment fails. We are currently working on returning a more meaningful exception (internally tracked as DWX-4799). Kind regards: Ferenc