Member since
04-14-2024
6
Posts
3
Kudos Received
0
Solutions
07-10-2024
10:15 PM
Hi @upadhyayk04, I am trying to integrate with Cloudera CDP Public Cloud. Refer to the screenshot below for version details.
06-11-2024
09:08 AM
1 Kudo
Hello @VenkataAvinash Yes, you can check the references below:
CDP Private Cloud Base:
[0] https://docs.cloudera.com/cdp-private-cloud-base/7.1.9/runtime-release-notes/topics/rt-pvc-runtime-component-versions.html
[1] https://docs.cloudera.com/cdp-private-cloud-base/7.1.9/runtime-release-notes/topics/rt-pvc-whats-new-sqoop.html
CDP Public Cloud:
[2] https://docs.cloudera.com/runtime/7.2.18/release-notes/topics/rt-pubc-runtime-component-versions.html
Regards, Rahi
-- Was your question answered? Please take some time to click on "Accept as Solution" below this post. If you find a reply useful, say thanks by clicking on the thumbs up button.
05-01-2024
10:28 PM
1 Kudo
@VenkataAvinash The error you're encountering (java.lang.RuntimeException: org.apache.storm.thrift.TApplicationException: Internal error processing submitTopologyWithOpts) indicates that there's an issue with submitting the Storm topology, but it doesn't directly point to the specific cause. Based on your configuration and the error message, it seems likely that there is an issue with the Kerberos authentication setup or configuration for the Storm Nimbus service.
=> Review Kerberos configuration: Double-check the Kerberos configuration for Storm Nimbus and ensure that it matches the settings in your storm.yaml file. Verify that the Kerberos principal (hdfs/hari-cluster-test1-master0.avinash.ceje-5ray.a5.cloudera.site@AVINASH.CEJE-5RAY.A5.CLOUDERA.SITE) and keytab file (/root/hdfs.keytab) are correctly specified.
=> Check keytab permissions: Ensure that the keytab file /root/hdfs.keytab has the correct permissions set and is readable by the Storm Nimbus service.
=> Verify service principals: Confirm that the Kerberos principal above is correctly configured for the Storm Nimbus service and that it has the necessary permissions to access HDFS.
=> Check Nimbus logs: Check the Nimbus logs (nimbus.log) for any additional error messages or stack traces that might provide more insight into the issue.
=> Check library compatibility: Confirm that the versions of the Storm, HDFS, and Kerberos libraries on your cluster are compatible with each other. Refer to the documentation for each component for known compatibility issues.
=> Try submitting a simpler topology without the HDFS bolt first to see if the basic Kerberos configuration works. This helps isolate the issue (see the sketch below this list).
=> Consider using a tool like klist to verify that your user has successfully obtained a Kerberos ticket before submitting the topology.
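For reference, a minimal smoke-test topology could look roughly like the sketch below. This is only an illustration, not your actual job: the class names are made up, and TestWordSpout from storm-core is used purely as a placeholder source. The point is that it exercises the same Nimbus/Kerberos submission path as your real topology while leaving HDFS out entirely, so a failure here points at the authentication setup rather than the HDFS bolt.

import org.apache.storm.Config;
import org.apache.storm.StormSubmitter;
import org.apache.storm.testing.TestWordSpout;
import org.apache.storm.topology.BasicOutputCollector;
import org.apache.storm.topology.OutputFieldsDeclarer;
import org.apache.storm.topology.TopologyBuilder;
import org.apache.storm.topology.base.BaseBasicBolt;
import org.apache.storm.tuple.Tuple;

public class KerberosSmokeTopology {

    // Trivial terminal bolt: just logs each word, no HDFS or other external systems.
    public static class LogBolt extends BaseBasicBolt {
        @Override
        public void execute(Tuple input, BasicOutputCollector collector) {
            System.out.println("received: " + input.getString(0));
        }

        @Override
        public void declareOutputFields(OutputFieldsDeclarer declarer) {
            // terminal bolt, no output stream declared
        }
    }

    public static void main(String[] args) throws Exception {
        TopologyBuilder builder = new TopologyBuilder();
        builder.setSpout("words", new TestWordSpout(), 1);
        builder.setBolt("log", new LogBolt(), 1).shuffleGrouping("words");

        Config conf = new Config();
        conf.setNumWorkers(1);

        // Submission goes through the same Nimbus/Kerberos path as the full topology.
        StormSubmitter.submitTopology("kerberos-smoke-test", conf, builder.createTopology());
    }
}

If this submits and runs cleanly, the Kerberos/Nimbus setup is likely fine and the problem is more probably in the HDFS bolt configuration; if it fails with the same TApplicationException, focus on the principal, keytab, and storm.yaml settings above.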
04-26-2024
02:20 AM
1 Kudo
Flume, Storm, Druid, Falcon, Mahout, Ambari, Pig, Sentry, and Navigator have been changed or removed in CDP and replaced with other components. Storm can be replaced with Cloudera Streaming Analytics (CSA), powered by Apache Flink. Contact your Cloudera account team for more information about moving from Storm to CSA. You can also refer to the documentation on comparing Storm and Flink and on migrating from Storm to Flink; a rough structural comparison is sketched below.
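As a very rough, hedged sketch (the class name and sample data are illustrative, and this assumes the CSA/Flink streaming Java API is on the classpath): in Flink the execution environment plays the role of the Storm topology, sources replace spouts, and operators/sinks replace bolts.

import org.apache.flink.api.common.functions.MapFunction;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class StormToFlinkSketch {
    public static void main(String[] args) throws Exception {
        // The environment corresponds to the Storm topology as a whole.
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // A source takes the place of a spout (sample data only, for illustration).
        DataStream<String> words = env.fromElements("storm", "to", "flink");

        // An operator plus a sink take the place of bolts.
        words.map(new MapFunction<String, String>() {
            @Override
            public String map(String word) {
                return word.toUpperCase();
            }
        }).print();

        env.execute("storm-to-flink-sketch");
    }
}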
04-14-2024
05:00 AM
1 Kudo
Make sure that the HDFS Service checkbox is checked in Cloudera Manager > Kafka > Configuration. For me, the issue was resolved after I enabled HDFS Service there.