Member since: 11-17-2021
Posts: 1108
Kudos Received: 251
Solutions: 28
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
|  | 154 | 10-16-2025 02:45 PM |
|  | 328 | 10-06-2025 01:01 PM |
|  | 330 | 09-24-2025 01:51 PM |
|  | 290 | 08-04-2025 04:17 PM |
|  | 407 | 06-03-2025 11:02 AM |
09-09-2022
04:43 PM
@VenkatG Has the reply helped resolve your issue? If so, please mark the appropriate reply as the solution, as it will make it easier for others to find the answer in the future. Thanks!
09-02-2022
09:48 AM
@Bulu As this is an older post, you would have a better chance of receiving a resolution by starting a new thread. A new thread will also give you the opportunity to provide details specific to your environment, which will help others give you a more accurate answer to your question. You can link this thread as a reference in your new post.
09-01-2022
10:30 AM
@IslamGamal Has the reply helped resolve your issue? If so, please mark the appropriate reply as the solution, as it will make it easier for others to find the answer in the future. If you are still experiencing the issue, can you provide the information @ckumar has requested?
08-30-2022
07:18 PM
Hi @Ploeplse, what you ran into was not a user problem, though you may run into one after you fix this. The error "the site can't be reached" is actually a network connectivity issue, not an IAM issue.

I hit the same thing in an Azure environment. I ran `nslookup` against the FQDN in the URL, and it came back with a private IP address in the K8s cluster node subnet, which means you need connectivity from your browser to the K8s network hosting the environment. In my case, my browser was on an Azure VM in a different VNet, so I peered the networks between my browser VM and the K8s VNet. If you are using AWS, you likewise need to build connectivity from your browser's host to the VPC backing the environment: if it is your laptop, you need a VPN; if it is a VM, you need to peer the networks. The Cloudera documentation either misses this step in the setup guide or hides it somewhere.

After I fixed the network issue, I ran into a 403 error, which is an actual IAM issue. Still working on that one. Good luck.
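For reference, a minimal sketch of that DNS check (the FQDN below is a placeholder; substitute the hostname from your own environment's console URL):

```bash
# Placeholder FQDN; use the hostname from your environment's console URL.
nslookup console-myenv.example.cloudera.site

# If the answer is a private IP (e.g. 10.x.x.x), your browser host needs a route
# into that subnet: VNet/VPC peering for a cloud VM, or a VPN from a laptop.
```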
08-23-2022
09:52 AM
Here are some highlights from the month of July:
- 231 new support questions
- 8 new community articles
- 586 new members
| Rank | Community Article | Author | Components/Labels |
|---|---|---|---|
| #1 | MySQL CDC with Kafka Connect/Debezium in CDP Public Cloud | @cnelson2 | Apache Kafka, Cloudera Data Platform (CDP), Kerberos |
| #2 | Event driven pipelines in Azure with CDE (Cloudera Data Engineering) orchestrated by Azure Event Grid/Azure Functions | @hrongali | Apache Spark, Cloudera Data Engineering (CDE), Cloudera Data Platform (CDP) |
| #3 | How to configure CML's Spark Connection | @peter_ableda | Cloudera Data Science Workbench (CDSW), Cloudera Machine Learning (CML) |
| #4 | HDP to CDP - Atlas backup and restore | @hpasumarthi | Apache Atlas, Cloudera Data Platform (CDP), Hortonworks Data Platform (HDP) |
| #5 | "kinit: Preauthentication failed while getting initial credentials" when using Active Directory or FreeIPA | @araujo | Kerberos, Security |
171 kudos to @jagadeesan! Check out the Community Member Spotlight on Cloudera LinkedIn!
We would like to recognize the following community members and employees for their efforts over the last month to provide community solutions.
See all our top participants on the Top Solution Authors leaderboard, and find all the other leaderboards on our Leaderboards and Badges page.
@SAMSAL @AbhishekSingh @sayak17 @hegdemahendra @MattWho @araujo @jagadeesan @rki_
Share your expertise and answer some of the open questions below. Also, be sure to bookmark the unanswered questions page to find additional open questions.
| Unanswered Community Post | Components/Labels |
|---|---|
| Apache Atlas use of "Constraints" to change typeDef behaviour? | Apache Atlas |
| We are getting 501 error while accessing Application master URL using knox gateway | Apache Knox |
| Status: Failed to receive heartbeat from agent | Cloudera Data Platform (CDP), Cloudera Data Platform Private Cloud (CDP-Private) |
| What are the best config memory and vcore in YARN and mapred multi-node cluster? | Apache Hadoop, Apache Sqoop, Apache YARN |
| Nifi server certificate shows different issuer | Apache NiFi |
08-22-2022
01:17 PM
@Teradata GTS Has the reply helped resolve your issue? If so, please mark the appropriate reply as the solution, as it will make it easier for others to find the answer in the future. Thanks.
08-22-2022
11:41 AM
@BORDIN Has the reply above helped resolve your issue? If so, please mark the appropriate reply as the solution, as it will make it easier for others to find the answer in the future. Thanks!
08-17-2022
01:09 AM
Hello All, this is an older post that had a few recent follow-up queries. To close the loop: HBase offers multiple tools to migrate data from one cluster to another, such as snapshots, Export/Import, and HashTable/SyncTable. Most of these tools rely on MapReduce and use one mapper per region of the source table, and all of them work without any concerns.

The only part of the ask that can't be answered accurately is the concurrency, job configuration, and mapper memory. These details depend on the customer's environment setup and the bandwidth between the two clusters. As such, the customer can run one such HBase MR job, review the outcome, and fine-tune accordingly. If any issues are observed while running such an HBase MR job, feel free to post the question in a new community post for fellow community members to review and share their thoughts. Regards, Smarak
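For illustration, a snapshot-based copy might look like the sketch below; the table, snapshot, and destination names are placeholders, and the mapper count is exactly the kind of setting to fine-tune per the note above:

```bash
# Create a snapshot of the source table via the HBase shell.
echo "snapshot 'my_table', 'my_table_snap'" | hbase shell

# Export the snapshot to the destination cluster's HBase root directory.
# -mappers overrides the default parallelism (one mapper per region).
hbase org.apache.hadoop.hbase.snapshot.ExportSnapshot \
  -snapshot my_table_snap \
  -copy-to hdfs://dest-nn:8020/hbase \
  -mappers 16
```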
08-16-2022
07:35 AM
@AkramSharkawy If you are still experiencing the issue, can you provide the information @Elias has requested? Thanks
08-06-2022
09:02 AM
@shrikantbm & team, yes, in this case we need to check the cleanup.policy of the topic __consumer_offsets. If the existing policy is cleanup.policy=compact, the log segments of this topic will not be deleted. Follow the steps below to diagnose and resolve the issue.

1) Check the current cleanup.policy of the topic __consumer_offsets using `kafka-topics.sh --bootstrap-server <broker-hostname:9092> --describe --topic __consumer_offsets` or `kafka-topics.sh --zookeeper <zookeeper-hostname:2181> --describe --topics-with-overrides`.

2) If you want the old log segments of this topic to be cleared, set `cleanup.policy=compact,delete` along with `retention.ms` (for example, 30 days):
- compact: when a Kafka log segment is rolled over, it will be compacted.
- delete: once retention.ms is reached, the older log segments will be removed.
- retention.ms: old log segments will be deleted after this period. 30 days is just an example, and the value is in milliseconds; set it per your requirement after checking with the application team on their needs.

For "delete" to take effect, the broker property log.cleaner.enable must be set to true. Once this cleanup policy is configured, data will be deleted per retention.ms as suggested above. If you do not set retention.ms, old log segments will be deleted per the retention period set in CM/Ambari > Kafka > Configuration, i.e. log.retention.hours (7 days by default); check what it is in your case, as segments older than that will be deleted. Kafka checks for old log segments at the interval given by log.retention.check.interval.ms.

Important note: enabling "delete" on consumer offsets means you may lose offsets, which can lead to duplication or data loss. Check with your application team before setting a deletion policy.

3) If you still face the same issue, review the broker logs for the root cause and make changes accordingly.

If this information helped with your query, please take a moment to log in, click KUDOS 🙂, and "Accept as Solution" below this post. Thank you.
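To make steps 1 and 2 concrete, here is a hedged sketch using `kafka-configs.sh` (the broker address is a placeholder, and 2592000000 ms is just the 30-day example from above):

```bash
# Step 1: inspect the current configuration of __consumer_offsets.
kafka-topics.sh --bootstrap-server <broker-hostname:9092> \
  --describe --topic __consumer_offsets

# Step 2: apply compact+delete with a 30-day retention (2592000000 ms).
# Brackets are needed because the cleanup.policy value contains a comma.
kafka-configs.sh --bootstrap-server <broker-hostname:9092> \
  --entity-type topics --entity-name __consumer_offsets \
  --alter --add-config 'cleanup.policy=[compact,delete],retention.ms=2592000000'
```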