Member since: 01-16-2018
Posts: 613
Kudos Received: 48
Solutions: 109
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 778 | 04-08-2025 06:48 AM |
| | 960 | 04-01-2025 07:20 AM |
| | 916 | 04-01-2025 07:15 AM |
| | 962 | 05-06-2024 06:09 AM |
| | 1504 | 05-06-2024 06:00 AM |
06-21-2022
11:38 PM
Hello @LakshmiSegu We hope your query was addressed by Shehbaz's response. In summary: (I) ensure your username has an IDBroker mapping (Actions > Manage Access > IDBroker Mappings); (II) include the "spark.yarn.access.hadoopFileSystems" parameter to point to the S3 path [1]. Regards, Smarak [1] https://docs.cloudera.com/runtime/7.2.15/developing-spark-applications/topics/spark-s3.html
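As a rough sketch of step (II), the parameter can be passed at submit time. The bucket name, script path, and arguments below are placeholders, not values from the original thread; access still depends on the user's IDBroker mapping being in place:

```shell
# Placeholder bucket and script; the IDBroker mapping for the submitting
# user must already grant access to this bucket.
spark-submit \
  --master yarn \
  --deploy-mode cluster \
  --conf spark.yarn.access.hadoopFileSystems=s3a://my-example-bucket \
  my_job.py s3a://my-example-bucket/input/
```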
06-21-2022
11:17 PM
Hello @caisch Thanks for using Cloudera Community. Based on your post, you wish to confirm whether the TTL for HDFS can be set to 90 days while other services' TTL is set to 14 days. Since you selected Solr, let me answer for that service; you can let me know if I have understood the post differently. In Solr, collections may keep their data on HDFS or on local disk. For example, the RangerAudits collection may have data on HDFS while the Atlas collection has data on local disk. At each collection level, "solrconfig.xml" captures the TTL [1] via the DocExpirationUpdateProcessorFactory class. You can therefore configure the TTL per collection, and expired documents are cleaned up in the underlying storage, be it HDFS or local. Using the example above, we can set RangerAudits to expire at 90 days and Atlas at 14 days, which removes the underlying data from HDFS and local storage respectively. Kindly review and let us know if the above answers your question. If not, please clarify and we shall get back to you accordingly. Regards, Smarak [1] https://solr.apache.org/docs/8_4_0/solr-core/org/apache/solr/update/processor/DocExpirationUpdateProcessorFactory.html
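As an illustration of the per-collection TTL mentioned above, the "solrconfig.xml" fragment might look roughly like this. The chain name, field names, and sweep interval are illustrative assumptions, not values from the post:

```xml
<!-- Sketch only: expire documents per collection via
     DocExpirationUpdateProcessorFactory. Names/intervals are examples. -->
<updateRequestProcessorChain name="ttl-chain" default="true">
  <processor class="solr.processor.DocExpirationUpdateProcessorFactory">
    <int name="autoDeletePeriodSeconds">86400</int>   <!-- sweep once a day -->
    <str name="ttlFieldName">_ttl_</str>              <!-- per-doc TTL, e.g. +90DAYS -->
    <str name="expirationFieldName">_expire_at_</str> <!-- computed expiry time -->
  </processor>
  <processor class="solr.LogUpdateProcessorFactory"/>
  <processor class="solr.RunUpdateProcessorFactory"/>
</updateRequestProcessorChain>
```

A collection whose documents carry `_ttl_=+90DAYS` would then have its data removed from whichever storage backs it, HDFS or local.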
04-19-2022
01:07 AM
Hello @SVK If your queries concerning Apache Airflow have been addressed, feel free to mark the post as solved. If you have any further questions, kindly share them and we shall get back to you accordingly. Regards, Smarak
04-19-2022
01:05 AM
Hello @HiThere We hope your query concerning VMs and hardware recommendations has been answered. We are marking the post as closed. If you have any further concerns, feel free to post them and we shall answer your queries. Regards, Smarak
04-14-2022
01:29 AM
Hello @HiThere Thanks for using Cloudera Community. To your query, kindly refer to [1] and [2] for the resource requirements for CDP v7.1.7. Note that the documentation states hardware requirements in terms of resources (CPU, memory, network, disk) rather than physical or virtual machines. As long as your team meets the hardware requirements for storage and compute, the choice between virtualized and bare-metal shouldn't matter. Regards, Smarak [1] https://docs.cloudera.com/cdp-private-cloud-base/7.1.7/installation/topics/cdpdc-hardware-requirements.html [2] https://docs.cloudera.com/cdp-private-cloud-base/7.1.7/concepts/topics/cm-vpc-networking.html
04-14-2022
01:23 AM
Hello @SVK Thanks for using Cloudera Community. Based on your post, you wish to confirm whether Airflow is supported by Cloudera. Summarizing the responses shared by my colleagues: (I) Airflow isn't supported by Cloudera in standalone mode. (II) In CDP Public Cloud [1] and CDP Private Cloud [2], CDE (Cloudera Data Engineering) uses Airflow, and any issues encountered with CDE Airflow on either platform are supported. (III) CDE allows an external Airflow to be used as well, yet supportability is restricted to the Cloudera Airflow providers only; refer to [3]. If your queries are addressed, feel free to mark the post as solved. Regards, Smarak [1] https://docs.cloudera.com/data-engineering/cloud/orchestrate-workflows/topics/cde-airflow-dag-pipeline.html [2] https://docs.cloudera.com/data-engineering/1.3.4/orchestrate-workflows/topics/cde-airflow-dag-pipeline.html [3] https://docs.cloudera.com/data-engineering/cloud/orchestrate-workflows/topics/cde-airflow-provider.html
04-14-2022
01:12 AM
Hello @yagoaparecidoti Thanks for using Cloudera Community. Based on the post, you encountered the "Master is initializing" error and fixed it using a command found on the internet, but ended up with a few tables in a broken state, requiring you to delete and recreate them. For the record, the issue observed is documented in [1]: in the HMaster logs, we should see the hbase:meta and hbase:namespace regions failing to be assigned (sample traces are shared in [1]). When that happens, the HBCK2 jar must be used to assign the regions reported as unassigned; the exact command is also shared in [1]. Note that HBCK2 offers other options which, if used without oversight, may cause issues with HBase table availability. The "Master is initializing" fix shared in [1] shouldn't leave any table in a broken state, and without explicit details about that broken state, it's hard to confirm whether delete-and-recreate was the only way or the tables could have been restored otherwise. Having said that, this post covers the "Master is initializing" error, and the same has been addressed, so we shall mark the post as closed. Feel free to describe your observations of the broken tables in a new post if you wish to engage the community for feedback. Regards, Smarak [1] https://github.com/apache/hbase-operator-tools/tree/master/hbase-hbck2#master-startup-cannot-progress-in-holding-pattern-until-region-onlined
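For reference, the assignment step described in [1] typically looks like the following. The jar path and the namespace region's encoded name are placeholders; 1588230740 is the well-known encoded region name of hbase:meta:

```shell
# Assign hbase:meta first (its encoded region name is always 1588230740),
# then hbase:namespace, using the encoded name found in the HMaster log.
hbase hbck -j /path/to/hbase-hbck2.jar assigns 1588230740
hbase hbck -j /path/to/hbase-hbck2.jar assigns <namespace-region-encoded-name>
```

As noted above, stick to the documented `assigns` usage; other HBCK2 subcommands can affect table availability if run without oversight.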
04-14-2022
12:58 AM
Greetings @yagoaparecidoti Thanks for using Cloudera Community. Based on the post, you wish to confirm whether the "maxClientCnxns" parameter set via CDM (assuming you mean CM) for the ZooKeeper service is reflected in any associated file. You can check the value of "maxClientCnxns" in the "zoo.cfg" file associated with the ZooKeeper process. In CDP/CDH (managed by CM), the file is located within the ZooKeeper process directory under "/var/run/cloudera-scm-agent/process/<ZooKeeper-Process-Directory>/zoo.cfg". The same file is also referenced in the "ps" output of the ZooKeeper process. Regards, Smarak
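If helpful, a small script can read the effective value back out of that file. This is a generic sketch for "key=value" files such as zoo.cfg, not Cloudera tooling, and the process-directory path remains a placeholder:

```python
def read_zoo_cfg(path):
    """Return zoo.cfg settings as a dict, skipping blank lines and comments."""
    settings = {}
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if not line or line.startswith("#"):
                continue
            # zoo.cfg entries are simple "key=value" pairs.
            key, _, value = line.partition("=")
            settings[key.strip()] = value.strip()
    return settings

# Example usage (CM-managed default location; fill in the process directory):
#   cfg = read_zoo_cfg("/var/run/cloudera-scm-agent/process/"
#                      "<ZooKeeper-Process-Directory>/zoo.cfg")
#   print(cfg.get("maxClientCnxns"))
```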
04-08-2022
01:06 AM
Greetings @wazzu62 We wish to check whether you have reviewed @araujo's request for further checks on the concerned issue. If required, change the port for ATS HBase from 17020 to another value to see if that helps, assuming the new port is configured to accept requests. Regards, Smarak
04-08-2022
01:00 AM
Hello @Girija Thanks for using Cloudera Community. Since this is an older post, we wish to confirm whether you have resolved the issue. If yes, kindly assist by sharing the solution for the wider community audience. We also wish to check whether your team had a valid Kerberos ticket before submitting the request. Regards, Smarak