Member since: 01-16-2018
Posts: 593
Kudos Received: 38
Solutions: 94

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 276 | 05-17-2023 10:41 PM |
| | 1130 | 04-03-2023 09:44 PM |
| | 349 | 04-03-2023 02:40 AM |
| | 452 | 03-10-2023 07:36 AM |
| | 496 | 03-10-2023 07:17 AM |
05-17-2023
10:41 PM
Hello @Rekha35, thanks for using Cloudera Community.

First, from Cloudera's perspective: the Cloudera Data Engineering (CDE) offering provides Airflow built in, with no manual setup required. CDE is available on both the Cloudera Public Cloud and Private Cloud offerings, and the integration with other platform services is seamless [1]. Note that your team can also integrate with an external Airflow [2].

Since your ask was about trying out Airflow, you may look at [3], where the "Astronomer Apache Airflow Fundamentals Certification" offers a good starting point, including installing Airflow locally. The course is led by well-known Airflow instructor Marc Lamberti and helps anyone get into Airflow smoothly.

You also asked about trying Airflow in the cloud because of issues deploying it locally. Each cloud provider (AWS, Azure, GCP) offers a managed service for Airflow.

Hope this addresses your question. Feel free to share any further asks.

Regards, Smarak

[1] https://docs.cloudera.com/data-engineering/cloud/orchestrate-workflows/topics/cde-airflow-dag-pipeline.html
[2] https://docs.cloudera.com/data-engineering/cloud/orchestrate-workflows/topics/cde-airflow-provider.html
[3] https://www.astronomer.io/certification/
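For anyone who wants to try Airflow locally before committing to a managed service, a minimal sketch looks like this (the Airflow/Python versions pinned below are illustrative; adjust them to the release you want):

```shell
# Install Apache Airflow with the version-pinned constraints file the
# Airflow project publishes (prevents dependency-resolution breakage).
pip install "apache-airflow==2.7.3" \
  --constraint "https://raw.githubusercontent.com/apache/airflow/constraints-2.7.3/constraints-3.8.txt"

# Start an all-in-one local instance (webserver, scheduler, SQLite metadata DB).
# An admin login is printed to the console on first start; UI is on port 8080.
airflow standalone
```

This is only for local experimentation; for production, use CDE or a managed/self-managed deployment.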
04-03-2023
09:44 PM
Hello @RammiSE This reply comes a bit late, but I am posting a response anyway. Assuming your team has resolved the issue, we would appreciate your sharing the details in the post for the wider audience.

For the HMaster to initialise, the "hbase:meta" and "hbase:namespace" table regions need to be online. In your earlier thread, the HMaster reports that "hbase:meta" isn't online [1]. As such, use the HBCK2 JAR to assign the "hbase:meta" region "1588230740" first, then review (via the HBase UI) whether regions are being assigned successfully. It's possible the "hbase:namespace" table region will report a similar trace, in which case your team needs to use the HBCK2 JAR to assign the "hbase:namespace" region as well. Restarting the HMaster after manually running an HBCK2 assign isn't always required, but it won't do any harm either.

Regards, Smarak

[1] 2023-01-23 16:05:34,990 WARN [master/ctrlsu-hbaseMS:16000:becomeActiveMaster] master.HMaster: hbase:meta,,1.1588230740 is NOT online; state={1588230740 state=OPEN, ts=1674468867063, server=hadoop-datanode2,16020,1674362337687}; ServerCrashProcedures=true. Master startup cannot progress, in holding-pattern until region onlined
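The HBCK2 steps above can be sketched as shell commands (the JAR path is illustrative and depends on where hbase-operator-tools is installed; 1588230740 is the fixed encoded name of the hbase:meta region):

```shell
# Assign the hbase:meta region first; it must be online before anything else.
hbase hbck -j /opt/hbase-operator-tools/hbase-hbck2.jar assigns 1588230740

# If hbase:namespace is also offline, find its encoded region name in the
# HMaster log or HBase UI, then assign it the same way (placeholder below).
hbase hbck -j /opt/hbase-operator-tools/hbase-hbck2.jar assigns <namespace-region-encoded-name>
```

After each assign, watch the HBase UI (or the HMaster log) to confirm the region transitions to OPEN before proceeding.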
04-03-2023
02:40 AM
Hello @moment Thanks for using Cloudera Community. Based on the post, the CM Agent setup fails with "Unsupported RHEL Release". This happens when the OS in use isn't compatible with, or supported by, the Cloudera Manager/CDP release. Review the Support Matrix [1] and ensure the OS is compatible with the Cloudera Manager/CDP release your team is using. Regards, Smarak [1] https://supportmatrix.cloudera.com/
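To compare the host OS against the Support Matrix, a quick check on the affected node (works on any recent Linux distribution) is:

```shell
# Print the OS name and version string to compare against the
# Cloudera Support Matrix entry for your CM/CDP release.
grep -E '^(NAME|VERSION_ID)=' /etc/os-release
```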
03-10-2023
07:41 AM
Hello @mingtian Hope you are doing well. We wish to check whether your question concerning the Balancer skipping any region movement has been answered. If yes, kindly mark the post as solved. If not, feel free to share any further questions pertaining to the post. Regards, Smarak
03-10-2023
07:36 AM
Hello @Ivoz Thanks for using Cloudera Community. Kindly refer to [1] for PowerScale compatibility with the CDP stack. As per [1], CDP 7.1.7 SP1 supports PowerScale 9.2 and 9.3, so your team can proceed with 9.2 without any concerns. Regards, Smarak [1] Third-party filesystem support: Dell EMC PowerScale (cloudera.com)
03-10-2023
07:17 AM
1 Kudo
Hello @josr89 Thanks for using Cloudera Community. You mention Hadoop 3.1.1, so I assume you aren't referring to any HDP/CDH (legacy) or CDP platform. The steps to recover the Knox admin password amount to a reset of the Knox master secret, and they do involve re-provisioning the certificates and credentials. This is a fairly risky operation, and we recommend performing it with Cloudera Support. Doc [1] covers the procedure for your reference. Regards, Smarak [1] Change the Master Secret (cloudera.com)
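As a hedged sketch of what that reset involves on a plain Apache Knox install (paths and filenames below assume a default Knox layout and are illustrative; on a supported cluster, do this with Cloudera Support):

```shell
cd /usr/hdp/current/knox-server   # illustrative Knox install directory

# Stop the gateway before touching credentials.
bin/gateway.sh stop

# Overwrite the master secret; knoxcli prompts for the new secret.
bin/knoxcli.sh create-master --force

# Remove the keystore and credential store so they are re-provisioned
# under the new master secret on the next start.
rm -f data/security/keystores/gateway.jks \
      data/security/keystores/__gateway-credentials.jceks

bin/gateway.sh start
```

Any custom-signed gateway certificate will need to be re-imported afterwards, which is part of why this is risky.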
03-05-2023
09:29 PM
Hello @snm1523 Thanks for the checks done so far. For SPNEGO, I was referring to [1]; this assumes Kerberos is enabled for the Solr service. If yes, kindly review [1]. With AuthN confirmed and the SPNEGO check confirming AuthZ, I am unable to point to additional factors that may cause such issues. As a sanity check: is your team able to use the solrctl CLI [2] successfully? Regards, Smarak [1] https://docs.cloudera.com/cdp-private-cloud-base/7.1.7/security-how-to-guides/topics/cm-security-enable-web-auth-s19.html [2] https://docs.cloudera.com/cdp-private-cloud-base/7.1.7/search-solrctl-reference/topics/search-solrctl-ref.html
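The solrctl sanity check might look like this on a Kerberized cluster (the principal and keytab path are illustrative):

```shell
# Obtain a Kerberos ticket as the user being tested.
kinit -kt /path/to/user.keytab user@EXAMPLE.COM

# List collections; success here confirms the CLI path works, which helps
# isolate the problem to the browser/SPNEGO side.
solrctl collection --list
```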
03-03-2023
08:04 AM
Hello @AmitBIDWH We hope your team's queries have been addressed. As such, we shall mark the post as solved now. If you have any further asks, feel free to update the post likewise. Regards, Smarak
03-03-2023
08:04 AM
Hello @snm1523 Thanks for using Cloudera Community. Generally, I have observed such "red lines" when the user isn't properly authorised. First, review the Ranger permissions against your username and confirm the right privileges are granted. Second, check whether the issue persists in an incognito/private browser window to rule out browser quirks. Third, confirm SPNEGO is set up correctly. If the above checks don't yield the expected outcome, you may prefer to open a support case so our support folks can engage with your team for a quicker resolution. Regards, Smarak
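A quick way to test the SPNEGO piece outside the browser is curl's built-in negotiate support (host, port, and principal below are illustrative):

```shell
# Get a Kerberos ticket for the user you are testing.
kinit user@EXAMPLE.COM

# --negotiate with an empty -u makes curl use the Kerberos ticket.
# An HTTP 200 (rather than 401) indicates SPNEGO auth is working.
curl --negotiate -u : -s -o /dev/null -w "%{http_code}\n" \
  "https://solr-host.example.com:8985/solr/"
```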
02-28-2023
09:04 PM
Hello @quangbilly79 Thanks for using Cloudera Community. The "Spark master" refers to the resource manager responsible for allocating resources. Since you are using YARN, your team needs to use "--master yarn". The "--master spark://<IP Address>:7077" form is for a Spark standalone cluster, which isn't the case for your team. On your observation about adding a "Driver Instance" and "Worker Instance" via "Add Role Instance": there is no such option, because YARN is the resource manager and allocates the resources for the Spark driver and executors. Review [1] for the usage of "--master" as well. Hope the above answers your team's queries. Regards, Smarak [1] https://spark.apache.org/docs/latest/submitting-applications.html#launching-applications-with-spark-submit
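A sketch of a submission on a YARN-managed cluster (the application JAR, class name, and resource sizes are illustrative):

```shell
# Submit to YARN: note --master yarn, not spark://host:7077.
# YARN, not a standalone Spark master, allocates the driver and executors.
spark-submit \
  --master yarn \
  --deploy-mode cluster \
  --num-executors 2 \
  --executor-memory 2g \
  --class com.example.MyApp \
  /path/to/myapp.jar
```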