Member since: 12-20-2022
Posts: 84
Kudos Received: 19
Solutions: 9
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 198 | 05-08-2025 06:27 AM
 | 252 | 04-02-2025 11:35 PM
 | 249 | 03-23-2025 11:30 PM
 | 234 | 03-06-2025 10:11 PM
 | 675 | 10-29-2024 11:53 PM
05-08-2025
06:27 AM
Hi @anonymous_123 , Generally the RM heap requirement depends on the yarn.resourcemanager.max-completed-applications value and on how many applications run daily. The default for yarn.resourcemanager.max-completed-applications is 10000, but if you don't run that many applications you can lower it to 6000. Regarding the 4 GB heap: that is a typical production-level RM heap, and it is fine as long as you are not seeing any heap-related errors.
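If you do lower it, a minimal sketch of the change (in Cloudera Manager this would go into the ResourceManager's yarn-site.xml safety valve; 6000 is just the example value from above):

```xml
<!-- Cap how many completed applications the ResourceManager keeps in
     memory; lowering it from the 10000 default reduces RM heap usage. -->
<property>
  <name>yarn.resourcemanager.max-completed-applications</name>
  <value>6000</value>
</property>
```

A ResourceManager restart is needed for the change to take effect.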
04-15-2025
01:25 AM
Hi @Jaguar , Can you please collect the RM logs, grep them for Ranger, and check what comes back? Also, do you have the cm_yarn service plugin set up in Ranger?
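For example (the log path is an assumption based on typical CDP defaults; adjust it to wherever your ResourceManager writes its logs):

```bash
# Search the ResourceManager log for Ranger plugin activity;
# the glob below follows the usual CDP log file naming.
grep -i "ranger" /var/log/hadoop-yarn/*RESOURCEMANAGER*.log.out | tail -n 50
```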
04-02-2025
11:35 PM
1 Kudo
Hi @anonymous_123 , Yes, you can use Iceberg tables with Spark and authorize them with Ranger. You need to set up two policies: one on the Iceberg metadata files, and one global policy that grants iceberg permission on all tables. Please follow this document: https://docs.cloudera.com/runtime/7.3.1/iceberg-how-to/topics/iceberg-setup-ranger.html
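As a quick smoke test once both policies are in place (a sketch; the table name is a placeholder, and it assumes the Spark Iceberg extensions from the document above are already configured):

```bash
# Create, write, and read a small Iceberg table as an end user; if the
# two Ranger policies are set correctly, all three statements succeed.
spark-sql -e "CREATE TABLE default.ice_smoke (id BIGINT) USING iceberg;
              INSERT INTO default.ice_smoke VALUES (1);
              SELECT * FROM default.ice_smoke;"
```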
04-02-2025
10:07 PM
Hi @satvaddi , Please follow the steps below to set up the RAZ policies for Spark. Spark doesn't have a Ranger plugin of its own, so the data accessed on S3 is what gets logged (via cm_s3); the table metadata access is logged from HMS. Running the following command in Hive:

create external table [***table definition***] location 's3a://bucket/data/logs/tabledata'

requires the following Ranger policies:
- An S3 policy in the cm_s3 repo on s3a://bucket/data/logs/tabledata for the hive user to perform recursive read/write.
- An S3 policy in the cm_s3 repo on s3a://bucket/data/logs/tabledata for the end user.
- A Hive URL authorization policy in the Hadoop SQL repo on s3a://bucket/data/logs/tabledata for the end user.

Accessing the same external table location from the Spark shell requires an S3 policy (Ranger policy) in the cm_s3 repo on s3a://bucket/data/logs/tabledata for the end user, as illustrated below.
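To illustrate the flow those policies cover (a sketch; the HiveServer2 host, table definition, and table name are placeholders):

```bash
# 1) Create the external table from Hive: exercises the hive user's
#    cm_s3 policy and the end user's Hive URL authorization policy.
beeline -u "jdbc:hive2://hs2-host:10000/default" \
  -e "CREATE EXTERNAL TABLE logs_tab (line STRING) LOCATION 's3a://bucket/data/logs/tabledata';"

# 2) Read the same location from spark-shell: needs only the end
#    user's cm_s3 policy on the table location.
echo 'spark.read.table("logs_tab").show(5)' | spark-shell --master yarn
```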
03-24-2025
01:56 AM
In YARN, resource-allocation discrepancies can occur because of how resource calculation is handled. By default, resource availability is determined from available memory alone; only when CPU scheduling is enabled does the calculation consider both available memory and vCores. As a result, in some scenarios nodes may appear to allocate more vCores than the configured limit while simultaneously showing lower available resources: YARN assigns vCores dynamically based on workload demand rather than strictly enforcing the preconfigured limits. Additionally, when CPU scheduling is disabled and YARN relies solely on memory-based calculation, negative values may appear in the YARN UI; these can be safely ignored, as they do not reflect actual resource utilization.
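For reference, CPU scheduling is toggled by the resource calculator in capacity-scheduler.xml; a minimal sketch using the stock Hadoop property and class names:

```xml
<!-- The default DefaultResourceCalculator accounts for memory only;
     DominantResourceCalculator makes the scheduler consider both
     memory and vCores when sizing allocations. -->
<property>
  <name>yarn.scheduler.capacity.resource-calculator</name>
  <value>org.apache.hadoop.yarn.util.resource.DominantResourceCalculator</value>
</property>
```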
03-23-2025
11:30 PM
1 Kudo
No, the job won't fail: work-preserving recovery is enabled by default on the YARN ResourceManager and NodeManager, so running containers survive the restart.
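If you want to verify this on your cluster, these are the stock yarn-site.xml properties involved (defaults can vary by version):

```xml
<!-- Work-preserving RM restart: running containers keep running and
     re-sync with the ResourceManager once it comes back up. -->
<property>
  <name>yarn.resourcemanager.work-preserving-recovery.enabled</name>
  <value>true</value>
</property>
<!-- NodeManager recovery: containers also survive a NM restart. -->
<property>
  <name>yarn.nodemanager.recovery.enabled</name>
  <value>true</value>
</property>
```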
03-06-2025
10:11 PM
Hi @sdbags , You can recover the corrupted block as long as the file's replication factor is at the default of 3, since HDFS can re-replicate it from one of the remaining healthy replicas.
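To find and inspect the affected files, a sketch using the standard fsck tooling (the file path is a placeholder):

```bash
# List the files that currently have corrupt blocks.
hdfs fsck / -list-corruptfileblocks

# Inspect one affected file: its blocks and where the replicas live.
hdfs fsck /path/to/file -files -blocks -locations
```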
11-13-2024
02:06 AM
1 Kudo
Tagging @paras for CM.
10-29-2024
11:53 PM
1 Kudo
Hi @yoshio_ono , Please check this article: https://my.cloudera.com/knowledge/How-to-calculate-memory-and-v-core-utilization-for-each-of-the?id=271149
10-10-2024
05:13 AM
1 Kudo
Hi @evanle96 This error is not an issue. In an HA setup the call goes to both NameNodes; the active NameNode acknowledges the call, while the standby NameNode throws this warning. So you can safely ignore the warning here.
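If you want to confirm which NameNode was active when the warning appeared (a sketch; nn1 and nn2 are placeholder NameNode service IDs from your hdfs-site.xml):

```bash
# One NameNode should report "active" and the other "standby".
hdfs haadmin -getServiceState nn1
hdfs haadmin -getServiceState nn2
```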