Member since: 07-30-2020
Posts: 219
Kudos Received: 45
Solutions: 60
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 435 | 11-20-2024 11:11 PM |
| | 488 | 09-26-2024 05:30 AM |
| | 1084 | 10-26-2023 08:08 AM |
| | 1852 | 09-13-2023 06:56 AM |
| | 2129 | 08-25-2023 06:04 AM |
10-11-2022
02:12 AM
1 Kudo
Hi @fengsh, You can check the already-solved posts below to see if they help:
https://community.cloudera.com/t5/Support-Questions/How-to-remove-an-old-HDP-version/m-p/116161
https://community.cloudera.com/t5/Support-Questions/Is-there-any-risk-to-delete-old-HDP-directories/m-p/96183
https://community.cloudera.com/t5/Community-Articles/Remove-Old-Stack-Versions-script-doesnt-work-in-ambari-2-7/ta-p/249303
10-05-2022
05:04 AM
1 Kudo
Hi, Those parameters are not exposed by Ambari and are false by default. They would go into Custom spark-defaults. Since they are disabled by default, I would suggest not enabling them.
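For reference, if you ever did need to set them explicitly, the entries in Custom spark-defaults would look like the sketch below (property names are from the Spark security docs; both default to false, and the suggestion above is to leave them that way):

```
# Sketch of Custom spark-defaults entries; both properties default to
# false, and the recommendation above is to leave them disabled.
spark.acls.enable=false
spark.history.ui.acls.enable=false
```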
09-28-2022
01:57 AM
Hi, In Spark, you can check spark.history.ui.acls.enable and spark.acls.enable. Both should be false by default. https://spark.apache.org/docs/2.4.3/security.html#authentication-and-authorization
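A quick way to see whether either property has been set explicitly (a sketch; the conf path is an assumption and varies by distribution and install):

```
# Sketch: look for explicit ACL settings in spark-defaults.conf.
# The path /etc/spark2/conf is an assumption; adjust for your install.
grep -E "spark\.(history\.ui\.)?acls\.enable" /etc/spark2/conf/spark-defaults.conf
```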
09-21-2022
01:20 AM
Hi @gocham You can stop the Secondary NameNode and then delete it from the Instances page of HDFS. Decommissioning is used for DataNodes.
09-19-2022
02:10 AM
Hi @Anlarin, It is always suggested to have homogeneous disk storage across DataNodes. Within a DataNode, if the volumes are heterogeneous, then as block replicas are written to the disks in a round-robin fashion, the disks with less capacity fill up faster than the larger ones. If the client is local to Node 2, the first replica of each block is placed on that node, so it is expected to fill faster. With the Available Space policy, the DataNode takes into account how much space is available on each volume when deciding where to place a new replica. To spread writes evenly as a percentage of capacity across the drives, change the volume choosing policy (dfs.datanode.fsdataset.volume.choosing.policy) to Available Space. If using Cloudera Manager:
1. Navigate to HDFS > Configuration > DataNode
2. Change DataNode Volume Choosing Policy from Round Robin to Available Space
3. Click Save Changes
4. Restart the DataNodes
Note that this property only balances volumes within a DataNode, not across DataNodes. For clusters managed outside Cloudera Manager, see the hdfs-site.xml sketch after this post. https://docs.cloudera.com/documentation/enterprise/latest/topics/admin_dn_storage_balancing.html - Was your question answered? Please take some time to click on “Accept as Solution” below this post. If you find a reply useful, say thanks by clicking on the thumbs up button.
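For non-CM clusters, a sketch of the equivalent hdfs-site.xml entry (the value is the stock Hadoop AvailableSpaceVolumeChoosingPolicy class):

```xml
<!-- Sketch: equivalent hdfs-site.xml entry for non-CM clusters.
     Restart the DataNodes after changing it. -->
<property>
  <name>dfs.datanode.fsdataset.volume.choosing.policy</name>
  <value>org.apache.hadoop.hdfs.server.datanode.fsdataset.AvailableSpaceVolumeChoosingPolicy</value>
</property>
```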
09-15-2022
12:22 AM
Hi @isoardi, Seeing sockets in TIME_WAIT state is normal and is by design when a socket is being closed. Unless we see tens of thousands of sockets in TIME_WAIT, which would exhaust the ephemeral ports on the host, these are fine. It is the CLOSE_WAIT sockets we need to check, as they indicate the application has not called close() on the socket. You can refer to the RedHat documentation below for more information and for ways to close TIME_WAIT sockets by reusing them. https://access.redhat.com/solutions/24154
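As a quick way to see how many sockets sit in each state, a sketch using ss from iproute2:

```
# Sketch: count TCP sockets per state; large TIME_WAIT counts are usually
# harmless, but sustained CLOSE_WAIT counts point at the application.
ss -tan | awk 'NR>1 {print $1}' | sort | uniq -c | sort -rn
```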
09-15-2022
12:11 AM
Hi @abdebja, You can refer to the instructions in the Cloudera article below to mitigate this issue. https://my.cloudera.com/knowledge/tmp-folder-filling-up-frequently-with-hprof-dump-files?id=340673 - Was your question answered? Please take some time to click on “Accept as Solution” below this post. If you find a reply useful, say thanks by clicking on the thumbs up button.
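As background (an aside, not a summary of the article): .hprof files are JVM heap dumps, typically written by processes running with -XX:+HeapDumpOnOutOfMemoryError. A sketch for spotting the offending files:

```
# Sketch: list heap-dump files in /tmp with sizes and timestamps.
find /tmp -name "*.hprof" -exec ls -lh {} +
```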
08-30-2022
10:05 AM
1 Kudo
Hi @noekmc, It is recommended to turn off Tuned on RHEL 8, as it is used to set the processor C-states and acts as a way to control power utilisation. So as far as the recommendation goes, it is good to turn it off, and the doc needs to be updated to include RHEL 8. -- Was your question answered? Please take some time to click on “Accept as Solution” below this post. If you find a reply useful, say thanks by clicking on the thumbs up button.
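One way to turn it off (a sketch; standard systemd commands on RHEL 8):

```
# Sketch: stop tuned now and keep it disabled across reboots.
sudo systemctl stop tuned
sudo systemctl disable tuned
```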
08-25-2022
08:44 AM
Hi @KCJeffro, The best way would be to change the log level for the logger 'org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy' to ERROR. Do the following:
1. Navigate to Cloudera Manager > HDFS > Configuration > search for 'NameNode Logging Advanced Configuration Snippet (Safety Valve) log4j_safety_valve'
2. Add the following property: log4j.logger.org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy=ERROR
3. Save the change and restart the NameNode for it to take effect.
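Alternatively, a sketch of changing the level at runtime with the stock hdfs daemonlog tool (no restart needed, but the setting reverts on the next NameNode restart; host and port are placeholders for your environment):

```
# Sketch: set the logger level at runtime; reverts when the NameNode restarts.
# Replace <namenode-host>:<http-port> with your NameNode's HTTP address.
hdfs daemonlog -setlevel <namenode-host>:<http-port> \
  org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy ERROR
```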