Member since 09-25-2015 · 53 Posts · 32 Kudos Received · 4 Solutions
01-26-2026
07:41 AM
Summary
If you have lost your RMON db password and need to retrieve it: in the CM GUI it is masked out (****). This article will show you how you can retrieve it in plain text.

Symptoms
Any customer needing to get into the RMON db but not having the password would need a way to recover it (assuming they do not have DB root access).

Instructions
To retrieve the password, we can make an API call that returns the RMON db config info:

api/v31/cm/service/roleConfigGroups/mgmt-REPORTSMANAGER-BASE/config

An example would be:

curl -u admin:admin "http://ccycloud-1.oteixeira-ubuntu2004.root.comops.site:7180/api/v31/cm/service/roleConfigGroups/mgmt-REPORTSMANAGER-BASE/config"
And the results would appear similar to: {
"items" : [ {
"name" : "headlamp_database_host",
"value" : "ccycloud-1.oteixeira-ubuntu2004.root.comops.site:3306"
}, {
"name" : "headlamp_database_name",
"value" : "reportsmanager9f05b8e6450d2994f37859d1ee3f1967"
}, {
"name" : "headlamp_database_password",
"value" : "reportsmanager9f"
}, {
"name" : "headlamp_database_type",
"value" : "mysql"
}, {
"name" : "headlamp_database_user",
"value" : "reportsmanager9f"
}, {
"name" : "headlamp_heapsize",
"value" : "1073741824"
}, {
"name" : "process_auto_restart",
"value" : "false"
}, {
"name" : "process_swap_memory_thresholds",
"value" : "{\"critical\":\"never\",\"warning\":\"never\"}"
} ]
}
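If you just need the credentials out of that response, a small parsing step helps. This is a minimal sketch that feeds the sample values from the output above through python3 (jq would work equally well if installed):

```shell
# Minimal sketch: parse the API response for the RMON db user and password.
# The sample JSON below uses the values shown in the output above; in practice
# you would save (or pipe) the real curl response instead.
cat > rman_config.json <<'EOF'
{ "items" : [
  { "name" : "headlamp_database_user",     "value" : "reportsmanager9f" },
  { "name" : "headlamp_database_password", "value" : "reportsmanager9f" }
] }
EOF

# python3 is usually available on CM hosts
python3 - <<'EOF'
import json

items = json.load(open("rman_config.json"))["items"]
cfg = {i["name"]: i.get("value") for i in items}
print("user=" + cfg["headlamp_database_user"])
print("password=" + cfg["headlamp_database_password"])
EOF
```

In practice you would pipe `curl -s -u admin:admin <url>` straight into the parser rather than going through a saved file.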
01-26-2026
07:36 AM
Summary
A YARN distributed shell test job fails on a basic CDP 7.1.9 SP1 cluster running the Java 17 JDK with LDAP group mapping turned on:

24/12/11 15:21:29 ERROR distributedshell.ApplicationMaster: Error running ApplicationMaster
java.lang.IllegalAccessError: class org.apache.hadoop.security.LdapGroupsMapping (in unnamed module @0x3901d134) cannot access class com.sun.jndi.ldap.LdapCtxFactory (in module java.naming) because module java.naming does not export com.sun.jndi.ldap to unnamed module @0x3901d134
at org.apache.hadoop.security.LdapGroupsMapping.<clinit>(LdapGroupsMapping.java:264)

Symptoms
Applies to new clusters using CDP 7.1.9 SP1 and the Java 17 JDK with LDAP group mapping on.

Instructions
In YARN > Configuration, search for yarn.nodemanager.admin-env and change the value for each NodeManager group (all of them, if you have more than one) to:

JDK_JAVA_OPTIONS=--add-opens=java.base/java.net=ALL-UNNAMED --add-opens=java.base/java.util=ALL-UNNAMED --add-opens=java.base/java.util.concurrent.atomic=ALL-UNNAMED --add-opens=java.base/java.util.regex=ALL-UNNAMED --add-opens=java.base/java.lang=ALL-UNNAMED --add-opens=java.base/java.time=ALL-UNNAMED --add-opens=java.base/java.io=ALL-UNNAMED --add-opens=java.base/java.nio=ALL-UNNAMED --add-exports=java.base/sun.net.dns=ALL-UNNAMED --add-exports=java.base/sun.net.util=ALL-UNNAMED --add-exports=java.naming/com.sun.jndi.ldap=ALL-UNNAMED --add-opens=java.naming/com.sun.jndi.ldap=ALL-UNNAMED --add-opens=java.base/java.lang=ALL-UNNAMED --add-exports=java.base/sun.net.dns=ALL-UNNAMED --add-exports=java.base/sun.net.util=ALL-UNNAMED,MALLOC_ARENA_MAX=$MALLOC_ARENA_MAX
10-22-2025
01:12 PM
Summary
If you use Java garbage collection logging arguments in your settings, be aware that when you move to Java 11 or 17 from Java 8 or earlier, those settings have changed.

Applies to
Anyone who has moved to Java 11 or 17 and requires GC logging.

Instructions
With the old GC logging settings (Java 8 and prior), you would use something similar to:

-Xloggc:/var/log/cloudera-scm-server/gc.log
-XX:+PrintGCDetails
-XX:+PrintGCTimeStamps
-XX:+PrintGCDateStamps

In Java 11 and 17 these flags will actually cause an error, as they are no longer supported. To get similar output with the unified logging framework, we recommend:

-Xlog:gc*:file=/var/log/cloudera-scm-server/gc-%t.log:time

where:
gc* logs anything related to GC up to INFO level
time (a decorator) gives the equivalent of the data that used to come from PrintGCTimeStamps and PrintGCDateStamps
file lets you set an output file and its location
%t adds a date stamp to the file name on creation (e.g. gc-2025-05-23_11-56-54.log)
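As a quick sanity check, the new flag can be tried against any Java 11+ JVM. This is a minimal sketch that uses /tmp instead of /var/log so it runs as an unprivileged user:

```shell
# Minimal sketch: enable unified GC logging (Java 11+) and confirm the log
# file is created. The flag is quoted so the shell does not glob-expand "gc*".
java '-Xlog:gc*:file=/tmp/gc-%t.log:time' -version

# a file like /tmp/gc-2025-05-23_11-56-54.log should now exist
ls -l /tmp/gc-*.log
```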
07-23-2024
06:44 AM
Cloudera Manager also has a great way to set class-level debug logging on the fly, without a restart, when needed for troubleshooting. To do this, navigate to: http://<cm-server>:7180/cmf/debug/logLevel Once at this page, choose the class you wish to change, select the radio button for the level you wish to change it to, and hit the Submit button. Understand that these changes will NOT persist across a restart of the server, but logging at the new level starts as soon as you hit Submit on the logLevel page.
07-23-2024
06:33 AM
The JDK 8 HotSpot JVM now uses native memory for the representation of class metadata; this area is called Metaspace. The permanent generation has been removed. The PermSize and MaxPermSize options are ignored, and a warning is issued if they are present on the command line.
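For reference, the Metaspace counterpart can be capped with the standard HotSpot flag -XX:MaxMetaspaceSize (by default Metaspace is unbounded). A minimal sketch, where 256m is just an example value:

```shell
# PermSize/MaxPermSize are ignored on JDK 8+; Metaspace is sized with this
# flag instead (256m is an arbitrary example value, not a recommendation):
java -XX:MaxMetaspaceSize=256m -version
```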
09-10-2020
11:32 AM
In case the Ambari API doesn't help, you can use the following SQL queries to abort hung/queued/pending operations directly in the Ambari database.

1) Stop ambari-server:
ambari-server stop

2) Take a backup of the Ambari database before making any changes to it.

3) Check IN_PROGRESS, QUEUED and PENDING operations:
ambari=> select task_id,role,role_command from host_role_command where status='IN_PROGRESS';
ambari=> select task_id,role,role_command from host_role_command where status='QUEUED' limit 100;
ambari=> select task_id,role,role_command from host_role_command where status='PENDING';

4) Abort the operations:
ambari=> update host_role_command set status='ABORTED' where status='QUEUED';
ambari=> update host_role_command set status='ABORTED' where status='PENDING';
ambari=> update host_role_command set status='ABORTED' where status='IN_PROGRESS';

5) Start ambari-server:
ambari-server start
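The steps above can be sketched as one script for a PostgreSQL-backed Ambari. The database name, user, and backup path are assumptions here; adjust them for your environment before running anything:

```shell
# Sketch of the steps above, assuming a PostgreSQL-backed Ambari with the
# database and role both named "ambari" (illustrative, adjust as needed).
ambari-server stop

# 2) back up the ambari database before making any changes
pg_dump -U ambari ambari > /tmp/ambari-backup-$(date +%F).sql

# 3) check, then 4) abort IN_PROGRESS/QUEUED/PENDING operations
psql -U ambari ambari <<'SQL'
select task_id, role, role_command from host_role_command
  where status in ('IN_PROGRESS','QUEUED','PENDING');
update host_role_command set status='ABORTED'
  where status in ('IN_PROGRESS','QUEUED','PENDING');
SQL

# 5) start ambari-server again
ambari-server start
```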
03-21-2017
03:58 PM
2 Kudos
In the past, if a SELECT returned no rows, Hadoop would still create an empty output file. Some customers would use this file to help with their workflow. This behavior was changed in https://issues.apache.org/jira/browse/HIVE-13040: because anyone using cloud-based storage could be charged for empty files, it was determined that Hadoop should not write them.
12-03-2015
07:53 PM
1 Kudo
Problem: When trying to distcp to AWS, this error is reported: 2015-12-03 09:50:01,132 FATAL [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Error starting MRAppMaster
java.util.ServiceConfigurationError: org.apache.hadoop.fs.FileSystem: Provider org.apache.hadoop.fs.s3a.S3AFileSystem could not be instantiated
at java.util.ServiceLoader.fail(ServiceLoader.java:224)
and also:
Caused by: java.lang.ClassNotFoundException: com.amazonaws.event.ProgressListener

Solution: In Ambari, under YARN > Configs, add:
/usr/hdp/2.3.2.0-2950/hadoop/* and /usr/hdp/2.3.2.0-2950/hadoop/lib/*
to yarn.application.classpath. Then under MapReduce2 > Configs, do the same for mapreduce.application.classpath. Restart all affected services and it should work.
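For reference, the two properties end up looking something like this sketch. The version path 2.3.2.0-2950 matches the example above; substitute your own /usr/hdp/<version>, and keep the entries that are already present:

```
yarn.application.classpath = <existing entries>,/usr/hdp/2.3.2.0-2950/hadoop/*,/usr/hdp/2.3.2.0-2950/hadoop/lib/*
mapreduce.application.classpath = <existing entries>,/usr/hdp/2.3.2.0-2950/hadoop/*,/usr/hdp/2.3.2.0-2950/hadoop/lib/*
```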
08-01-2018
08:13 AM
1 Kudo
If you want PostgreSQL HA for Ambari and the other components like Ranger, Hive, and Oozie, you have to use an external PostgreSQL database, not the embedded one. In front of your PostgreSQL HA setup, use a connection pooler like pgbouncer or pgpool: by keeping sessions open between the connection pooler and the database, you reduce the cost of opening a client connection to the database.
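A minimal pgbouncer configuration for this kind of setup might look like the following sketch. All host names, database names, and pool sizes are illustrative assumptions, not values from this article:

```ini
; illustrative /etc/pgbouncer/pgbouncer.ini (example values only)
[databases]
; clients connect to pgbouncer; pgbouncer connects to the PostgreSQL primary
ambari = host=pg-primary.example.com port=5432 dbname=ambari

[pgbouncer]
listen_addr = 0.0.0.0
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
; session pooling assigns one server connection per client session,
; which is the safest mode for tools that use session state
pool_mode = session
max_client_conn = 200
default_pool_size = 20
```

Clients then point their JDBC/psql connection strings at port 6432 on the pooler instead of directly at PostgreSQL.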