Member since
09-09-2019
7 Posts
3 Kudos Received
1 Solution
My Accepted Solutions
| Title | Views | Posted |
| --- | --- | --- |
|  | 620 | 02-21-2022 12:30 PM |
02-27-2022 01:52 PM
Hi all, when I list the yum packages on the RedHat/CentOS server that CDH runs on, I see some samba packages. Are they necessary for, or related to, any Cloudera services? I have also checked whether anything depends on samba, but there are no dependencies at the Linux level. We need to remove or update this package because of CVE-2021-44142. Note: we first wanted to open a ticket through Cloudera support, but it is not possible to open a ticket for anything other than a CDH/CDP component. How can we find out more about this? Thanks.
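For reference, this is roughly how we checked for reverse dependencies at the OS level (a minimal sketch for RHEL/CentOS; the package names are simply whatever rpm reports on our host):

# List the installed samba packages
rpm -qa | grep -i samba

# For each of them, ask rpm what (if anything) requires it;
# "no package requires ..." means nothing installed depends on it
for pkg in $(rpm -qa | grep -i samba); do
  echo "== $pkg =="
  rpm -q --whatrequires "$pkg"
done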
02-21-2022 11:41 PM
Hello, Apache Druid cannot be added as a separate service in CDP, but I can see its binaries under the CDH-7.1.7 parcel folder. Is Druid available and supported in CDP?
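For context, this is roughly how I spotted the Druid binaries (a quick sketch assuming the default parcel location under /opt/cloudera/parcels; adjust the directory name to your exact CDH 7.1.7 parcel build):

# Look for Druid artifacts inside the CDH parcel
find /opt/cloudera/parcels/CDH-7.1.7* -maxdepth 2 -iname '*druid*'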
02-21-2022 12:30 PM
2 Kudos
Yes, it is. You can find more detail about upgrade paths here: https://docs.cloudera.com/cdp-private-cloud-upgrade/latest/index.html
02-12-2022 05:22 AM
We noticed that this problem occurs only when we enable TLS in Ranger.
02-10-2022 02:48 PM
1 Kudo
You need to install the openldap-clients Linux package, which includes the ldapsearch tool:

yum install openldap-clients

You should also pay attention to this documentation while enabling Kerberos: https://docs.cloudera.com/documentation/enterprise/6/6.3/topics/cm_sg_intro_kerb.html#xd_583c10bfdbd326ba--6eed2fb8-14349d04bee--76dd
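Once it is installed, you can sanity-check connectivity to your directory with ldapsearch before wiring it into Cloudera Manager (a minimal sketch; the host, port, bind DN, base DN, and filter below are placeholders to replace with your own values):

# Simple bind and look up a single test user
ldapsearch -x \
  -H ldap://ldap.example.com:389 \
  -D 'cn=admin,dc=example,dc=com' -W \
  -b 'dc=example,dc=com' \
  '(uid=testuser)'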
02-10-2022 07:11 AM
Hello, we have upgraded our CDH cluster to CDP. After the upgrade, our Sentry policies were migrated to Ranger automatically. However, the Ranger-integrated CDP services cannot reach Ranger to fetch their policies. When I look at the service logs I see the errors below (the same error appears in the Kafka, HDFS, and Atlas services). Has anyone seen this error before?
2:55:00.682 PM ERROR RangerAdminRESTClient
[PolicyRefresher(serviceName=cm_hive)-31]: Failed to get response, Error is : TrustManager is not specified
2:55:00.682 PM ERROR RangerAdminRESTClient
[PolicyRefresher(serviceName=cm_hive)-31]: Error getting Roles; Received NULL response!!. secureMode=true, user=hive/<host>@REALM (auth:KERBEROS), serviceName=cm_hive
2:55:00.700 PM ERROR RangerAdminRESTClient
[PolicyRefresher(serviceName=cm_hive)-31]: Failed to get response, Error is : TrustManager is not specified
2:55:00.700 PM ERROR RangerAdminRESTClient
[PolicyRefresher(serviceName=cm_hive)-31]: Error getting policies; Received NULL response!!. secureMode=true, user=hive/<host>@REALM (auth:KERBEROS), serviceName=cm_hive
Thank you.
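For reference, this is how we are checking the TLS side on the plugin hosts while troubleshooting, since the "TrustManager is not specified" message seems to point at the plugin not finding a usable truststore (a rough sketch; the Ranger Admin host/port, truststore path, and password are placeholders for your own values):

# Check the certificate chain presented by the Ranger Admin HTTPS endpoint
openssl s_client -connect ranger-admin.example.com:6182 -showcerts </dev/null

# Check that the truststore the plugin is configured to use contains the matching CA/certificate
keytool -list -v -keystore /path/to/plugin-truststore.jks -storepass changeit | grep -iE 'owner|issuer'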
01-31-2022 02:36 AM
Hi all, I am getting the error below when I run the HDFS NameNode rollback while rolling back from CDP 7 to CDH 6:

sudo -u hdfs hdfs --config /etc/hadoop/conf.rollback.namenode/ namenode -rollback

I am following the rollback steps from this documentation; the step I performed is Step 9 -> 3f: https://docs.cloudera.com/cdp-private-cloud-upgrade/latest/upgrade-cdh6/topics/install_rollback-cdh6.html

STARTUP_MSG: java = 1.8.0_181
************************************************************/
22/01/31 09:25:31 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
22/01/31 09:25:31 INFO namenode.NameNode: createNameNode [-rollback]
22/01/31 09:25:32 INFO namenode.FSEditLog: Edit logging is async:true
22/01/31 09:25:32 INFO namenode.FSNamesystem: KeyProvider: null
22/01/31 09:25:32 INFO namenode.FSNamesystem: fsLock is fair: true
22/01/31 09:25:32 INFO namenode.FSNamesystem: Detailed lock hold time metrics enabled: false
22/01/31 09:25:32 INFO namenode.FSNamesystem: fsOwner = hdfs/node-3@MYREAL.COM (auth:KERBEROS)
22/01/31 09:25:32 INFO namenode.FSNamesystem: supergroup = supergroup
22/01/31 09:25:32 INFO namenode.FSNamesystem: isPermissionEnabled = true
22/01/31 09:25:32 INFO namenode.FSNamesystem: Determined nameservice ID: nameservice1
22/01/31 09:25:32 INFO namenode.FSNamesystem: HA Enabled: true
22/01/31 09:25:32 INFO common.Util: dfs.datanode.fileio.profiling.sampling.percentage set to 0. Disabling file IO profiling
22/01/31 09:25:32 WARN util.CombinedHostsFileReader: /etc/hadoop/conf.rollback.namenode/dfs_all_hosts.txt has invalid JSON format.Try the old format without top-level token defined.
22/01/31 09:25:32 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit: configured=1000, counted=60, effected=1000
22/01/31 09:25:32 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
22/01/31 09:25:32 INFO blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
22/01/31 09:25:32 INFO blockmanagement.BlockManager: The block deletion will start around 2022 Jan 31 09:25:32
22/01/31 09:25:32 INFO util.GSet: Computing capacity for map BlocksMap
22/01/31 09:25:32 INFO util.GSet: VM type = 64-bit
22/01/31 09:25:32 INFO util.GSet: 2.0% max memory 6.9 GB = 142.3 MB
22/01/31 09:25:32 INFO util.GSet: capacity = 2^24 = 16777216 entries
22/01/31 09:25:32 INFO blockmanagement.BlockManager: dfs.block.access.token.enable = true
22/01/31 09:25:32 INFO blockmanagement.BlockManager: dfs.block.access.key.update.interval=600 min(s), dfs.block.access.token.lifetime=600 min(s), dfs.encrypt.data.transfer.algorithm=3des
22/01/31 09:25:32 INFO Configuration.deprecation: No unit for dfs.namenode.safemode.extension(30000) assuming MILLISECONDS
22/01/31 09:25:32 INFO blockmanagement.BlockManagerSafeMode: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
22/01/31 09:25:32 INFO blockmanagement.BlockManagerSafeMode: dfs.namenode.safemode.min.datanodes = 1
22/01/31 09:25:32 INFO blockmanagement.BlockManagerSafeMode: dfs.namenode.safemode.extension = 30000
22/01/31 09:25:32 INFO blockmanagement.BlockManager: defaultReplication = 3
22/01/31 09:25:32 INFO blockmanagement.BlockManager: maxReplication = 512
22/01/31 09:25:32 INFO blockmanagement.BlockManager: minReplication = 1
22/01/31 09:25:32 INFO blockmanagement.BlockManager: maxReplicationStreams = 20
22/01/31 09:25:32 INFO blockmanagement.BlockManager: redundancyRecheckInterval = 3000ms
22/01/31 09:25:32 INFO blockmanagement.BlockManager: encryptDataTransfer = false
22/01/31 09:25:32 INFO blockmanagement.BlockManager: maxNumBlocksToLog = 1000
22/01/31 09:25:32 INFO namenode.FSDirectory: GLOBAL serial map: bits=24 maxEntries=16777215
22/01/31 09:25:32 INFO util.GSet: Computing capacity for map INodeMap
22/01/31 09:25:32 INFO util.GSet: VM type = 64-bit
22/01/31 09:25:32 INFO util.GSet: 1.0% max memory 6.9 GB = 71.1 MB
22/01/31 09:25:32 INFO util.GSet: capacity = 2^23 = 8388608 entries
22/01/31 09:25:33 INFO namenode.FSDirectory: ACLs enabled? true
22/01/31 09:25:33 INFO namenode.FSDirectory: POSIX ACL inheritance enabled? true
22/01/31 09:25:33 INFO namenode.FSDirectory: XAttrs enabled? true
22/01/31 09:25:33 INFO namenode.NameNode: Caching file names occurring more than 10 times
22/01/31 09:25:33 INFO snapshot.SnapshotManager: Loaded config captureOpenFiles: true, skipCaptureAccessTimeOnlyChange: false, snapshotDiffAllowSnapRootDescendant: true
22/01/31 09:25:33 INFO util.GSet: Computing capacity for map cachedBlocks
22/01/31 09:25:33 INFO util.GSet: VM type = 64-bit
22/01/31 09:25:33 INFO util.GSet: 0.25% max memory 6.9 GB = 17.8 MB
22/01/31 09:25:33 INFO util.GSet: capacity = 2^21 = 2097152 entries
22/01/31 09:25:33 ERROR namenode.FSNamesystem: FSNamesystem initialization failed.
java.lang.RuntimeException: java.lang.ClassNotFoundException: com.cloudera.navigator.audit.hdfs.HdfsAuditLoggerCdh5
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initAuditLoggers(FSNamesystem.java:1044)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:878)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:726)
at org.apache.hadoop.hdfs.server.namenode.NameNode.doRollback(NameNode.java:1376)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1620)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1720)
Caused by: java.lang.ClassNotFoundException: com.cloudera.navigator.audit.hdfs.HdfsAuditLoggerCdh5
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:264)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initAuditLoggers(FSNamesystem.java:1037)
... 5 more
22/01/31 09:25:33 INFO namenode.FSNamesystem: Stopping services started for active state
22/01/31 09:25:33 INFO namenode.FSNamesystem: Stopping services started for standby state
22/01/31 09:25:33 ERROR namenode.NameNode: Failed to start namenode.
java.lang.RuntimeException: java.lang.ClassNotFoundException: com.cloudera.navigator.audit.hdfs.HdfsAuditLoggerCdh5
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initAuditLoggers(FSNamesystem.java:1044)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:878)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:726)
at org.apache.hadoop.hdfs.server.namenode.NameNode.doRollback(NameNode.java:1376)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1620)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1720)
Caused by: java.lang.ClassNotFoundException: com.cloudera.navigator.audit.hdfs.HdfsAuditLoggerCdh5
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:264)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initAuditLoggers(FSNamesystem.java:1037)
... 5 more
22/01/31 09:25:33 INFO util.ExitUtil: Exiting with status 1: java.lang.RuntimeException: java.lang.ClassNotFoundException: com.cloudera.navigator.audit.hdfs.HdfsAuditLoggerCdh5
22/01/31 09:25:33 INFO namenode.NameNode: SHUTDOWN_MSG:
Thanks.
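For anyone who hits the same thing: the ClassNotFoundException above suggests the rollback config still references the Cloudera Navigator HDFS audit logger while its jar is not on the NameNode classpath. This is how I am narrowing it down (a rough sketch; the config directory is the one passed to --config above, and /opt/cloudera/parcels is the default parcel location on our hosts):

# Find where the Navigator audit logger class is referenced in the rollback config
grep -rn 'HdfsAuditLoggerCdh5' /etc/hadoop/conf.rollback.namenode/

# Check which audit loggers are configured for the NameNode
grep -rn -A1 'dfs.namenode.audit.loggers' /etc/hadoop/conf.rollback.namenode/

# Search the parcel jars for the missing class (slow, but conclusive)
for jar in $(find /opt/cloudera/parcels -name '*.jar' 2>/dev/null); do
  unzip -l "$jar" 2>/dev/null | grep -q 'HdfsAuditLoggerCdh5' && echo "$jar"
done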