Member since: 01-19-2017
Posts: 3676
Kudos Received: 632
Solutions: 372

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 502 | 06-04-2025 11:36 PM |
| | 1046 | 03-23-2025 05:23 AM |
| | 547 | 03-17-2025 10:18 AM |
| | 2044 | 03-05-2025 01:34 PM |
| | 1278 | 03-03-2025 01:09 PM |
03-25-2020
05:57 AM
@npdell Before starting the upgrade, did you by any chance validate the upgrade path with the Cloudera support matrix? That should have been your first reference source.
03-25-2020
05:42 AM
@desind I can see the error "Authentication is not valid", but it seems you didn't use the generated digest in the format super:password->super:DyNYQEQvajljsxlhf5uS4PJ9R28=; instead, your input was as below, according to the steps you shared:

addauth digest super:password

Add the auth with the password matching the configured digest, and then delete the znode; that should work:

[zk: xxx.unx.sas.com(CONNECTED) 2] deleteall /kafka-acl/Topic

Please do that and revert.
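For clarity, a minimal sketch of the full sequence (the hostname, digest, and znode path are taken from this thread; adjust them for your environment):

```bash
# connect to the ensemble (hostname as in your session above)
./bin/zkCli.sh -server xxx.unx.sas.com:2181

# inside zkCli: authenticate as the superuser with the plaintext password
# that matches the superDigest configured on the server
addauth digest super:password

# the ACL no longer blocks you; remove the stale Kafka ACL znode
deleteall /kafka-acl/Topic
```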
03-24-2020
11:24 AM
@ARVINDR Yes, it's possible. First questions first: don't attempt this if this is a production cluster!!! Only do it when it's your scratch/dev cluster. What is the HDP version? The username mapping is managed by the Isilon admin, and I don't know how the authorization works; maybe the HDP admin user has been delegated to Isilon, in which case changing it at the cluster level won't synchronize with the OS. After you answer the above, we can come up with a procedure.
03-24-2020
10:06 AM
@desind By default, ZooKeeper runs without the option of becoming a superuser to administer znodes in the ZK ensemble, for example, to fix ACLs, remove znodes that are no longer required, or create new ones in specific locations. ZooKeeper grants permissions through ACLs using different schemes or authentication methods, such as 'world', 'digest', or 'sasl' if we use Kerberos. We could potentially lock ourselves out if we were to grant everyone just read permissions on a znode, as we would no longer be able to delete or modify it.
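As a concrete illustration of that lockout (a sketch using a hypothetical /demo znode in zkCli):

```bash
# inside zkCli: create a znode, then grant everyone read-only access
create /demo mydata
setAcl /demo world:anyone:r

# writes now fail with NoAuth ("Authentication is not valid" in older CLIs)
set /demo newdata

# and without the ADMIN permission the ACL itself can no longer be fixed,
# which is exactly the situation a configured superuser can recover from
setAcl /demo world:anyone:cdrwa
```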
03-24-2020
09:26 AM
@desind I tweaked it a little bit; it should work in Cloudera.

Go to the Cloudera ZooKeeper server home:

# cd $CDH_HOME/zookeeper-server

Run the command below:

java -cp "./zookeeper.jar:lib/slf4j-api-1.6.1.jar" org.apache.zookeeper.server.auth.DigestAuthenticationProvider super:password

The output should look like below:

SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
super:password->super:DyNYQEQvajljsxlhf5uS4PJ9R28=

Copy the super:DyNYQEQvajljsxlhf5uS4PJ9R28= text, log in to Cloudera Manager, and go to the ZooKeeper config. Add the below to the zookeeper-env template config:

export SERVER_JVMFLAGS="$SERVER_JVMFLAGS -Dzookeeper.DigestAuthenticationProvider.superDigest=super:DyNYQEQvajljsxlhf5uS4PJ9R28="

Save and restart ZooKeeper, then launch the ZooKeeper shell on the CDH CLI:

# ./bin/zkCli.sh -server your_server.com

Add the auth as below; removing the ACL should then work:

addauth digest super:password

Now try to delete an ACL in ZooKeeper; this should work. Unfortunately, I don't have a CDH sandbox, so you might have to adjust some commands.
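After the restart, a quick way to confirm the server picked up the flag (a sketch; the exact process listing varies by platform and CDH version):

```bash
# the superDigest system property should appear on the running
# ZooKeeper server's command line after the restart
ps aux | grep 'DigestAuthenticationProvider.superDigest' | grep -v grep
```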
03-24-2020
09:19 AM
@ARVINDR In a usual setup, the Hive service should run on a master node, and you run the Hive client on the client nodes, i.e., edge nodes and data nodes, because during deployment Ambari copies the master configuration to all client hosts, e.g., it copies the Hive config to every host where the Hive client has been installed.
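To see this in practice, compare the client configuration on an edge node with the master (a sketch; /etc/hive/conf is the usual Ambari-managed location):

```bash
# on any host with the Hive client, Ambari pushes the config here
ls -l /etc/hive/conf/hive-site.xml

# the metastore URI in the client copy should point at the master node
grep -A1 'hive.metastore.uris' /etc/hive/conf/hive-site.xml
```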
03-24-2020
08:23 AM
@ARVINDR Why did you have to re-install the service? The issue is with "Reason: Error mapping uname 'hive' to uid"; remember the hive uid is mapped to root. I am not an Isilon expert, so I am really handicapped here. Please let me know.
03-24-2020
07:33 AM
@kvinod I can see the setgid bit (drwxr-s---) was set, which alters the standard behavior so that the group of files created inside said directory will not be that of the user who created them, but that of the parent directory itself:

$ ls -lrt /disk1/yarn/nm/usercache
total 4
drwxr-s--- 4 mcaf yarn 4096 Feb 24 01:26 mcaf

Can you remove the setgid bit as the root user:

# chmod g-s /disk1/yarn/nm/usercache/mcaf

Then rerun.

Question 1: You don't need to explicitly change file permissions when you enable Kerberos; it should work out of the box.
Question 2: You don't need to regenerate a new mcafmerged.keytab; just copy it to your other edge nodes and it should work, as those edge nodes are also part of the cluster.

Please revert.
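A quick before/after check (a sketch; the path is from your ls output above):

```bash
# after clearing the setgid bit, the group 's' should become a plain 'x'
stat -c '%A %U %G %n' /disk1/yarn/nm/usercache/mcaf
# expected: drwxr-x--- mcaf yarn /disk1/yarn/nm/usercache/mcaf
```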
03-24-2020
06:51 AM
@ARVINDR In addition to @stevenmatison's switch command, which people usually ignore:

Add a Hive proxy. To prevent network connection or notification problems, you must add a hive user proxy for the HiveServer Interactive service to access the Hive Metastore.

Steps:
1. In Ambari, select Services > HDFS > Configs > Advanced.
2. In Custom core-site, add the FQDNs of the HiveServer Interactive host or hosts to the value of hadoop.proxyuser.hive.hosts.
3. Save the changes.

Can you also set hive.server2.enable.doAs:

hive.server2.enable.doAs=true --> Run Hive scripts as the end user instead of the hive user.
hive.server2.enable.doAs=false --> All jobs will run as the hive user.
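For reference, a minimal sketch of the resulting Custom core-site entries (the hostnames are placeholders; hadoop.proxyuser.hive.groups is commonly set alongside the hosts property):

```xml
<!-- allow the hive user to impersonate end users from the
     HiveServer Interactive host(s); hostnames are illustrative -->
<property>
  <name>hadoop.proxyuser.hive.hosts</name>
  <value>hsi-host1.example.com,hsi-host2.example.com</value>
</property>
<property>
  <name>hadoop.proxyuser.hive.groups</name>
  <value>*</value>
</property>
```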