Member since: 08-16-2016
Posts: 14
Kudos Received: 2
Solutions: 0
07-10-2017
11:38 PM
> We have the superuser group defined as 'supergroup' in our configuration. However, this group does not exist on any of the nodes.

This is intentional. The default is set to a name (supergroup) that typically shouldn't exist right after install, to protect against unintentional super-users. You are free to modify the supergroup name via the HDFS -> Configuration -> "Superuser Group" field.

> If I have to set up this group and start adding a couple of other accounts to have superuser access to HDFS, where should this Linux group be created? Should it be created on all nodes in the cluster? Or is it sufficient to create the Linux group on the NameNode hosts only?

The general and bulletproof approach to adding local Linux groups and usernames in a cluster is always "all hosts" when you use no centralized user/group management software (such as an AD via LDAP, etc.). The reason is that your host assignments are not static over the life of the cluster: while adding the group on the NameNode(s) alone will work immediately, you will face odd authorization issues later when a NameNode host needs to be migrated or replaced. Likewise, if security is turned on in the future, it will require local accounts on the worker hosts as well.
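As a minimal sketch of the "all hosts" approach, assuming you keep the default group name supergroup, a hypothetical user alice, and a hypothetical host list cluster_hosts.txt, the group addition could look roughly like this:

$ for host in $(cat cluster_hosts.txt); do ssh root@$host "groupadd -f supergroup && usermod -aG supergroup alice"; done

The group name just has to match whatever value you put in the Superuser Group field.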
07-06-2017
09:29 AM
Presumably, Kerberos is enabled or you wouldn't be getting this error at all. All users must have a valid ticket from a KDC, which typically means running kinit prior to running any commands or jobs. You can also get a ticket using a keytab file, which is just a stored version of the user's password. The ticket is kept in the ticket cache on the system; by default this is /tmp/krb5cc_<userid>, and the client checks there first. I would venture that some other process is getting a ticket, storing it in the ticket cache, and the other processes are able to use it. This is likely because you are using the 'hdfs' account that the HDFS processes run under. I strongly encourage you not to operate this way. Instead of using the 'hdfs' account, update the Superuser Group setting in CM to include a group that you wish to have HDFS superuser access, which I assume is why you are using 'hdfs' in the first place.
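As a rough illustration (the principal name, realm, and keytab path below are assumptions, not values from your cluster), obtaining and inspecting a ticket looks like this:

$ kinit alice@EXAMPLE.COM                                          # interactive: prompts for the password
$ kinit -kt /etc/security/keytabs/alice.keytab alice@EXAMPLE.COM   # non-interactive: reads the key from the keytab
$ klist                                                            # shows the ticket held in /tmp/krb5cc_<userid>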
02-02-2017
01:32 AM
Hi Michalis, this solution is not working for me. Upon running the command:

pg_dump -h hostname -p 7432 -U scm scm > /tmp/scm_server_db_backup.$(date +%Y%m%d)

I get an error like:

pg_dump: symbol lookup error: pg_dump: undefined symbol: PQconnectdbParams

Could you please help?
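In case it helps narrow things down, a hedged guess is that pg_dump is resolving an older libpq shared library at runtime than the one it was built against; you can check which binary and library are actually being picked up with:

$ which pg_dump
$ ldd $(which pg_dump) | grep libpq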
10-07-2016
01:44 AM
1 Kudo
Let's say your dataDir and old dataLogDir are /var/lib/zookeeper and you are now moving dataLogDir to /var/lib/zookeeper-log. First you change this in the service-wide configuration, which will make the stale configuration icon appear. Then you stop zk1, ssh into zk1, and run the following commands:

$ mkdir -p /var/lib/zookeeper-log/version-2
$ cp /var/lib/zookeeper/version-2/log.* /var/lib/zookeeper-log/version-2/
$ chown -R zookeeper:zookeeper /var/lib/zookeeper-log

Then you can start zk1 and wait until it's running and shows as either leader or follower on the Cloudera Manager service page. After that's done, you can do the same with zk2 and finally with zk3. By that point the stale configuration alert should disappear and everything should be fine cluster-wide. As you said, only the log.* files need to be copied.
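If you also want to confirm a server's role from the shell, assuming the default client port 2181 and that nc is installed, the ZooKeeper srvr four-letter command reports it:

$ echo srvr | nc zk1 2181 | grep Mode    # prints "Mode: leader" or "Mode: follower"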