Member since: 07-31-2013
Posts: 1924
Kudos Received: 462
Solutions: 311
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 1543 | 07-09-2019 12:53 AM
 | 9293 | 06-23-2019 08:37 PM
 | 8052 | 06-18-2019 11:28 PM
 | 8676 | 05-23-2019 08:46 PM
 | 3473 | 05-20-2019 01:14 AM
03-22-2015
11:43 PM
Hi Harsh, The TTL option works well for most of the tables/cases. However, Flume agents load data into the staging tables continuously, and when we run major compaction the regions go offline and the data load fails, so I had to turn off major compaction. Can you help me with how to handle major compaction on these tables so that TTL can purge the old data? Thanks
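A minimal sketch of one possible approach, assuming automatic major compaction stays disabled: set the TTL on the column family and trigger major compaction manually during a low-traffic window. The table and column-family names below are placeholders, not from the original post:

# Hypothetical table/CF names; run during a low-traffic window.
hbase shell <<'EOF'
alter 'staging_table', { NAME => 'cf', TTL => 604800 }   # keep 7 days of data
major_compact 'staging_table'                            # rewrites HFiles and drops expired cells
EOF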
03-13-2015
09:43 AM
But this still doesn't answer the original question: why does dfs.permissions.superusergroup default to supergroup when CM doesn't create that group in Linux (a workaround sketch follows below)? In our case, we also found discrepancies between the default Hadoop users/groups that were created and the documentation (Guide to Special Users in the Hadoop Environment): for example, the hdfs user is not assigned to the hdfs group, and the mapred group is not created at all. We are running CDH 5.2.0 on Debian. I've done my share of research and still find the HDFS user/group permission mechanism confusing. For a plain Linux CDH installation without Kerberos (the majority, I believe), HDFS relies on the Unix user/group permission mechanism but interprets it in its own way, hence the confusing and unintuitive behaviors:
- Unix root has less privilege in HDFS than the hdfs user; it is treated like an (uninitialized) regular user.
- There are no built-in user-admin commands for HDFS similar to Linux useradd, userdel, gpasswd, etc.
- There is no tool to 'migrate' existing Linux users to HDFS in bulk.
- Hadoop app users that need to create HDFS files (e.g. mapred, flume, etc.) are not automatically set up under /user.
- There is no pre-defined Unix group that includes *all* Hadoop app users needing HDFS superuser access.
- Regular users cannot run 'hdfs fsck /', since the staging dir /tmp/logs/hdfs is 770.
Perhaps Cloudera can write a more understandable adaptation of the Apache HDFS document. Thanks, Miles
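As a workaround sketch (not an official recommendation), the group named by dfs.permissions.superusergroup can be created by hand on the NameNode host and the desired admin accounts added to it; the user name below is a hypothetical placeholder:

# On the NameNode host (default dfs.permissions.superusergroup is "supergroup"):
sudo groupadd supergroup
sudo usermod -a -G supergroup alice    # "alice" is a hypothetical admin user
hdfs groups alice                      # verify that HDFS now resolves the group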
03-03-2015
10:31 PM
Thanks for sharing the solution! Indeed, the -- separator lets you pass connector-specific arguments to the connector itself rather than to Sqoop, which would not recognize them. We cover this at http://www.cloudera.com/content/cloudera/en/documentation/connectors/latest/Teradata/Cloudera-Connector-for-Teradata/cctd_use_tpcc.html, as "You can control the behavior of the connector by using extra arguments. Extra arguments must appear at the end of the command, using -- as the delimiter.".
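A hypothetical invocation showing where the delimiter goes, assuming the Cloudera Connector for Teradata; the connection string, credentials, table, paths, and the connector's --input-method value are all placeholders:

sqoop import \
  --connect jdbc:teradata://td-host/DATABASE=sales \
  --username sqoop_user --password-file /user/sqoop_user/.td.pw \
  --table ORDERS \
  --target-dir /user/sqoop_user/orders \
  -- \
  --input-method split.by.amp   # everything after "--" goes to the connector, not to Sqoop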
02-15-2015
05:54 AM
Hi, sorry, no luck. Still suffering with MR2/YARN; I have no idea how it works. Right now I'm hitting a deadlock several times a day. I have a single user which submits jobs. It has a huge pool (32*8 memory and 4*CPU) and a limit of 8 applications at once. Suddenly everything stops. What does that mean? How can I get an idea of what went wrong?
01-12-2015
02:04 PM
2 Kudos
Creating a user in Hue creates the user only in the Hue user database. Creating a user in Cloudera Manager creates the user only in the Cloudera Manager user table. Both the user and the groups need to exist in the NameNode host operating system:
sudo useradd Peter
sudo usermod -G developer Peter
If you don't want the user to be able to log in to the NameNode:
sudo usermod -s /bin/false Peter
or
sudo usermod -s /usr/bin/nologin Peter
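A small follow-up check, assuming the commands above were run on the NameNode host (same placeholder user and group names as above):

id Peter            # confirm the OS-level user and its groups exist
hdfs groups Peter   # confirm HDFS resolves the same group membership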
01-07-2015
01:12 AM
More interestingly, this difference disappeared after upgrading to CDH 5.3.1. T.
01-06-2015
10:15 PM
It will be resolved in the next 5.2.x and 5.3.x bugfix releases (as well as in the future 5.4.0).
11-30-2014
06:03 AM
One possible issue is that your services are listening on the wrong interface. What is the output of "netstat -anp | grep 50060", specifically from your 'master' and 'slave' hosts?
09-20-2014
11:14 PM
Hi Harsh, Should we do the same on the Oozie node as well, for the KT renewal role?
08-28-2014
11:04 PM
OK, I see, thanks again.