Member since: 10-25-2015
Posts: 13
Kudos Received: 0
Solutions: 3
My Accepted Solutions
Title | Views | Posted |
---|---|---|
 | 2442 | 05-26-2017 06:28 AM |
 | 1375 | 05-26-2017 06:26 AM |
 | 1265 | 01-12-2017 03:04 PM |
05-26-2017
06:28 AM
The trick is to create a BDR-specific user and add them to the hive or supergroup group, as relevant, for Hive or general HDFS backups respectively. No extended ACLs (facls) or sticky bits are required.
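A minimal sketch of what that could look like, assuming shell-based (OS) group mapping resolved on the NameNode host; the account name "bdr" is a placeholder, not something from the original post:

$ sudo useradd bdr                   # BDR-specific account (needs a uid above 1000 when Kerberos is in use)
$ sudo usermod -aG hive bdr          # for Hive warehouse backups
$ sudo usermod -aG supergroup bdr    # for general HDFS backups
$ hdfs groups bdr                    # confirm which groups the NameNode resolves for the user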
05-26-2017
06:26 AM
@ScottE wrote: https://www.cloudera.com/documentation/enterprise/5-9-x/topics/impala_ssl.html shows a bunch of "TLS/SSL ... Client" properties that no longer appear in CM for CDH 5.9.0. Is there an update to the documentation available that covers this? I have Impala running behind a proxy and I am also wondering about how this fits in. While I am here, HiveServer2 documentation indicates Kerberos and LDAP client authentication can co-exist but CM doesn't allow for this.

For the above three items:
1. The "TLS/SSL ... Client" properties are now simply prefixed "Impala TLS/SSL Server" - this should be a documentation change.
2. If Impala is behind a proxy you need to configure HAProxy with a TLS certificate and have it connect to the Impala server instances using TLS as well. The HAProxy documentation will help, but some additional documentation from Cloudera would be nice.
3. HiveServer2 does not support simultaneous Kerberos and LDAP authentication (the way Impala does). To achieve this for Hive you need to run a second HiveServer2 instance, configuring one with Kerberos authentication and the other with LDAP.
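To illustrate that last point, a hedged sketch of connecting to two such HiveServer2 instances; the host names, ports, realm, user and password are placeholders:

$ # HS2 instance configured for Kerberos authentication:
$ kinit jdoe@EXAMPLE.COM
$ beeline -u "jdbc:hive2://hs2-krb.example.com:10000/default;principal=hive/hs2-krb.example.com@EXAMPLE.COM"
$ # Second HS2 instance configured for LDAP username/password authentication:
$ beeline -u "jdbc:hive2://hs2-ldap.example.com:10001/default" -n jdoe -p 'ldap-password'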
05-23-2017
07:20 PM
Solution is to disable Internet Explorer Compatibility Mode. It should be possible to include a meta tag (e.g. https://stackoverflow.com/questions/3449286/force-ie-compatibility-mode-off-using-tags) to achieve this. End-users can do so from the IE cog menu (assuming it is not locked down). Scott
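A hedged example of such a tag (presumably the approach discussed in the linked question), placed in the page's <head> to ask IE for its latest document mode rather than a compatibility mode:

<meta http-equiv="X-UA-Compatible" content="IE=edge" />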
05-04-2017
11:36 PM
https://www.cloudera.com/documentation/enterprise/5-9-x/topics/impala_ssl.html shows a bunch of "TLS/SSL ... Client" properties that no longer appear in CM for CDH 5.9.0. Is there an update to the documentation available that covers this?

I have Impala running behind a proxy and I am also wondering about how this fits in.

While I am here, HiveServer2 documentation indicates Kerberos and LDAP client authentication can co-exist but CM doesn't allow for this.

Clearly the documentation around client authentication could be better. Any pointers to updates would be appreciated. Thanks, S.
Labels:
- Apache Impala
- Security
05-04-2017
07:52 PM
I am attempting to configure a BDR backup from a secured (Kerberos & Sentry with HDFS permission synchronization enabled) CDH 5.9.0 cluster to S3. I can successfully use BDR to back up my own data (e.g. /users/myname), but now I want to back up some Hive/Impala data that is protected by Sentry. I am using HDFS rather than Hive replication (I don't believe this is material to the question).

If I configure BDR to run using my own userid, which happens to have full access according to Sentry permissions, this results in an AccessControlException:

org.apache.hadoop.security.AccessControlException: Permission denied: user=myuser, access=READ, inode="/data":hive:hive:drwxrwx--x

I would have thought that the fact that Sentry has been configured to synchronize HDFS permissions would have meant that I could run this.

According to https://www.cloudera.com/documentation/enterprise/5-9-x/topics/cm_bdr_hive_replication.html, when Kerberos is in use it is necessary to use a user with an ID greater than 1000, so this rules out the hdfs and hive users. It also states that read and execute permissions are needed on the source cluster for BDR to operate. So if my user cannot be used, this means I need to create a BDR user account that has these permissions.

The directories I want to back up are protected with Sentry, so as per https://www.cloudera.com/documentation/enterprise/5-9-x/topics/sg_sentry_service_config.html#concept_z5b_42s_p4__section_lvc_4g4_rp these directories have permissions as follows:

$ hdfs dfs -chown hive:hive /data
$ hdfs dfs -chmod 771 /data

Continuing down this path, to be able to use BDR I will need to use an extended ACL to assign r-x permissions on the relevant directories to the user. To cater for new directories that come along, I am thinking that it would also be necessary to add the sticky bit on this operation. Does the following seem reasonable (running as the hdfs user with the relevant keytab)?

$ hdfs dfs -setfacl -R -m group:backup_users:r-xt /data

Information on using the sticky bit is thin on the ground; is this even supported, and supported for extended ACLs? Is there something I am missing that makes BDR with a Kerberos-enabled cluster easier than this? Thanks, S.
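For what it is worth, a hedged sketch of the ACL route being contemplated: an HDFS ACL entry carries only rwx permissions (no sticky bit), but a "default:" entry on a directory is inherited by directories created beneath it afterwards, which may cover the new-directory case. The group name backup_users is taken from the question above:

$ hdfs dfs -setfacl -R -m group:backup_users:r-x /data            # grant read/execute on the existing tree
$ hdfs dfs -setfacl -m default:group:backup_users:r-x /data       # inherited by future child directories
$ hdfs dfs -getfacl /data                                         # verify the resulting ACL entries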
01-12-2017
03:04 PM
Thanks to the joy that is AWS Flow Logs I was able to see what was going on. It would appear that the arrow on https://www.cloudera.com/documentation/enterprise/latest/topics/cm_ig_ports_cm.html for port 9000 is the wrong way around; CM calls into the CM Agent (CMA) on 9000/TCP, not the other way around. Cloudera: you might like to confirm this and update the documentation accordingly.
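In AWS terms, a hedged sketch of the corresponding Security Group rule (the group IDs are placeholders, assuming the agent hosts sit in sg-0cluster and CM in sg-0cm): allow inbound 9000/TCP to the agent hosts from the CM host.

$ aws ec2 authorize-security-group-ingress \
      --group-id sg-0cluster \
      --source-group sg-0cm \
      --protocol tcp --port 9000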
01-11-2017
10:15 PM
I am configuring a CDH 5.9.0 cluster in AWS with Security Groups (effectively firewalls) separating various classes of nodes, e.g. CM is separate to the cluster nodes, and the Metastore service and HS2 are in separate Security Groups.

Currently CM is unable to retrieve log file entries and I am unable to "Download Full Log" for various services that are outside of the Security Group within which CM is running. I always thought these were retrieved via the CM Agent and that port 7182 into the SG containing CM would be enough, but clearly this is not the case. If I open all ports into the SG containing the cluster nodes then CM is able to successfully access the log entries for, say, the DataNode role, so this is definitely a port/firewall issue.

From the information available on the following two URLs I am unable to determine the specific ports I need to open in order to allow CM to access the DataNode role logs:

https://www.cloudera.com/documentation/enterprise/latest/topics/cm_ig_ports_cm.html
https://www.cloudera.com/documentation/enterprise/latest/topics/cdh_ig_ports_cdh5.html

The issue applies across the board: CM cannot see role logs for pretty much any service not within its Security Group. Can someone point out which of the ports are used for CM's retrieval of this information, including whether this is the same for all service roles or different for each. Thanks.
Labels:
- Cloudera Manager