Member since
10-06-2015
15
Posts
1
Kudos Received
1
Solution
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 1426 | 02-09-2021 12:39 PM
10-23-2021
07:54 AM
Hello @Rjkoop, As stated by @willx, visibility labels aren't supported, and Will has shared the link confirming this. As such, we are marking the post as solved. That said, your team may post any further concerns in this thread; we shall review them and get back to you accordingly. Thanks for using Cloudera Community. Regards, Smarak
03-14-2021
01:16 AM
Hello @Rjkoop, Thanks for posting the update and confirming the question has been resolved. In short, the article requires us to set the three configurations you specified ["hbase.security.exec.permission.checks", "hbase.security.access.early_out", "hfile.format.version"] along with enabling HBase Secure Authorization (mandatory for enabling HBase cell-level ACLs). Additionally, link [1] documents the ACL functionality in detail. As the post is solved, I shall mark it accordingly. - Smarak [1] https://hbase.apache.org/book.html#hbase.accesscontrol.configuration
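For reference, the three properties named above would typically land in hbase-site.xml alongside the secure-authorization switch. The fragment below is a sketch of what that might look like; the values shown follow the HBase reference guide linked at [1], so please verify them against that page before applying:

```xml
<!-- Sketch only: cell-level ACL settings discussed above.
     Verify values against the HBase Reference Guide before use. -->
<property>
  <name>hbase.security.authorization</name>
  <value>true</value> <!-- HBase Secure Authorization (mandatory) -->
</property>
<property>
  <name>hbase.security.exec.permission.checks</name>
  <value>true</value>
</property>
<property>
  <name>hbase.security.access.early_out</name>
  <value>false</value> <!-- evaluate cell-level grants instead of failing early -->
</property>
<property>
  <name>hfile.format.version</name>
  <value>3</value> <!-- HFile v3 is required to store cell tags/ACLs -->
</property>
```

These settings need to be applied on the HMaster and RegionServers, followed by a rolling restart.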
02-10-2021
12:25 PM
1 Kudo
To follow up: I got this working today. It turns out it was caused by this setting:

hadoop.security.group.mapping=org.apache.hadoop.security.ShellBasedUnixGroupsMapping

Apparently this runs `bash -c groups` for the user, and the output separates the groups by spaces. When I changed to this implementation:

hadoop.security.group.mapping=org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback

everything worked correctly. Now `hbase shell` correctly lists the groups (even ones containing spaces) and visibility labels work correctly. That was a fun one... NOT! Richard
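To illustrate why the shell-based mapping breaks on group names that contain spaces, here is a small standalone sketch. It is not the actual Hadoop parsing code, just a simulation of the behavior described above: the space-delimited output of `groups` makes a multi-word group name indistinguishable from several one-word names.

```java
public class GroupParseDemo {
    public static void main(String[] args) {
        // Simulated output of `bash -c groups` for a user in THREE groups,
        // one of which ("domain users") contains a space.
        String shellOutput = "hadoop hbase domain users";

        // A space-delimited parse cannot tell "domain users" is one group:
        String[] parsed = shellOutput.split("\\s+");

        System.out.println(parsed.length);            // prints 4, not 3
        System.out.println(String.join(",", parsed)); // prints hadoop,hbase,domain,users
    }
}
```

The JNI-based mapping queries the OS group database directly rather than parsing shell output, which is why it handles such group names correctly.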
02-03-2021
04:53 PM
One thing I noticed today, in case it may help with this issue. Today I tried the sqoop from MSQL -> HBase again on a new table, with compression set and pre-split, in Cloudera 5.15.1 and Cloudera 6.2.1 environments; the HBase configuration (and HDFS configuration, for that matter) is almost identical in both.

In the Cloudera 6.2.1 (i.e. HBase 2.1.2) environment I see the flush to the HStoreFile happen fairly quickly (only about 32,000 new entries), and the logs mention 'Adding a new HDFS file' of size 321 KB. In the Cloudera 5.15.1 (i.e. HBase 1.2.x) environment the flush to the HStoreFile takes longer, there are 700,000 entries being flushed, and the 'Adding a new HDFS file' is of size 6.5 MB.

The memstore flush size is set to 128 MB in both environments, and the region servers have 24 GB available, so I think it's hitting the 0.4 heap factor for memstores and then flushing in both cases. Also, there are only a few tables with heavy writes and most of the other tables are fairly idle, so I don't think they would take up much memstore space. In the Cloudera 6.2.1 environment each server holds about 240 regions; in the Cloudera 5.15.1 environment each server holds about 120 regions.

My thinking is that if I can get the Cloudera 6.2.1/HBase 2.1.2 memstore flush happening with a similar size and number of entries as in the Cloudera 5.15.1 environment, the performance issue for large writes would be solved. I'm just not sure how to make that happen. I also noticed that minor compactions take a similar amount of time in both environments, so I think that's not an issue. Richard
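For anyone following along, the two settings in play here live in hbase-site.xml. The fragment below simply restates the values described above as a sketch (treat it as a starting point for experimentation, not a recommendation; note that HBase 1.x spelled the global-limit property differently, as hbase.regionserver.global.memstore.upperLimit):

```xml
<!-- Sketch only: the memstore settings discussed above, in HBase 2.x naming. -->
<property>
  <name>hbase.hregion.memstore.flush.size</name>
  <value>134217728</value> <!-- 128 MB per-region flush threshold -->
</property>
<property>
  <name>hbase.regionserver.global.memstore.size</name>
  <value>0.4</value> <!-- fraction of RegionServer heap before forced flushes -->
</property>
```

With ~240 regions sharing 0.4 × 24 GB of heap, each region gets far less than 128 MB of memstore on average before the global limit triggers a flush, which would be consistent with the small, frequent flushes observed in the 6.2.1 environment.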
01-24-2018
07:57 AM
Hi Suku, I'll respond to some of your questions:

a) Which keytab did you use, the CM-generated keytab or a user keytab generated by you? I used kafka.keytab.

b) Path of your jaas.conf and keytab for Kafka? The kafka.keytab is in /etc/security/keytabs/.

c) How are the Kafka Kerberos configuration parameters set? The following is the configuration of the Kafka parameters and the form in which to pass the JAAS parameter:

    Properties props = new Properties();
    props.put("bootstrap.servers", "xxxx:9092,xxx:9092");
    props.put("client.id", "client-id-coprocessor");
    props.put("key.serializer", StringSerializer.class.getName());
    props.put("value.serializer", StringSerializer.class.getName());
    props.put("security.protocol", "SASL_PLAINTEXT");
    props.put("sasl.kerberos.service.name", "kafka");
    props.put("sasl.jaas.config",
        "com.sun.security.auth.module.Krb5LoginModule required \n" +
        "useKeyTab=true \n" +
        "storeKey=true \n" +
        "keyTab=\"/etc/security/keytabs/kafka.keytab\" \n" +
        "principal=\"kafka/nodo@REALM\";");
    KafkaProducer<String, String> producer = new KafkaProducer<String, String>(props);

Remember that sometimes you will need to restart your HBase service to deploy your coprocessor. I hope this helps. Florentino