Member since: 05-09-2018
Posts: 44
Kudos Received: 3
Solutions: 0
07-21-2021
01:46 AM
Hi, when I enabled this feature, Hue was able to import all users from the LDAP group; however, sync_ldap_users_and_groups then tries to create the same users again and fails with a duplicate key error. See my question at https://community.cloudera.com/t5/Support-Questions/hue-ldap-sync-duplicate-error/m-p/320951/highlight/true#M228224
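For reference, this is how the sync is typically invoked from the command line (a hedged sketch; the Hue install path below assumes a CDH parcel layout and will differ on other deployments):

cd /opt/cloudera/parcels/CDH/lib/hue    # assumed install path
build/env/bin/hue sync_ldap_users_and_groups    # the Hue management command named in the error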
03-16-2021
12:41 AM
Hello @sheshk11
Thanks for sharing your knowledge (Knowledge Article) on managing a table stuck in DISABLING state. As @tencentemr mentioned, it has been helpful. A few other details I wish to add:
1. On HBase v2.x, the HBCK2 setTableState command [1] performs the same fix. The advantage of using it is that manual edits are avoided, preventing any unintended HBase metadata manipulation. (See the usage sketch after the link below.)
2. In certain cases, the regions belonging to the table may be in transition as well. If you are disabling the table, it is best to review the RegionState of its regions too; the HBCK2 setRegionState command [1] can assist here.
As this post is a Knowledge Article, I shall mark it as Resolved. Thank you for posting it to assist fellow Community members.
- Smarak
[1] https://github.com/apache/hbase-operator-tools/tree/master/hbase-hbck2
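For anyone following up, here is a hedged sketch of the HBCK2 invocations referenced above; the jar path is a placeholder and the table/region names are examples:

# Set the table state directly in hbase:meta (HBase 2.x, hbase-operator-tools):
hbase hbck -j /path/to/hbase-hbck2.jar setTableState my_table DISABLED

# If regions of that table are stuck in transition, align their state too:
hbase hbck -j /path/to/hbase-hbck2.jar setRegionState <encoded_region_name> CLOSED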
01-05-2020
06:57 AM
Hi, have you tried disabling SPNEGO authentication in the configuration properties and restarting the service? Thanks, AKR
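A hedged sketch of what that change usually looks like; the exact property depends on which service enabled SPNEGO, and this assumes it was turned on for the Hadoop web UIs via core-site.xml:

<property>
  <name>hadoop.http.authentication.type</name>
  <value>simple</value>   <!-- "kerberos" enables SPNEGO; "simple" disables it -->
</property>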
11-06-2019
02:18 AM
Hi, check the DataNode logs on that particular RegionServer host for the same timestamp. It looks like there could be an issue with the mount point.
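A hedged sketch of how to correlate the two (the log path and timestamp are placeholders; adjust them to your install and the actual event time):

# Look for errors in the DataNode log on the same host at the same moment:
grep '2019-11-06 02:1' /var/log/hadoop-hdfs/hadoop-hdfs-datanode-*.log

# Check the mount points backing dfs.datanode.data.dir for disk-level trouble:
df -h
dmesg | tail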
05-27-2019
06:04 PM
@Shesh Kumar I am happy this compilation has helped give you a better understanding. If you found this answer addressed your question, please take a moment to log in and click the "accept" link on the answer. That would be a great help to Community users trying to find solutions to these kinds of errors quickly. Happy Hadooping!
05-29-2019
01:14 AM
While it is true that one must "…be logged in and have sufficient reputation points to post a repo or an article", an HCC member does not have to be at the Guru level to create an Article. A member in good standing who has accumulated enough reputation points to reach the Rookie level should be able to submit an article to moderation. Your HCC moderation staff is looking into why @Shesh Kumar does not have access to the Create Article menu option. In the meantime, as of Wed May 29 01:13 UTC 2019, the original question and this thread have been moved to the Cloud & Operations track. The Community Help track is intended for questions about using the HCC site itself.
11-21-2018
05:30 PM
Thank you so much, Robert! I highly appreciate your views. I have one more question which I came across, about the auto-renewal of Kerberos tickets. As you know, we have successfully integrated FreeIPA with the Ambari cluster, which also has IPA replication. I noticed that a user's Kerberos ticket is not auto-renewing even though it is still valid:

shesh.kumar@stg-ambarixenial001:~$ klist
Ticket cache: FILE:/tmp/krb5cc_1193
Default principal: shesh.kumar@EXAMPLE.COM

Valid starting       Expires              Service principal
11/18/18 18:15:37    11/19/18 18:15:34    krbtgt/EXAMPLE.COM@EXAMPLE.COM
        renew until 11/25/18 18:15:34

As you can see above, the ticket is not auto-renewing. How can I make sure the Kerberos ticket is auto-renewed once the user executes the "kinit" command? Let me show you what I have done from my side. I have added these three lines to the /etc/sssd/sssd.conf file on the FreeIPA server (which does not have the Hadoop client):

krb5_lifetime = 120s
krb5_renewable_lifetime = 150m
krb5_renew_interval = 10s

Will this work? Thanks, Shesh Kumar
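For context, here is a sketch of the manual renewal path as I understand it (lifetimes are examples and are capped by KDC policy; if I understand correctly, sssd's krb5_renew_interval typically only renews tickets sssd itself acquired at login, not ones from a manual kinit):

# Request an explicitly renewable ticket:
kinit -l 24h -r 7d shesh.kumar@EXAMPLE.COM

# Renew it before it expires (this is what any auto-renew mechanism must run):
kinit -R

# One illustrative way to automate renewal for a long-lived session is a cron entry such as:
#   0 */6 * * * /usr/bin/kinit -R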
10-30-2018
07:47 AM
Thank you! I will surely check the recommendation next time.
08-10-2018
10:44 AM
This disables the 'hadoop' command completely; I missed that in the description. Restricting only chmod is not possible without implementing authentication/authorization, AFAIK.
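Purely as an illustration of why (a hedged sketch, not a security control): a shell wrapper can block just the chmod subcommand, but it is trivially bypassed by invoking the real binary, which is why genuine restriction needs authentication/authorization:

hadoop() {
  # Intercept only "hadoop fs -chmod"; pass everything else through.
  if [ "$1" = "fs" ] && [ "$2" = "-chmod" ]; then
    echo "hadoop fs -chmod is disabled on this host" >&2
    return 1
  fi
  command hadoop "$@"
}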
07-11-2018
02:59 PM
For Kafka, the swap space is probably safe to clear (though I wouldn't); rather, you should prevent Kafka from using swap at all. If you look at disk IO on a Kafka broker node, it should be almost all writes; reads should be served from the page cache. Kafka was designed to be the only tenant on a node and runs best that way, which is why you will find recommendations that Kafka should not share nodes with ZooKeeper or other Hadoop components.

It is not always possible to dedicate machines to Kafka, so look at the disk IO while Kafka is running under normal load. If it is all writes, you can probably shrink the page cache a bit so you do less (or no) swapping. If there are lots of reads, you may need more memory or more nodes (unless you are deliberately and routinely reading topics from the beginning, in which case disk reads are unavoidable).

I can't help you with ZooKeeper; I've never had a reason to dig into its internals, as it has always just worked.
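A hedged sketch of how to check this in practice (device names and thresholds vary; pick the disks backing Kafka's log.dirs):

# Watch per-device IO under normal load; r/s vs w/s and %util are the columns of interest:
iostat -x 5

# If reads are near zero (served from the page cache), discourage swapping:
sudo sysctl vm.swappiness=1    # common Kafka recommendation; persist in /etc/sysctl.conf if it helps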