Member since: 12-21-2020
Posts: 91
Kudos Received: 8
Solutions: 13
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 1899 | 08-12-2021 05:16 AM |
| | 2125 | 06-29-2021 06:21 AM |
| | 2564 | 06-16-2021 07:15 AM |
| | 1795 | 06-14-2021 12:08 AM |
| | 6029 | 05-14-2021 06:03 AM |
06-16-2021 02:55 AM
Hi,

How do I configure the retention period for CDP Infra Solr? I couldn't find the options suggested in this post in Cloudera Manager.

Thanks,
Megh
06-16-2021 12:44 AM
Hello everyone,

I would like to understand how to define the retention period for Ranger audits in CDP Infra Solr. Ranger audits are filling up the disk space on my nodes, and I would like to configure a retention period for them. Is there a configuration setting in CDP-INFRA-SOLR or the Ranger service in Cloudera Manager that controls this?

Thanks,
Megh
06-14-2021 12:08 AM
1 Kudo
Hi @roshanbi ,

"How is the keytab generated?"
Keytabs can be generated using the ktutil command.

"Can you please explain the flow of authentication using Ranger?"
Ranger is used for authorization, not authentication. Authorization is enforced through plugins such as the HDFS plugin, Hive plugin, YARN plugin, Kafka plugin, etc. For HDFS, the high-level flow is roughly this: when an HDFS operation is received from an HDFS client, it is first authenticated with Kerberos to check whether the Kerberos principal holds a valid ticket. After successful authentication, the request is forwarded to the Ranger HDFS plugin, which checks whether a Ranger policy exists that allows this principal to access the requested resource. Once authorization succeeds, the NameNode performs the requested operation.

"The role of principal, tickets and authentication key?"
A principal is equivalent to a user. Tickets are issued for a limited period (8 hours by default) so that users do not have to authenticate with a password for every individual request. I am not sure what you mean by "authentication key" in this context.

Thanks,
Megh
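P.S. For reference, a minimal ktutil session to create such a keytab looks something like this (the kvno and encryption type below are just examples; match them to what your KDC uses):

```
$ ktutil
ktutil:  addent -password -p streamset/RB-HADOOP-03@INNOV.LOCAL -k 1 -e aes256-cts-hmac-sha1-96
Password for streamset/RB-HADOOP-03@INNOV.LOCAL:
ktutil:  wkt /opt/striim/streamset.keytab
ktutil:  quit
$ klist -kt /opt/striim/streamset.keytab
```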
06-13-2021 11:50 PM
Hi @dmharshit ,

It is difficult to comment without looking at the logs. Kindly share log snippets from HiveServer2, Hive Metastore, and YARN captured during the execution of this query.

Thanks,
Megh
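P.S. On a typical Cloudera Manager managed node the role logs usually live under /var/log; something like the following should capture the relevant snippets (exact file names vary by host and role, so adjust the paths to your environment):

```
# Typical default log locations on a Cloudera Manager managed node --
# the file name patterns below are examples, adjust to your hosts.
tail -n 200 /var/log/hive/hadoop-cmf-hive-HIVESERVER2-*.log.out
tail -n 200 /var/log/hive/hadoop-cmf-hive-HIVEMETASTORE-*.log.out
tail -n 200 /var/log/hadoop-yarn/hadoop-cmf-yarn-RESOURCEMANAGER-*.log.out
```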
06-10-2021 09:55 PM
1 Kudo
Hi @roshanbi ,

You can use klist to check whether the keytab file actually contains the expected credentials:

klist -kt /opt/striim/streamset.keytab

If the output of this command shows "streamset/RB-HADOOP-03@INNOV.LOCAL" as the principal, then the kinit command will refresh the ticket for this principal. By default, Kerberos tickets are valid for 8 hours, so you should schedule the kinit command to renew the ticket every 8 hours.

Thanks,
Megh
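P.S. A crontab entry along these lines (added with crontab -e for the user that runs the application) would renew the ticket automatically:

```
# Sketch: renew the Kerberos ticket from the keytab every 8 hours.
0 */8 * * * /usr/bin/kinit -kt /opt/striim/streamset.keytab streamset/RB-HADOOP-03@INNOV.LOCAL
```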
05-14-2021 06:08 AM
Hi @snm1523 ,

What permissions do these new users have on the default database?

Thanks,
Megh
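P.S. One way to check, assuming your authorizer supports SHOW GRANT (the JDBC URL and user name below are placeholders):

```
# Sketch: list a user's grants on the default database via beeline.
# Replace the connection string and user name with values from your cluster.
beeline -u "jdbc:hive2://hs2-host:10000/default;principal=hive/_HOST@EXAMPLE.COM" \
  -e "SHOW GRANT USER new_user ON DATABASE default;"
```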
05-14-2021 06:03 AM
Your command should ideally look like this:

hadoop distcp -Dipc.client.fallback-to-simple-auth-allowed=true hdfs://svr2.localdomain:8020/tmp/distcp_test.txt webhdfs://svr1.local:50070/tmp/

Let me know how it goes.

Thanks,
Megh
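P.S. Once it completes, you can sanity-check the destination with something like:

```
# Sketch: confirm the copied file exists and is readable on the destination.
hadoop fs -ls webhdfs://svr1.local:50070/tmp/distcp_test.txt
hadoop fs -cat webhdfs://svr1.local:50070/tmp/distcp_test.txt | head
```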
05-14-2021 05:50 AM
Hi @vciampa ,

In addition to the solution suggested by @Tylenol , use webhdfs instead of hdfs for your destination, as EOFException is known to occur during distcp between different Hadoop versions. Please paste your command and logs after trying this.

Thanks,
Megh
05-06-2021 01:18 AM
1 Kudo
Hi @Magudeswaran ,

Refer to this KB article. Directly exporting and importing transactional tables is not supported; you need to follow the workaround described there.

Thanks,
Megh
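P.S. The workaround generally amounts to staging the data in a non-transactional copy and exporting that instead; a rough sketch (the JDBC URL, table names, and export path below are placeholders, and the KB article describes the exact supported steps):

```
# Rough sketch: stage the ACID table's data in a non-transactional copy,
# then export the copy. Names and paths below are placeholders.
beeline -u "jdbc:hive2://hs2-host:10000/default" -e "
CREATE EXTERNAL TABLE tx_table_staging STORED AS ORC AS SELECT * FROM tx_table;
EXPORT TABLE tx_table_staging TO '/tmp/tx_table_export';
"
```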
05-04-2021 10:12 PM
Ohh okay. Downloads are restricted in my environment, so I couldn't verify. Strangely, there is no overall tar.gz file covering all the packages for centos7-ppc (like there is for centos7). I think the best way forward for you would be to raise a support case with Cloudera to get this package.

Thanks,
Megh