Member since: 07-09-2019
Posts: 210
Kudos Received: 65
Solutions: 32
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 275 | 03-24-2022 03:20 AM |
| | 169 | 03-08-2022 11:56 PM |
| | 653 | 01-20-2022 08:25 PM |
| | 531 | 12-18-2021 08:42 PM |
| | 239 | 12-13-2021 07:09 AM |
06-04-2022
03:32 AM
1 Kudo
@learner94 You can make use of the Capacity Scheduler. Refer to the following doc for more info: https://docs.cloudera.com/cdp-private-cloud-base/7.1.3/yarn-allocate-resources/topics/yarn-allocating-resources-with-the-capacity-scheduler.html
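As an illustration, a minimal sketch of splitting cluster capacity between two queues (the queue names "default" and "analytics" and the percentages are assumptions; adjust them to your workloads):
yarn.scheduler.capacity.root.queues=default,analytics
yarn.scheduler.capacity.root.default.capacity=70
yarn.scheduler.capacity.root.analytics.capacity=30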
05-12-2022
06:53 PM
Hi @gisbello Can you confirm whether this issue started after the upgrade? Can you also share the CDP/CDH and Cloudera Manager versions?
04-04-2022
10:17 AM
Hi @yagoaparecidoti If you're looking for YARN resource management, refer to the following document: https://docs.cloudera.com/HDPDocuments/HDP2/HDP-2.6.4/bk_yarn-resource-management/content/setting_user_limits.html
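As an illustration, a minimal sketch of user-limit settings on a queue (the "default" queue name and the values are assumptions):
yarn.scheduler.capacity.root.default.minimum-user-limit-percent=25
yarn.scheduler.capacity.root.default.user-limit-factor=1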
03-24-2022
03:54 AM
1 Kudo
@Sayed016 Not only Knox: whatever the service may be, PAM authentication requires read permission on the /etc/shadow file. Refer to the below doc for more info: https://www.redhat.com/sysadmin/pluggable-authentication-modules-pam
03-24-2022
03:20 AM
1 Kudo
@Sayed016 Can you check the permissions on the /etc/shadow file and make sure it has 444 permissions?
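For example, to check and then set the permissions (444 per the note above; confirm this matches your security policy before changing it):
# ls -l /etc/shadow
# chmod 444 /etc/shadow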
03-08-2022
11:56 PM
@HDP_Suja Can you check the NodeManager logs on the failed hosts? The error there might be different.
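For example (the log path below is an assumption; it differs between Ambari/HDP and Cloudera Manager managed clusters):
# tail -n 200 /var/log/hadoop-yarn/yarn/*nodemanager*.log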
03-06-2022
08:09 AM
@Chetankumar Can you please share the complete stack trace of the error? Also, please confirm the CDH/HDP version.
01-25-2022
03:42 AM
@HDP_Suja While accessing the repo through the internet I can see a 403 error, so make sure the paywall credentials are correct. For the local repository, check whether the tarball you downloaded contains all the required files; note that you need to pass the paywall credentials to download the tarball. Refer to the following link for more info: https://www.cloudera.com/downloads/paywall-expansion.html
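As a quick check, you can test the paywall credentials against the repo URL (the URL below is a placeholder; substitute the actual repository path and your paywall username/password):
# curl -sI https://<username>:<password>@archive.cloudera.com/p/<repo-path>/ | head -1
A 200 response means the credentials are accepted; a 403 means they are not.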
01-25-2022
01:35 AM
@HDP_Suja Can you share the error / error code? Make sure the below document is followed correctly: https://docs.cloudera.com/HDPDocuments/Ambari-2.7.1.0/bk_ambari-installation/content/setting_up_a_local_repository_with_temporary_internet_access.html
01-21-2022
09:37 PM
@Seaport For /api/interpreter/** = authc, roles[{{zeppelin_admin_group}}] you need to configure roles in the Shiro configuration. Refer to the following docs for more info: https://zeppelin.apache.org/docs/0.8.0/setup/security/shiro_authentication.html#secure-your-zeppelin-information-optional https://community.cloudera.com/t5/Support-Questions/Zeppelin-user-role-mapping-using-Active-Directory/td-p/238681
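As an illustration, assuming Zeppelin's LdapRealm is in use, a group-to-role mapping in shiro.ini could look like this (the group name "zeppelin_admins" and role name "admin_role" are assumptions):
ldapRealm.rolesByGroup = zeppelin_admins: admin_role
[urls]
/api/interpreter/** = authc, roles[admin_role]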
01-20-2022
08:25 PM
@Seaport To provide access to the interpreter page, comment out or delete the line below [1] from the [urls] section of the Shiro configuration, or configure roles as mentioned in the doc [2].
[1]: /api/interpreter/** = authc, roles[admin]
[2]: https://zeppelin.apache.org/docs/0.6.2/security/shiroauthentication.html#active-directory
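For example, after commenting the line out, the [urls] section would look something like this (the trailing /** = authc line is only shown for context and is assumed from the default shiro.ini):
[urls]
# /api/interpreter/** = authc, roles[admin]
/** = authc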
01-19-2022
07:58 AM
@Koffi Can you try restarting the HBase service and then try restarting AMS?
01-18-2022
06:36 PM
Yes, change it to /ams-hbase-secure1 or any other name. A new znode with the given name will be created once you restart the AMS service.
01-18-2022
06:21 PM
Can you try the below steps:
- Go to Ambari --> Ambari Metrics --> Configs, search for "hbase.tmp.dir", and check its value; it should be "/var/lib/ambari-metrics-collector/hbase-tmp".
- Log in to the Collector node and run "cd /var/lib/ambari-metrics-collector" (i.e., cd to the parent of "hbase.tmp.dir").
- Take a backup of the hbase-tmp directory by running "cp -pr hbase-tmp/ /tmp/hbase-tmp_bkp".
- Delete the hbase tmp directory by running "rm -rf hbase-tmp".
- Go to Ambari --> Ambari Metrics --> Configs and search for "zookeeper.znode.parent"; it should be "/ams-hbase-unsecure". Change this value to "/ams-hbase-unsecure1" and save the configs. This will create a new znode.
01-18-2022
05:55 PM
Hello @Koffi I can see AMS is running in Distributed mode. Can you check and confirm whether the HBase service is up and running fine in your cluster?
12-21-2021
09:09 PM
Hi @jenne Can you share the complete stack trace of the error or the LDAP error code from the Atlas application logs?
12-18-2021
08:42 PM
1 Kudo
Hi @ebeb Activity Monitor is deprecated from 7.0.0 onwards; if MapReduce v1 is not in use, it can be safely removed from the cluster. Refer to the below document for more info: https://docs.cloudera.com/documentation/enterprise/6/release-notes/topics/rg_deprecated_items.html#concept_h25_5b2_rbb
12-13-2021
07:09 AM
1 Kudo
@syedshakir I see the "rcmprod" user doesn't have permission to access the interpreter page. To provide access to the interpreter page, comment out or delete the line below [1] from the [urls] section of the Shiro configuration, or configure roles as mentioned in the doc [2].
[1]: /api/interpreter/** = authc, roles[admin]
[2]: https://zeppelin.apache.org/docs/0.6.2/security/shiroauthentication.html#active-directory
12-01-2021
04:49 PM
@daba Can you try adding the below lines to the Knox topology files?
authentication.param.remove=main.pamRealm
authentication.param.remove=main.pamRealm.service
Refer to the following doc for more info on how to configure LDAP/AD in Knox: https://docs.cloudera.com/runtime/7.2.10/knox-authentication/topics/security-knox-authe-ldap.html
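As an illustration, assuming the Shiro KnoxLdapRealm is used, the LDAP parameters that would accompany the lines above could look like this (the bind DN template and LDAP URL are placeholders for your environment):
authentication.param.main.ldapRealm=org.apache.knox.gateway.shirorealm.KnoxLdapRealm
authentication.param.main.ldapRealm.userDnTemplate=uid={0},ou=people,dc=example,dc=com
authentication.param.main.ldapRealm.contextFactory.url=ldap://ldap.example.com:389
authentication.param.main.ldapRealm.contextFactory.authenticationMechanism=simple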
10-24-2021
07:02 PM
@yacine_ Is the Atlas initialization done? If not, can you try initializing Atlas:
- Navigate to Atlas from the Cloudera Manager UI.
- Under Actions, the Initialize Atlas option appears. Initiate it; it will create the required resources.
- Once done, verify the Atlas UI.
10-16-2021
10:08 PM
1 Kudo
@PrernaU To access the interpreter page you need to have access in the Shiro configuration. Make sure your user ID is added to the below line in the Shiro configuration, or delete/comment out the below line:
/api/interpreter/** = authc, roles[admin_role, test, test2]
Refer to the below doc for more info: https://docs.cloudera.com/cdp-private-cloud-base/7.1.6/configuring-zeppelin/topics/enabling_access_control_for_interpreter__configuration__and_credential_settings.html
09-07-2021
01:29 AM
1 Kudo
@HeathG Can you confirm whether there were any changes made on the cluster? If your cluster is Kerberized, can you try increasing the kdc_timeout value in /etc/krb5.conf and then try restarting ZooKeeper?
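For example, a minimal sketch of the setting in /etc/krb5.conf (the value is an assumption, and the unit handling can vary between krb5 releases, so check the krb5.conf documentation for your version):
[libdefaults]
  kdc_timeout = 10000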
08-31-2021
11:54 PM
@wbivp You need to create a /proxy policy for NiFi in Ranger. Refer to the below documents for more info: https://community.cloudera.com/t5/Community-Articles/NiFi-Ranger-based-policy-descriptions/ta-p/246586 https://docs.cloudera.com/HDPDocuments/HDP3/HDP-3.1.4/authorization-ranger/content/resource_policy_create_a_nifi_policy.html
08-31-2021
10:55 PM
1 Kudo
@KPG1 Ranger audits are stored in both HDFS and Solr. HDFS is used for long-term storage; Solr is used for short-term storage. By using Solr you have the data indexed and can view it quickly from the Ranger UI. Deleting older Ranger audits from HDFS will not cause any issues to the service.
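For example, a sketch of cleaning up an old audit directory on HDFS (the /ranger/audit path is the common default for xasecure.audit.destination.hdfs.dir, and the component/date layout shown is an assumption; confirm your configured location before deleting anything):
# hdfs dfs -ls /ranger/audit
# hdfs dfs -rm -r -skipTrash /ranger/audit/hdfs/20200101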
08-22-2021
11:35 PM
@ajck Can you share the stack trace of the error?
08-13-2021
09:44 PM
@Nil_kharat The ticket lifetime is set in the Kerberos configuration file krb5.conf in MIT Kerberos. You can check the lifetime of the ticket using the klist command after doing kinit. You can also specify the lifetime of the ticket using the -l option as shown below:
# kinit -l 30m -kt <keytab> <principal>
Example:
# kinit -l 30m -kt sai.keytab sai@SUPPORTLAB.CLOUDERA.COM
08-13-2021
05:00 PM
@Nil_kharat To renew the Kerberos ticket, run kinit and specify both the keytab file and the principal:
# kinit -kt <keytab> <Principal>
Example:
# kinit -kt user1.keytab user1@EXAMPLE.COM
07-26-2021
12:01 AM
1 Kudo
@SSRIL Can you keep only one OS repository link and remove the other OS entries from the Repository page? Also, can you check the repos under /etc/yum.repos.d/?
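For example, on the affected node (assuming a RHEL/CentOS host):
# ls -l /etc/yum.repos.d/
# yum clean all
# yum repolist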