Member since
01-08-2019
52
Posts
8
Kudos Received
2
Solutions
My Accepted Solutions
Title | Views | Posted
---|---|---
| 3422 | 05-12-2019 06:21 PM
| 2535 | 05-12-2019 06:11 PM
03-25-2019
11:46 AM
Thanks Akhil S Naik 🙂
03-24-2019
06:41 PM
3 Kudos
IMPORTANT: DO NOT FORGET to back up your Ambari database before executing DELETE API calls. This is tested on Ambari versions 2.6.x and 2.7.1. Please try these steps on your test cluster first, or contact Cloudera Support for more details.

Step 1: Take a backup of your Ambari database

# mkdir /var/tmp/postgres_backup
# pg_dump -U ambari ambari > /var/tmp/postgres_backup/$(date +"%Y%m%d%H%M%S")_ambari-bkp.sql

Step 2: Install the jq package (if not already available)

# rpm -qa | grep jq
# yum whatprovides jq
# yum install jq -y

Step 3: Get all (AD/LDAP) users and groups

# curl --insecure -u admin:admin -H 'X-Requested-By: ambari' -X GET 'http://'$(hostname -f)':8080/api/v1/users?Users/ldap_user=true' | jq -r '.items[].Users.user_name' > ambari-ldap-users.txt
# curl --insecure -u admin:admin -H 'X-Requested-By: ambari' -X GET 'http://'$(hostname -f)':8080/api/v1/groups?Groups/ldap_group=true' | jq -r '.items[].Groups.group_name' > ambari-ldap-groups.txt

Step 4: Verify the users and groups that need to be deleted. If you want to keep some users/groups from the list, remove those entries from the respective txt files.

# cat ambari-ldap-users.txt
# cat ambari-ldap-groups.txt

Step 5: Remove all AD users

# for my_ldap_user in $(cat ambari-ldap-users.txt)
do
  curl --insecure -u admin:admin -H 'X-Requested-By: ambari' -X DELETE 'http://'$(hostname -f)':8080/api/v1/users/'$my_ldap_user
  echo 'deleting : ' $my_ldap_user
done

Step 6: Remove all AD groups

# for my_ldap_group in $(cat ambari-ldap-groups.txt)
do
  curl --insecure -u admin:admin -H 'X-Requested-By: ambari' -X DELETE 'http://'$(hostname -f)':8080/api/v1/groups/'$my_ldap_group
  echo 'deleting : ' $my_ldap_group
done
11-23-2018
02:41 PM
When using Ambari-managed Infra Solr, you can also change the TTL value from the Ambari web UI, which is the easier solution.

To change the retention/TTL value of the ranger_audits collection in the Ambari UI:
Ambari UI -> Ranger -> Configs -> Advanced -> Advanced ranger solr configuration -> Max Retention Days
Save and restart the required services.
09-17-2018
10:41 AM
@Sudharsan Ganeshkumar
The optimal number of mappers depends on many variables: you need to take into account your database type, the hardware used for your database server, and the impact on other requests that your database needs to serve. There is no optimal number of mappers that works for all scenarios. Instead, you're encouraged to experiment to find the optimal degree of parallelism for your environment and use case. It's a good idea to start with a small number of mappers and slowly ramp up, rather than start with a large number and work your way down. When you run sqoop import with the -m 1 option, a single mapper is launched; if this parameter is not specified, Sqoop runs 4 mappers by default.
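To make the parallelism concrete, here is a rough sketch (with made-up boundary values) of how Sqoop partitions the range of the --split-by column among mappers: each mapper is handed roughly (max - min) / num_mappers of the key space as a WHERE clause.

```shell
# Hypothetical example: an id column ranging from 1 to 100,
# split across 4 mappers (Sqoop's default when -m is not given).
MIN=1; MAX=100; MAPPERS=4
STEP=$(( (MAX - MIN + 1) / MAPPERS ))
i=0
while [ "$i" -lt "$MAPPERS" ]; do
  LO=$(( MIN + i * STEP ))
  HI=$(( LO + STEP - 1 ))
  echo "mapper $i: WHERE id >= $LO AND id <= $HI"
  i=$(( i + 1 ))
done
```

With skewed key distributions the real splits can be very uneven, which is another reason to experiment rather than assume more mappers is always faster.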
09-13-2018
05:49 PM
@Venkat Edulakanti Usually this indicates that the account might be locked in Active Directory. Please cross-check with your AD team; you can also test it from the terminal by running the kinit command.
$ kinit -kt /etc/security/keytabs/hdfs.headless.keytab hdfs-xxx@xxx.LOCAL
If the kinit command gives you the same error, "Clients credentials have been revoked while getting initial credentials", it means this account is expired/locked/deleted.
09-05-2018
11:45 AM
security.json is available under the /infra-solr znode. You can log in to zkCli and verify it: # /usr/hdp/current/zookeeper-client/bin/zkCli.sh
[zk: localhost:2181(CONNECTED) 0] ls /
[registry, cluster, controller, brokers, zookeeper, infra-solr, admin, isr_change_notification, hiveserver2, controller_epoch, druid, rmstore, ambari-metrics-cluster, consumers, config]
[zk: localhost:2181(CONNECTED) 1] ls /infra-solr
[configs, overseer, aliases.json, live_nodes, collections, overseer_elect, security.json, clusterstate.json, clusterprops.json]
You can also verify it through the Solr web UI.
09-05-2018
11:13 AM
@Sagar Shimpi
1) Verify whether the Solr application is running.
2) If Solr is running and you are still facing the issue, you can run the command below to delete the ranger_audits collection from Solr:
# curl -iv --negotiate -u: 'http://$SOLR_HOST:8886/solr/admin/collections?action=DELETE&name=ranger_audits'
This command deletes the ranger_audits collection data from Solr.
3) Restart the Ranger service from Ambari. This will create a new ranger_audits collection automatically.
08-31-2018
03:08 PM
@Sudharsan Ganeshkumar
Snapshots are stored under a .snapshot directory in the same path. For example, if you take a snapshot of /user/root, it is stored in the /user/root/.snapshot directory. An example is given below.
[hdfs@sandbox ~]$ hdfs dfsadmin -allowSnapshot /user/root/testsnaps
Allowing snaphot on /user/root/testsnaps succeeded
[root@sandbox ~]# hdfs dfs -createSnapshot /user/root/testsnaps snap1
Created snapshot /user/root/testsnaps/.snapshot/snap1
[gulshad@sandbox ~]$ hdfs dfs -createSnapshot /user/gulshad
Created snapshot /user/gulshad/.snapshot/s20180831-145829.441
To list all snapshottable directories, run the command below.
[root@sandbox ~]# sudo -su hdfs hdfs lsSnapshottableDir
drwxr-xr-x 0 root hdfs 0 2018-07-26 05:29 3 65536 /user/root/testsnaps
drwxr-xr-x 0 hdfs hdfs 0 2018-08-01 14:58 1 65536 /proj/testsnap
drwxr-xr-x 0 gulshad hdfs 0 2018-08-01 14:58 1 65536 /user/gulshad
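Once a snapshot exists, restoring a file is just a copy out of the read-only .snapshot path. The hdfs command below is a sketch using the snap1 snapshot from above with a hypothetical file name (data.csv), and the local-filesystem lines simulate the same path layout so you can see the pattern:

```shell
# Hypothetical HDFS restore (data.csv is an example file name):
#   hdfs dfs -cp /user/root/testsnaps/.snapshot/snap1/data.csv /user/root/testsnaps/
# Simulating the same .snapshot path layout on a local filesystem:
mkdir -p /tmp/testsnaps/.snapshot/snap1
echo "backup copy" > /tmp/testsnaps/.snapshot/snap1/data.csv
cp /tmp/testsnaps/.snapshot/snap1/data.csv /tmp/testsnaps/data.csv
cat /tmp/testsnaps/data.csv
```

The snapshot directory itself is read-only, so a copy is the only way to bring the file back into the live directory tree.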
08-30-2018
05:32 PM
No, @Sudharsan Ganeshkumar. If you delete your files from Trash and there is no snapshot covering the same files, the files will have no remaining references in NameNode metadata and will be removed completely.
08-27-2018
01:45 PM
Per the NameNode logs, your NameNode is able to communicate with only 1 JournalNode: Succeeded so far: [JOURNALNODE_IP:8485]. Please check what the problem is with the other two, and you will be able to resolve this easily.
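A quick way to narrow this down is to check from the NameNode host that each JournalNode port is reachable. This is a sketch: the jn1/jn2/jn3 hostnames are placeholders for your actual JournalNode hosts, and 8485 is the default JournalNode RPC port.

```shell
# Placeholder hostnames; replace with your actual JournalNode hosts.
for jn in jn1.example.com jn2.example.com jn3.example.com; do
  if nc -z -w 2 "$jn" 8485 2>/dev/null; then
    echo "$jn: reachable"
  else
    echo "$jn: NOT reachable"
  fi
done
```

If a host shows as NOT reachable, check whether the JournalNode process is running there and whether a firewall is blocking port 8485; the NameNode needs a quorum (2 of 3) of JournalNodes to stay up.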