Member since
09-02-2016
523
Posts
89
Kudos Received
42
Solutions
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 2310 | 08-28-2018 02:00 AM |
| | 2163 | 07-31-2018 06:55 AM |
| | 5078 | 07-26-2018 03:02 AM |
| | 2439 | 07-19-2018 02:30 AM |
| | 5873 | 05-21-2018 03:42 AM |
10-26-2018
11:24 AM
@DanielWhite I had a similar issue a while back, and here is what I found. Check the owner of the HDFS folders/files for the database you are trying to delete. If you are the owner and you drop the table/db from Hive/Impala, both the metadata and the HDFS files/folders are deleted. If you are not the owner of the HDFS folders/files but were granted access in Hive/Impala to manage the data, dropping it deletes only the metadata, not the underlying folders/files in HDFS. Try this with a sample db/table to see the difference.
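The ownership check described above can be sketched as follows. This is a toy illustration, not output from a real cluster: the `ls` line, the paths, and the user name `alice` are all made up. On a real cluster you would run `hdfs dfs -ls -d` on the warehouse directory and read the owner from the third column.

```shell
# Toy sketch of the ownership check. On a real cluster you would run:
#   hdfs dfs -ls -d /user/hive/warehouse/sample_db.db
# All values below are made up for illustration.
line="drwxr-xr-x   - hive hive          0 2018-08-28 02:00 /user/hive/warehouse/sample_db.db"
me="alice"                                # substitute: $(whoami)
owner=$(echo "$line" | awk '{print $3}')  # owner is the third column of ls output
if [ "$owner" = "$me" ]; then
  echo "owner: DROP removes metadata AND the HDFS files"
else
  echo "not owner: DROP removes metadata only"
fi
```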
10-22-2018
06:43 PM
Thank you so much! I changed the group of '/tmp/logs' to hadoop and restarted the JobHistoryServer role, and everything is OK now. So happy!
10-15-2018
08:59 AM
EDIT: I made a copy/paste mistake! Please ignore the full log in my post above. Here is the correct error log: /usr/share/cmf/bin/gen_credentials.sh failed with exit code 1 and output of <<
+ export PATH=/usr/kerberos/bin:/usr/kerberos/sbin:/usr/lib/mit/sbin:/usr/sbin:/usr/lib/mit/bin:/usr/bin:/sbin:/usr/sbin:/bin:/usr/bin
+ PATH=/usr/kerberos/bin:/usr/kerberos/sbin:/usr/lib/mit/sbin:/usr/sbin:/usr/lib/mit/bin:/usr/bin:/sbin:/usr/sbin:/bin:/usr/bin
+ CMF_REALM=MYCOMPANY.REALM
+ KEYTAB_OUT=/var/run/cloudera-scm-server/cmf2548823212650177196.keytab
+ PRINC=sdc/hostname.FQDN@MYCOMPANY.REALM
+ MAX_RENEW_LIFE=604800
+ KADMIN='kadmin -k -t /var/run/cloudera-scm-server/cmf6838080336847771087.keytab -p admin/admin@MYCOMPANY.REALM -r MYCOMPANY.REALM'
+ RENEW_ARG=
+ '[' 604800 -gt 0 ']'
+ RENEW_ARG='-maxrenewlife "604800 sec"'
+ '[' -z /var/run/cloudera-scm-server/krb52847952611766397096.conf ']'
+ echo 'Using custom config path '\''/var/run/cloudera-scm-server/krb52847952611766397096.conf'\'', contents below:'
+ cat /var/run/cloudera-scm-server/krb52847952611766397096.conf
+ kadmin -k -t /var/run/cloudera-scm-server/cmf6838080336847771087.keytab -p admin/admin@MYCOMPANY.REALM -r MYCOMPANY.REALM -q 'addprinc -maxrenewlife "604800 sec" -randkey sdc/hostname.FQDN@MYCOMPANY.REALM'
WARNING: no policy specified for sdc/hostname.FQDN@MYCOMPANY.REALM; defaulting to no policy
add_principal: Operation requires ``add'' privilege while creating "sdc/hostname.FQDN@MYCOMPANY.REALM".
+ '[' 604800 -gt 0 ']'
++ kadmin -k -t /var/run/cloudera-scm-server/cmf6838080336847771087.keytab -p admin/admin@MYCOMPANY.REALM -r MYCOMPANY.REALM -q 'getprinc -terse sdc/hostname.FQDN@MYCOMPANY.REALM'
++ tail -1
++ cut -f 12
get_principal: Operation requires ``get'' privilege while retrieving "sdc/hostname.FQDN@MYCOMPANY.REALM".
+ RENEW_LIFETIME='Authenticating as principal admin/admin@MYCOMPANY.REALM with keytab /var/run/cloudera-scm-server/cmf6838080336847771087.keytab.'
+ '[' Authenticating as principal admin/admin@MYCOMPANY.REALM with keytab /var/run/cloudera-scm-server/cmf6838080336847771087.keytab. -eq 0 ']'
/usr/share/cmf/bin/gen_credentials.sh: line 35: [: too many arguments
+ kadmin -k -t /var/run/cloudera-scm-server/cmf6838080336847771087.keytab -p admin/admin@MYCOMPANY.REALM -r MYCOMPANY.REALM -q 'xst -k /var/run/cloudera-scm-server/cmf2548823212650177196.keytab sdc/hostname.FQDN@MYCOMPANY.REALM'
kadmin: Operation requires ``change-password'' privilege while changing sdc/hostname.FQDN@MYCOMPANY.REALM's key
+ chmod 600 /var/run/cloudera-scm-server/cmf2548823212650177196.keytab
chmod: cannot access ‘/var/run/cloudera-scm-server/cmf2548823212650177196.keytab’: No such file or directory
>>
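A side note on the `[: too many arguments` error at line 35 of the script: the preceding `getprinc` query failed (a kadmin privilege error, which on MIT Kerberos is typically governed by the KDC's kadm5.acl), so `RENEW_LIFETIME` holds a multi-word kadmin message instead of a number, and its unquoted expansion splits into many words inside `[ ... -eq 0 ]`. A minimal reproduction of the symptom (the string below is just an example):

```shell
# Reproduces the "[: too many arguments" symptom: an unquoted multi-word
# string inside a numeric test splits into several arguments.
RENEW_LIFETIME='Authenticating as principal admin/admin@EXAMPLE.COM with keytab x.'
if [ $RENEW_LIFETIME -eq 0 ] 2>/dev/null; then
  echo "renewable lifetime is zero"
else
  echo "the test received too many arguments, so the check falls through"
fi
```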
10-05-2018
07:47 AM
Awesome! Thank you!
09-25-2018
07:45 PM
@mdjedaini This has nothing to do with Cloudera specifically, as there are many other tools on the market for this. I am not sure how big your environment is, but in general, teams running large environments with many nodes use tools like Chef, Puppet, Terraform, or Ansible for this requirement (for the cloud there is another set of tools, such as CloudFormation). At a high level, you can divide them into two categories, push-based and pull-based: a. Tools like Puppet and Chef are pull-based: an agent/client on each server periodically checks the central server (master) for configuration information. b. Ansible is push-based: the central server pushes configuration to the target servers, so you control when changes are made.
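The push/pull distinction can be illustrated with a toy sketch. No real tool is involved here; the directories and the config line are placeholders created under temp dirs purely to show which side initiates the transfer.

```shell
# Toy illustration of push vs pull config distribution (not a real tool).
central=$(mktemp -d)   # stands in for the central/master server
node=$(mktemp -d)      # stands in for a managed node
echo "ntp_server=10.0.0.1" > "$central/app.conf"
# Push model (Ansible-style): the central side initiates and copies to the node.
cp "$central/app.conf" "$node/app.conf"
# Pull model (Puppet/Chef-style): the node's agent periodically fetches from central.
cp "$central/app.conf" "$node/app.conf.pulled"
cat "$node/app.conf"
```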
09-11-2018
06:51 PM
Please check this link: https://hortonworks.com/blog/update-hive-tables-easy-way/ Hope this helps.
09-10-2018
09:36 PM
@Harsh J Yes, with <property>
<name>dfs.ha.fencing.methods</name>
<value>shell(/bin/true)</value>
</property> it is working perfectly now. Thank you very much!
08-29-2018
03:29 AM
@Matt_ I can give you two easy steps that may reduce your burden. 1. List the valid Kerberos principals:
$ cd /var/run/cloudera-scm-agent/process/<pid>-hdfs-DATANODE
$ klist -kt hdfs.keytab
## klist lists the valid Kerberos principals in the format "hdfs/<NODE_FQDN>@<OUR_REALM>"
2. kinit with the full principal listed above:
$ kinit -kt hdfs.keytab <copy/paste any one of the hdfs principals from the klist output above>
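To avoid copying and pasting from the klist output, the principal can also be assembled from the node's FQDN. The values below (node01.example.com, EXAMPLE.COM) are placeholders; on a real node you would use $(hostname -f) and your actual realm, and the kinit line is left commented since it needs a live keytab.

```shell
# Sketch: build the hdfs principal instead of copy/pasting from klist.
node_fqdn="node01.example.com"   # substitute: $(hostname -f)
realm="EXAMPLE.COM"              # substitute: your Kerberos realm
princ="hdfs/${node_fqdn}@${realm}"
echo "$princ"
# kinit -kt hdfs.keytab "$princ"   # run from the DataNode process directory
```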
08-28-2018
02:00 AM
1 Kudo
@AWT If your data is in HDFS and your CM version is the same across clusters/environments (even if you are using different CM logins), the easiest way is: Cloudera Manager -> Backup (menu) -> Peers -> Add Peer, then Cloudera Manager -> Backup (menu) -> Replication Schedules -> Create Schedule. Alternatively, you can use distcp.
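For the distcp route, a minimal invocation looks like the sketch below. The NameNode addresses and paths are placeholders, and the command is only echoed here since it needs a live cluster to run.

```shell
# Sketch of a one-off distcp copy between clusters (placeholders throughout).
src="hdfs://source-nn:8020/user/hive/warehouse/sample_db.db"
dst="hdfs://dest-nn:8020/user/hive/warehouse/sample_db.db"
# -update skips files already present and identical; -p preserves attributes.
cmd="hadoop distcp -update -p $src $dst"
echo "$cmd"   # on a real cluster, run the command itself instead of echoing
```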