Member since: 04-22-2016
Posts: 931
Kudos Received: 46
Solutions: 26
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 1849 | 10-11-2018 01:38 AM |
| | 2213 | 09-26-2018 02:24 AM |
| | 2242 | 06-29-2018 02:35 PM |
| | 2910 | 06-29-2018 02:34 PM |
| | 6091 | 06-20-2018 04:30 PM |
12-24-2016
03:20 AM
OK, I reset the KDC credentials via the "Manage KDC credentials" button and entered the principal as admin/admin. Now I am not getting locked out, but if I try to reinstall KMS I get the error below; it is not finding the krb5.conf file. Is there a setting in Kerberos which is messed up? As I stated earlier, the TGT system is working fine at the command level for Hive and HBase.
... 103 more
23 Dec 2016 22:16:33,131 WARN [ambari-client-thread-837] ServletHandler:561 - Error Processing URI: /api/v1/clusters/FDOT_Hadoop/hosts/hadoop1.abc.com/host_components/RANGER_KMS_SERVER - (java.lang.RuntimeException) Update Host request submission failed: org.apache.ambari.server.AmbariException: The 'krb5-conf' configuration is not available
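One quick way to confirm what Ambari has stored for that configuration type is the REST API. A sketch only, assuming the admin account and the cluster name taken from the error above (replace ambari-server with your Ambari host):
# should return the stored krb5-conf versions; an empty 'items' list confirms the problem
curl -u admin:admin 'http://ambari-server:8080/api/v1/clusters/FDOT_Hadoop/configurations?type=krb5-conf'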
12-24-2016
03:24 AM
I reset the KDC credentials via the "Manage KDC credentials" button in the Kerberos menu, and now I am getting a slightly different error when I try to reinstall Ranger KMS. My TGT system is working fine for Hive and HBase, so why can't Ranger KMS find the krb5.conf file? Is there a setting in the KMS service for this that might be wrong?
... 103 more
23 Dec 2016 22:16:33,131 WARN [ambari-client-thread-837] ServletHandler:561 - Error Processing URI: /api/v1/clusters/FDOT_Hadoop/hosts/hadoop1.abc.com/host_components/RANGER_KMS_SERVER - (java.lang.RuntimeException) Update Host request submission failed: org.apache.ambari.server.AmbariException: The 'krb5-conf' configuration is not available
12-26-2016
02:03 PM
Hi @Sami Ahmad, it isn't the krb5.conf file that is corrupt; rather, it is the information Ambari keeps in its database to manage your krb5.conf file. From what I am seeing above, there is no configuration version selected, and therefore Ambari is unable to find the configuration data. In my cluster I have a version selected for each type, which should be the latest version. Here is what mine looks like; notice the latest selected versions:
ambari=> select * from clusterconfigmapping where type_name = 'krb5-conf' or type_name = 'kerberos-env' order by version_tag desc;
cluster_id | type_name | version_tag | create_timestamp | selected | user_name
------------+--------------+----------------------+------------------+----------+-----------
2 | krb5-conf | version1478018911089 | 1478018910394 | 1 | admin
2 | kerberos-env | version1478018911089 | 1478018910391 | 1 | admin
2 | kerberos-env | version1477959455789 | 1477959455113 | 0 | admin
2 | krb5-conf | version1477959455789 | 1477959455120 | 0 | admin
2 | kerberos-env | version1477959390268 | 1477959389823 | 0 | admin
2 | krb5-conf | version1477959390268 | 1477959389814 | 0 | admin
2 | krb5-conf | version1477956530144 | 1477956529438 | 0 | admin
2 | kerberos-env | version1477956530144 | 1477956529436 | 0 | admin
2 | krb5-conf | version1477687536774 | 1477687536111 | 0 | admin
2 | kerberos-env | version1477687536774 | 1477687536113 | 0 | admin
2 | krb5-conf | version1 | 1477680416621 | 0 | admin
2 | kerberos-env | version1 | 1477680416662 | 0 | admin
(12 rows)
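If that query shows no row with selected = 1 for 'krb5-conf' (or 'kerberos-env'), one possible low-level repair is to mark the newest version_tag as selected directly in the database. This is a sketch only, not an officially supported step; '<newest_tag>' is a placeholder for the tag from the query above, and you should stop ambari-server and back up the database before trying it:
ambari=> UPDATE clusterconfigmapping SET selected = 1 WHERE type_name = 'krb5-conf' AND version_tag = '<newest_tag>';
ambari=> UPDATE clusterconfigmapping SET selected = 0 WHERE type_name = 'krb5-conf' AND version_tag <> '<newest_tag>';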
This command shows what Ambari thinks the latest version is, along with its content:
[root@chupa1 /]# /var/lib/ambari-server/resources/scripts/configs.sh -u admin -p admin get localhost myclustername krb5-conf
USERID=admin
PASSWORD=admin
########## Performing 'GET' on (Site:krb5-conf, Tag:version1478018911089)
"properties" : {
"conf_dir" : "/etc",
"content" : "[libdefaults]\n renew_lifetime = 7d\n forwardable= true\n default_realm = {{realm|upper()}}\n ticket_lifetime = 48h\n dns_lookup_realm = false\n dns_lookup_kdc = false\n #default_tgs_enctypes = {{encryption_types}}\n #default_tkt_enctypes ={{encryption_types}}\n\n{% if domains %}\n[domain_realm]\n{% for domain in domains.split(',') %}\n {{domain}} = {{realm|upper()}}\n{% endfor %}\n{%endif %}\n\n[logging]\n default = FILE:/var/log/krb5kdc.log\nadmin_server = FILE:/var/log/kadmind.log\n kdc = FILE:/var/log/krb5kdc.log\n\n[realms]\n {{realm}} = {\n admin_server = {{admin_server_host|default(kdc_host, True)}}\n kdc = chupa1.openstacklocal\n }\n\n{# Append additional realm declarations below dav#}",
"domains" : "",
"manage_krb5_conf" : "true"
}
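The same script can also write a property back, which makes Ambari create a new configuration version and mark it as selected. A sketch, reusing the admin credentials and cluster name from the GET above:
# setting any krb5-conf property forces a new, selected version tag
/var/lib/ambari-server/resources/scripts/configs.sh -u admin -p admin set localhost myclustername krb5-conf manage_krb5_conf true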
01-03-2017
10:59 AM
@Sami Ahmad Just to update and follow up on this issue: https://issues.apache.org/jira/browse/AMBARI-19287 has been fixed in the version of Ambari currently under development (2.5.0), so similar confusion will not happen in the future. Thanks for bringing this issue to our notice!
12-17-2016
03:43 PM
8 Kudos
Hello @Sami Ahmad, keeping the jargon aside:
Ranger is used for deciding who can access which resources on a Hadoop cluster, with the help of policies (there is more to it, but that is the idea in the most basic terms). Knox can be imagined as the gatekeeper which decides whether or not to allow a user access to the Hadoop cluster. More complete definitions:
Ranger is an authorization system which allows or denies access to Hadoop cluster resources (HDFS files, Hive tables, etc.) based on pre-defined Ranger policies. When a user request comes to Ranger, it is assumed to be authenticated already. Knox is a REST API based perimeter security gateway which authenticates user credentials (mostly against AD/LDAP); only successfully authenticated users are allowed access to the Hadoop cluster. Knox also provides a layer of abstraction over the underlying Hadoop services, i.e. all endpoints are accessed via the Knox gateway URL. Follow the Apache Ranger and Apache Knox projects for a more comprehensive description and the full feature list. Hope this helps!
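To make the gateway idea concrete, here is a minimal sketch of listing an HDFS directory through Knox instead of calling WebHDFS directly; the host, port 8443, the 'default' topology, and the user are assumptions, not values from this thread:
# Knox authenticates the credentials (e.g. against AD/LDAP) and only then
# forwards the request to WebHDFS inside the cluster
curl -iku myuser:mypassword 'https://knox-host:8443/gateway/default/webhdfs/v1/tmp?op=LISTSTATUS'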
12-18-2016
08:15 AM
If you know that all rows of the table have the same number of columns, then you can just get the first row (with scan and a limit) and parse the column names for each column family; otherwise @Sergey Soldatov's answer is the only way.
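For example, in the HBase shell (the table name 'mytable' is hypothetical):
# fetch a single row; every cell prints as family:qualifier,
# so the column names per family can be read off the output
scan 'mytable', {LIMIT => 1}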
12-17-2016
02:21 AM
You import into an HBase column family one at a time: if you have two column families, you import into each with two Sqoop runs; if you have three column families, it takes three Sqoop runs. Here are good examples:
$ sqoop import \
--connect jdbc:mysql://localhost/serviceorderdb \
--username root -P \
--table customercontactinfo \
--columns "customernum,customername" \
--hbase-table customercontactinfo \
--column-family CustomerName \
--hbase-row-key customernum -m 1
Enter password:
...
13/08/17 16:53:01 INFO mapreduce.ImportJobBase: Retrieved 5 records.
$ sqoop import \
--connect jdbc:mysql://localhost/serviceorderdb \
--username root -P \
--table customercontactinfo \
--columns "customernum,contactinfo" \
--hbase-table customercontactinfo \
--column-family ContactInfo \
--hbase-row-key customernum -m 1
Enter password:
...
13/08/17 17:00:59 INFO mapreduce.ImportJobBase: Retrieved 5 records.
$ sqoop import \
--connect jdbc:mysql://localhost/serviceorderdb \
--username root -P \
--table customercontactinfo \
--columns "customernum,productnums" \
--hbase-table customercontactinfo \
--column-family ProductNums \
--hbase-row-key customernum -m 1
Enter password:
...
13/08/17 17:05:54 INFO mapreduce.ImportJobBase: Retrieved 5 records.
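After the three runs, a quick check in the HBase shell (a sketch, using the table and family names from the example above) confirms that all column families landed in the same table:
# should list the three column families: CustomerName, ContactInfo, ProductNums
describe 'customercontactinfo'
# one full row across all three families
scan 'customercontactinfo', {LIMIT => 1}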
09-21-2017
11:20 PM
I looked at the other blog, compared it, and found the difference. Without the proper ticket, HBase shell statements fail with 'Insufficient permissions for user'; after the operations below, they succeed:
[root@m1 ~]# klist -ket /etc/security/keytabs/hbase.service.keytab
Keytab name: FILE:/etc/security/keytabs/hbase.service.keytab
KVNO Timestamp Principal
---- ------------------- ------------------------------------------------------
1 11/16/2016 13:50:24 hbase/m1.node.hadoop@TENDATA.CN (des-cbc-md5)
1 11/16/2016 13:50:24 hbase/m1.node.hadoop@TENDATA.CN (des3-cbc-sha1)
1 11/16/2016 13:50:24 hbase/m1.node.hadoop@TENDATA.CN (arcfour-hmac)
1 11/16/2016 13:50:24 hbase/m1.node.hadoop@TENDATA.CN (aes256-cts-hmac-sha1-96)
1 11/16/2016 13:50:24 hbase/m1.node.hadoop@TENDATA.CN (aes128-cts-hmac-sha1-96)
[root@m1 ~]# kinit -kt /etc/security/keytabs/hbase.service.keytab hbase/m1.node.hadoop@TENDATA.CN
[root@m1 ~]# klist
Ticket cache: FILE:/tmp/krb5cc_0
Default principal: hbase/m1.node.hadoop@TENDATA.CN
Valid starting Expires Service principal
09/20/2017 16:23:53 09/21/2017 16:23:53 krbtgt/TENDATA.CN@TENDATA.CN
[root@m1 ~]#
[root@m1 ~]# hbase shell
HBase Shell; enter 'help<RETURN>' for list of supported commands.
Type "exit<RETURN>" to leave the HBase Shell
Version 1.1.2.2.5.3.0-37, rcb8c969d1089f1a34e9df11b6eeb96e69bcf878d, Tue Nov 29 18:48:22 UTC 2016
hbase(main):001:0>
hbase(main):002:0*
hbase(main):003:0* create 't1', 'f1'
0 row(s) in 2.5960 seconds
=> Hbase::Table - t1
hbase(main):004:0> list
TABLE
t1
1 row(s) in 0.0200 seconds
=> ["t1"]
hbase(main):005:0>
The key point is that getting the TGT without the domain (realm) results in failure; you must use the full principal:
kinit -kt /etc/security/keytabs/hbase.service.keytab hbase/m1.node.hadoop@TENDATA.CN
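In other words, contrasting the two forms (the failing one is reconstructed from the description above):
# fails here: principal given without the realm/domain
kinit -kt /etc/security/keytabs/hbase.service.keytab hbase/m1.node.hadoop
# works: the full principal, exactly as listed by klist -ket
kinit -kt /etc/security/keytabs/hbase.service.keytab hbase/m1.node.hadoop@TENDATA.CN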
12-08-2016
06:27 PM
@Sami Ahmad, because this version of the command uses the keytab. With Kerberos, access to the keytab file is equivalent to knowledge of the password. Please see https://web.mit.edu/kerberos/krb5-1.12/doc/basic/keytab_def.html. Please accept this answer if it was helpful in resolving your issue.
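For illustration, a minimal contrast with a hypothetical principal and keytab path:
# prompts for the principal's password
kinit sami@EXAMPLE.COM
# no prompt: the key material is read from the keytab file instead,
# which is why possessing the keytab equals knowing the password
kinit -kt /etc/security/keytabs/sami.keytab sami@EXAMPLE.COM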
12-07-2016
09:58 PM
1 Kudo
Ah, it needed an account on the hadoop2 server, since HiveServer2 is running there. I created 'sami' on hadoop2 and added it to the hadoop group, and then I could use Hive with my ticket.
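For reference, the steps on hadoop2 were essentially (a sketch; run as root on the HiveServer2 host):
useradd sami            # create the local account
usermod -aG hadoop sami # add it to the hadoop group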