Member since: 10-20-2015
Posts: 92
Kudos Received: 78
Solutions: 9
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 944 | 06-25-2018 04:01 PM
 | 1710 | 05-09-2018 05:36 PM
 | 473 | 03-16-2018 04:11 PM
 | 2352 | 05-18-2017 12:42 PM
 | 1870 | 03-28-2017 06:42 PM
12-26-2016
03:56 PM
2 Kudos
For the Hive CLI issue, your stack trace shows that you need to configure YARN: it looks like the default queue doesn't have any capacity. Give the default queue 10% capacity and Hive should be able to start. As for the Hive View giving you only a blank screen, do you see anything in the Ambari view logs that may give us more information?
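For reference, a minimal sketch of how you might set that capacity from the command line with Ambari's configs.sh script (the cluster name and the 10% value are illustrative; adjust the sibling queue capacities so they still sum to 100):
/var/lib/ambari-server/resources/scripts/configs.sh -u admin -p admin set localhost myclustername capacity-scheduler "yarn.scheduler.capacity.root.default.capacity" "10"
Then restart YARN (or refresh the queues) from Ambari so the new capacity takes effect.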
12-26-2016
03:24 PM
I would explore this some more. Are you running the Ambari agent as a non-root user? If so, you may be running into a documentation issue with the sudoers configuration; see if this fixes it. You will need to add the following commands to the sudoers file:
/usr/lib/ambari-infra-solr/bin/solr *, /bin/su infra-solr *, /bin/su logsearch *,
/usr/lib/ambari-logsearch-logfeeder/run.sh *, /usr/sbin/ambari-metrics-grafana *,
/usr/lib/ambari-infra-solr-client/solrCloudCli.sh *
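A sketch of what the resulting sudoers entry might look like, assuming the agent runs as a user named ambari (the username and any commands already in your command list will differ per environment):
# hypothetical /etc/sudoers entry for a non-root ambari agent user
ambari ALL=(ALL) NOPASSWD:SETENV: /usr/lib/ambari-infra-solr/bin/solr *, /bin/su infra-solr *, /bin/su logsearch *, /usr/lib/ambari-logsearch-logfeeder/run.sh *, /usr/sbin/ambari-metrics-grafana *, /usr/lib/ambari-infra-solr-client/solrCloudCli.sh *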
12-26-2016
02:59 PM
1 Kudo
Hi @hdpadmin overlandpark, Knox's current method for authenticating all users is to configure the Shiro provider for LDAP authentication. The configured topology provides the users and groups that the Ranger plugin needs for authorization.
1. Knox handles all Kerberos for you out of the box when you install the service. The knox user will proxy as the user you have authenticated with. (Make sure you set up your Knox proxy settings in the core-site.xml Hadoop configuration so that Knox can impersonate incoming users.)
2. The Knox Ranger plugin gets the user and group information from the Knox LDAP topology configuration you have set up. If the user and group information is not set up correctly, you will see Ranger issues such as the following: https://community.hortonworks.com/articles/38348/ranger-is-not-allowing-access-to-knox-resources-wh.html
3. If you are configuring for AD, I would recommend using the following template in your Knox topology, filled out according to your environment: http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.5.3/bk_security/content/example_ad_configuration.html
Notice that the comments in the XML at the link above show the proper parameters for configuring user/group information:
<!-- AD groups of users to allow -->
<param>
<name>main.ldapRealm.searchBase</name>
<value>ou=CorpUsers,dc=lab,dc=hortonworks,dc=net</value>
</param>
<param>
<name>main.ldapRealm.userObjectClass</name>
<value>person</value>
</param>
<param>
<name>main.ldapRealm.userSearchAttributeName</name>
<value>sAMAccountName</value>
</param>
<!-- changes needed for group sync-->
<param>
<name>main.ldapRealm.authorizationEnabled</name>
<value>true</value>
</param>
<param>
<name>main.ldapRealm.groupSearchBase</name>
<value>ou=CorpUsers,dc=lab,dc=hortonworks,dc=net</value>
</param>
<param>
<name>main.ldapRealm.groupObjectClass</name>
<value>group</value>
</param>
<param>
<name>main.ldapRealm.groupIdAttribute</name>
<value>cn</value>
</param>
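To sanity-check the topology after editing it, Knox ships a CLI auth test; a sketch, assuming your topology is named default and sam is a valid AD account:
/usr/hdp/current/knox-server/bin/knoxcli.sh user-auth-test --cluster default --u sam --p 'sam-password'
A successful bind confirms the Shiro/LDAP settings before you involve the Ranger plugin.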
12-26-2016
02:03 PM
Hi
@Sami Ahmad, it isn't the krb5.conf file that is corrupt, but rather the information Ambari has in its database to manage your krb5.conf file. From what I can see above, there is no configuration version selected, so Ambari is unable to find the configuration data. In my cluster there is a version selected for each type, which should be the latest version. Here is what mine looks like; notice the latest selected versions.
ambari=> select * from clusterconfigmapping where type_name = 'krb5-conf' or type_name = 'kerberos-env' order by version_tag desc;
cluster_id | type_name | version_tag | create_timestamp | selected | user_name
------------+--------------+----------------------+------------------+----------+-----------
2 | krb5-conf | version1478018911089 | 1478018910394 | 1 | admin
2 | kerberos-env | version1478018911089 | 1478018910391 | 1 | admin
2 | kerberos-env | version1477959455789 | 1477959455113 | 0 | admin
2 | krb5-conf | version1477959455789 | 1477959455120 | 0 | admin
2 | kerberos-env | version1477959390268 | 1477959389823 | 0 | admin
2 | krb5-conf | version1477959390268 | 1477959389814 | 0 | admin
2 | krb5-conf | version1477956530144 | 1477956529438 | 0 | admin
2 | kerberos-env | version1477956530144 | 1477956529436 | 0 | admin
2 | krb5-conf | version1477687536774 | 1477687536111 | 0 | admin
2 | kerberos-env | version1477687536774 | 1477687536113 | 0 | admin
2 | krb5-conf | version1 | 1477680416621 | 0 | admin
2 | kerberos-env | version1 | 1477680416662 | 0 | admin
(12 rows)
This command shows what Ambari thinks the latest version is, along with its content.
[root@chupa1 /]# /var/lib/ambari-server/resources/scripts/configs.sh -u admin -p admin get localhost myclustername krb5-conf
USERID=admin
PASSWORD=admin
########## Performing 'GET' on (Site:krb5-conf, Tag:version1478018911089)
"properties" : {
"conf_dir" : "/etc",
"content" : "[libdefaults]\n renew_lifetime = 7d\n forwardable= true\n default_realm = {{realm|upper()}}\n ticket_lifetime = 48h\n dns_lookup_realm = false\n dns_lookup_kdc = false\n #default_tgs_enctypes = {{encryption_types}}\n #default_tkt_enctypes ={{encryption_types}}\n\n{% if domains %}\n[domain_realm]\n{% for domain in domains.split(',') %}\n {{domain}} = {{realm|upper()}}\n{% endfor %}\n{%endif %}\n\n[logging]\n default = FILE:/var/log/krb5kdc.log\nadmin_server = FILE:/var/log/kadmind.log\n kdc = FILE:/var/log/krb5kdc.log\n\n[realms]\n {{realm}} = {\n admin_server = {{admin_server_host|default(kdc_host, True)}}\n kdc = chupa1.openstacklocal\n }\n\n{# Append additional realm declarations below dav#}",
"domains" : "",
"manage_krb5_conf" : "true"
}
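If your clusterconfigmapping rows show no selected version at all, one possible fix is to mark the latest version_tag as selected directly in the database. This is a hedged sketch only: stop ambari-server and back up the database first, and note that the tag below is the one from my cluster above, so substitute your own latest tag:
ambari=> update clusterconfigmapping set selected = 0 where type_name in ('krb5-conf','kerberos-env');
ambari=> update clusterconfigmapping set selected = 1 where type_name in ('krb5-conf','kerberos-env') and version_tag = 'version1478018911089';
Then restart ambari-server and re-check with the configs.sh command above.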
12-26-2016
01:11 PM
Looks like you have another thread going on this. https://community.hortonworks.com/questions/74041/ranger-kms-install-failing-1.html
12-26-2016
01:06 PM
One way is to use configs.sh:
[root@chupa1 /]# /var/lib/ambari-server/resources/scripts/configs.sh -u admin -p admin get localhost myclustername krb5-conf
USERID=admin
PASSWORD=admin
########## Performing 'GET' on (Site:krb5-conf, Tag:version1478018911089)
"properties" : {
"conf_dir" : "/etc",
"content" : "[libdefaults]\n renew_lifetime = 7d\n forwardable= true\n default_realm = {{realm|upper()}}\n ticket_lifetime = 48h\n dns_lookup_realm = false\n dns_lookup_kdc = false\n #default_tgs_enctypes = {{encryption_types}}\n #default_tkt_enctypes ={{encryption_types}}\n\n{% if domains %}\n[domain_realm]\n{% for domain in domains.split(',') %}\n {{domain}} = {{realm|upper()}}\n{% endfor %}\n{%endif %}\n\n[logging]\n default = FILE:/var/log/krb5kdc.log\nadmin_server = FILE:/var/log/kadmind.log\n kdc = FILE:/var/log/krb5kdc.log\n\n[realms]\n {{realm}} = {\n admin_server = {{admin_server_host|default(kdc_host, True)}}\n kdc = chupa1.openstacklocal\n }\n\n{# Append additional realm declarations below dav#}",
"domains" : "",
"manage_krb5_conf" : "true"
}
[root@chupa1 /]# /var/lib/ambari-server/resources/scripts/configs.sh -u admin -p admin get localhost myclustername kerberos-env
USERID=admin
PASSWORD=admin
########## Performing 'GET' on (Site:kerberos-env, Tag:version1478018911089)
"properties" : {
"admin_server_host" : "chupa1.openstacklocal",
"case_insensitive_username_rules" : "false",
"encryption_types" : "aes des3-cbc-sha1 rc4 des-cbc-md5",
"executable_search_paths" : "/usr/bin, /usr/kerberos/bin, /usr/sbin, /usr/lib/mit/bin, /usr/lib/mit/sbin",
"install_packages" : "true",
"kdc_host" : "chupa1.openstacklocal",
"kdc_type" : "mit-kdc",
"manage_identities" : "true",
"password_length" : "20",
"password_min_digits" : "1",
"password_min_lowercase_letters" : "1",
"password_min_punctuation" : "1",
"password_min_uppercase_letters" : "1",
"password_min_whitespace" : "0",
"realm" : "CHUPA.COM",
"service_check_principal_name" : "-"
}
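configs.sh can also write values back, which is handy if one of these properties is wrong; a sketch, reusing the cluster name from above (the property and value are illustrative):
[root@chupa1 /]# /var/lib/ambari-server/resources/scripts/configs.sh -u admin -p admin set localhost myclustername kerberos-env "kdc_host" "chupa1.openstacklocal"
Ambari creates a new config version for that type when you do this.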
12-25-2016
08:09 PM
SYMPTOM: Knox logs are filling up disk space.
ROOT CAUSE: Kerberos debug is turned on by default, causing the gateway.out file to grow rapidly.
RESOLUTION: To turn off Kerberos debug logging:
1. Go to Ambari: KNOX -> Configs -> Advanced gateway-site.
2. Change parameter sun.security.krb5.debug from true to false.
3. Restart Knox.
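If you prefer the command line, the same change can likely be made with Ambari's configs.sh script; a sketch, assuming the property lives in the gateway-site config type and your cluster is named myclustername:
/var/lib/ambari-server/resources/scripts/configs.sh -u admin -p admin set localhost myclustername gateway-site "sun.security.krb5.debug" "false"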
12-25-2016
08:06 PM
SYMPTOM:
HBase gives a "KeyValue size too large" error when inserting large values.
ERROR:
java.lang.IllegalArgumentException: KeyValue size too large
at org.apache.hadoop.hbase.client.HTable.validatePut(HTable.java:1521)
at org.apache.hadoop.hbase.client.BufferedMutatorImpl.validatePut(BufferedMutatorImpl.java:147)
at org.apache.hadoop.hbase.client.BufferedMutatorImpl.doMutate(BufferedMutatorImpl.java:134)
at org.apache.hadoop.hbase.client.BufferedMutatorImpl.mutate(BufferedMutatorImpl.java:105)
at org.apache.hadoop.hbase.client.HTable.put(HTable.java:1050)
at org.apache.hadoop.hbase.rest.RowResource.update(RowResource.java:229)
at org.apache.hadoop.hbase.rest.RowResource.put(RowResource.java:318)
at sun.reflect.GeneratedMethodAccessor62.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at com.sun.jersey.spi.container.JavaMethodInvokerFactory$1.invoke(JavaMethodInvokerFactory.java:60)
at com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$ResponseOutInvoker._dispatch(AbstractResourceMethodDispatchProvider.java:205)
at com.sun.jersey.server.impl.model.method.dispatch.ResourceJavaMethodDispatcher.dispatch(ResourceJavaMethodDispatcher.java:75)
at com.sun.jersey.server.impl.uri.rules.HttpMethodRule.accept(HttpMethodRule.java:288)
at com.sun.jersey.server.impl.uri.rules.SubLocatorRule.accept(SubLocatorRule.java:134)
at ...
ROOT CAUSE:
hbase.client.keyvalue.maxsize is set too low.
RESOLUTION:
Set hbase.client.keyvalue.maxsize=0, which disables the client-side KeyValue size check. Just be careful with this: too large a KeyValue (>1-2 GB) could have performance implications.
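A sketch of making that change from the command line with Ambari's configs.sh (the cluster name is illustrative; the property lives in hbase-site):
/var/lib/ambari-server/resources/scripts/configs.sh -u admin -p admin set localhost myclustername hbase-site "hbase.client.keyvalue.maxsize" "0"
Restart HBase (and the REST server, in this case) afterwards so clients pick up the new limit.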
12-25-2016
08:00 PM
3 Kudos
Below are some examples of how you would achieve this:
Case 1: Restrict to users in a single group. In this example, only users who are members of the "scientist" group are allowed to log in to Ranger Admin.
The user search filter parameter would look something like this:
(&(sAMAccountName={0})(memberof=cn=scientist,ou=groups,dc=hwqe,dc=hortonworks,dc=com))
Case 2: Restrict to users in multiple groups. In this example, only users who are members of either the "scientist" group or the "analyst" group are allowed to log in to Ranger Admin.
The user search filter parameter would look something like this:
(&(sAMAccountName={0})(|(memberof=cn=scientist,ou=groups,dc=hwqe,dc=hortonworks,dc=com)(memberof=cn=analyst,ou=groups,dc=hwqe,dc=hortonworks,dc=com)))
Case 3: Restrict to a given list of users. In this example, only users whose cn (common name) starts with "sam r" are allowed to log in to Ranger Admin.
The user search filter parameter would look something like this:
(&(sAMAccountName={0})(cn=sam r*))
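Before pasting a filter into Ranger, you can test it with ldapsearch; a sketch, substituting a real account name for the {0} placeholder (the host, bind DN, and base DN are illustrative):
ldapsearch -x -H ldap://ad.hwqe.hortonworks.com:389 -D "binduser@hwqe.hortonworks.com" -W -b "dc=hwqe,dc=hortonworks,dc=com" "(&(sAMAccountName=sam)(memberof=cn=scientist,ou=groups,dc=hwqe,dc=hortonworks,dc=com))" dn
If the filter is correct, the query returns exactly the DNs of the users who should be able to log in.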
12-25-2016
07:48 PM
Problem:
A time-to-live (TTL) was never set for the collection data on the SolrCloud server, so Ranger audit documents filled up the disks on the nodes.
Solution:
1. Delete the collection through the Solr API, since the Ranger audit archive was stored in HDFS anyway. (The following cleared up disk space.)
http://<solr_host>:<solr_port>/solr/admin/collections?action=DELETE&name=collection
2. Set the configuration in solr to have a time to live.
a. Download each of the following configs from ZooKeeper: solrconfig.xml, schema.xml, and managed-schema:
/opt/hostname-hdpsearch/solr/server/scripts/cloud-scripts/zkcli.sh -zkhost <zookeeper host>:<zookeeper port> -cmd get /ranger_audits/configs/ranger_audits/solrconfig.xml >/tmp/solrconfig.xml
/opt/hostname-hdpsearch/solr/server/scripts/cloud-scripts/zkcli.sh -zkhost <zookeeper host>:<zookeeper port> -cmd get /ranger_audits/configs/ranger_audits/schema.xml >/tmp/schema.xml
/opt/hostname-hdpsearch/solr/server/scripts/cloud-scripts/zkcli.sh -zkhost <zookeeper host>:<zookeeper port> -cmd get /ranger_audits/configs/ranger_audits/managed-schema >/tmp/managed-schema
b. Add the following to solrconfig.xml:
<updateRequestProcessorChain name="add-unknown-fields-to-the-schema">
<processor class="solr.DefaultValueUpdateProcessorFactory">
<str name="fieldName">_ttl_</str>
<str name="value">+90DAYS</str>
</processor>
<processor class="solr.processor.DocExpirationUpdateProcessorFactory">
<int name="autoDeletePeriodSeconds">86400</int>
<str name="ttlFieldName">_ttl_</str>
<str name="expirationFieldName">_expire_at_</str>
</processor>
<processor class="solr.FirstFieldValueUpdateProcessorFactory">
<str name="fieldName">_expire_at_</str>
</processor>
</updateRequestProcessorChain>
c. Add the following to schema.xml and managed-schema:
<field name="_expire_at_" type="tdate" multiValued="false" stored="true" docValues="true"/>
<field name="_ttl_" type="string" multiValued="false" indexed="true" stored="true"/>
d. Upload each edited file to ZooKeeper:
/opt/hostname-hdpsearch/solr/server/scripts/cloud-scripts/zkcli.sh -zkhost <zookeeper host>:<zookeeper port> -cmd putfile /ranger_audits/configs/ranger_audits/solrconfig.xml /tmp/solrconfig.xml
/opt/hostname-hdpsearch/solr/server/scripts/cloud-scripts/zkcli.sh -zkhost <zookeeper host>:<zookeeper port> -cmd putfile /ranger_audits/configs/ranger_audits/schema.xml /tmp/schema.xml
/opt/hostname-hdpsearch/solr/server/scripts/cloud-scripts/zkcli.sh -zkhost <zookeeper host>:<zookeeper port> -cmd putfile /ranger_audits/configs/ranger_audits/managed-schema /tmp/managed-schema
3. Make sure the ranger_audits replica directories have been removed from the Solr data directories on the local filesystem of each node.
4. Lastly, issue the create command:
http://<host>:8983/solr/admin/collections?action=CREATE&name=ranger_audits&collection.configName=ranger_audits&numShards=2
MORE INFO:
Check out more Solr General info and Ambari-infra TTL info here: https://community.hortonworks.com/articles/63853/solr-ttl-auto-purging-solr-documents-ranger-audits.html
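To verify the TTL processors are taking effect after the collection is recreated, you can check that newly indexed audit documents carry the TTL fields; a sketch (host and port are illustrative):
curl "http://<solr_host>:<solr_port>/solr/ranger_audits/select?q=_ttl_:*&fl=_ttl_,_expire_at_&rows=1"
Documents should come back with _ttl_ = +90DAYS and a populated _expire_at_ date.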
12-25-2016
07:30 PM
Also, see https://issues.apache.org/jira/browse/KNOX-762
12-25-2016
07:28 PM
httpclient-451jar.zip
12-25-2016
07:24 PM
ERRORS:
From the HDFS log:
2016-10-30 17:44:04,226 ERROR impl.CloudSolrClient (CloudSolrClient.java:requestWithRetryOnStaleState(903)) - Request to collection ranger_audits failed due to (401) org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error from server at https://hwx.com:8886/solr/ranger_audits_shard1_replica1: Expected mime type application/octet-stream but got text/html. <html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8"/>
<title>Error 401 Authentication required</title>
</head>
<body><h2>HTTP ERROR 401</h2>
<p>Problem accessing /solr/ranger_audits_shard1_replica1/update. Reason:
<pre> Authentication required</pre></p><hr><i><small>Powered by Jetty://</small></i><hr/>
From the Hive log:
2016-10-30 17:50:46,189 WARN [org.apache.ranger.audit.queue.AuditBatchQueue1]: provider.BaseAuditHandler (BaseAuditHandler.java:logFailedEvent(374)) - failed
to log audit event: {"repoType":3,"repo":"AA_Prod_hive","reqUser":"alex","evtTime":"2016-10-30 17:50:43.587","access":"USE","resource":"default","resType":"@
database","action":"_any","result":0,"policy":-1,"enforcer":"ranger-acl","sess":"cf4d0c81-c4df-483b-ab51-aa7bb5cb1633","cliType":"HIVESERVER2","cliIP":"172.26
.205.88","reqData":"show tables","agentHost":"hwx.com","logType":"RangerAudit","id":"d41d25ee-d198-475d-a288-11d6cc76535c-0","seq_num":1
,"event_count":1,"event_dur_ms":0,"tags":[],"additional_info":"{\"remote-ip-address\":172.26.1.1, \"forwarded-ip-addresses\":[]"}
org.apache.solr.client.solrj.impl.CloudSolrClient$RouteException: Error from server at https://hwx:8886/solr/ranger_audits_shard1_re
plica1: Expected mime type application/octet-stream but got text/html. <html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8"/>
<title>Error 401 Authentication required</title>
DEBUG:
Once DEBUG logging for krb5 is enabled (-Dsun.security.krb5.debug=true), we can see in both HDFS (log: hadoop-hdfs-namenode-hwx.com.out) and Hive (log: hive-server2.out) the same issue as in the Knox ticket: the client tries to use the HTTPS/_HOST principal instead of HTTP/_HOST, which is the standard for SPNEGO:
>>KRBError:
sTime is Sun Oct 30 17:50:08 PDT 2016 1477875008000
suSec is 518135
error code is 7
error Message is Server not found in Kerberos database
sname is HTTPS/host@HWX.COM
msgType is 30
ROOT CAUSE:
There is a defect in httpclient 4.5.2 that got introduced in HDP 2.5.
WORKAROUND:
Downgrade all of Ranger's httpclient jars from 4.5.2 to 4.5.1. This will be fixed in a future release.
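A hedged sketch of the downgrade, assuming the stock HDP layout (locate every copy first; the paths and the source of the 4.5.1 jar will vary per install):
# find the affected jars under the Ranger components and plugins
find /usr/hdp/current/ -name "httpclient-4.5.2.jar"
# for each hit: move the 4.5.2 jar aside, copy httpclient-4.5.1.jar into its place,
# then restart the services that embed the Ranger plugin
Keep the originals around so you can roll back once the fixed release ships.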
12-23-2016
12:52 AM
1 Kudo
This doesn't look to be Ranger KMS related. This looks like an issue with Ambari, possibly with what it has stored in the Ambari database for krb5-conf to update the krb5.conf files on hosts. You may want to check the data you have for kerberos-env and krb5-conf using the Ambari API. It should look similar to the following:
[
{
"Clusters": {
"desired_config": {
"type": "krb5-conf",
"tag": "version1234",
"properties": {
"domains":"",
"manage_krb5_conf": "true",
"conf_dir":"/etc",
"content" : "[libdefaults]\n renew_lifetime = 7d\n forwardable= true\n default_realm = {{realm|upper()}}\n ticket_lifetime = 24h\n dns_lookup_realm = false\n dns_lookup_kdc = false\n #default_tgs_enctypes = {{encryption_types}}\n #default_tkt_enctypes ={{encryption_types}}\n\n{% if domains %}\n[domain_realm]\n{% for domain in domains.split(',') %}\n {{domain}} = {{realm|upper()}}\n{% endfor %}\n{%endif %}\n\n[logging]\n default = FILE:/var/log/krb5kdc.log\nadmin_server = FILE:/var/log/kadmind.log\n kdc = FILE:/var/log/krb5kdc.log\n\n[realms]\n {{realm}} = {\n admin_server = {{admin_server_host|default(kdc_host, True)}}\n kdc = {{kdc_host}}\n }\n\n{# Append additional realm declarations below #}\n"
}
}
}
},
{
"Clusters": {
"desired_config": {
"type": "kerberos-env",
"tag": "version1234",
"properties": {
"kdc_type": "mit-kdc",
"manage_identities": "false",
"install_packages": "true",
"encryption_types": "aes des3-cbc-sha1 rc4 des-cbc-md5",
"realm" : "EXAMPLE.COM",
"kdc_host" : "hdc.host",
"admin_server_host" : "kadmin.host",
"executable_search_paths" : "/usr/bin, /usr/kerberos/bin, /usr/sbin, /usr/lib/mit/bin, /usr/lib/mit/sbin",
"password_length": "20",
"password_min_lowercase_letters": "1",
"password_min_uppercase_letters": "1",
"password_min_digits": "1",
"password_min_punctuation": "1",
"password_min_whitespace": "0",
"service_check_principal_name" : "${cluster_name}-${short_date}",
"case_insensitive_username_rules" : "false"
}
}
}
}
]
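A sketch of pulling that data over the Ambari REST API rather than the database (the host, cluster name, and tag are illustrative; take the tag from the desired_config returned by the first call):
curl -s -u admin:admin -H "X-Requested-By: ambari" "http://ambari.host:8080/api/v1/clusters/mycluster?fields=Clusters/desired_configs/krb5-conf"
curl -s -u admin:admin -H "X-Requested-By: ambari" "http://ambari.host:8080/api/v1/clusters/mycluster/configurations?type=krb5-conf&tag=version1234"
The second call returns the properties block, which should match the JSON layout above.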
12-22-2016
09:49 PM
However, in Ambari 2.4.x and up it should create the principal and keytab automatically. I have seen cases where this didn't happen prior to 2.4.2, namely on 2.4.0.1 and 2.4.1.
12-22-2016
09:01 PM
In 2.4.2 you have to manually set up the Ambari principal and keytab: https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.4.2/bk_Security_Guide/content/_set_up_kerberos_for_ambari_server.html I see the same documentation for 2.5.3: https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.5.3/bk_security/content/_set_up_kerberos_for_ambari_server.html
12-20-2016
05:26 PM
Looks like we currently don't have this in our docs yet. Per engineering, I will file a documentation bug. https://issues.apache.org/jira/browse/HDFS-6261
12-16-2016
06:50 PM
If you are not using Ranger HBase policies to grant permissions, then you will have to use the HBase shell to grant them. For example:
R - represents read privilege.
W - represents write privilege.
X - represents execute privilege.
C - represents create privilege.
A - represents admin privilege.
hbase(main):018:0> grant 'sami','RWXCA','default'
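You can confirm the grant took effect from the same shell; a quick sketch (output will vary):
hbase(main):019:0> user_permission 'default'
This lists each user's permissions on the 'default' table, and 'sami' should now show RWXCA.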
12-13-2016
04:34 PM
Like integration with DbVisualizer, SQuirreL SQL, etc.
12-13-2016
12:28 AM
Hi @Rob Ketcherside, WebHDFS is required with the Ambari Files View. I believe this is because Ambari views can't be sure an HDFS client is installed on the node, and as far as I have always known, views use WebHDFS to communicate.
12-10-2016
02:08 AM
1 Kudo
Hi @Monika Garg, Are you sure you have the correct passwords in all of your Ranger DB password settings? Also, check the Ranger xa_portal.log and usersync.log for any possible clues. One thing I have seen cause lockouts before: you must have something like (sAMAccountName={0}) or (uid={0}) as the user search filter in your LDAP/AD settings, since usersync uses this setting.
12-07-2016
11:25 PM
1 Kudo
Hi Sami, since the above command requires superuser privileges, I would do it like this for your example:
[root@chupa1 ~]# sudo su - hdfs
[hdfs@chupa1 ~]$ klist -kt /etc/security/keytabs/hdfs.headless.keytab
Keytab name: FILE:/etc/security/keytabs/hdfs.headless.keytab
KVNO Timestamp Principal
---- ----------------- --------------------------------------------------------
3 12/05/16 17:05:04 hdfs-chupa@CHUPA.COM
3 12/05/16 17:05:04 hdfs-chupa@CHUPA.COM
3 12/05/16 17:05:04 hdfs-chupa@CHUPA.COM
3 12/05/16 17:05:04 hdfs-chupa@CHUPA.COM
3 12/05/16 17:05:04 hdfs-chupa@CHUPA.COM
[hdfs@chupa1 ~]$ kinit -kt /etc/security/keytabs/hdfs.headless.keytab hdfs-chupa@CHUPA.COM
[hdfs@chupa1 ~]$ klist
Ticket cache: FILE:/tmp/krb5cc_503
Default principal: hdfs-chupa@CHUPA.COM
Valid starting Expires Service principal
12/07/16 22:47:23 12/08/16 22:47:23 krbtgt/CHUPA.COM@CHUPA.COM
renew until 12/07/16 22:47:23
[hdfs@chupa1 ~]$ hdfs balancer -threshold 1
16/12/07 22:47:47 INFO balancer.Balancer: Using a threshold of 1.0
16/12/07 22:47:47 INFO balancer.Balancer: namenodes = [hdfs://chupa1.openstacklocal:8020]
16/12/07 22:47:47 INFO balancer.Balancer: parameters = Balancer.BalancerParameters [BalancingPolicy.Node, threshold = 1.0, max idle iteration = 5, #excluded nodes = 0, #included nodes = 0, #source nodes = 0, #blockpools = 0, run during upgrade = false]
16/12/07 22:47:47 INFO balancer.Balancer: included nodes = []
16/12/07 22:47:47 INFO balancer.Balancer: excluded nodes = []
16/12/07 22:47:47 INFO balancer.Balancer: source nodes = []
Time Stamp Iteration# Bytes Already Moved Bytes Left To Move Bytes Being Moved
16/12/07 22:47:49 INFO balancer.KeyManager: Block token params received from NN: update interval=10hrs, 0sec, token lifetime=10hrs, 0sec
16/12/07 22:47:49 INFO block.BlockTokenSecretManager: Setting block keys
16/12/07 22:47:49 INFO balancer.KeyManager: Update block keys every 2hrs, 30mins, 0sec
16/12/07 22:47:50 INFO balancer.Balancer: dfs.balancer.movedWinWidth = 5400000 (default=5400000)
16/12/07 22:47:50 INFO balancer.Balancer: dfs.balancer.moverThreads = 1000 (default=1000)
16/12/07 22:47:50 INFO balancer.Balancer: dfs.balancer.dispatcherThreads = 200 (default=200)
16/12/07 22:47:50 INFO balancer.Balancer: dfs.datanode.balance.max.concurrent.moves = 5 (default=5)
16/12/07 22:47:50 INFO balancer.Balancer: dfs.balancer.getBlocks.size = 2147483648 (default=2147483648)
16/12/07 22:47:50 INFO balancer.Balancer: dfs.balancer.getBlocks.min-block-size = 10485760 (default=10485760)
16/12/07 22:47:50 INFO block.BlockTokenSecretManager: Setting block keys
16/12/07 22:47:50 INFO balancer.Balancer: dfs.balancer.max-size-to-move = 10737418240 (default=10737418240)
16/12/07 22:47:50 INFO balancer.Balancer: dfs.blocksize = 134217728 (default=134217728)
16/12/07 22:47:50 INFO net.NetworkTopology: Adding a new node: /default-rack/172.26.76.168:1019
16/12/07 22:47:50 INFO net.NetworkTopology: Adding a new node: /default-rack/172.26.76.166:1019
16/12/07 22:47:50 INFO net.NetworkTopology: Adding a new node: /default-rack/172.26.76.167:1019
16/12/07 22:47:50 INFO balancer.Balancer: 0 over-utilized: []
16/12/07 22:47:50 INFO balancer.Balancer: 0 underutilized: []
The cluster is balanced. Exiting...
Dec 7, 2016 10:47:50 PM 0 0 B 0 B 0 B
Dec 7, 2016 10:47:50 PM Balancing took 3.202 seconds
12-03-2016
12:33 AM
2 Kudos
Hi Raja,
In Ranger 0.5 it is a lot more difficult to get the data from the DB directly. What I have done in the past is use the REST APIs to download the content and flatten out the data. You can capture the REST calls from your browser's developer tools. For example, JSON:
curl 'http://127.0.0.1:6080/service/plugins/policies?page=0&pageSize=25&startIndex=0&serviceType=hive&group=&_=1451414078895' -H 'Cookie: AMBARISESSIONID=bkhp0zgjk8be1a5hy438zyk7r; JSESSIONID=DACAF20F2DDF50BD05BFA82C37AE758F; clientTimeOffset=480' -H 'Accept-Encoding: gzip, deflate, sdch' -H 'Accept-Language: en-US,en;q=0.8' -H 'User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/47.0.2526.106 Safari/537.36' -H 'Accept: application/json, text/javascript, */*; q=0.01' -H 'Referer: http://127.0.0.1:6080/index.html' -H 'X-Requested-With: XMLHttpRequest' -H 'Connection: keep-alive' --compressed
or XML:
curl 'http://127.0.0.1:6080/service/plugins/policies?page=0&pageSize=25&startIndex=0&serviceType=hive&group=&_=1451414078895' -H 'Accept-Encoding: gzip, deflate, sdch' -H 'Accept-Language: en-US,en;q=0.8' -H 'Upgrade-Insecure-Requests: 1' -H 'User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/47.0.2526.106 Safari/537.36' -H 'Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8' -H 'Cookie: AMBARISESSIONID=bkhp0zgjk8be1a5hy438zyk7r; JSESSIONID=DACAF20F2DDF50BD05BFA82C37AE758F; clientTimeOffset=480' -H 'Connection: keep-alive' --compressed
If you change serviceType= to other components, you can get the reports for those components, etc. Once you have the data, you can use something like XPath to parse it. For example, say I want all Hive databases in all my policies:
rangerPolicyList/policies/resources/entry[key='database']/value//values
Say I want all users:
/rangerPolicyList/policies/policyItems/users
Say I want all groups:
/rangerPolicyList/policies/policyItems/groups
Or you can use a parsing tool of your choice, for example http://www.convertcsv.com/json-to-csv.htm to convert the JSON file to CSV and import it into Excel. In HDP 2.5 you can download Ranger reports to CSV/Excel directly.
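The browser-captured cookies above aren't strictly necessary; in my experience the same endpoint answers to basic auth, so a trimmed-down sketch (credentials and filter illustrative):
curl -s -u admin:admin 'http://127.0.0.1:6080/service/plugins/policies?serviceType=hive&pageSize=200' -H 'Accept: application/json'
Send an Accept: application/xml header instead if you want the XML form for XPath.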
10-24-2016
11:26 PM
Hi Jennie, You can just pull the query out to a file and check how big the file is.
06-08-2016
01:44 AM
5 Kudos
SYMPTOM: Specifying a valid group in a Ranger Knox policy results in a 403 authorization error.
ERROR: The result is a 403 Forbidden error.
ROOT CAUSE: Most likely the topology is not set up in Knox for LDAP groups to be passed to Ranger from the Knox plugin.
RESOLUTION: Make sure the following values are present and correct in the topology:
<!-- changes needed for group sync-->
<param>
<name>main.ldapRealm.authorizationEnabled</name>
<value>true</value>
</param>
<param>
<name>main.ldapRealm.groupSearchBase</name>
<value>OU=MyUsers,DC=AD-HDP,DC=COM</value>
</param>
<param>
<name>main.ldapRealm.groupObjectClass</name>
<value>group</value>
</param>
<param>
<name>main.ldapRealm.groupIdAttribute</name>
<value>cn</value>
</param>
ALTERNATIVE SOLUTION: Instead of getting the above LDAP group settings working, open up Knox authorization to everyone by using the 'Public' group value on the Knox policy, and then do authorization at the other service-level policies such as HDFS, Hive, and HBase.
DEBUG TECHNIQUES: The following Knox log setting should show you what is getting passed to Ranger from the Knox plugin. Modify gateway-log4j.properties as below, restart Knox, and review the Ranger Knox plugin log in the file ranger.knoxagent.log:
#Ranger Knox Plugin debug
ranger.knoxagent.logger=DEBUG,console,KNOXAGENT
ranger.knoxagent.log.file=ranger.knoxagent.log
log4j.logger.org.apache.ranger=${ranger.knoxagent.logger}
log4j.additivity.org.apache.ranger=false
log4j.appender.KNOXAGENT=org.apache.log4j.DailyRollingFileAppender
log4j.appender.KNOXAGENT.File=${app.log.dir}/${ranger.knoxagent.log.file}
log4j.appender.KNOXAGENT.layout=org.apache.log4j.PatternLayout
log4j.appender.KNOXAGENT.layout.ConversionPattern=%d{ISO8601} %p %c{2}: %m%n %L
log4j.appender.KNOXAGENT.DatePattern=.yyyy-MM-dd
06-01-2016
01:48 AM
SYMPTOM:
KMS gets 500 error when decrypting files when being accessed from another one way trust realm.
ERRORS:
Command line error:
[root@support ~]$ hdfs dfs -cat /zone_encr3/abc1.txt
cat: org.apache.hadoop.security.authentication.client.AuthenticationException: Authentication failed, status: 500, message: Internal Server Error
KMS Stack Trace error:
2016-05-04 15:44:21,677 ERROR [webservices-driver] - Servlet.service() for servlet [webservices-driver] in context with path [/kms] threw exception
org.apache.hadoop.security.authentication.util.KerberosName$NoMatchingRule: No rules applied to user06@HDP.COM
at org.apache.hadoop.security.authentication.util.KerberosName.getShortName(KerberosName.java:389)
at org.apache.hadoop.security.authentication.server.KerberosAuthenticationHandler$2.run(KerberosAuthenticationHandler.java:377)
at org.apache.hadoop.security.authentication.server.KerberosAuthenticationHandler$2.run(KerberosAuthenticationHandler.java:347)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.authentication.server.KerberosAuthenticationHandler.authenticate(KerberosAuthenticationHandler.java:347)
at org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationHandler.authenticate(DelegationTokenAuthenticationHandler.java:348)
at org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:519)
at org.apache.hadoop.crypto.key.kms.server.KMSAuthenticationFilter.doFilter(KMSAuthenticationFilter.java:129)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:241)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:208)
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:220)
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:122)
at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:501)
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:171)
at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:103)
at org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:950)
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:116)
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:408)
at org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:1070)
at org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:611)
at org.apache.tomcat.util.net.JIoEndpoint$SocketProcessor.run(JIoEndpoint.java:314)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
at java.lang.Thread.run(Thread.java:745)
ROOT CAUSE:
There is a one-way trust from another realm, and the auth-to-local rules are not present in the KMS configuration.
RESOLUTION:
The hadoop.kms.authentication.kerberos.name.rules property needs to contain the auth-to-local rules.
By default this property is set to DEFAULT. If you replace this value with the value of the auth_to_local property (core-site.xml) and restart the Ranger KMS service, a user from the other realm will be able to decrypt the file successfully.
Copy the rules from core-site.xml (auth_to_local property) to "hadoop.kms.authentication.kerberos.name.rules" in KMS and restart the KMS service.
Note: When you paste the rules from auth_to_local to "hadoop.kms.authentication.kerberos.name.rules", they are pasted as space-separated values instead of newline-separated. This is fine.
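For instance, a rule along these lines (illustrative, written against the user06@HDP.COM principal from the stack trace above) maps principals from the trusted realm to their short names:
RULE:[1:$1@$0](.*@HDP\.COM)s/@.*//
DEFAULT
With that in hadoop.kms.authentication.kerberos.name.rules, user06@HDP.COM should resolve to user06 and the 500 error should go away.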
06-01-2016
01:47 AM
2 Kudos
1. Make sure webhbase (the HBase REST server) is running:
[root@ambari-server]# curl http://localhost:60080/version
rest 0.0.3 [JVM: Oracle Corporation 1.8.0_60-25.60-b23] [OS: Linux 2.6.32-504.el6.x86_64 amd64] [Server: jetty/6.1.26.hwx] [Jersey: 1.9]
2. If the HBase REST daemon is not running, you will need to start it:
[root@ambari-server]# /usr/hdp/current/hbase-client/bin/hbase-daemon.sh start rest -p 60080
starting rest, logging to /var/log/hbase/hbase-root-rest-ambari-server.support.com.out
[root@ambari-server security]# ps -ef | grep 60080
root 19147 1 0 20:17 pts/1 00:00:00 bash /usr/hdp/current/hbase-client/bin/hbase-daemon.sh --config /usr/hdp/current/hbase-client/bin/../conf foreground_start rest -p 60080
root 19161 19147 28 20:17 pts/1 00:00:03 /usr/jdk64/jdk1.8.0_60/bin/java -Dproc_rest -XX:OnOutOfMemoryError=kill -9 %p -Dhdp.version=2.3.4.0-3485 -XX:+UseConcMarkSweepGC -XX:ErrorFile=/var/log/hbase/hs_err_pid%p.log -Djava.security.auth.login.config=/usr/hdp/current/hbase-regionserver/conf/hbase_client_jaas.conf -Djava.io.tmpdir=/tmp -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:/var/log/hbase/gc.log-201605312017 -Dhbase.log.dir=/var/log/hbase -Dhbase.log.file=hbase-root-rest-ambari-server.support.com.log -Dhbase.home.dir=/usr/hdp/current/hbase-client/bin/.. -Dhbase.id.str=root -Dhbase.root.logger=INFO,RFA -Djava.library.path=:/usr/hdp/2.3.4.0-3485/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.3.4.0-3485/hadoop/lib/native -Dhbase.security.logger=INFO,RFAS org.apache.hadoop.hbase.rest.RESTServer -p 60080 start
3. Test the webhbase call through Knox (https://cwiki.apache.org/confluence/display/KNOX/Examples+HBase):
[root@ambari-slave2 topologies]# curl -ik -u admin:admin-password -H "Accept: text/xml" -X GET 'https://localhost:8443/gateway/default/hbase/version'
HTTP/1.1 200 OK
Set-Cookie: JSESSIONID=vs8e1emsk00h1va2vk5aeq2b9;Path=/gateway/default;Secure;HttpOnly
Expires: Thu, 01 Jan 1970 00:00:00 GMT
Cache-Control: no-cache
Content-Type: text/xml
Content-Length: 192
Server: Jetty(8.1.14.v20131031)
<?xml version="1.0" standalone="yes"?><Version JVM="Oracle Corporation 1.8.0_60-25.60-b23" REST="0.0.3" OS="Linux 2.6.32-504.el6.x86_64 amd64" Server="jetty/6.1.26.hwx" Jersey="1.9"></Version>
05-31-2016
07:42 PM
1 Kudo
1. The username and password input fields in Ranger should be the LDAP user and password that authenticate through Knox.
2. Configure the admin topology to meet your configuration needs; by default it points to the demo LDAP server on the Knox server.
3. To test manually, log in to the URL directly from the Ranger server using the LDAP credentials: https://<knoxhost>:8443/gateway/admin/api/v1/topologies
4. Make sure the Ranger server trusts the Knox certificate being presented; if it doesn't, you will need to add the Knox gateway certificate to the Ranger/Java truststore (see the sketch below).
5. Currently, Knox service lookups don't work in Ranger and a bug has been filed to fix this; as of HDP 2.4, only topology lookup works in Ranger.
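A sketch of step 4 with keytool, assuming the default Knox keystore location and the JDK cacerts truststore (the alias names, paths, and the changeit password are the usual defaults; verify them for your install):
# export the Knox gateway certificate
keytool -exportcert -alias gateway-identity -keystore /usr/hdp/current/knox-server/data/security/keystores/gateway.jks -file /tmp/knox.crt
# import it into the Java truststore used by the Ranger admin process, then restart Ranger
keytool -importcert -alias knox-gateway -file /tmp/knox.crt -keystore $JAVA_HOME/jre/lib/security/cacerts -storepass changeit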
05-17-2016
11:00 PM
5 Kudos
SYMPTOM:
A Hive query through Knox fails quickly with a 500 error.
ERROR: You will find something like this in gateway-audit.log:
dispatch|uri|http://<hostname>:10000/cliservice?doAs=dav|success|Response status: 500
ROOT CAUSE:
The query being sent is larger than the default replayBufferSize.
RESOLUTION:
Change your topology to contain the following parameter in the service that is having the issue:
<param><name>replayBufferSize</name><value>16</value></param>
For example, if I had this issue with Hive, it would look something like:
<service>
<role>HIVE</role>
<url>$HTTP_SCHEME://$HIVE_HOST:10001/cliservice</url>
<param><name>replayBufferSize</name><value>16</value></param>
</service>
The value of replayBufferSize is in KB, with a default of 8 KB. Make sure the value is larger than your largest query size in order to get past this issue.