Member since: 10-20-2015 | Posts: 92 | Kudos Received: 78 | Solutions: 9
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 4092 | 06-25-2018 04:01 PM
 | 6905 | 05-09-2018 05:36 PM
 | 2424 | 03-16-2018 04:11 PM
 | 7573 | 05-18-2017 12:42 PM
 | 6323 | 03-28-2017 06:42 PM
02-09-2017
08:16 PM
12 Kudos
Your AD admins may be busy, and you may happen to know the Ambari admin principal used for enabling Kerberos. How would you go about adding a principal to AD with this information and adding it to your Kerberos keytab? Below is one way to do it. Thanks to @Robert Levas for collaborating with me on this.
1. Create the LDIF file add_user.ldif. (Make sure there are no spaces at the ends of any of these lines.)
dn: CN=HTTP/loadbalancerhost,OU=dav,OU=hortonworks,DC=HOST,DC=COM
changetype: add
objectClass: top
objectClass: person
objectClass: organizationalPerson
objectClass: user
distinguishedName: CN=HTTP/loadbalancerhost,OU=dav,OU=hortonworks,DC=HOST,DC=COM
cn: HTTP/loadbalancerhost
userAccountControl: 514
accountExpires: 0
userPrincipalName: HTTP/loadbalancerhost@HOST.COM
servicePrincipalName: HTTP/loadbalancerhost

dn: CN=HTTP/loadbalancerhost,OU=dav,OU=hortonworks,DC=host,DC=com
changetype: modify
replace: unicodePwd
unicodePwd::IgBoAGEAZABvAG8AcABSAG8AYwBrAHMAMQAyADMAIQAiAA==

dn: CN=HTTP/loadbalancerhost,OU=dav,OU=hortonworks,DC=HOST,DC=COM
changetype: modify
replace: userAccountControl
userAccountControl: 66048
Do not leave spaces at the ends of the above lines, or you will get an error like the following:
ldap_add: No such attribute (16)
additional info: 00000057: LdapErr: DSID-0C090D8A, comment: Error in attribute conversion operation, data 0, v2580
2. Create the Unicode password for the above principal (in this example, hadoopRocks123!) and use it to replace the unicodePwd value in step 1:
[root@host1 ~]# echo -n '"hadoopRocks123!"' | iconv -f UTF8 -t UTF16LE | base64 -w 0
IgBoAGEAZABvAG8AcABSAG8AYwBrAHMAMQAyADMAIQAiAA==
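If you want to sanity-check the encoded value before pasting it into the LDIF, you can decode it back to the quoted password (a quick verification using the same sample password shown above):
[root@host1 ~]# echo 'IgBoAGEAZABvAG8AcABSAG8AYwBrAHMAMQAyADMAIQAiAA==' | base64 -d | iconv -f UTF16LE -t UTF8
"hadoopRocks123!"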
3. Add the account to AD:
[root@host1 ~]# ldapadd -x -H ldaps://sme-2012-ad.support.com:636 -D "test1@host.com" -W -f add_user.ldif
Enter LDAP Password:
adding new entry "CN=HTTP/loadbalancerhost,OU=dav,OU=hortonworks,DC=HOST,DC=COM"
modifying entry "CN=HTTP/loadbalancerhost,OU=dav,OU=hortonworks,DC=HOST,DC=com"
modifying entry "CN=HTTP/loadbalancerhost,OU=dav,OU=hortonworks,DC=HOST,DC=COM"
4. Test the account with kinit:
[root@host1 ~]# kinit HTTP/loadbalancerhost@HOST.COM
Password for HTTP/loadbalancerhost@HOST.COM:
[root@host1 ~]# klist
Ticket cache: FILE:/tmp/krb5cc_0
Default principal: HTTP/loadbalancerhost@HOST.COM
Valid starting Expires Service principal
02/09/17 19:02:33 02/10/17 19:02:33 krbtgt/HOST.COM@HOST.COM
renew until 02/09/17 19:02:33
5. Take it one step further if you need to add the principal to a keytab file:
[root@host1 ~]# ktutil
ktutil: add_entry -password -p HTTP/loadbalancerhost@HOST.COM -k 1 -e aes128-cts-hmac-sha1-96
Password for HTTP/loadbalancerhost@HOST.COM:
ktutil: add_entry -password -p HTTP/loadbalancerhost@HOST.COM -k 1 -e aes256-cts-hmac-sha1-96
Password for HTTP/loadbalancerhost@HOST.COM:
ktutil: add_entry -password -p HTTP/loadbalancerhost@HOST.COM -k 1 -e arcfour-hmac-md5-exp
Password for HTTP/loadbalancerhost@HOST.COM:
ktutil: add_entry -password -p HTTP/loadbalancerhost@HOST.COM -k 1 -e des3-cbc-sha1
Password for HTTP/loadbalancerhost@HOST.COM:
ktutil: add_entry -password -p HTTP/loadbalancerhost@HOST.COM -k 1 -e des-cbc-md5
Password for HTTP/loadbalancerhost@HOST.COM:
ktutil: write_kt spenego.service.keytab
ktutil: exit
[root@host1 ~]# klist -ket spenego.service.keytab
Keytab name: FILE:spenego.service.keytab
KVNO Timestamp Principal
---- ----------------- --------------------------------------------------------
1 01/18/17 03:12:38 HTTP/loadbalancerhost@HOST.COM (aes128-cts-hmac-sha1-96)
1 01/18/17 03:12:38 HTTP/loadbalancerhost@HOST.COM (aes256-cts-hmac-sha1-96)
1 01/18/17 03:12:38 HTTP/loadbalancerhost@HOST.COM (arcfour-hmac-exp)
1 01/18/17 03:12:38 HTTP/loadbalancerhost@HOST.COM (des3-cbc-sha1)
1 01/18/17 03:12:38 HTTP/loadbalancerhost@HOST.COM (des-cbc-md5)
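To confirm the keytab works end to end, you can authenticate with it directly (a quick check; the file and principal names follow the example above):
[root@host1 ~]# kdestroy
[root@host1 ~]# kinit -kt spenego.service.keytab HTTP/loadbalancerhost@HOST.COM
[root@host1 ~]# klist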
01-19-2017
07:34 PM
Hi @Qi Wang, this should help you learn by example when it comes to configuring your Knox groups and how they relate to your ldapsearch. See Sample 4 specifically: https://cwiki.apache.org/confluence/display/KNOX/Using+Apache+Knox+with+ActiveDirectory Hope this helps.
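As a quick way to see which group entries your Knox LDAP realm would need to resolve for a given user, you can run an ldapsearch against AD like the one below (the host, bind DN, search base, and user DN are placeholders for illustration; adjust them to your directory):
ldapsearch -x -H ldaps://ad.example.com:636 -D "binduser@example.com" -W \
  -b "DC=example,DC=com" \
  "(&(objectClass=group)(member=CN=Qi Wang,OU=users,DC=example,DC=com))" cn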
12-27-2016
07:51 PM
3 Kudos
PROBLEM: Some users may belong to a large number of groups, causing a very long list of groups to be passed through the REST API headers in Ranger and Ranger KMS.
ERROR: error log from /var/log/ranger/kms/kms.log:
2016-12-01 14:04:12,048 INFO Http11Processor - Error parsing HTTP request header
Note: further occurrences of HTTP header parsing errors will be logged at DEBUG level.
java.lang.IllegalArgumentException: Request header is too large
at org.apache.coyote.http11.InternalInputBuffer.fill(InternalInputBuffer.java:515)
at org.apache.coyote.http11.InternalInputBuffer.fill(InternalInputBuffer.java:504)
at org.apache.coyote.http11.InternalInputBuffer.parseHeader(InternalInputBuffer.java:396)
at org.apache.coyote.http11.InternalInputBuffer.parseHeaders(InternalInputBuffer.java:271)
at org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:1007)
at org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:625)
at org.apache.tomcat.util.net.JIoEndpoint$SocketProcessor.run(JIoEndpoint.java:316)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
at java.lang.Thread.run(Thread.java:745)
ROOT CAUSE: REST API calls are sent with large headers; in this case, users with a large number of groups exceed the web server's maxHttpHeaderSize.
SOLUTION:
In Ambari, go to Ranger Admin -> Configs -> Advanced tab -> Custom ranger-admin-site -> Add Property. Put ranger.service.http.connector.property.maxHttpHeaderSize in the Key field and the required maxHttpHeaderSize value in the Value field.
Save the changes, then go to Ranger KMS -> Configs -> Advanced tab -> Custom ranger-kms-site -> Add Property. Put ranger.service.http.connector.property.maxHttpHeaderSize in the Key field and the required maxHttpHeaderSize value in the Value field.
Save the changes and restart all Ranger and Ranger KMS services.
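If you prefer to make the same change from the command line, Ambari's bundled configs.sh script can set the property (a sketch; the Ambari host, credentials, cluster name, and the 32768 value below are placeholders, and the script location may vary by Ambari version):
/var/lib/ambari-server/resources/scripts/configs.sh -u admin -p admin set ambari.example.com MyCluster ranger-admin-site ranger.service.http.connector.property.maxHttpHeaderSize 32768
/var/lib/ambari-server/resources/scripts/configs.sh -u admin -p admin set ambari.example.com MyCluster ranger-kms-site ranger.service.http.connector.property.maxHttpHeaderSize 32768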
12-25-2016
08:09 PM
SYMPTOM: Knox logs are filling up disk space.
ROOT CAUSE: Kerberos debug is turned on by default, causing the gateway.out file to grow rapidly.
RESOLUTION: To turn off Kerberos debug logging:
1. In Ambari, go to Knox -> Configs -> Advanced gateway-site.
2. Change parameter sun.security.krb5.debug from true to false.
3. Restart Knox.
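After the restart, you can confirm the log has stopped growing by watching its size (the path below is the usual HDP Knox log location; adjust if your log directory differs):
watch -n 60 'ls -lh /var/log/knox/gateway.out'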
12-25-2016
08:06 PM
SYMPTOM:
HBase gives a "KeyValue size too large" error when inserting large values.
ERROR: java.lang.IllegalArgumentException: KeyValue size too large
at org.apache.hadoop.hbase.client.HTable.validatePut(HTable.java:1521)
at org.apache.hadoop.hbase.client.BufferedMutatorImpl.validatePut(BufferedMutatorImpl.java:147)
at org.apache.hadoop.hbase.client.BufferedMutatorImpl.doMutate(BufferedMutatorImpl.java:134)
at org.apache.hadoop.hbase.client.BufferedMutatorImpl.mutate(BufferedMutatorImpl.java:105)
at org.apache.hadoop.hbase.client.HTable.put(HTable.java:1050)
at org.apache.hadoop.hbase.rest.RowResource.update(RowResource.java:229)
at org.apache.hadoop.hbase.rest.RowResource.put(RowResource.java:318)
at sun.reflect.GeneratedMethodAccessor62.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at com.sun.jersey.spi.container.JavaMethodInvokerFactory$1.invoke(JavaMethodInvokerFactory.java:60)
at com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$ResponseOutInvoker._dispatch(AbstractResourceMethodDispatchProvider.java:205)
at com.sun.jersey.server.impl.model.method.dispatch.ResourceJavaMethodDispatcher.dispatch(ResourceJavaMethodDispatcher.java:75)
at com.sun.jersey.server.impl.uri.rules.HttpMethodRule.accept(HttpMethodRule.java:288)
at com.sun.jersey.server.impl.uri.rules.SubLocatorRule.accept(SubLocatorRule.java:134)
at ...
ROOT CAUSE:
hbase.client.keyvalue.maxsize is set too low.
RESOLUTION:
Set hbase.client.keyvalue.maxsize=0, which disables the client-side KeyValue size check. Just be careful with this: a very large KeyValue (>1-2 GB) can have performance implications.
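Before changing it, you can check what the client configuration currently sets (the path below is the usual HDP client config location; if the property is absent, the client falls back to its built-in default, roughly 10 MB):
grep -A1 'hbase.client.keyvalue.maxsize' /etc/hbase/conf/hbase-site.xml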
12-25-2016
07:48 PM
Problem:
A time to live (TTL) was never set for collection data on the SolrCloud server, so Ranger audit documents filled up the disks on the nodes.
Solution:
1. Delete the collection through the Solr API (the Ranger audit archive was stored in HDFS anyway). The following cleared up disk space:
http://<solr_host>:<solr_port>/solr/admin/collections?action=DELETE&name=collection
2. Set the configuration in solr to have a time to live.
a. Download each of the following configs from ZooKeeper: schema.xml, solrconfig.xml, and managed-schema.
/opt/hostname-hdpsearch/solr/server/scripts/cloud-scripts/zkcli.sh -zkhost <zookeeper host>:<zookeeper port> -cmd get /ranger_audits/configs/ranger_audits/solrconfig.xml >/tmp/solrconfig.xml
/opt/hostname-hdpsearch/solr/server/scripts/cloud-scripts/zkcli.sh -zkhost <zookeeper host>:<zookeeper port> -cmd get /ranger_audits/configs/ranger_audits/schema.xml >/tmp/schema.xml
/opt/hostname-hdpsearch/solr/server/scripts/cloud-scripts/zkcli.sh -zkhost <zookeeper host>:<zookeeper port> -cmd get /ranger_audits/configs/ranger_audits/managed-schema >/tmp/managed-schema
b. Add the following to solrconfig.xml:
<updateRequestProcessorChain name="add-unknown-fields-to-the-schema">
  <processor class="solr.DefaultValueUpdateProcessorFactory">
    <str name="fieldName">_ttl_</str>
    <str name="value">+90DAYS</str>
  </processor>
  <processor class="solr.processor.DocExpirationUpdateProcessorFactory">
    <int name="autoDeletePeriodSeconds">86400</int>
    <str name="ttlFieldName">_ttl_</str>
    <str name="expirationFieldName">_expire_at_</str>
  </processor>
  <processor class="solr.FirstFieldValueUpdateProcessorFactory">
    <str name="fieldName">_expire_at_</str>
  </processor>
  <!-- keep any processors already defined at the end of this chain (e.g. solr.LogUpdateProcessorFactory, solr.RunUpdateProcessorFactory) -->
</updateRequestProcessorChain>
c. Add the following to schema.xml and managed-schema:
<field name="_expire_at_" type="tdate" multiValued="false" stored="true" docValues="true"/>
<field name="_ttl_" type="string" multiValued="false" indexed="true" stored="true"/>
d. Upload each edited file to ZooKeeper:
/opt/hostname-hdpsearch/solr/server/scripts/cloud-scripts/zkcli.sh -zkhost <zookeeper host>:<zookeeper port> -cmd putfile /ranger_audits/configs/ranger_audits/solrconfig.xml /tmp/solrconfig.xml
/opt/hostname-hdpsearch/solr/server/scripts/cloud-scripts/zkcli.sh -zkhost <zookeeper host>:<zookeeper port> -cmd putfile /ranger_audits/configs/ranger_audits/schema.xml /tmp/schema.xml
/opt/hostname-hdpsearch/solr/server/scripts/cloud-scripts/zkcli.sh -zkhost <zookeeper host>:<zookeeper port> -cmd putfile /ranger_audits/configs/ranger_audits/managed-schema /tmp/managed-schema
3. Make sure the ranger_audits replicas were removed from the Solr directories on the local filesystem of each node.
4. Lastly, issue the create command:
http://<host>:8983/solr/admin/collections?action=CREATE&name=ranger_audits&collection.configName=ranger_audits&numShards=2
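The same Collections API calls can be issued with curl instead of a browser (host and port are the same placeholders used above):
curl "http://<solr_host>:<solr_port>/solr/admin/collections?action=DELETE&name=ranger_audits"
curl "http://<solr_host>:<solr_port>/solr/admin/collections?action=CREATE&name=ranger_audits&collection.configName=ranger_audits&numShards=2"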
MORE INFO:
Check out more general Solr info and Ambari Infra TTL info here: https://community.hortonworks.com/articles/63853/solr-ttl-auto-purging-solr-documents-ranger-audits.html
03-16-2017
10:39 AM
Hi, this bug can have consequences for Spark / YARN as well. We were encountering Out of Memory conditions running Spark jobs; no matter how much memory we assigned, we kept exhausting it completely. This behaviour disappeared when we applied the fix listed here. I'll post back when I know more about the root cause and the link between the issues. Regards, Christophe
12-23-2016
04:51 PM
3 Kudos
There was a failure during "Configure Ambari Identity", but the retry passed, so I thought it was not really a problem. I am sure the sudo rule is the problem. Will try again and let you know the outcome. Update: fixed the sudo permission and got another error: "you must have a tty to run sudo". This turns out to be related to a sudo setting; using visudo to comment out requiretty fixed the problem:
visudo
#Defaults requiretty
12-20-2016
06:36 PM
3 Kudos
Hi, we have an official document for deploying HDP on VMs: https://hortonworks.com/wp-content/uploads/2014/02/1514.Deploying-Hortonworks-Data-Platform-VMware-vSphere-0402161.pdf. It has a reference link for the HVE (NodeGroup) feature, which includes details you may want to know.
04-24-2018
09:04 PM
Keep in mind, the Taxonomy feature is still in Tech Preview (i.e., not recommended for production use) and is not supported. Taxonomy will be production ready (GA) in HDP 3.0.