Member since: 07-21-2016
Posts: 101
Kudos Received: 10
Solutions: 4
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 3804 | 02-15-2020 05:19 PM
 | 69099 | 10-02-2017 08:22 PM
 | 1493 | 09-28-2017 01:55 PM
 | 1708 | 07-25-2016 04:09 PM
02-28-2022
09:04 AM
Hello @kums, HS2 plays a key role in executing queries and retrieving data from the filesystem. Please follow the link below for HS2 heap size best practices: Hiveserver2 Heap Size Recommendations
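As a rough sketch only (the linked article has the actual sizing guidance): in HDP the HiveServer2 heap is typically set through Ambari's hive-env template, which ends up as HADOOP_HEAPSIZE for the HS2 JVM. The value below is a placeholder, not a recommendation:

if [ "$SERVICE" = "hiveserver2" ]; then
  # Heap for the HiveServer2 JVM, in MB; size it per the linked recommendations
  export HADOOP_HEAPSIZE=8192
fi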
08-26-2021
04:09 AM
Access the logs from the RM UI: RM UI > Applications > click on your failed application > Application Master > then go to the logs at the end of the screen.
- It's a good idea to check your container logs, which may show what is happening.
- If you are using Hive on Tez, this kind of issue appears when a class is missing as the container starts loading the Tez classes, because count(*) and INSERT queries submit a new application through YARN. So check the Tez classpath, and check that you have the correct tez.tar.gz on HDFS containing all the needed jars (commands sketched below).
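If the UI route is inconvenient, the same checks can be done from the command line (the application ID below is a placeholder, and the exact tez.tar.gz path under /hdp/apps depends on your HDP version):

yarn logs -applicationId application_1234567890123_0001
hdfs dfs -ls /hdp/apps/*/tez/tez.tar.gz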
02-15-2020
05:19 PM
Alright, all good now. The problem was with AD. In our environment the KDC is AD, and in AD there are two fields: "User logon name" and "User logon name (pre-Windows 2000)". Usually the values of these attributes are the same. In this case, all the user names were generated automatically when we Kerberized the cluster, and for those accounts "User logon name" and "User logon name (pre-Windows 2000)" were different; the "User logon name (pre-Windows 2000)" was a 20-character alphanumeric string. In a Kerberized cluster the service accounts have to impersonate all the Hadoop service accounts like "nn", "dn", and "rm". So we edited all the service accounts in AD, i.e., "User logon name (pre-Windows 2000)" was made the same as "User logon name". In the HDFS config there is a property "Auth_to_Local mappings"; we added rules there to convert the pattern (the service account name in AD) to the local service users (hdfs, nn, hive, dn, etc.).
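For anyone hitting the same thing, Auth_to_Local rules generally look like the lines below (the realm and principal names here are illustrative placeholders, not our actual AD values):

RULE:[2:$1@$0](nn@AD.EXAMPLE.COM)s/.*/hdfs/
RULE:[2:$1@$0](dn@AD.EXAMPLE.COM)s/.*/hdfs/
RULE:[2:$1@$0](hive@AD.EXAMPLE.COM)s/.*/hive/
DEFAULT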
11-04-2019
02:26 AM
How did you find that some process is destroying the ticket? I am facing the same issue.
01-29-2019
03:11 AM
Hi @Kumar Veerappan, I guess you are referring to this blog for customizing the Ambari alert: https://cwiki.apache.org/confluence/display/AMBARI/Customizing+the+Alert+Template. You can change the alert to your desired one by editing the <subject></subject> element in alert-templates.xml: https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/alert-templates.xml#L21 For example, I made some changes to the subject for my business logic: <subject>
<![CDATA[
#set( $criticalServices = $summary.getServicesByAlertState("CRITICAL"))
#if( $summary.getCriticalCount() == 0 )
There is no new critical Alert!
#{else}
We have $summary.getCriticalCount() Critical Alert(s). Alert Details:
#foreach( $service in $criticalServices )
#foreach( $alert in $summary.getAlerts($service, "CRITICAL") )
Service Name: $alert.getServiceName(), Host Name: $alert.getHostName()
#end
#end
#end
]]>
</subject> This does nothing but list the critical alert count in the email subject. See if you can make use of something from it. Please accept the answer if it helped.
09-05-2018
04:05 PM
Hi @Kumar Veerappan, I am not sure how to answer that without knowing your cluster specifications and architecture, but won't automatic failovers help you? If it's the Ambari alerts that are annoying you, you can investigate and edit the alerts accordingly (you can edit the CPU percentage at which the alert triggers), or even disable them if they're not required. Hope this helps; please accept the answer if it did.
08-15-2018
11:29 PM
Hello @Kumar Veerappan! Looks like you can't reach the REALM. Check your /etc/krb5.conf, here's my example: MYMAC:etc vmurakami$ cat /etc/krb5.conf
[libdefaults]
renew_lifetime = 7d
forwardable = true
default_realm = EXAMPLE.COM
ticket_lifetime = 24h
dns_lookup_realm = false
dns_lookup_kdc = false
default_ccache_name = /tmp/krb5cc_%{uid}
#default_tgs_enctypes = aes des3-cbc-sha1 rc4 des-cbc-md5
#default_tkt_enctypes = aes des3-cbc-sha1 rc4 des-cbc-md5
[domain_realm]
.example.com = EXAMPLE.COM
example.com = EXAMPLE.COM
[logging]
default = FILE:/var/log/krb5kdc.log
admin_server = FILE:/var/log/kadmind.log
kdc = FILE:/var/log/krb5kdc.log
[realms]
EXAMPLE.COM = {
admin_server = vmurakami-1
kdc = vmurakami-1
} And also, after you get the keytab (if you don't have it, copy the same valid keytab used on the HS2 hosts to your Mac, if possible), check that it's valid with the following command: [root@vmurakami-1 ~]# klist -ef
Ticket cache: FILE:/tmp/krb5cc_0
Default principal: zookeeper/vmurakami-1@EXAMPLE.COM
Valid starting Expires Service principal
08/15/2018 23:23:31 08/16/2018 23:23:31 krbtgt/EXAMPLE.COM@EXAMPLE.COM
Flags: FI, Etype (skey, tkt): aes256-cts-hmac-sha1-96, aes256-cts-hmac-sha1-96 If you're still having issues, please share with us the whole error. Hope this helps!
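If klist shows no valid ticket at all, obtain one from the keytab first with kinit and then re-check (the keytab path and principal below are placeholders; use your own):

kinit -kt /etc/security/keytabs/hive.service.keytab hive/vmurakami-1@EXAMPLE.COM
klist -ef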
08-13-2018
08:26 PM
You can use the command below to install Hive and get Beeline: brew install hive Or you can gather all the jars the beeline command needs: launch Beeline on your HDP cluster, run lsof -p <beelineclientpid> to find the loaded jars, and copy them to your local Mac. A better solution would be to use a JDBC client tool such as DBeaver or DBVisualizer for Mac.
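Once Beeline works locally, connecting to HS2 is just a JDBC URL (host, port, and the Kerberos principal below are placeholders; the principal part applies only to a Kerberized HS2):

beeline -u "jdbc:hive2://hs2-host.example.com:10000/default;principal=hive/_HOST@EXAMPLE.COM"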
07-13-2018
08:30 PM
Good to know @Kumar Veerappan! 🙂
06-26-2018
11:35 PM
3 Kudos
@Kumar Veerappan, is umask set properly in your cluster? Refer to the article below for details: https://community.hortonworks.com/content/supportkb/150234/error-path-disk3hadoopyarnlocalusercachesomeuserap.html
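As a quick first check, look at the effective umask of the service users on the worker nodes; HDP generally expects 0022 (the yarn user below is an assumption, substitute the user from your error):

su - yarn -c umask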