Member since
05-21-2018
47
Posts
0
Kudos Received
1
Solution
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 1702 | 09-10-2018 01:18 AM
02-14-2019
02:18 AM
@AmitAdhau Could you kindly help me with the steps to deploy TLS using a self-signed certificate?
01-31-2019
09:57 PM
Could you share the steps for the JCE policy files? I have installed /usr/java/jdk1.8.0_171/jre/bin/security/US_export_policy.jar and /usr/java/jdk1.8.0_171/jre/bin/security/local_policy.jar.
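For reference, the JCE unlimited-strength policy jars normally belong under `jre/lib/security`, not `jre/bin/security`. A minimal sketch, assuming the two jars were already downloaded into the current directory and the JDK path matches the one above:

```shell
# Sketch: install the JCE unlimited-strength policy jars.
# Assumption: US_export_policy.jar and local_policy.jar are in the
# current directory; adjust JAVA_HOME to your JDK install path.
JAVA_HOME=/usr/java/jdk1.8.0_171
cp US_export_policy.jar local_policy.jar "$JAVA_HOME/jre/lib/security/"
# Verify they are in place:
ls -l "$JAVA_HOME/jre/lib/security/"*policy.jar
```

After copying, restart the JVM-based services so they pick up the new policy files.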
01-31-2019
09:55 PM
Any suggestions on this?
09-10-2018
01:18 AM
It's there, at the bottom of http://archive.cloudera.com/cm5/redhat/7/x86_64/cm/ . For the repo file, see the bottom of that page.
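For anyone following along, the repo file from the bottom of that page can be dropped into `/etc/yum.repos.d/`. A sketch, assuming the file at that path is named `cloudera-manager.repo` (check the actual listing and adjust if it differs):

```shell
# Sketch: add the Cloudera Manager 5 yum repo on RHEL/CentOS 7.
# Assumption: the repo file is named cloudera-manager.repo in the listing.
wget http://archive.cloudera.com/cm5/redhat/7/x86_64/cm/cloudera-manager.repo \
     -O /etc/yum.repos.d/cloudera-manager.repo
yum clean all && yum makecache
```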
07-28-2018
06:44 AM
I followed this blog but it didn't work:
https://michlstechblog.info/blog/linux-kerberos-authentification-against-windows-active-directory/#more-1628
https://community.cloudera.com/t5/Cloudera-Manager-Installation/Import-KDC-Account-Manager-Credentials-Command-failed/m-p/48519#M8974
https://community.cloudera.com/t5/Cloudera-Manager-Installation/Enabling-Keberos-for-cluster-fails-when-importing-KDC/td-p/65736/page/2
Nothing worked for me. My krb5.conf file:
[root@aa1 singhkabir880]# cat /etc/krb5.conf
# Configuration snippets may be placed in this directory as well
includedir /etc/krb5.conf.d/

[logging]
 default = FILE:/var/log/krb5libs.log
 kdc = FILE:/var/log/krb5kdc.log
 admin_server = FILE:/var/log/kadmind.log

[libdefaults]
 dns_lookup_realm = false
 ticket_lifetime = 24h
 renew_lifetime = 7d
 forwardable = true
 rdns = false
 default_realm = HADOOP.COM
 default_ccache_name = KEYRING:persistent:%{uid}
 default_tgs_enctypes = rc4-hmac
 default_tkt_enctypes = rc4-hmac
 permitted_enctypes = rc4-hmac

[realms]
 HADOOP.COM = {
  kdc = aa1.c.true-shore-210608.internal
  admin_server = aa1.c.true-shore-210608.internal
  supported_enctypes = rc4-hmac
 }

[domain_realm]
 .hadoop.com = HADOOP.COM
 hadoop.com = HADOOP.COM
[root@aa1 singhkabir880]#

Kindly suggest how to move further. Thanks
07-28-2018
06:43 AM
I am getting the below error when I try to enable Kerberos using Cloudera Manager, after setting up the KDC server and admin principal.
Enable Kerberos for Cluster 1
Import KDC Account Manager Credentials
Command Status: Failed, Jul 28, 1:33:43 PM, 5.02s
/usr/share/cmf/bin/import_credentials.sh failed with exit code 1 and output of <<
+ export PATH=/usr/kerberos/bin:/usr/kerberos/sbin:/usr/lib/mit/sbin:/usr/sbin:/usr/lib/mit/bin:/usr/bin:/sbin:/usr/sbin:/bin:/usr/bin
+ PATH=/usr/kerberos/bin:/usr/kerberos/sbin:/usr/lib/mit/sbin:/usr/sbin:/usr/lib/mit/bin:/usr/bin:/sbin:/usr/sbin:/bin:/usr/bin
+ KEYTAB_OUT=/var/run/cloudera-scm-server/cmf7587283748839759414.keytab
+ USER=admin/admin@HADOOP.COM
+ PASSWD=REDACTED
+ KVNO=1
+ SLEEP=0
+ RHEL_FILE=/etc/redhat-release
+ '[' -f /etc/redhat-release ']'
+ set +e
+ grep Tikanga /etc/redhat-release
+ '[' 1 -eq 0 ']'
+ '[' 0 -eq 0 ']'
+ grep 'CentOS release 5' /etc/redhat-release
+ '[' 1 -eq 0 ']'
+ '[' 0 -eq 0 ']'
+ grep 'Scientific Linux release 5' /etc/redhat-release
+ '[' 1 -eq 0 ']'
+ set -e
+ '[' -z /var/run/cloudera-scm-server/krb52763805900583239514.conf ']'
+ echo 'Using custom config path '\''/var/run/cloudera-scm-server/krb52763805900583239514.conf'\'', contents below:'
+ cat /var/run/cloudera-scm-server/krb52763805900583239514.conf
+ IFS=' '
+ read -a ENC_ARR
+ for ENC in '"${ENC_ARR[@]}"'
+ echo 'addent -password -p admin/admin@HADOOP.COM -k 1 -e rc4-hmac'
+ ktutil
+ '[' 0 -eq 1 ']'
+ echo REDACTED
+ echo 'wkt /var/run/cloudera-scm-server/cmf7587283748839759414.keytab'
+ chmod 600 /var/run/cloudera-scm-server/cmf7587283748839759414.keytab
+ kinit -k -t /var/run/cloudera-scm-server/cmf7587283748839759414.keytab admin/admin@HADOOP.COM
kinit: KDC has no support for encryption type while getting initial credentials
>>
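The kinit failure above ("KDC has no support for encryption type") usually means the `admin/admin@HADOOP.COM` principal has no key matching the enctypes the client is restricted to (here `rc4-hmac`). A hedged troubleshooting sketch, assuming an MIT KDC on the same host:

```shell
# Sketch: check which enctypes the principal's keys actually use
# (run on the KDC host; MIT Kerberos assumed).
kadmin.local -q "getprinc admin/admin"
# If no rc4-hmac key is listed, re-key the principal with one, e.g.:
kadmin.local -q "cpw -e rc4-hmac:normal admin/admin"
# Also confirm supported_enctypes in /var/kerberos/krb5kdc/kdc.conf
# includes rc4-hmac, then restart krb5kdc and retry the CM wizard.
```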
Labels:
- Kerberos
07-26-2018
10:35 PM
Do I need two Hive Metastore servers to run two HiveServer2 instances, or will a single Metastore server cater to requests from both HiveServer2 instances?
07-26-2018
12:20 AM
I have a 3-node cluster and installed HAProxy on node1.
The Hive services running on node1 are: Metastore Server, WebHCat Server, and HiveServer2.
One instance of HiveServer2 is running on node2, and it's working fine.
But the node1 HiveServer2 is not running and is showing this error:
Error starting HiveServer2: could not start ThriftBinaryCLIService
org.apache.thrift.transport.TTransportException: Could not create ServerSocket on address 0.0.0.0/0.0.0.0:10000.
at org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:109)
at org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:91)
at org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:87)
at org.apache.hadoop.hive.common.auth.HiveAuthUtils.getServerSocket(HiveAuthUtils.java:87)
at org.apache.hive.service.cli.thrift.ThriftBinaryCLIService.run(ThriftBinaryCLIService.java:67)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.net.SocketException: Address already in use (Listen failed)
at java.net.PlainSocketImpl.socketListen(Native Method)
at java.net.AbstractPlainSocketImpl.listen(AbstractPlainSocketImpl.java:399)
at java.net.ServerSocket.bind(ServerSocket.java:376)
at org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:106)
On the CLI it shows that HAProxy is using port 10000:
[root@cm1 singhkabir880]# netstat -tulpn | grep 10000
tcp 0 0 0.0.0.0:10000 0.0.0.0:* LISTEN 882/haproxy
[root@cm1 singhkabir880]#
I added the HiveServer2 role first, then installed the HAProxy service and made the changes below in /etc/haproxy/haproxy.cfg:
listen hiveserver2 :10000
#haproxy will listen in port 10000 for hiveserver2 client requests.
mode tcp
option tcplog
balance leastconn
#tcp – connection mode between haproxy to hive servers
#leastconn – requests will be sent to server with less connection
server server1 node1:10000
server server2 node2:10000
What do I need to do in order to start the newly added HiveServer2?
Kindly suggest.
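One way out of the bind conflict is to give the HAProxy frontend a port other than 10000, since the local HiveServer2 on node1 also needs 10000. A sketch of the haproxy.cfg change (port 10001 is an assumption; clients would then connect to node1:10001):

```
listen hiveserver2
    bind *:10001              # assumption: avoid clashing with node1's HiveServer2 on 10000
    mode tcp
    option tcplog
    balance leastconn
    server server1 node1:10000 check
    server server2 node2:10000 check
```

Alternatively, keep HAProxy on 10000 and move the node1 HiveServer2 to a different port via the HiveServer2 port setting (hive.server2.thrift.port) in Cloudera Manager, then update the backend `server` line to match.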
Labels:
- Apache Hive
- Cloudera Manager
07-26-2018
12:13 AM
Thanks @bgooley, it's running now. I installed HBase again and it's working fine now. Thanks for the support.
07-26-2018
12:08 AM