Member since: 05-07-2015
Posts: 11
Kudos Received: 1
Solutions: 1

My Accepted Solutions

Title | Views | Posted
---|---|---
| 8883 | 05-13-2016 12:34 AM |
05-10-2019 02:29 AM
Hi, We are also seeing a problem: a query run from Hue gets completed, but when we look at the running queries for Impala in Cloudera Manager, we see it is still in the executing state. Can you please help us understand what could be the reason? Regards, Ajay Chaudhary
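For context, one possible explanation (an assumption on my part, not confirmed in this thread): Cloudera Manager reports a query as executing until the client closes its handle, and Hue tends to keep handles open after the results are fetched. A common mitigation is to set idle timeouts on the Impala daemons, e.g. via the Impala Daemon command-line argument snippet in Cloudera Manager; the timeout values below are illustrative, not recommendations from this thread:

```
# Cancel queries that have been idle (no rows fetched) for 10 minutes:
--idle_query_timeout=600
# Close client sessions idle for 30 minutes, which also releases their queries:
--idle_session_timeout=1800
```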
02-12-2019 09:38 PM
Hi, We are unable to connect to the Impala daemon through impala-shell and Hue after we update the load balancer property in Cloudera Manager. Not sure if there is a bug around this. If you can, please pass on the steps for how to merge the existing keytab with the proxy's keytab. Regards, Ajay Chaudhary
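In case it helps, keytabs are usually merged with MIT Kerberos `ktutil`. A sketch of such a session follows; the file paths are placeholders I made up, not paths from this cluster:

```
$ ktutil
ktutil:  rkt /etc/security/impala.keytab         # read the existing Impala keytab
ktutil:  rkt /etc/security/proxy.keytab          # read the load balancer (proxy) keytab
ktutil:  wkt /etc/security/impala-merged.keytab  # write the combined keytab
ktutil:  quit
$ klist -kt /etc/security/impala-merged.keytab   # verify both principals are present
```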
02-11-2019 11:59 PM
Please note this cluster is set up on AWS EC2 instances. An ELB created on AWS forwards requests arriving on port 25003 to the EC2 machine hosting HAProxy; HAProxy runs on an EC2 machine that does not host Impala daemons, so it forwards the request to another EC2 machine which hosts an Impala daemon. Regards, Ajay
02-11-2019 11:48 PM
Hi, Thank you for your response. We have not set the load balancer property value in the Impala configurations. Here are the details:

Flag | Description | Default | Current value
---|---|---|---
internal_principals_whitelist (string) | (Advanced) Comma-separated list of additional usernames authorized to access Impala's internal APIs. Defaults to 'hdfs', which is the system user that in certain deployments must access catalog server APIs. | hdfs | hdfs
be_principal (string) | Kerberos principal for backend network connections only, overriding --principal if set. Must not be set if --principal is not set. | |
principal (string) | Kerberos principal. If set, both client and backend network connections will use Kerberos encryption and authentication. Kerberos will not be used for internal or external connections if this is not set. | | impala/master2-impala-20.yodlee.com@YODLEEINSIGHTS.COM

Regards, Ajay Chaudhary
02-11-2019 08:51 PM
Hi All,
We need your support on issues we are facing currently.
We are trying to connect to Impala using the Cloudera ODBC driver with HAProxy and an Elastic Load Balancer. It is failing with the below error.
FAILED!
[Cloudera][DriverSupport] (1110) Unexpected response received from server. Please ensure the server host and port specified for the connection are correct and confirm if SSL should be enabled for the connection.
Cluster Details -
CDH Version 6.1
Cloudera ODBC Driver 2.6
Impala daemons run on machines dn1, dn2, and master2.
ELB points to only master2 daemon for now.
HAProxy points to only master2 daemon for now.
Cluster is kerberos enabled.
Let us assume
ELB Name - elb-test-odbc.com
HAProxy name - haproxy-name.com
Below combination works -
When we put the ELB name in HOST and the Impala daemon name in Host FQDN (under Kerberos), it works.
Below combination DOES NOT work -
When we put the ELB name in HOST and haproxy-name.com in Host FQDN (under Kerberos), it does not work.
We actually want to achieve resiliency so that we are not dependent on a single Impala daemon.
Can someone please let us know how we can make this happen? Any help on this would be greatly appreciated.
Regards
Ajay chaudhary
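For readers hitting the same issue, here is a minimal haproxy.cfg sketch for spreading ODBC/JDBC traffic (HiveServer2 protocol) across all three daemons named in this post. The backend port (21050) and the balance mode are assumptions, not settings taken from this cluster; with Kerberos, `balance source` is commonly used so that an authenticated client stays pinned to one daemon:

```
listen impala-jdbc-odbc
    bind :25003
    mode tcp
    # 'source' keeps each client on one daemon, which matters for
    # Kerberos-authenticated sessions
    balance source
    server dn1     dn1:21050     check
    server dn2     dn2:21050     check
    server master2 master2:21050 check
```

Note also that the Impala documentation describes pointing the daemons' `principal` at the load balancer's host name (keeping `be_principal` as each daemon's own principal) when clients connect through a proxy, which lines up with the Host FQDN behaviour described above.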
Labels:
- Apache Impala
- Kerberos
05-13-2016 12:34 AM
1 Kudo
Hi All, Thanks for your help. The heap memory was not sized per the recommendation given in the link below, so we increased the memory and restarted the Hive Metastore Server, which also did not help. http://www.cloudera.com/documentation/enterprise/latest/topics/cdh_ig_hiveserver2_configure.html Looks like some process was holding/blocking memory, and we had to restart the complete cluster to resolve this problem. Thank you once again for your input. Regards, Ajay Chaudhary
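For anyone landing here later, the heap for a non-Cloudera-Manager install is typically raised in hive-env.sh (in Cloudera Manager, the equivalent is the "Java Heap Size of Hive Metastore Server" setting). A sketch follows; the 8 GB figure is purely illustrative, not the sizing from the linked document, and enabling a heap dump on OOM can help identify what was holding the memory:

```
# hive-env.sh fragment (hypothetical values)
if [ "$SERVICE" = "metastore" ]; then
  export HADOOP_HEAPSIZE=8192   # MB; size per the Cloudera recommendation
  # Dump the heap on OutOfMemoryError so the retained objects can be inspected:
  export HADOOP_OPTS="$HADOOP_OPTS -XX:+HeapDumpOnOutOfMemoryError \
      -XX:HeapDumpPath=/tmp/hms_heapdump.hprof"
fi
```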
05-04-2016 02:50 AM
Hi All, We have started getting OutOfMemory errors for the Hive Metastore very frequently. Can you please let us know what could be the cause of this?

```
Exception in thread "pool-1-thread-145" java.lang.OutOfMemoryError: Java heap space
	at java.nio.ByteBuffer.wrap(ByteBuffer.java:350)
	at java.lang.StringCoding$StringDecoder.decode(StringCoding.java:137)
	at java.lang.StringCoding.decode(StringCoding.java:173)
	at java.lang.String.<init>(String.java:443)
	at java.lang.String.<init>(String.java:515)
	at org.apache.thrift.protocol.TBinaryProtocol.readStringBody(TBinaryProtocol.java:355)
	at org.apache.thrift.protocol.TBinaryProtocol.readString(TBinaryProtocol.java:347)
	at org.apache.hadoop.hive.metastore.api.FieldSchema$FieldSchemaStandardScheme.read(FieldSchema.java:490)
	at org.apache.hadoop.hive.metastore.api.FieldSchema$FieldSchemaStandardScheme.read(FieldSchema.java:476)
	at org.apache.hadoop.hive.metastore.api.FieldSchema.read(FieldSchema.java:410)
	at org.apache.hadoop.hive.metastore.api.StorageDescriptor$StorageDescriptorStandardScheme.read(StorageDescriptor.java:1309)
	at org.apache.hadoop.hive.metastore.api.StorageDescriptor$StorageDescriptorStandardScheme.read(StorageDescriptor.java:1288)
	at org.apache.hadoop.hive.metastore.api.StorageDescriptor.read(StorageDescriptor.java:1150)
	at org.apache.hadoop.hive.metastore.api.Table$TableStandardScheme.read(Table.java:1393)
	at org.apache.hadoop.hive.metastore.api.Table$TableStandardScheme.read(Table.java:1330)
	at org.apache.hadoop.hive.metastore.api.Table.read(Table.java:1186)
	at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$create_table_args$create_table_argsStandardScheme.read(ThriftHiveMetastore.java:19529)
	at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$create_table_args$create_table_argsStandardScheme.read(ThriftHiveMetastore.java:19514)
	at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$create_table_args.read(ThriftHiveMetastore.java:19461)
	at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:25)
	at org.apache.hadoop.hive.metastore.TUGIBasedProcessor.process(TUGIBasedProcessor.java:109)
	at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:244)
	at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
	at java.lang.Thread.run(Thread.java:662)

Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
	at org.apache.thrift.transport.TServerSocket.acceptImpl(TServerSocket.java:114)
	at org.apache.hadoop.hive.metastore.TServerSocketKeepAlive.acceptImpl(TServerSocketKeepAlive.java:39)
	at org.apache.hadoop.hive.metastore.TServerSocketKeepAlive.acceptImpl(TServerSocketKeepAlive.java:32)
	at org.apache.thrift.transport.TServerTransport.accept(TServerTransport.java:31)
	at org.apache.thrift.server.TThreadPoolServer.serve(TThreadPoolServer.java:131)
	at org.apache.hadoop.hive.metastore.HiveMetaStore.startMetaStore(HiveMetaStore.java:4245)
	at org.apache.hadoop.hive.metastore.HiveMetaStore.main(HiveMetaStore.java:4147)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
	at java.lang.reflect.Method.invoke(Method.java:597)
	at org.apache.hadoop.util.RunJar.main(RunJar.java:208)
```

Regards, Ajay
Labels:
- Apache Hadoop
- Apache Hive