Member since: 07-11-2017
Posts: 42
Kudos Received: 1
Solutions: 0
06-15-2018
08:15 AM
Hi @manuroman,
We get this error sometimes, and other times it doesn't show up for the same query. What might be the issue, any idea?
Thanks,
Renuka
06-07-2018
11:37 AM
Hi @manuroman,
These are the settings used in R for connecting to Impala, but I am still getting the same error.

library("rJava")
.jinit(parameters = c("-Xms6g", "-Xmx20g"))
library("RJDBC")

Do I need to change the settings somewhere else, or within R?
Thanks,
Renuka K
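For reference, a minimal sketch of how those settings fit into a full connection sequence. The driver class, jar path, host, port, and database below are placeholders (assuming the Cloudera Impala JDBC 4.1 driver), not the exact values from the environment in this thread:

library("rJava")
# JVM heap options must be passed before the JVM starts
.jinit(parameters = c("-Xms6g", "-Xmx20g"))
library("RJDBC")

# Placeholder driver class and jar path -- adjust to the driver actually installed
drv <- JDBC(driverClass = "com.cloudera.impala.jdbc41.Driver",
            classPath   = "/opt/impala-jdbc/ImpalaJDBC41.jar")

# Placeholder host, port, and database
conn <- dbConnect(drv, "jdbc:impala://edge-node.example.com:21050/default")

df <- dbGetQuery(conn, "SELECT 1")
dbDisconnect(conn)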
06-07-2018
07:30 AM
Hi,
When we query Impala using a JDBC connection from R or a dashboard, it throws this error quite often.

Error message:
org.apache.thrift.transport.TTransportException
    at org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:132)
    at org.apache.thrift.transport.TTransport.readAll(TTransport.java:84)
    at org.apache.thrift.protocol.TBinaryProtocol.readAll(TBinaryProtocol.java:378)
    at org.apache.thrift.protocol.TBinaryProtocol.readI32(TBinaryProtocol.java:297)
    at org.apache.thrift.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:204)
    at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:69)
    at org.apache.hive.service.cli.thrift.TCLIService$Client.recv_FetchResults(TCLIService.java:489)
    at org.apache.hive.service.cli.thrift.TCLIService$Client.FetchResults(TCLIService.java:476)
    at org.apache.hive.jdbc.HiveQueryResultSet.next(HiveQueryResultSet.java:225)
    at info.urbanek.Rpackage.RJDBC.JDBCResultPull.fetch(JDBCResultPull.java:77)
Error in .jcall(rp, "I", "fetch", stride, block) :
    java.sql.SQLException: Error retrieving next row

Sometimes it works fine and other times it throws the TTransportException. Did anyone face this issue? Could anyone help me with the root cause and what needs to be done in this case?
Thanks,
Renuka
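Since the failure is intermittent, one possible workaround on the R side is to wrap the fetch in a retry. This is only a sketch: it assumes an open RJDBC connection "conn" like the one above, the attempt count, wait time, and table name are arbitrary placeholders, and if the server has actually closed the session the connection would need to be reopened rather than simply retried.

# Retry a query a few times when the JDBC fetch fails mid-stream.
# This only works around the symptom -- it does not fix whatever is
# dropping the Thrift transport on the server side.
run_with_retry <- function(conn, sql, attempts = 3, wait_sec = 5) {
  for (i in seq_len(attempts)) {
    result <- tryCatch(
      dbGetQuery(conn, sql),
      error = function(e) {
        message(sprintf("Attempt %d failed: %s", i, conditionMessage(e)))
        NULL
      }
    )
    if (!is.null(result)) return(result)
    Sys.sleep(wait_sec)
  }
  stop(sprintf("Query still failing after %d attempts", attempts))
}

# Placeholder table name
df <- run_with_retry(conn, "SELECT * FROM some_db.some_table LIMIT 100")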
Labels:
- Apache Hive
- Apache Impala
09-01-2017
05:44 AM
Hi, Setting those values did not help. CDH/Hive: hive-common-1.1.0-cdh5.12.0.jar
08-31-2017
03:28 PM
Hi,
When I am trying to run the Hive query, I am getting the below error. Could anyone tell me what the issue might be?

Total jobs = 11
Launching Job 1 out of 11
Number of reduce tasks not specified. Estimated from input data size: 2
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapreduce.job.reduces=<number>
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.mr.MapredLocalTask
java.lang.RuntimeException: Error caching map.xml: java.io.InterruptedIOException: Call interrupted
    at org.apache.hadoop.hive.ql.exec.Utilities.setBaseWork(Utilities.java:757)
    at org.apache.hadoop.hive.ql.exec.Utilities.setMapWork(Utilities.java:692)
    at org.apache.hadoop.hive.ql.exec.Utilities.setMapRedWork(Utilities.java:684)
    at org.apache.hadoop.hive.ql.exec.mr.ExecDriver.execute(ExecDriver.java:370)
    at org.apache.hadoop.hive.ql.exec.mr.MapRedTask.execute(MapRedTask.java:142)
    at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:214)
    at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:99)
    at org.apache.hadoop.hive.ql.exec.TaskRunner.run(TaskRunner.java:79)
Caused by: java.io.InterruptedIOException: Call interrupted
    at org.apache.hadoop.ipc.Client.call(Client.java:1496)
    at org.apache.hadoop.ipc.Client.call(Client.java:1439)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:230)
    at com.sun.proxy.$Proxy17.mkdirs(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.mkdirs(ClientNamenodeProtocolTranslatorPB.java:558)
    at sun.reflect.GeneratedMethodAccessor19.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:260)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:104)
    at com.sun.proxy.$Proxy18.mkdirs(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:3113)
    at org.apache.hadoop.hdfs.DFSClient.mkdirs(DFSClient.java:3080)
    at org.apache.hadoop.hdfs.DistributedFileSystem$19.doCall(DistributedFileSystem.java:1001)
    at org.apache.hadoop.hdfs.DistributedFileSystem$19.doCall(DistributedFileSystem.java:997)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirsInternal(DistributedFileSystem.java:997)
    at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirs(DistributedFileSystem.java:989)
    at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:1970)
    at org.apache.hadoop.hive.ql.exec.Utilities.setPlanPath(Utilities.java:775)
    at org.apache.hadoop.hive.ql.exec.Utilities.setBaseWork(Utilities.java:701)
    ... 7 more
Job Submission failed with exception 'java.lang.RuntimeException(Error caching map.xml: java.io.InterruptedIOException: Call interrupted)'
Labels:
- Apache Hive
08-11-2017
06:56 AM
1 Kudo
Hi,
I removed 2 nodes from the cluster and tried to add them back from managed hosts. The download and distribute steps succeed, but it gets stuck at activation. The 2 nodes do get added to the cluster and show up under hosts, but the CDH version is NONE (host in bad health).

Host Agent status:
This host is in contact with the Cloudera Manager Server.
This host is not in contact with the Host Monitor.

I am not able to run any services on these nodes, and connections are refused on all ports except 7180.

Error in the cloudera-scm-agent log:
Caught unexpected exception in main loop.
Traceback (most recent call last):
File "/usr/lib64/cmf/agent/build/env/lib/python2.7/site-packages/cmf-5.9.0-py2.7.egg/cmf/agent.py", line 758, in start
self._init_after_first_heartbeat_response(resp_data)
File "/usr/lib64/cmf/agent/build/env/lib/python2.7/site-packages/cmf-5.9.0-py2.7.egg/cmf/agent.py", line 938, in _init_after_first_heartbeat_response
self.client_configs.load()
File "/usr/lib64/cmf/agent/build/env/lib/python2.7/site-packages/cmf-5.9.0-py2.7.egg/cmf/client_configs.py", line 686, in load
new_deployed.update(self._lookup_alternatives(fname))
File "/usr/lib64/cmf/agent/build/env/lib/python2.7/site-packages/cmf-5.9.0-py2.7.egg/cmf/client_configs.py", line 432, in _lookup_alternatives
return self._parse_alternatives(alt_name, out)
File "/usr/lib64/cmf/agent/build/env/lib/python2.7/site-packages/cmf-5.9.0-py2.7.egg/cmf/client_configs.py", line 444, in _parse_alternatives
path, _, _, priority_str = line.rstrip().split(" ")
ValueError: too many values to unpack

Could you please help me with this?
08-10-2017
07:56 AM
We are facing the same issue. What steps did you follow to install the host again?
08-02-2017
01:53 PM
I have the same problem. Could you please elaborate on the solution?
07-31-2017
09:34 AM
Should I pass the /etc/hosts file (covering all nodes in the cluster, including the edge node, name node, and data nodes) in the Java code, instead of getting it from the host I am connecting to (the edge node)?
07-26-2017
01:25 PM
No, this didn't solve my issue.