Member since: 07-11-2017
Posts: 42
Kudos Received: 1
Solutions: 0
02-25-2019 01:31 PM
Hi, We are trying to upgrade our cluster from CDH 5.9 to CDH 6.1. We use Sqoop2 on 5.9, but since Sqoop2 is deprecated in CDH 6.1, how can we migrate our Sqoop2 metastore data to the upgraded CDH 6.1 cluster? Did anyone try this out? It would be of great help if anyone could throw some light on this. Thanks
02-19-2019 02:32 PM
Hi, We are facing the same issue: our queries running for more than 1 minute through a JDBC connection are failing with the query state "session closed", and the application reports "TTransportException" or "SQLException: Error retrieving next row". Did you figure out a fix for this? It would be great if you could throw some light on this. Thanks
06-15-2018 08:15 AM
Hi @manuroman, We are getting this error intermittently; sometimes it doesn't show up for the same query. What might be the issue, any idea? Thanks, Renuka
06-13-2018 11:35 AM
Hi, Our cluster is complaining about DataNode data directory space. Cloudera Manager shows the error below:

Bad: The following DataNode Data Directory is on a filesystem with less than 5.0 GiB of its space free. /mnt/hdfs/7/dfs/dn (free: 3.9 GiB (98.27%), capacity: 4.0 GiB)

But when we checked the disk, only 1% of the space is used:

/dev/s 3.7T 17G 3.7T 1% /mnt/hdfs/7

We are not able to figure out the root cause, as space is not the issue here. Can anyone throw some light on this? Thanks, Renuka
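One possibility worth ruling out, given that the alert reports a 4.0 GiB capacity while df reports 3.7T for the mount: the data directory may actually be backed by a different, smaller filesystem (for example, the root volume, if the directory was created while the mount was absent). A minimal Java sketch to check what the JVM sees for that path (the default path is the one from the alert):

```java
import java.io.File;

public class DataDirSpaceCheck {
    public static void main(String[] args) {
        // Path from the Cloudera Manager alert; pass another path to compare.
        File dir = new File(args.length > 0 ? args[0] : "/mnt/hdfs/7/dfs/dn");
        double gib = 1024.0 * 1024 * 1024;
        // If these numbers match the 4.0 GiB from the alert rather than the
        // 3.7T that df shows for /mnt/hdfs/7, the directory is likely backed
        // by a different (smaller) filesystem than the intended mount.
        System.out.printf("total:  %.1f GiB%n", dir.getTotalSpace() / gib);
        System.out.printf("usable: %.1f GiB%n", dir.getUsableSpace() / gib);
    }
}
```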
Labels: Cloudera Manager, HDFS
06-07-2018 11:37 AM
Hi @manuroman, These are the settings used in R for connecting to Impala, but I am still getting the same error:

library("rJava")                              # Java bridge for R
.jinit(parameters = c("-Xms6g", "-Xmx20g"))   # start the JVM with a larger heap; must run before any other rJava call, or the options are ignored
library("RJDBC")                              # JDBC connectivity from R

Do I need to change the settings somewhere else, or within R? Thanks, Renuka K
06-07-2018 07:36 AM
Hi, When we query Impala from a dashboard through a JDBC connection, the query fails on the first run (visible on the Cloudera Manager Impala queries page), and Impala automatically retries it and returns data to the dashboard on the second run. Because of this, the dashboard is slow, as it takes twice the query time. Could anyone throw some light on the root cause of this and a solution for it? Thanks, Renuka
Labels: Apache Hive, Apache Impala, Cloudera Manager
06-07-2018 07:30 AM
Hi, When we query Impala over a JDBC connection from R or a dashboard, it quite often throws this error:

org.apache.thrift.transport.TTransportException
    at org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:132)
    at org.apache.thrift.transport.TTransport.readAll(TTransport.java:84)
    at org.apache.thrift.protocol.TBinaryProtocol.readAll(TBinaryProtocol.java:378)
    at org.apache.thrift.protocol.TBinaryProtocol.readI32(TBinaryProtocol.java:297)
    at org.apache.thrift.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:204)
    at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:69)
    at org.apache.hive.service.cli.thrift.TCLIService$Client.recv_FetchResults(TCLIService.java:489)
    at org.apache.hive.service.cli.thrift.TCLIService$Client.FetchResults(TCLIService.java:476)
    at org.apache.hive.jdbc.HiveQueryResultSet.next(HiveQueryResultSet.java:225)
    at info.urbanek.Rpackage.RJDBC.JDBCResultPull.fetch(JDBCResultPull.java:77)
Error in .jcall(rp, "I", "fetch", stride, block) :
    java.sql.SQLException: Error retrieving next row

Sometimes it works fine, and other times it throws the TTransportException. Did anyone face this issue? Could anyone help me with the root cause and what needs to be done in this case? Thanks, Renuka
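For anyone debugging this, here is a minimal Java probe, assuming the Hive JDBC driver shown in the stack trace; the host, port, and auth settings in the URL are placeholders. It isolates whether the transport drops during the fetch loop, which is where the TTransportException surfaces above:

```java
import java.sql.*;

public class ImpalaFetchProbe {
    // Placeholder connection details; substitute your impalad host/port and auth settings.
    private static final String URL = "jdbc:hive2://impalad-host:21050/default;auth=noSasl";

    public static void main(String[] args) throws Exception {
        Class.forName("org.apache.hive.jdbc.HiveDriver"); // same driver as in the trace
        try (Connection conn = DriverManager.getConnection(URL);
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT 1")) {
            long rows = 0;
            try {
                while (rs.next()) rows++;   // the fetch loop where TTransportException surfaces
            } catch (SQLException e) {
                // A TTransportException cause here means the socket was closed
                // mid-fetch (e.g., by a timeout or a proxy between client and impalad),
                // rather than a query-side error.
                System.err.println("fetch failed after " + rows + " rows: " + e.getCause());
                throw e;
            }
            System.out.println("fetched " + rows + " rows");
        }
    }
}
```

If the failure is time-correlated (for example, always after about a minute, as in the 2019 post above), an idle or execution timeout between the client and impalad, such as a load balancer idle timeout or Impala's idle_session_timeout setting, is worth ruling out.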
Labels: Apache Hive, Apache Impala
06-07-2018 07:23 AM
Hi, No, I did not proceed. Did it work for you? Thanks
10-16-2017 07:24 AM
Hi, My destination cluster has edge nodes, so the export is trying to connect to an edge node on port 8020 rather than the NameNode, even though 8020 is the NameNode's HDFS port. Does this export work when connecting to an edge node instead of the NameNode? The NameNode can only be reached from the edge node. Thanks, Renuka
10-11-2017 02:33 PM
Hi, I have created a snapshot of an HBase table and cloned it, but when I try to export it to another cluster I get "connection refused". I am not running the command below as the hbase user. How can I change to the hbase user, and will that help this command succeed?

hbase org.apache.hadoop.hbase.snapshot.ExportSnapshot -snapshot tbl_snapshot -copy-to hdfs://<cluster2>:8020/hbase -mappers 16

Error:
Exception in thread "main" java.net.ConnectException: Call From cluster1 to cluster2:8020 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
    at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:791)
    at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:731)
    at org.apache.hadoop.ipc.Client.call(Client.java:1470)
    at org.apache.hadoop.ipc.Client.call(Client.java:1403)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:230)
    at com.sun.proxy.$Proxy16.getFileInfo(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:757)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:256)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:104)
    at com.sun.proxy.$Proxy17.getFileInfo(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:2102)
    at org.apache.hadoop.hdfs.DistributedFileSystem$19.doCall(DistributedFileSystem.java:1214)
    at org.apache.hadoop.hdfs.DistributedFileSystem$19.doCall(DistributedFileSystem.java:1210)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1210)
    at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1409)
    at org.apache.hadoop.hbase.snapshot.ExportSnapshot.run(ExportSnapshot.java:895)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
    at org.apache.hadoop.hbase.snapshot.ExportSnapshot.innerMain(ExportSnapshot.java:1024)
    at org.apache.hadoop.hbase.snapshot.ExportSnapshot.main(ExportSnapshot.java:1028)
Caused by: java.net.ConnectException: Connection refused
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:739)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:530)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:494)
    at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:609)
    at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:708)
    at org.apache.hadoop.ipc.Client$Connection.access$2800(Client.java:370)
    at org.apache.hadoop.ipc.Client.getConnection(Client.java:1519)
    at org.apache.hadoop.ipc.Client.call(Client.java:1442)
    ... 21 more

Could anyone help me with this? Thanks
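A quick way to test where the refusal happens before rerunning the full export: a minimal Hadoop FileSystem sketch (the NameNode address below is a placeholder) that exercises the same getFileInfo RPC the trace fails on:

```java
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class DestinationHdfsProbe {
    public static void main(String[] args) throws Exception {
        // Placeholder: use the destination cluster's active NameNode, not an edge node.
        URI nn = URI.create(args.length > 0 ? args[0] : "hdfs://cluster2-namenode:8020/");
        try (FileSystem fs = FileSystem.get(nn, new Configuration())) {
            // Same RPC path (getFileInfo) that ExportSnapshot's exists() check triggers.
            System.out.println("connected: " + fs.getFileStatus(new Path("/hbase")));
        }
    }
}
```

On the user question: running as the hbase user (e.g., via sudo -u hbase) mainly affects HDFS permissions on the destination /hbase directory; a "connection refused" on port 8020 points at the address being wrong or unreachable, not at the user.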
Labels: Apache HBase, Cloudera Manager, HDFS