Member since: 10-12-2017
Posts: 35
Kudos Received: 1
Solutions: 0
02-21-2018
08:15 PM
Hi Team, I am getting different results when executing the commands below: select count(*) returns 816293, while Select * from the same table returns 809,254 rows (which is the correct count). I also tried "analyze table TABLE_NAME partition(partition_date) compute statistics;" and even ran "MSCK REPAIR TABLE <tablename>;", but with no luck. Is there anything I am missing? I really need help at this point.
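In case it helps reproduce the mismatch, here is a minimal sketch of the checks involved (my_table and partition_date are placeholders for the actual table and partition column; a common cause of this symptom is Hive answering COUNT(*) from stale table statistics, so the first setting forces a real scan instead):

-- force COUNT(*) to scan the data rather than answer from table statistics
SET hive.compute.query.using.stats=false;
SELECT COUNT(*) FROM my_table;

-- refresh statistics for all partitions so stats-based answers match the data
ANALYZE TABLE my_table PARTITION(partition_date) COMPUTE STATISTICS;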
Labels:
- Apache Ambari
- Apache Hive
02-14-2018
07:45 PM
I got the error below because HiveServer2 went down. Once HiveServer2 was back up and running, the Sqoop command worked.
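For anyone hitting the same thing, a quick way to confirm HiveServer2 is reachable before re-running the job is a simple Beeline connection test (host and port are placeholders for your HiveServer2 endpoint):

beeline -u "jdbc:hive2://<hiveserver2-host>:10000" -e "SELECT 1;"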
02-13-2018
09:54 PM
Hey @Scott Shaw, thanks for the update. Before posting this issue I had already gone through the link you provided; it describes: FAILED Error: java.io.IOException: SQLException in nextKeyValue ... Caused by: java.sql.SQLException: Value '0000-00-00' can not be represented as java.sql.Date. But my case is: FAILED Error: java.io.IOException: SQLException in nextKeyValue ... Caused by: java.sql.SQLRecoverableException: No more data to read from socket.
02-12-2018
07:50 PM
I tried to run a Sqoop import from an Oracle DB into HDP Hive and it threw the error below. 18/02/12 07:48:11 INFO mapreduce.Job: Task Id : attempt_1510351993144_42440_m_000000_0, Status : FAILED
Error: java.io.IOException: SQLException in nextKeyValue
at org.apache.sqoop.mapreduce.db.DBRecordReader.nextKeyValue(DBRecordReader.java:277)
at org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.nextKeyValue(MapTask.java:556)
at org.apache.hadoop.mapreduce.task.MapContextImpl.nextKeyValue(MapContextImpl.java:80)
at org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context.nextKeyValue(WrappedMapper.java:91)
at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145)
at org.apache.sqoop.mapreduce.AutoProgressMapper.run(AutoProgressMapper.java:64)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:787)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:168)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:162) Caused by: java.sql.SQLRecoverableException: No more data to read from socket
at oracle.jdbc.driver.T4CMAREngineStream.unmarshalUB1(T4CMAREngineStream.java:456)
at oracle.jdbc.driver.DynamicByteArray.unmarshalCLR(DynamicByteArray.java:181)
at oracle.jdbc.driver.T4CMarshaller$BasicMarshaller.unmarshalBytes(T4CMarshaller.java:124)
at oracle.jdbc.driver.T4CMarshaller$BasicMarshaller.unmarshalOneRow(T4CMarshaller.java:101)
at oracle.jdbc.driver.T4CCharAccessor.unmarshalOneRow(T4CCharAccessor.java:208)
at oracle.jdbc.driver.T4CTTIrxd.unmarshal(T4CTTIrxd.java:1474)
at oracle.jdbc.driver.T4CTTIrxd.unmarshal(T4CTTIrxd.java:1282)
at oracle.jdbc.driver.T4C8Oall.readRXD(T4C8Oall.java:851)
at oracle.jdbc.driver.T4CTTIfun.receive(T4CTTIfun.java:448)
at oracle.jdbc.driver.T4CTTIfun.doRPC(T4CTTIfun.java:257)
at oracle.jdbc.driver.T4C8Oall.doOALL(T4C8Oall.java:587)
at oracle.jdbc.driver.T4CPreparedStatement.doOall8(T4CPreparedStatement.java:225)
at oracle.jdbc.driver.T4CPreparedStatement.fetch(T4CPreparedStatement.java:1066)
at oracle.jdbc.driver.OracleStatement.fetchMoreRows(OracleStatement.java:3716)
at oracle.jdbc.driver.InsensitiveScrollableResultSet.fetchMoreRows(InsensitiveScrollableResultSet.java:1015)
at oracle.jdbc.driver.InsensitiveScrollableResultSet.absoluteInternal(InsensitiveScrollableResultSet.java:979)
at oracle.jdbc.driver.InsensitiveScrollableResultSet.next(InsensitiveScrollableResultSet.java:579)
at org.apache.sqoop.mapreduce.db.DBRecordReader.nextKeyValue(DBRecordReader.java:237)
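For context, the import is of this general shape (connection string, credentials, and table names below are placeholders, not the actual values; --fetch-size is shown only because a smaller Oracle fetch size is one commonly suggested tuning knob for this kind of mid-fetch disconnect, not a confirmed fix):

sqoop import \
  --connect "jdbc:oracle:thin:@//<oracle-host>:1521/<service>" \
  --username <user> -P \
  --table <SCHEMA.TABLE_NAME> \
  --hive-import --hive-table <hive_db>.<hive_table> \
  --fetch-size 1000 \
  -m 4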
Labels:
- Apache Ambari
- Apache Hive
- Apache Sqoop
02-12-2018
03:14 PM
Can you please let me know where to try this?
01-25-2018
10:21 PM
Hi All, we are seeing "Heartbeat lost" for one of our nodes. The ambari-agent is up and running fine on all nodes, and so is ambari-server. We have already restarted both ambari-server and ambari-agent, but that didn't resolve it.
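A sketch of the checks run on the affected node (the log path below is the default Ambari agent location and may differ on your install):

ambari-agent status
tail -n 100 /var/log/ambari-agent/ambari-agent.log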
Labels:
- Apache Ambari
- Apache Hadoop
- Apache Hive
01-16-2018
07:45 PM
Hey @Montrial Harrell, we are facing this once or twice a week, and we have been working around it by restarting the Hive server. Can you let me know which limit you increased, and where, to clear this issue?
12-02-2017
05:40 PM
@Deepak Sharma in the command "/usr/hdp/current/zookeeper-client/bin/zookeeper-client -server <ZK1>:2181,<ZK2>:2181", I didn't understand the <ZK1>:2181,<ZK2>:2181 part. Could you explain it?
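For other readers: my assumption (please correct me if I am wrong) is that <ZK1> and <ZK2> stand for the hostnames of the ZooKeeper servers, so with hypothetical hosts the command would look like:

/usr/hdp/current/zookeeper-client/bin/zookeeper-client -server zk1.example.com:2181,zk2.example.com:2181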