Member since: 10-12-2017
Posts: 35
Kudos Received: 1
Solutions: 0
02-21-2018
08:15 PM
Hi Team, I am getting different results while executing the commands below: "select count(*)" -- output 816,293; "Select * from" -- output 809,254 rows (which is correct). I also tried "analyze table TABLE_NAME partition(partition_date) compute statistics;" and even ran "MSCK REPAIR TABLE <tablename>;", but to no avail. Is there anything that I am missing? Kindly need help at this point.
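For reference, a common cause of this kind of mismatch is that count(*) gets answered from stale table statistics instead of an actual data scan. A minimal sketch of how to check, assuming the queries run in beeline, that the table name is a placeholder, and that the `hive.compute.query.using.stats` setting is available in this Hive version:

```sql
-- Force Hive to scan the data instead of answering count(*) from statistics.
SET hive.compute.query.using.stats=false;
SELECT count(*) FROM TABLE_NAME;

-- Refresh per-partition statistics so stats-based answers match the data again.
ANALYZE TABLE TABLE_NAME PARTITION(partition_date) COMPUTE STATISTICS;
```

If the scan-based count matches 809,254, the discrepancy points at stale statistics rather than missing or extra data.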
Labels:
- Apache Ambari
- Apache Hive
02-14-2018
07:45 PM
Got the below error because HiveServer2 went down. After HiveServer2 was back up and running, the Sqoop command worked.
02-13-2018
09:54 PM
Hey @Scott Shaw, thanks for the update. Before posting this issue I had already gone through the link you provided, and it talks about: FAILED Error: java.io.IOException: SQLException in nextKeyValue at ... Caused by: java.sql.SQLException: Value '0000-00-00' can not be represented as java.sql.Date. But mine is about: FAILED Error: java.io.IOException: SQLException in nextKeyValue at ... Caused by: java.sql.SQLRecoverableException: No more data to read from socket.
02-12-2018
07:50 PM
I tried to run a Sqoop import from an Oracle DB into HDP Hive, and it threw the error below. 18/02/12 07:48:11 INFO mapreduce.Job: Task Id : attempt_1510351993144_42440_m_000000_0, Status : FAILED
Error: java.io.IOException: SQLException in nextKeyValue
at org.apache.sqoop.mapreduce.db.DBRecordReader.nextKeyValue(DBRecordReader.java:277)
at org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.nextKeyValue(MapTask.java:556)
at org.apache.hadoop.mapreduce.task.MapContextImpl.nextKeyValue(MapContextImpl.java:80)
at org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context.nextKeyValue(WrappedMapper.java:91)
at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145)
at org.apache.sqoop.mapreduce.AutoProgressMapper.run(AutoProgressMapper.java:64)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:787)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:168)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:162) Caused by: java.sql.SQLRecoverableException: No more data to read from socket
at oracle.jdbc.driver.T4CMAREngineStream.unmarshalUB1(T4CMAREngineStream.java:456)
at oracle.jdbc.driver.DynamicByteArray.unmarshalCLR(DynamicByteArray.java:181)
at oracle.jdbc.driver.T4CMarshaller$BasicMarshaller.unmarshalBytes(T4CMarshaller.java:124)
at oracle.jdbc.driver.T4CMarshaller$BasicMarshaller.unmarshalOneRow(T4CMarshaller.java:101)
at oracle.jdbc.driver.T4CCharAccessor.unmarshalOneRow(T4CCharAccessor.java:208)
at oracle.jdbc.driver.T4CTTIrxd.unmarshal(T4CTTIrxd.java:1474)
at oracle.jdbc.driver.T4CTTIrxd.unmarshal(T4CTTIrxd.java:1282)
at oracle.jdbc.driver.T4C8Oall.readRXD(T4C8Oall.java:851)
at oracle.jdbc.driver.T4CTTIfun.receive(T4CTTIfun.java:448)
at oracle.jdbc.driver.T4CTTIfun.doRPC(T4CTTIfun.java:257)
at oracle.jdbc.driver.T4C8Oall.doOALL(T4C8Oall.java:587)
at oracle.jdbc.driver.T4CPreparedStatement.doOall8(T4CPreparedStatement.java:225)
at oracle.jdbc.driver.T4CPreparedStatement.fetch(T4CPreparedStatement.java:1066)
at oracle.jdbc.driver.OracleStatement.fetchMoreRows(OracleStatement.java:3716)
at oracle.jdbc.driver.InsensitiveScrollableResultSet.fetchMoreRows(InsensitiveScrollableResultSet.java:1015)
at oracle.jdbc.driver.InsensitiveScrollableResultSet.absoluteInternal(InsensitiveScrollableResultSet.java:979)
at oracle.jdbc.driver.InsensitiveScrollableResultSet.next(InsensitiveScrollableResultSet.java:579)
at org.apache.sqoop.mapreduce.db.DBRecordReader.nextKeyValue(DBRecordReader.java:237)
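For context, "No more data to read from socket" is raised by the Oracle JDBC driver when the database connection drops mid-fetch; it usually points at a server-side or network problem rather than Sqoop itself. One commonly tried mitigation (an assumption, not a confirmed fix for this case) is lowering the JDBC fetch size so each round trip moves less data; all connection details below are placeholders:

```
sqoop import \
  --connect "jdbc:oracle:thin:@<host>:<port>/<service>" \
  --username <user> -P \
  --table <TABLE_NAME> \
  --fetch-size 500 \
  -m 1
```

If the error persists even with a small fetch size, the Oracle alert log and listener log on the database side are the next place to look.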
Labels:
- Apache Ambari
- Apache Hive
- Apache Sqoop
02-12-2018
03:14 PM
Can you please let me know where to try this?
01-25-2018
10:21 PM
Hi All, we are facing "Heartbeat lost" for one of our nodes. The ambari-agent on all nodes is up and running fine, and so is the ambari-server. I have restarted both the ambari-server and the ambari-agent, but that didn't resolve it.
Labels:
- Apache Ambari
- Apache Hadoop
- Apache Hive
01-16-2018
07:45 PM
Hey @Montrial Harrell, we face this once or twice a week and we usually solve it by restarting the Hive server. Can you let me know what limit you increased, and where, to clear this issue?
12-02-2017
05:40 PM
@Deepak Sharma, in the command "/usr/hdp/current/zookeeper-client/bin/zookeeper-client -server <ZK1>:2181,<ZK2>:2181", I didn't understand the "<ZK1>:2181,<ZK2>:2181" part. Could you explain it?
12-01-2017
05:43 PM
When I start HiveServer2 via the Ambari UI, it starts up and runs well, but after about 5 minutes it falls back to stopped status. Note: we are able to connect to Hive via Beeline and also able to run queries.
11-29-2017
09:21 PM
1 Kudo
I am having the same issue, but the Hive service is up and running and I am also able to connect to Hive via Beeline. So is there any solution for removing the alert for the Hive Metastore process?
11-27-2017
05:09 PM
Hi, I am having the same issue, but we use Hive via Beeline. We are able to connect to Hive via Beeline and able to execute the queries.
11-26-2017
05:44 PM
a. PostgreSQL is on stlhd01thp.centene.com and its status is up and running. b. I have tried upgrading, but it didn't work for me.
11-26-2017
12:27 AM
Getting the error below. Traceback (most recent call last):
File "/var/lib/ambari-agent/cache/common-services/HIVE/0.12.0.2.0/package/scripts/hive_metastore.py", line 245, in <module>
HiveMetastore().execute()
File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 219, in execute
method(env)
File "/var/lib/ambari-agent/cache/common-services/HIVE/0.12.0.2.0/package/scripts/hive_metastore.py", line 58, in start
self.configure(env)
File "/var/lib/ambari-agent/cache/common-services/HIVE/0.12.0.2.0/package/scripts/hive_metastore.py", line 72, in configure
hive(name = 'metastore')
File "/usr/lib/python2.6/site-packages/ambari_commons/os_family_impl.py", line 89, in thunk
return fn(*args, **kwargs)
File "/var/lib/ambari-agent/cache/common-services/HIVE/0.12.0.2.0/package/scripts/hive.py", line 292, in hive
user = params.hive_user
File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 154, in __init__
self.env.run()
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 158, in run
self.run_action(resource, action)
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 121, in run_action
provider_action()
File "/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py", line 238, in action_run
tries=self.resource.tries, try_sleep=self.resource.try_sleep)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 70, in inner
result = function(command, **kwargs)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 92, in checked_call
tries=tries, try_sleep=try_sleep)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 140, in _call_wrapper
result = _call(command, **kwargs_copy)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 291, in _call
raise Fail(err_msg)
resource_management.core.exceptions.Fail: Execution of 'export HIVE_CONF_DIR=/usr/hdp/current/hive-metastore/conf/conf.server ; /usr/hdp/current/hive-metastore/bin/schematool -initSchema -dbType postgres -userName hive -passWord [PROTECTED]' returned 1. WARNING: Use "yarn jar" to launch YARN applications.
Metastore connection URL: jdbc:postgresql://stlhd01thp.centene.com:5432/hive
Metastore Connection Driver : org.postgresql.Driver
Metastore connection User: hive
org.apache.hadoop.hive.metastore.HiveMetaException: Failed to get schema version. *** schemaTool failed ***
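"Failed to get schema version" usually means schematool cannot read the VERSION table in the metastore database. A quick way to check is to query that table directly; this is a sketch assuming psql access to the hive database shown in the connection URL above:

```sql
-- Run inside: psql -h stlhd01thp.centene.com -p 5432 -U hive hive
-- The Hive metastore stores its schema version in the "VERSION" table.
SELECT * FROM "VERSION";
```

If the table is missing or empty, the schema was never initialized in that database, or the connection URL points at the wrong database.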
Labels:
- Apache Ambari
- Apache Hive
11-25-2017
03:07 AM
Yes, it is up and running, and I have restarted it as well, but the issue is the same. I tried to execute it manually, but it shows the error below: /usr/hdp/current/hive-metastore/bin/schematool -initSchema -dbType postgres -userName hive -passWord ************ WARNING: Use "yarn jar" to launch YARN applications.
Metastore connection URL: jdbc:postgresql://stlhd01thp.centene.com:5432/hive
Metastore Connection Driver : org.postgresql.Driver
Metastore connection User: hive
org.apache.hadoop.hive.metastore.HiveMetaException: Failed to get schema version.
*** schemaTool failed ***
11-25-2017
01:59 AM
What about PostgreSQL?
11-24-2017
07:41 PM
Getting the error below. Traceback (most recent call last):
File "/var/lib/ambari-agent/cache/common-services/HIVE/0.12.0.2.0/package/scripts/hive_metastore.py", line 245, in <module>
HiveMetastore().execute()
File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 219, in execute
method(env)
File "/var/lib/ambari-agent/cache/common-services/HIVE/0.12.0.2.0/package/scripts/hive_metastore.py", line 58, in start
self.configure(env)
File "/var/lib/ambari-agent/cache/common-services/HIVE/0.12.0.2.0/package/scripts/hive_metastore.py", line 72, in configure
hive(name = 'metastore')
File "/usr/lib/python2.6/site-packages/ambari_commons/os_family_impl.py", line 89, in thunk
return fn(*args, **kwargs)
File "/var/lib/ambari-agent/cache/common-services/HIVE/0.12.0.2.0/package/scripts/hive.py", line 292, in hive
user = params.hive_user
File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 154, in __init__
self.env.run()
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 158, in run
self.run_action(resource, action)
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 121, in run_action
provider_action()
File "/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py", line 238, in action_run
tries=self.resource.tries, try_sleep=self.resource.try_sleep)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 70, in inner
result = function(command, **kwargs)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 92, in checked_call
tries=tries, try_sleep=try_sleep)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 140, in _call_wrapper
result = _call(command, **kwargs_copy)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 291, in _call
raise Fail(err_msg)
resource_management.core.exceptions.Fail: Execution of 'export HIVE_CONF_DIR=/usr/hdp/current/hive-metastore/conf/conf.server ; /usr/hdp/current/hive-metastore/bin/schematool -initSchema -dbType postgres -userName hive -passWord [PROTECTED]' returned 128. /etc/profile: fork: Resource temporarily unavailable
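"/etc/profile: fork: Resource temporarily unavailable" means the OS refused to create a new process, almost always because the per-user process limit (nproc) was hit. A quick diagnostic sketch, assuming shell access on the affected node as the user running the failing command:

```shell
# Show the maximum number of processes this user may create;
# "unlimited" or a large value rules this limit out.
ulimit -u

# Count processes currently owned by this user, to compare against the limit.
ps -u "$(whoami)" --no-headers | wc -l
```

If the count is at or near the limit, raising nproc (e.g. via a file in /etc/security/limits.d/) or cleaning up leaked processes is the usual remedy.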
Labels:
- Apache Ambari
- Apache Hive
11-14-2017
10:57 PM
When I change the permissions at "/etl/wds/tmp/mr_client_master_external_test11/" and run the Sqoop command, they revert back to the previous permissions.
11-14-2017
10:53 PM
Hey, can you let me know how you did it?
11-14-2017
05:05 PM
Below is the Sqoop command I am trying: sqoop import --connect "jdbc:oracle:thin:@orac-prd03-vip.healthnet.com:1725/odsprd" --username "ods" --password "Odsmr23prod$" --query "select client_id,host_system_id,client_name,insert_timestamp from CLIENT_MASTER where INSERT_TIMESTAMP>=TO_DATE('06112017','DDMMYYYY') AND \$CONDITIONS" --delete-target-dir --hive-import --hive-overwrite -m 1 --hive-table etl_wds_tmp.mr_client_master_external_test11 --target-dir "hdfs://CENTENEHADOOP2/etl/wds/tmp/mr_client_master_external_test11" --fields-terminated-by '|' And I am getting the error below. "Failed with exception Unable to move source hdfs://CENTENEHADOOP2/etl/wds/tmp/mr_client_master_external_test11/part-m-00000 to destination hdfs://CENTENEHADOOP2/etl/wds/tmp/mr_client_master_external_test11/part-m-00000
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.MoveTask"
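This MoveTask failure happens when Hive cannot move the Sqoop output into place, often because of HDFS permissions or ownership on the target directory. A diagnostic sketch (the path comes from the command above; the chown shown is an assumption, not a confirmed remedy, and <user> is a placeholder):

```
# Inspect ownership and permissions on the target directory and its parent.
hdfs dfs -ls hdfs://CENTENEHADOOP2/etl/wds/tmp/
hdfs dfs -ls hdfs://CENTENEHADOOP2/etl/wds/tmp/mr_client_master_external_test11/

# If the directory is owned by a different user, hand it to the importing user
# (run as the HDFS superuser).
hdfs dfs -chown -R <user> hdfs://CENTENEHADOOP2/etl/wds/tmp/mr_client_master_external_test11/
```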
Labels:
- Apache Ambari
- Apache Hive
- Apache Sqoop
11-08-2017
08:43 PM
Hey @Jay Kumar SenSharma, this issue has been solved. In the command "ambari-server sync-ldap --groups /etc/ambari-server/conf/ambari-ldap-groups.txt", the file ambari-ldap-groups.txt was empty, so after adding the desired group to that file it worked fine. Thanks for the help.
11-08-2017
08:42 PM
Hey @Jay Kumar SenSharma, this issue has been solved. In the command "ambari-server sync-ldap --groups /etc/ambari-server/conf/ambari-ldap-groups.txt", the file ambari-ldap-groups.txt was empty, so after adding the desired group to that file it worked fine. Thanks for the help.
11-08-2017
04:54 PM
Yes, that's true, but the problem is that I am unable to edit the "LDAP Group Membership". If there is any other way to do it, please let me know.
11-08-2017
04:23 PM
I have synced a user via LDAP with "ambari-server sync-ldap --groups /etc/ambari-server/conf/ambari-ldap-groups.txt", but when the user logs into the Ambari UI he gets a completely blank page. I have done an ambari-server restart, but the issue is still the same. Need help!
Labels:
- Apache Ambari
10-13-2017
03:03 PM
Sorry, I am unable to find it.
10-12-2017
06:20 PM
Could you be more specific? I am unable to find that file.
10-12-2017
04:52 PM
Is there any alternate way to resolve this error without restarting the ambari-server or the YARN server?