Member since: 05-29-2017
Posts: 408
Kudos Received: 123
Solutions: 9

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
|  | 2785 | 09-01-2017 06:26 AM |
|  | 1696 | 05-04-2017 07:09 AM |
|  | 1459 | 09-12-2016 05:58 PM |
|  | 2060 | 07-22-2016 05:22 AM |
|  | 1625 | 07-21-2016 07:50 AM |
03-18-2018 12:30 PM
Also, I tried setting it globally and then restarted the Hive Metastore, but I get the same error.

```
mysql> SET GLOBAL FOREIGN_KEY_CHECKS=0;
Query OK, 0 rows affected (0.00 sec)

mysql> SELECT @@GLOBAL.foreign_key_checks, @@SESSION.foreign_key_checks;
+-----------------------------+------------------------------+
| @@GLOBAL.foreign_key_checks | @@SESSION.foreign_key_checks |
+-----------------------------+------------------------------+
|                           0 |                            1 |
+-----------------------------+------------------------------+
1 row in set (0.00 sec)
```

```
drop table i0177a_cus_hdr_hst_staging;
Error: Error while processing statement: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. MetaException(message:javax.jdo.JDOException: Exception thrown when executing query :
SELECT 'org.apache.hadoop.hive.metastore.model.MStorageDescriptor' AS `NUCLEUS_TYPE`,`A0`.`INPUT_FORMAT`,`A0`.`IS_COMPRESSED`,`A0`.`IS_STOREDASSUBDIRECTORIES`,`A0`.`LOCATION`,`A0`.`NUM_BUCKETS`,`A0`.`OUTPUT_FORMAT`,`A0`.`SD_ID` FROM `SDS` `A0` WHERE `A0`.`CD_ID` = ? LIMIT 0,1
    at org.datanucleus.api.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:677)
```
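For what it's worth, the output above matches how MySQL scopes this variable: `SET GLOBAL` only seeds the session value for connections opened afterwards, so a session that is already open keeps `foreign_key_checks = 1`. A minimal demo of the scoping:

```sql
-- SET GLOBAL changes only the default inherited by *new* connections.
SET GLOBAL foreign_key_checks = 0;
SELECT @@GLOBAL.foreign_key_checks;   -- 0
SELECT @@SESSION.foreign_key_checks;  -- still 1 in a pre-existing session

-- SET SESSION (or a bare SET) changes only the current connection:
SET SESSION foreign_key_checks = 0;
SELECT @@SESSION.foreign_key_checks;  -- now 0
```

Either way, a `SET` issued in one mysql client session does not change the sessions the Metastore itself holds, so the cleaner route is to remove the orphaned child rows directly, as sketched under the original question below.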
03-18-2018 12:18 PM
Thanks @Geoffrey Shelton Okot. Do you mean I need to disable it in MySQL and then drop the table? If so, I tried that, but it fails with the same error:

```
mysql> SET FOREIGN_KEY_CHECKS=0;
Query OK, 0 rows affected (0.00 sec)
```

```
hive> drop table i0177a_cus_hdr_hst_staging;
Error: Error while processing statement: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. MetaException(message:javax.jdo.JDOException: Exception thrown when executing query :
SELECT 'org.apache.hadoop.hive.metastore.model.MStorageDescriptor' AS `NUCLEUS_TYPE`,`A0`.`INPUT_FORMAT`,`A0`.`IS_COMPRESSED`,`A0`.`IS_STOREDASSUBDIRECTORIES`,`A0`.`LOCATION`,`A0`.`NUM_BUCKETS`,`A0`.`OUTPUT_FORMAT`,`A0`.`SD_ID` FROM `SDS` `A0` WHERE `A0`.`CD_ID` = ? LIMIT 0,1
    at org.datanucleus.api.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:677)
    at org.datanucleus.api.jdo.JDOQuery.executeInternal(JDOQuery.java:388)
```
03-18-2018 11:50 AM
Hello, when I am trying to drop an external table in Hive, I get the following error. Can someone please help me drop this table?

```
drop table hst_staging;
Error: Error while processing statement: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. MetaException(message:javax.jdo.JDOException: Exception thrown when executing query :
SELECT 'org.apache.hadoop.hive.metastore.model.MStorageDescriptor' AS `NUCLEUS_TYPE`,`A0`.`INPUT_FORMAT`,`A0`.`IS_COMPRESSED`,`A0`.`IS_STOREDASSUBDIRECTORIES`,`A0`.`LOCATION`,`A0`.`NUM_BUCKETS`,`A0`.`OUTPUT_FORMAT`,`A0`.`SD_ID` FROM `SDS` `A0` WHERE `A0`.`CD_ID` = ? LIMIT 0,1
    at org.datanucleus.api.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:677)
    at org.datanucleus.api.jdo.JDOQuery.executeInternal(JDOQuery.java:388)
    at org.datanucleus.api.jdo.JDOQuery.execute(JDOQuery.java:225)
    at org.apache.hadoop.hive.metastore.ObjectStore.listStorageDescriptorsWithCD(ObjectStore.java:3177)
    at org.apache.hadoop.hive.metastore.ObjectStore.removeUnusedColumnDescriptor(ObjectStore.java:3123)
    at org.apache.hadoop.hive.metastore.ObjectStore.dropPartitions(ObjectStore.java:1803)
    at sun.reflect.GeneratedMethodAccessor370.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:103)
    at com.sun.proxy.$Proxy8.dropPartitions(Unknown Source)
    at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.dropPartitionsAndGetLocations(HiveMetaStore.java:1838)
    at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.drop_table_core(HiveMetaStore.java:1673)
    at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.drop_table_with_environment_context(HiveMetaStore.java:1859)
    at sun.reflect.GeneratedMethodAccessor247.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:147)
    at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:105)
    at com.sun.proxy.$Proxy12.drop_table_with_environment_context(Unknown Source)
    at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.drop_table_with_environment_context(HiveMetaStoreClient.java:2190)
    at org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient.drop_table_with_environment_context(SessionHiveMetaStoreClient.java:120)
    at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.dropTable(HiveMetaStoreClient.java:1034)
    at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.dropTable(HiveMetaStoreClient.java:970)
    at sun.reflect.GeneratedMethodAccessor228.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:178)
    at com.sun.proxy.$Proxy13.dropTable(Unknown Source)
    at org.apache.hadoop.hive.ql.metadata.Hive.dropTable(Hive.java:1108)
    at org.apache.hadoop.hive.ql.metadata.Hive.dropTable(Hive.java:1045)
    at org.apache.hadoop.hive.ql.exec.DDLTask.dropTable(DDLTask.java:4084)
    at org.apache.hadoop.hive.ql.exec.DDLTask.dropTableOrPartitions(DDLTask.java:3940)
    at org.apache.hadoop.hive.ql.exec.DDLTask.execute(DDLTask.java:341)
    at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:160)
    at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:89)
    at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1748)
    at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1494)
    at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1291)
    at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1158)
    at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1153)
    at org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:197)
    at org.apache.hive.service.cli.operation.SQLOperation.access$300(SQLOperation.java:76)
    at org.apache.hive.service.cli.operation.SQLOperation$2$1.run(SQLOperation.java:253)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1866)
    at org.apache.hive.service.cli.operation.SQLOperation$2.run(SQLOperation.java:264)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
NestedThrowablesStackTrace:
java.sql.BatchUpdateException: Cannot delete or update a parent row: a foreign key constraint fails ("hive"."COLUMNS_V2", CONSTRAINT "COLUMNS_V2_FK1" FOREIGN KEY ("CD_ID") REFERENCES "CDS" ("CD_ID"))
    at sun.reflect.GeneratedConstructorAccessor300.newInstance(Unknown Source)
```
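The nested cause at the bottom is the key detail: the Metastore tries to delete a parent row from `CDS` while rows in `COLUMNS_V2` still reference it via `COLUMNS_V2_FK1`, i.e. an orphaned column descriptor. A minimal cleanup sketch against the stock MySQL metastore schema, assuming the metastore database is named `hive` and with `<...>` placeholders to fill in; take a full metastore backup before touching these tables:

```sql
USE hive;

-- 1. Find the column descriptor(s) behind the table's own storage descriptor...
SELECT DISTINCT s.CD_ID
FROM   TBLS t
JOIN   DBS  d ON d.DB_ID = t.DB_ID
JOIN   SDS  s ON s.SD_ID = t.SD_ID
WHERE  d.NAME = '<db_name>' AND t.TBL_NAME = 'hst_staging';

-- ...and the descriptors used by its partitions (the failure is in dropPartitions):
SELECT DISTINCT s.CD_ID
FROM   TBLS t
JOIN   DBS  d ON d.DB_ID = t.DB_ID
JOIN   PARTITIONS p ON p.TBL_ID = t.TBL_ID
JOIN   SDS  s ON s.SD_ID = p.SD_ID
WHERE  d.NAME = '<db_name>' AND t.TBL_NAME = 'hst_staging';

-- 2. Delete the child rows that block the parent delete, then retry
--    DROP TABLE from Hive and let the Metastore remove SDS/CDS itself.
DELETE FROM COLUMNS_V2 WHERE CD_ID IN (<cd_ids_from_step_1>);
```

Deleting only the `COLUMNS_V2` children keeps every other constraint intact, so `foreign_key_checks` never needs to be changed at all.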
Labels:
- Apache Hadoop
- Apache Hive
- Security
01-17-2018 07:21 AM
No Sandeep, we have only 2.7 by default:

```
-bash-4.2$ python --version
Python 2.7.5
```
01-16-2018 03:32 PM
Hello, we have installed a new cluster with HDP 2.6.1 and Ambari 2.5.1, and for every job we get the following warning message, though the jobs complete successfully. Why are we getting this error? Can someone please help me?

```
18/01/16 09:52:48 WARN ScriptBasedMapping: Exception running /etc/hadoop/conf/topology_script.py
ExitCodeException exitCode=1:   File "/etc/hadoop/conf/topology_script.py", line 63
    print rack
             ^
SyntaxError: Missing parentheses in call to 'print'

    at org.apache.hadoop.util.Shell.runCommand(Shell.java:944)
    at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:1142)
    at org.apache.hadoop.net.ScriptBasedMapping$RawScriptBasedMapping.runResolveCommand(ScriptBasedMapping.java:251)
    at org.apache.hadoop.net.ScriptBasedMapping$RawScriptBasedMapping.resolve(ScriptBasedMapping.java:188)
    at org.apache.hadoop.net.CachedDNSToSwitchMapping.resolve(CachedDNSToSwitchMapping.java:119)
    at org.apache.hadoop.yarn.util.RackResolver.coreResolve(RackResolver.java:101)
    at org.apache.hadoop.yarn.util.RackResolver.resolve(RackResolver.java:81)
    at org.apache.spark.deploy.yarn.SparkRackResolver.resolve(SparkRackResolver.scala:37)
    at org.apache.spark.deploy.yarn.YarnAllocator$$anonfun$handleAllocatedContainers$2.apply(YarnAllocator.scala:420)
    at org.apache.spark.deploy.yarn.YarnAllocator$$anonfun$handleAllocatedContainers$2.apply(YarnAllocator.scala:419)
    at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
    at org.apache.spark.deploy.yarn.YarnAllocator.handleAllocatedContainers(YarnAllocator.scala:419)
```
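The message `SyntaxError: Missing parentheses in call to 'print'` is emitted only by Python 3 when it parses Python 2 print statements, so whichever interpreter Hadoop invokes for `topology_script.py` in this code path is Python 3, even if `python` on an interactive shell is 2.7. A minimal sketch of the kind of change that makes the offending line parse under both versions (only `print rack` is visible in the error; the rest of the script is assumed):

```python
# Python 2-only statement form, rejected by Python 3 at parse time:
#     print rack
# Function form, valid under both Python 2 and Python 3:
print(rack)
```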
11-22-2017 02:23 PM
When users run a Hive query in Zeppelin via the JDBC interpreter, the query is executed as the anonymous user rather than the actual logged-in user.

```
INFO [2017-11-02 03:18:20,405] ({pool-2-thread-2} RemoteInterpreter.java[pushAngularObjectRegistryToRemote]:546) - Push local angular object registry from ZeppelinServer to remote interpreter group 2CNQZ1ES5:shared_process
WARN [2017-11-02 03:18:21,825] ({pool-2-thread-2} NotebookServer.java[afterStatusChange]:2058) - Job 20171031-075630_2029577092 is finished, status: ERROR, exception: null, result: %text org.apache.hive.service.cli.HiveSQLException: Error while compiling statement: FAILED: SemanticException Unable to fetch table ushi_gl. org.apache.hadoop.security.AccessControlException: Permission denied: user=anonymous, access=EXECUTE, inode="/apps/hive/warehouse/adodb.db/ushi_gl":hive:hdfs:drwxr-x---
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:319)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:259)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:205)
    at org.apache.ranger.authorization.hadoop.RangerHdfsAuthorizer$RangerAccessControlEnforcer.checkDefaultEnforcer(RangerHdfsAuthorizer.java:381)
    at org.apache.ranger.authorization.hadoop.RangerHdfsAuthorizer$RangerAccessControlEnforcer.checkPermission(RangerHdfsAuthorizer.java:338)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:190)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1955)
    at org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp.getFileInfo(FSDirStatAndListingOp.java:109)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFileInfo(FSNamesystem.java:4111)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getFileInfo(NameNodeRpcServer.java:1137)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:854)
```

Root cause: this is a bug in Zeppelin 0.7.0.2 and is fixed in a newer Zeppelin release.

Resolution: add your username and password under the Credentials option in Zeppelin.
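For anyone landing here, a hedged sketch of what that resolution looks like in practice. The `hive.` property prefix and the `jdbc.jdbc` credential entity are assumptions based on the stock Zeppelin 0.7 JDBC interpreter; adjust both to your interpreter configuration:

```
# Zeppelin UI -> Interpreter -> jdbc (assumed "hive" prefix):
hive.url      = jdbc:hive2://<hiveserver2-host>:10000
hive.user     = <actual-user>
hive.password = <actual-password>

# Or Zeppelin UI -> Credentials (entity name assumed to match the interpreter):
# Entity: jdbc.jdbc    Username: <actual-user>    Password: <actual-password>
```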
11-08-2017 12:25 PM
Thanks a lot @Jay Kumar SenSharma
11-08-2017 12:01 PM
Hello all, we were using the following query to get CPU details in HDP 2.3.4 from HBase via Phoenix, but I do not see any tables or schema for Ambari Metrics in HDP 2.6.1. Is this information still in HBase, has it moved to some other component, or am I doing something wrong?

```sql
select * from METRIC_RECORD where METRIC_NAME like 'cpu%';
```
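In case it helps while someone confirms the 2.6.1 layout: the Ambari Metrics tables (including `METRIC_RECORD`) live in the Metrics Collector's own embedded HBase, not the cluster HBase, so they are only visible through a Phoenix connection to the AMS ZooKeeper. The sqlline path, port `61181`, and znode `/ams-hbase-unsecure` below are AMS embedded-mode defaults and are assumptions; verify them in ams-hbase-site (`hbase.zookeeper.property.clientPort`, `zookeeper.znode.parent`):

```
# On the Metrics Collector host (path/port/znode assumed, see above):
/usr/lib/ambari-metrics-collector/bin/sqlline.py \
    localhost:61181:/ams-hbase-unsecure

# then, at the sqlline prompt:
#   select * from METRIC_RECORD where METRIC_NAME like 'cpu%';
```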
Labels:
- Hortonworks Data Platform (HDP)
10-30-2017 07:38 AM
Yes, this is specific to a big view, not a small one.
10-22-2017 10:50 AM
Thanks a lot @anarasimham. Can you please give an example, or explain how any machine can pose as a Knox edge node?