Member since: 01-31-2015
Posts: 88
Kudos Received: 7
Solutions: 1
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 19778 | 02-09-2015 09:53 PM
02-14-2017 09:28 AM

The file is a simple beeline HQL script that inserts data via an Oozie hive2 action; we have been using it for a couple of years and have never faced this issue before. The Oozie action is the following:

<action name="hive-action-prime-stage-summary-incr">
    <hive2 xmlns="uri:oozie:hive2-action:0.2">
        <job-tracker>${jobTracker}</job-tracker>
        <name-node>${nameNode}</name-node>
        <job-xml>${hiveConfDir}/hive-site.xml</job-xml>
        <jdbc-url>${beeline_jdbc_url}</jdbc-url>
        <script>${oozie_script_path_prime}/hql/stage_summary_incr.hql</script>
        <param>database_destination=${primeDataBaseName}</param>
        <param>tenantid=${xyz}</param>
        <param>version_number=${version}</param>
        <param>database_source=${udmDataBaseName}</param>
        <param>hive_job_metastore_databasename=${hive_job_metastore_databasename}</param>
        <param>hiveUDFJarPath=${ciUDFJarPath}</param>
        <argument>-wpf</argument>
        <file>${hiveConfDir}/hive-site.xml#hive-site.xml</file>
        <file>${nameNode}${impala_udfs}/pf#pf</file>
    </hive2>
    <ok to="joiningS"/>
    <error to="kill_mail"/>
</action>
02-13-2017 04:24 PM

Hi, we run Hive queries using the beeline action through an Oozie workflow.
02-13-2017 04:08 PM

We are currently using CDH 5.8.3, and most of our Oozie Hive actions are failing frequently with the following error:

ERROR : Ended Job = job_xx with exception 'java.lang.IllegalStateException(zip file closed)'
java.lang.IllegalStateException: zip file closed
    at java.util.zip.ZipFile.ensureOpen(ZipFile.java:634)
    at java.util.zip.ZipFile.getEntry(ZipFile.java:305)
    at java.util.jar.JarFile.getEntry(JarFile.java:227)
    at sun.net.www.protocol.jar.URLJarFile.getEntry(URLJarFile.java:128)
    at sun.net.www.protocol.jar.JarURLConnection.connect(JarURLConnection.java:132)
    at sun.net.www.protocol.jar.JarURLConnection.getInputStream(JarURLConnection.java:150)
    at java.net.URLClassLoader.getResourceAsStream(URLClassLoader.java:233)
    at javax.xml.parsers.SecuritySupport$4.run(SecuritySupport.java:94)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.xml.parsers.SecuritySupport.getResourceAsStream(SecuritySupport.java:87)
    at javax.xml.parsers.FactoryFinder.findJarServiceProvider(FactoryFinder.java:283)
    at javax.xml.parsers.FactoryFinder.find(FactoryFinder.java:255)
    at javax.xml.parsers.DocumentBuilderFactory.newInstance(DocumentBuilderFactory.java:121)
    at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:2526)
    at org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:2503)
    at org.apache.hadoop.conf.Configuration.getProps(Configuration.java:2409)
    at org.apache.hadoop.conf.Configuration.get(Configuration.java:982)
    at org.apache.hadoop.mapred.JobConf.checkAndWarnDeprecation(JobConf.java:2032)
    at org.apache.hadoop.mapred.JobConf.<init>(JobConf.java:484)
    at org.apache.hadoop.mapred.JobConf.<init>(JobConf.java:474)
    at org.apache.hadoop.mapreduce.Cluster.getJob(Cluster.java:210)
    at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:604)
    at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:602)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1709)
    at org.apache.hadoop.mapred.JobClient.getJobUsingCluster(JobClient.java:602)
    at org.apache.hadoop.mapred.JobClient.getJobInner(JobClient.java:612)
    at org.apache.hadoop.mapred.JobClient.getJob(JobClient.java:642)
    at org.apache.hadoop.hive.ql.exec.mr.HadoopJobExecHelper.progress(HadoopJobExecHelper.java:289)
    at org.apache.hadoop.hive.ql.exec.mr.HadoopJobExecHelper.progress(HadoopJobExecHelper.java:549)
    at org.apache.hadoop.hive.ql.exec.mr.ExecDriver.execute(ExecDriver.java:435)
    at org.apache.hadoop.hive.ql.exec.mr.MapRedTask.execute(MapRedTask.java:137)
    at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:160)
    at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:100)
    at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1782)
    at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1539)
    at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1318)
    at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1127)
    at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1120)
    at org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:178)
    at org.apache.hive.service.cli.operation.SQLOperation.access$100(SQLOperation.java:72)
    at org.apache.hive.service.cli.operation.SQLOperation$2$1.run(SQLOperation.java:232)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1709)
    at org.apache.hive.service.cli.operation.SQLOperation$2.run(SQLOperation.java:245)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
    at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)

Please help me resolve this error.
Labels:
- Apache Hive
08-15-2016 02:04 PM
2 Kudos
In CDH 5.8.0, when we insert data with spark-sql, many .hive-staging directories pile up and are never deleted or removed, even though the inserts complete successfully. Please let me know the reason for this behaviour and how I can get rid of the .hive-staging directories. Is there a property we need to set?
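For what it's worth, a sketch of the kind of setting we are wondering about, assuming the hive.exec.stagingdir property also governs the staging directories created by the spark-sql insert path (the /tmp location below is only a placeholder), would be a hive-site.xml entry like:

<!-- Sketch only: move Hive's staging directories out of the table/partition directories. -->
<!-- Assumes spark-sql honors hive.exec.stagingdir; the path is a placeholder. -->
<property>
    <name>hive.exec.stagingdir</name>
    <value>/tmp/hive/.hive-staging</value>
</property>

Even with such a relocation, something would presumably still need to clean the leftover directories up, so confirmation of the intended behaviour would help.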
08-15-2016 11:21 AM
3 Kudos
We are receiving a lot of these alerts when we run many queries at once. We just moved to CDH 5.7.1 (Impala version 2.5.0+cdh5.7.1+0); previously, with the same configuration on CDH 5.5.1, we were not receiving such alerts or having these issues. Can anyone help us understand the reason behind this and how to resolve it?

The health test result for IMPALAD_QUERY_MONITORING_STATUS has become bad: There are 1 error(s) seen monitoring executing queries, and 0 errors(s) seen monitoring completed queries for this role in the previous 5 minute(s). Critical threshold: any.

This is followed by the warning:

The health test result for IMPALA_IMPALADS_HEALTHY has become bad: Healthy Impala Daemon: 9. Concerning Impala Daemon: 0. Total Impala Daemon: 10. Percent healthy: 90.00%. Percent healthy or concerning: 90.00%. Critical threshold: 90.00%.
07-28-2016 09:59 AM

OK, it worked now. Thanks!!!
07-28-2016 09:29 AM

I am still getting the same error with the following, too:

<hive2 xmlns="uri:oozie:hive2-action:0.2">

Error: E0701 : E0701: XML schema error, cvc-complex-type.2.4.c: The matching wildcard is strict, but no declaration can be found for element 'hive2'.
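One thing we have not verified yet is whether the hive2 XSD is registered with our Oozie server at all. A sketch of the relevant oozie-site.xml entry, assuming the server bundles hive2-action-0.1.xsd (the exact file names must match the XSDs your Oozie build actually ships, and the value should keep any schemas already listed):

<!-- Sketch: E0701 is raised when the action's XSD is missing from this schema whitelist. -->
<!-- The schema list below is illustrative, not our actual configuration. -->
<property>
    <name>oozie.service.SchemaService.wf.ext.schemas</name>
    <value>hive-action-0.5.xsd,hive2-action-0.1.xsd,email-action-0.1.xsd</value>
</property>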
07-27-2016 04:06 PM

We are using CDH 5.8, and it looks like Oozie is 4.1. Is it because of the older Oozie version?
07-27-2016 03:05 PM

We are trying to use beeline/hive2 for our jobs through Oozie actions, but we are facing the below error while deploying the workflow. It would be helpful if someone could look into this issue.

Error:
Error: E0701 : E0701: XML schema error, cvc-complex-type.2.4.c: The matching wildcard is strict, but no declaration can be found for element 'hive2'.

workflow.xml:

<workflow-app name="abc-historic-${version_number}" xmlns="uri:oozie:workflow:0.5">
    <global>
        <job-tracker>${jobTracker}</job-tracker>
        <name-node>${nameNode}</name-node>
        <configuration>
            <property>
                <name>mapred.job.queue.name</name>
                <value>${queueName}</value>
            </property>
            <property>
                <name>oozie.launcher.mapred.child.java.opts</name>
                <value>${childJavaOpts}</value>
            </property>
        </configuration>
    </global>
    <start to="hive-action-udm-opprtnty_assign-facts"/>
    <action name="hive-action-udm-opprtnty_assign-facts">
        <hive2 xmlns="uri:oozie:hive2-action:0.3">
            <job-tracker>${jobTracker}</job-tracker>
            <name-node>${nameNode}</name-node>
            <jdbc-url>jdbc:hive2://xyz.com:10000/default</jdbc-url>
            <password>abc</password>
            <script>${nameNode}/user/xyz/workflow_sla/sls_opprtnty_assign_fact.hql</script>
            <param>database_destination=${hiveDatabaseDestination_udm}</param>
            <param>database_source=${hiveDatabaseSource_raw}</param>
            <param>hive_mapping_databasename=${hive_mapping_databasename}</param>
            <param>hiveUDFJarPath=${ciUDFJarPath}</param>
            <param>tenantid=${tenantId}</param>
            <param>batchid=0</param>
            <file>${hiveSiteDir}#hive-oozie-site.xml</file>
        </hive2>
        <ok to="hive-action-udm-opprtnty_assign-facts1"/>
        <error to="killEmail"/>
    </action>
    <action name="hive-action-udm-opprtnty_assign-facts1">
        <hive2 xmlns="uri:oozie:hive2-action:0.3">
            <job-tracker>${jobTracker}</job-tracker>
            <name-node>${nameNode}</name-node>
            <jdbc-url>jdbc:hive2://xyz.com:10000/default</jdbc-url>
            <password>abc</password>
            <script>${nameNode}/user/xyz/workflow_sla/sls_opprtnty_assign_fact.hql</script>
            <param>database_destination=${hiveDatabaseDestination_udm}</param>
            <param>database_source=${hiveDatabaseSource_raw}</param>
            <param>hive_mapping_databasename=${hive_mapping_databasename}</param>
            <param>hiveUDFJarPath=${ciUDFJarPath}</param>
            <param>tenantid=${tenantId}</param>
            <param>batchid=0</param>
            <file>${hiveSiteDir}#hive-oozie-site.xml</file>
        </hive2>
        <ok to="hive-action-udm-opprtnty_assign-facts2"/>
        <error to="killEmail"/>
    </action>
    <action name="hive-action-udm-opprtnty_assign-facts2">
        <hive2 xmlns="uri:oozie:hive-action:0.3">
            <job-tracker>${jobTracker}</job-tracker>
            <name-node>${nameNode}</name-node>
            <jdbc-url>jdbc:hive2://xyz.com:10000/default</jdbc-url>
            <password>abc</password>
            <script>${nameNode}/user/xyz/workflow_sla/sls_opprtnty_assign_fact.hql</script>
            <param>database_destination=${hiveDatabaseDestination_udm}</param>
            <param>database_source=${hiveDatabaseSource_raw}</param>
            <param>hive_mapping_databasename=${hive_mapping_databasename}</param>
            <param>hiveUDFJarPath=${ciUDFJarPath}</param>
            <param>tenantid=${tenantId}</param>
            <param>batchid=0</param>
            <file>${hiveSiteDir}#hive-oozie-site.xml</file>
        </hive2>
        <ok to="hive-action-udm-opprtnty_assign-facts3"/>
        <error to="killEmail"/>
    </action>
    <action name="hive-action-udm-opprtnty_assign-facts3">
        <hive2 xmlns="uri:oozie:hive2-action:0.3">
            <job-tracker>${jobTracker}</job-tracker>
            <name-node>${nameNode}</name-node>
            <jdbc-url>jdbc:hive2://xyz.com:10000/default</jdbc-url>
            <password>abc</password>
            <script>${nameNode}/user/xyz/workflow_sla/sls_opprtnty_assign_fact.hql</script>
            <param>database_destination=${hiveDatabaseDestination_udm}</param>
            <param>database_source=${hiveDatabaseSource_raw}</param>
            <param>hive_mapping_databasename=${hive_mapping_databasename}</param>
            <param>hiveUDFJarPath=${ciUDFJarPath}</param>
            <param>tenantid=${tenantId}</param>
            <param>batchid=0</param>
            <file>${hiveSiteDir}#hive-oozie-site.xml</file>
        </hive2>
        <ok to="udm-transform-end"/>
        <error to="killEmail"/>
    </action>
    <action name="killEmail">
        <email xmlns="uri:oozie:email-action:0.1">
            <to>${emailRecipients}</to>
            <subject>Oozie Workflow Run Error On UDM-historic Workflow</subject>
            <body>Oozie workflow id: ${wf:id()}, run failed. Error Message: [ ${wf:errorMessage(wf:lastErrorNode())} ]</body>
        </email>
        <ok to="kill"/>
        <error to="kill"/>
    </action>
    <kill name="kill">
        <message>Action failed, error message[${wf:errorMessage(wf:lastErrorNode())}]</message>
    </kill>
    <end name="udm-transform-end"/>
    <!--
    <sla:info>
        <sla:nominal-time></sla:nominal-time>
        <sla:should-end>${17 * MINUTES}</sla:should-end>
        <sla:max-duration>${17 * MINUTES}</sla:max-duration>
        <sla:alert-events>duration_miss</sla:alert-events>
        <sla:alert-contact>xyx@gmail.com</sla:alert-contact>
    </sla:info>
    -->
</workflow-app>
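For isolation, a minimal hive2 action with only the core elements might help narrow this down. A sketch, assuming the 0.1 schema version (which older Oozie releases are more likely to register; the right version to use depends on which hive2-action XSDs the server actually ships):

<!-- Minimal hive2 action sketch; the 0.1 schema version is an assumption to verify. -->
<action name="hive2-minimal-test">
    <hive2 xmlns="uri:oozie:hive2-action:0.1">
        <job-tracker>${jobTracker}</job-tracker>
        <name-node>${nameNode}</name-node>
        <jdbc-url>jdbc:hive2://xyz.com:10000/default</jdbc-url>
        <password>abc</password>
        <script>${nameNode}/user/xyz/workflow_sla/sls_opprtnty_assign_fact.hql</script>
    </hive2>
    <ok to="udm-transform-end"/>
    <error to="killEmail"/>
</action>

If even this minimal action fails schema validation, the problem is on the server side (Oozie version or registered schemas) rather than in the workflow itself.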
Labels:
- Apache Hive
- Apache Oozie
05-04-2016 10:47 AM

OK, we applied the following configuration, as recommended, on our Dev cluster and are seeing no issues.

Property: Java Configuration Options for NodeManager:
-XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=70 -XX:+CMSParallelRemarkEnabled -Xms512m -Xmx2048m -XX:PermSize=216m -XX:MaxPermSize=512m

But the point to note is that these GC Duration warnings had never happened on this cluster; they happened on our Production cluster. So shall we go ahead and make the change on the Production cluster, or are there other things we have to consider? Any suggestions would be appreciated.