Member since: 09-24-2015
Posts: 178
Kudos Received: 113
Solutions: 28
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 3389 | 05-25-2016 02:39 AM
 | 3608 | 05-03-2016 01:27 PM
 | 842 | 04-26-2016 07:59 PM
 | 14425 | 03-24-2016 04:10 PM
 | 2032 | 02-02-2016 11:50 PM
01-20-2016 03:47 AM
Balu - I am using the latest build:

```
[root@sandbox ~]# cat sandbox.info
Sandbox information:
Created on: 27_10_2015_15_18_06 for vmware
Hadoop stack version: Hadoop 2.7.1.2.3.2.0-2950
Ambari Version: 2.1.2
Ambari Hash: 0ef0b7b62cf14eaaff3c5c3f416253f568f323f9
Ambari build: Release : 377
OS Version: CentOS release 6.7 (Final)
```
01-20-2016 03:41 AM
@niraj nagle I think you have to create those folders. Do the following from the command line as the root user:

```
su - hdfs
hdfs dfs -mkdir /user/admin
hdfs dfs -chmod 755 /user/admin
hdfs dfs -chown admin:hadoop /user/admin
```
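To confirm the directory landed with the right ownership and mode, a quick check:

```
# /user/admin should now show owner "admin", group "hadoop", mode drwxr-xr-x
hdfs dfs -ls /user
```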
01-20-2016 03:18 AM
@rmolina @Shivaji
01-20-2016 03:17 AM
2 Kudos
Unable to use expression language functions in an Oozie workflow with Falcon. It seems some jar files are missing, but I am unsure which. I am following http://hortonworks.com/hadoop-tutorial/defining-processing-data-end-end-data-pipeline-apache-falcon/ and the error occurs at this step:

```
falcon entity -type process -schedule -name rawEmailIngestProcess
```

Here is the exception:

```
2016-01-19 21:08:31,969 ERROR CoordSubmitXCommand:517 - SERVER[sandbox.hortonworks.com] USER[-] GROUP[-] TOKEN[-] APP[-] JOB[-] ACTION[-] XException,
org.apache.oozie.command.CommandException: E1004: Expression language evaluation error, Unable to evaluate :${now(0,0)}:
at org.apache.oozie.command.coord.CoordSubmitXCommand.submitJob(CoordSubmitXCommand.java:259)
at org.apache.oozie.command.coord.CoordSubmitXCommand.submit(CoordSubmitXCommand.java:203)
at org.apache.oozie.command.SubmitTransitionXCommand.execute(SubmitTransitionXCommand.java:82)
at org.apache.oozie.command.SubmitTransitionXCommand.execute(SubmitTransitionXCommand.java:30)
at org.apache.oozie.command.XCommand.call(XCommand.java:286)
at org.apache.oozie.CoordinatorEngine.dryRunSubmit(CoordinatorEngine.java:561)
at org.apache.oozie.servlet.V1JobsServlet.submitCoordinatorJob(V1JobsServlet.java:228)
at org.apache.oozie.servlet.V1JobsServlet.submitJob(V1JobsServlet.java:95)
at org.apache.oozie.servlet.BaseJobsServlet.doPost(BaseJobsServlet.java:102)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:727)
at org.apache.oozie.servlet.JsonRestServlet.service(JsonRestServlet.java:304)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
at org.apache.oozie.servlet.AuthFilter$2.doFilter(AuthFilter.java:171)
at org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:595)
at org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:554)
at org.apache.oozie.servlet.AuthFilter.doFilter(AuthFilter.java:176)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
at org.apache.oozie.servlet.HostnameFilter.doFilter(HostnameFilter.java:86)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:103)
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:293)
at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:861)
at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:620)
at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:489)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.oozie.coord.CoordinatorJobException: E1004: Expression language evaluation error, Unable to evaluate :${now(0,0)}:
at org.apache.oozie.command.coord.CoordSubmitXCommand.resolveTagContents(CoordSubmitXCommand.java:1003)
at org.apache.oozie.command.coord.CoordSubmitXCommand.resolveIOEvents(CoordSubmitXCommand.java:889)
at org.apache.oozie.command.coord.CoordSubmitXCommand.resolveInitial(CoordSubmitXCommand.java:797)
at org.apache.oozie.command.coord.CoordSubmitXCommand.basicResolveAndIncludeDS(CoordSubmitXCommand.java:606)
at org.apache.oozie.command.coord.CoordSubmitXCommand.submitJob(CoordSubmitXCommand.java:229)
... 32 more
Caused by: java.lang.Exception: Unable to evaluate :${now(0,0)}:
at org.apache.oozie.coord.CoordELFunctions.evalAndWrap(CoordELFunctions.java:723)
at org.apache.oozie.command.coord.CoordSubmitXCommand.resolveTagContents(CoordSubmitXCommand.java:999)
... 36 more
Caused by: javax.servlet.jsp.el.ELException: No function is mapped to the name "now"
at org.apache.commons.el.Logger.logError(Logger.java:481)
at org.apache.commons.el.Logger.logError(Logger.java:498)
at org.apache.commons.el.Logger.logError(Logger.java:525)
at org.apache.commons.el.FunctionInvocation.evaluate(FunctionInvocation.java:150)
at org.apache.commons.el.ExpressionEvaluatorImpl.evaluate(ExpressionEvaluatorImpl.java:263)
at org.apache.commons.el.ExpressionEvaluatorImpl.evaluate(ExpressionEvaluatorImpl.java:190)
at org.apache.oozie.util.ELEvaluator.evaluate(ELEvaluator.java:204)
at org.apache.oozie.coord.CoordELFunctions.evalAndWrap(CoordELFunctions.java:714)
... 37 more
```
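For anyone debugging the same thing: now() is one of the EL extension functions that Falcon registers with Oozie, so "No function is mapped to the name \"now\"" usually points at the Falcon Oozie EL extensions not being wired in. A rough first check, assuming typical HDP sandbox paths (adjust for your install):

```
# Is the Falcon EL extension jar on Oozie's classpath? (path varies by install)
ls /usr/hdp/current/oozie-server/libext/ | grep -i falcon

# Are the extension functions (including "now") registered in oozie-site.xml?
# The oozie.service.ELService.ext.functions.* properties come from the Falcon setup docs.
grep -B 1 -A 3 'oozie.service.ELService.ext.functions' /etc/oozie/conf/oozie-site.xml

# Oozie needs a restart after changing either of these.
```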
Labels:
- Apache Falcon
- Apache Oozie
01-14-2016 05:16 PM
Didn't realize the question was about NiFi... my bad.
01-14-2016 05:11 PM
Assuming you are okay with using Hive for this, you would just create a table with one string column (named something like row), load the whole file into that table, and then run a query that splits each row into columns and inserts the result into another table. Here are more details and a code snippet: https://martin.atlassian.net/wiki/pages/viewpage.action?pageId=21299205
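A minimal sketch of that flow (I'm using line as the column name to sidestep keyword issues; the table names, path, and comma delimiter are all made up for illustration):

```
# Stage every record as one string, then split it into real columns.
cat > split_load.sql <<'EOF'
CREATE TABLE raw_staging (line STRING);
LOAD DATA INPATH '/tmp/input.txt' INTO TABLE raw_staging;

CREATE TABLE parsed AS
SELECT split(line, ',')[0] AS col1,
       split(line, ',')[1] AS col2
FROM raw_staging;
EOF
hive -f split_load.sql
```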
01-14-2016 01:32 PM
1 Kudo
@Akshay Shingote See this question. This issue is not caused by how your workflow.xml is configured, but by the permissions on it. The root cause is that the user you are using to run the workflow does not have permission to read the workflow.xml. Change the permissions on workflow.xml to 755 (or 777) and try again.
Also make sure that every directory in the absolute path containing the workflow.xml has at least 755, so that the user is able to get to the file and then read it. Here is the method that is generating this error.
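For example (the HDFS application path here is illustrative):

```
# Make the workflow definition readable, and every directory on the
# path traversable, for the user submitting the job.
hdfs dfs -chmod 755 /user/someuser/app/workflow.xml
hdfs dfs -chmod 755 /user/someuser/app /user/someuser
```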
01-13-2016 03:49 PM
Also make sure that every directory in the absolute path containing the workflow.xml has at least 755, so that the user is able to get to the file and then read it.
01-13-2016 03:48 PM
2 Kudos
@Hefei Li The root cause of this issue is that the user you are using to run the workflow does not have permission to read the workflow.xml. Change the permissions on workflow.xml to 777 or 755 and try again. Here is the method that is generating this error.
01-13-2016 02:50 PM
1 Kudo
@Amit Jain It seems like a no-brainer for HDFS metadata to be part of Atlas, and I am hopeful that sometime in the future it will be. However, it does not seem to be on the immediate roadmap. There is a patch available in the community that needs more work: https://issues.apache.org/jira/browse/ATLAS-164

So, here are your options as of today:

1) Use a partner product. Here is one that works with HDFS: http://www.waterlinedata.com/prod And here is an article that explains it in more detail: http://hortonworks.com/hadoop-tutorial/manage-your-data-lake-more-efficiently-with-waterline-data-inventory-and-hdp/

2) Build a custom solution for your environment. If I were solving this issue, I would do the following (a rough sketch follows below).

One-time setup:
1- Create an HBase table (using Phoenix) to store the file name and other metadata attributes as needed. There should be a status column in this table (HDFS_METADATA).

Changes to the script that ingests the data:
1- Run an upsert SQL query to add an entry to the HDFS_METADATA table with status = P (Pending).
2- Copy the file to HDFS.
3- Run another query to update the status to C (Complete).

This HBase table can be used for querying metadata for any file. Also, here is a visualization tool that lets you see HDFS disk usage. If you go down the path of building something custom, you may be able to make use of this to make the output really interesting: https://github.com/tarnfeld/hdfs-du Hope this helps.
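As a rough illustration of option 2, the Phoenix side could look like this. The table name is from above, but the columns, file paths, and ZooKeeper quorum are made up, and the sqlline.py path is just the usual HDP location:

```
# One-time setup plus the three ingest steps, driven from the shell.
cat > hdfs_metadata.sql <<'EOF'
CREATE TABLE IF NOT EXISTS HDFS_METADATA (
  file_path VARCHAR PRIMARY KEY,
  ingested_by VARCHAR,
  status CHAR(1)  -- 'P' = pending, 'C' = complete
);
UPSERT INTO HDFS_METADATA (file_path, ingested_by, status)
  VALUES ('/data/incoming/emails.csv', 'etl', 'P');
EOF
/usr/hdp/current/phoenix-client/bin/sqlline.py zk1:2181:/hbase hdfs_metadata.sql

hdfs dfs -put emails.csv /data/incoming/    # step 2: copy the file to HDFS

# Step 3: Phoenix UPSERT is insert-or-update on the primary key,
# so re-upserting the same file_path flips the status to complete.
echo "UPSERT INTO HDFS_METADATA (file_path, ingested_by, status)
  VALUES ('/data/incoming/emails.csv', 'etl', 'C');" > complete.sql
/usr/hdp/current/phoenix-client/bin/sqlline.py zk1:2181:/hbase complete.sql
```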