11-27-2017
02:57 PM
We have manually installed Metron 0.4.1 on Ubuntu 14 and are able to access the UIs. The Alerts UI throws a 404 in the browser console (see alert-ui-issues.png). The Management UI and the REST API Swagger UI work fine.
11-27-2017
02:31 PM
@rmerriman A couple of reasons:
1. Installing manually helps us understand Metron internals better, along with the different configurations and scripts involved.
2. We tried the Ambari mpack, but on Ubuntu it does not work because the Python scripts use rpm and yum to execute commands, and a few of those commands are not supported on Ubuntu.
All the installation documents target CentOS except the one below:
https://community.hortonworks.com/articles/88843/manually-installing-apache-metron-on-ubuntu-1404.html
FYI, the attached screenshot (mpack-issue.png) shows one more issue while installing the Elasticsearch component that ships with the mpack.
11-27-2017
01:28 PM
The following settings in rest-application.yaml fixed the issue:

parser:
  script.path: /usr/metron/0.4.1/bin/start_parser_topology.sh
  topology.options:
enrichment:
  script.path: /usr/metron/0.4.1/bin/start_enrichment_topology.sh
indexing:
  script.path: /usr/metron/0.4.1/bin/start_elasticsearch_topology.sh
11-27-2017
01:24 PM
Thanks @asubramanian,
I have cleared the existing Elasticsearch indices. We installed Metron 0.4.1 manually on Ubuntu 14 per the steps in the URL below:
https://community.hortonworks.com/articles/88843/manually-installing-apache-metron-on-ubuntu-1404.html
I uploaded the Elasticsearch templates into ES and executed the sensor stubs. Now it is working.
11-25-2017
12:34 PM
I have installed Metron 0.4.x on Ubuntu 14 and started the REST, Metron Management, and Alerts UIs, but the Alerts UI is always empty for any search criteria. Is there any guideline for using the Alerts UI? Note: the data is available in Elasticsearch.
11-25-2017
12:29 PM
I can start the Metron REST, Management, and Alerts UIs on Ubuntu 14. The problem now is that starting a parser topology from the Metron Management UI does not work; it triggers a REST API call to start the topology. I have updated all the required properties in rest_application.yml except PARSER_TOPOLOGY_OPTIONS. Example: -k $METRON_KAFKA -z $METRON_ZK -s dummy. I can set -k and -z, but the parser name (dummy) should be dynamic and I don't know how to specify that. Please help me with this.
Error:
Nov 25, 2017 12:23:49 PM org.apache.catalina.core.StandardWrapperValve invoke
SEVERE: Servlet.service() for servlet [dispatcherServlet] in context with path [] threw exception [Request processing failed; nested exception is java.lang.IllegalArgumentException: Could not resolve placeholder 'PARSER_TOPOLOGY_OPTIONS' in string value "${PARSER_TOPOLOGY_OPTIONS}"] with root cause
java.lang.IllegalArgumentException: Could not resolve placeholder 'PARSER_TOPOLOGY_OPTIONS' in string value "${PARSER_TOPOLOGY_OPTIONS}"
at org.springframework.util.PropertyPlaceholderHelper.parseStringValue(PropertyPlaceholderHelper.java:174)
at org.springframework.util.PropertyPlaceholderHelper.replacePlaceholders(PropertyPlaceholderHelper.java:126)
at org.springframework.core.env.AbstractPropertyResolver.doResolvePlaceholders(AbstractPropertyResolver.java:219)
at org.springframework.core.env.AbstractPropertyResolver.resolveRequiredPlaceholders(AbstractPropertyResolver.java:193)
at org.springframework.core.env.AbstractPropertyResolver.resolveNestedPlaceholders(AbstractPropertyResolver.java:210)
at org.springframework.core.env.PropertySourcesPropertyResolver.getProperty(PropertySourcesPropertyResolver.java:83)
at org.springframework.core.env.PropertySourcesPropertyResolver.getProperty(PropertySourcesPropertyResolver.java:61)
at org.springframework.core.env.AbstractEnvironment.getProperty(AbstractEnvironment.java:530)
at org.apache.metron.rest.service.impl.StormCLIWrapper.getParserStartCommand(StormCLIWrapper.java:124)
at org.apache.metron.rest.service.impl.StormCLIWrapper.startParserTopology(StormCLIWrapper.java:54)
Thanks, Uvaraj.S
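For reference, the resolution posted on 11-27-2017 above amounts to giving the parser section an explicit (even empty) topology.options entry in the REST config, so the ${PARSER_TOPOLOGY_OPTIONS} placeholder resolves. A minimal sketch, assuming the 0.4.1 install layout:

parser:
  script.path: /usr/metron/0.4.1/bin/start_parser_topology.sh
  topology.options: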
11-24-2017
02:58 PM
We have followed the steps in the link below to install Metron 0.3.x and 0.4.1 on Ubuntu 14; we tried a couple of samples and they work.
https://community.hortonworks.com/articles/88843/manually-installing-apache-metron-on-ubuntu-1404.html
But there are no clear instructions for starting the Metron Management, Alerts, and REST APIs on Ubuntu. To start the Metron Alerts UI we run:
./start_alerts_ui
We have converted the RPM to DEB using the alien command, but it does not install. Directions for accessing the Metron UI on Ubuntu, like the steps provided for CentOS, would be helpful. Also, ./metron-rest start asks for a password, but we don't know what it is. Please help me with this. Thanks, Uvaraj
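For context, an RPM-to-DEB conversion with alien typically looks like the sketch below; the package file names are illustrative, and --scripts asks alien to carry over any install scripts the RPM defines:

sudo alien --to-deb --scripts metron-alerts-0.4.1.noarch.rpm
sudo dpkg -i metron-alerts_0.4.1-2_all.deb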
11-24-2017
02:46 PM
Thanks @Jay Kumar SenSharma. I have fixed the issue and the Elasticsearch indexing topology is now working. The issue was that in the elasticsearch.properties file, topology.auto-credentials=[''] ships with an empty-string value. If Kerberos is enabled, this has to be set to AutoTGT. If not, simply removing the value fixes the issue:
topology.auto-credentials=
instead of
topology.auto-credentials=['']
https://github.com/apache/metron/blob/Metron_0.4.1/metron-platform/metron-elasticsearch/src/main/config/elasticsearch.properties
In all the other properties files the value for topology.auto-credentials is empty; only elasticsearch.properties differs. Suggestion: by default it should be empty, as in the other properties files.
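For reference, a sketch of the two variants; the AutoTGT class path is an assumption based on Storm 1.x packaging:

# non-kerberized cluster: leave the value empty
topology.auto-credentials=
# kerberized cluster: let workers auto-renew the ticket, e.g.
topology.auto-credentials=['org.apache.storm.security.auth.kerberos.AutoTGT']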
11-24-2017
07:27 AM
Thanks @Jay Kumar SenSharma. I have added the "topology.auto-credentials" property in Ambari, but no luck. Do we first need to configure Storm for that, per "Configuring Authorization for Storm"?
https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.5.3/bk_command-line-installation/content/ref-736234c9-607b-429b-85ab-5dfd149abcb3.1.html
Also, will Ambari take care of "Please check the classes that are set to this property are included in the classpath."? Please guide me on this. Thanks
11-24-2017
05:33 AM
We tried Metron 0.3.1 on an Ubuntu HDP cluster based on the steps provided in the link below:
https://community.hortonworks.com/articles/88843/manually-installing-apache-metron-on-ubuntu-1404.html
It works. But Metron 0.3.x does not have the Metron UI, so we decided to move to Metron 0.4.x. We followed the same steps; all the topologies start except the Elasticsearch indexing topology, which fails with a strange ClassNotFoundException without much detail:
Exception in thread "main" java.lang.RuntimeException: java.lang.ClassNotFoundException:
at org.apache.storm.security.auth.AuthUtils.GetAutoCredentials(AuthUtils.java:211)
at org.apache.storm.StormSubmitter.populateCredentials(StormSubmitter.java:92)
at org.apache.storm.StormSubmitter.submitTopologyAs(StormSubmitter.java:214)
at org.apache.storm.StormSubmitter.submitTopology(StormSubmitter.java:310)
at org.apache.storm.flux.Flux.runCli(Flux.java:171)
at org.apache.storm.flux.Flux.main(Flux.java:98)
Caused by: java.lang.ClassNotFoundException:
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:264)
at org.apache.storm.security.auth.AuthUtils.GetAutoCredentials(AuthUtils.java:203)
Please find the attached log (elastic-search.txt) for reference. The Metron UI also cannot be started on Ubuntu 14.x; direct me to any post related to that. Thanks, Uvaraj.S
07-04-2017
04:21 PM
I have created a Spark Streaming application which receives data from a Kafka topic.
Batch interval: 5 seconds
Messages available in the topic: 10
Processing logic takes: 1 minute
Finally, the output is converted into JSON and published to another Kafka topic.
Problem 1: The final output is not being published to the other Kafka topic.

// Imports assumed by the snippet; logger is a field on the enclosing class
import java.util.HashMap
import org.apache.kafka.clients.producer.{KafkaProducer, ProducerConfig, ProducerRecord}
import org.apache.spark.rdd.RDD

def publishIntoKafka(predictedDataRDD: RDD[String], kafkaBroker: String): Unit = {
  logger.info(s"Publishing into Kafka................")
  // Send to Kafka: one producer per partition, created on the executors
  predictedDataRDD.foreachPartition(partition => {
    // Print statements in this section are shown in the executor's stdout logs
    val kafkaOpTopic = "anomalyDetectedTest"
    val props = new HashMap[String, Object]()
    props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, kafkaBroker)
    props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
      "org.apache.kafka.common.serialization.StringSerializer")
    props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
      "org.apache.kafka.common.serialization.StringSerializer")
    val producer = new KafkaProducer[String, String](props)
    partition.foreach(record => {
      val data = record.toString
      // As a debugging technique, users can write to DBFS to verify that records are being written out
      val message = new ProducerRecord[String, String](kafkaOpTopic, null, data)
      producer.send(message)
    })
    producer.close()
  })
}

Problem 2: If I publish a message again after a certain period, say after 10 seconds, that message is not consumed by Spark Streaming. The Spark UI Jobs and Stages pages remain the same and nothing happens. I have verified in the Resource Manager UI that the Spark Streaming application is running on YARN, and I did not find any errors in the YARN application log:
yarn logs -applicationId application_1499166717658_0003
Help me if anyone knows the solution, and let me know if there are any mistakes on my side. Thanks, Uvaraj.S
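For context, a minimal sketch of how this function is typically invoked from the streaming side; the stream name predictedStream is illustrative:

predictedStream.foreachRDD { rdd =>
  publishIntoKafka(rdd, kafkaBroker)
}

(KafkaProducer.send is asynchronous; the producer.close() call above flushes pending records before the partition task ends.)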
07-04-2017
04:12 PM
I have created a Spark Streaming application which consumes messages from a Kafka topic. After starting the application in YARN cluster mode, the Spark UI should show the Streaming tab with the streaming metrics, but it is not available. Is there any configuration to enable the Streaming tab? I can see the tab with Spark 2.0 on my local machine. Thanks, Uvaraj.S
03-27-2017
05:04 AM
Thanks for the reply, @SBandaru. We have verified the proxy settings and they look fine per the provided link. @Jay SenSharma We have applied the ACLs as you mentioned, but the issue is still there. Please let us know if there is any other way to fix this issue.
03-24-2017
02:55 PM
I have executed Hive queries, but the Tez view says "No data available". When I click the "History" link in the RM, it redirects to the Tez view, which shows the error below.
Details:
{
  "trace": "{\"exception\":\"NotFoundException\",\"message\":\"java.lang.Exception: Timeline entity { id: tez_application_1490359925913_0002, type: TEZ_APPLICATION } is not found\",\"javaClassName\":\"org.apache.hadoop.yarn.webapp.NotFoundException\"}",
  "message": "Failed to fetch results by the proxy from url: http://localhost:8188/ws/v1/timeline/TEZ_APPLICATION/tez_application_1490359925913_0002?_=1490365592944&user.name=admin",
  "status": 404
}
Request:
{
  "adapterName": "app",
  "url": "http://localhost:8080/api/v1/views/TEZ/versions/0.7.0.2.5.3.0-136/instances/TEZ_CLUSTER_INSTANCE/resources/atsproxy/ws/v1/timeline/TEZ_APPLICATION/tez_application_1490359925913_0002",
  "responseHeaders": {
    "User": "admin",
    "Server": "Jetty(8.1.19.v20160209)",
    "X-Frame-Options": "SAMEORIGIN",
    "Content-Length": "489",
    "X-XSS-Protection": "1; mode=block",
    "Content-Type": "application/json"
  }
}
Stack:
Error: Adapter operation failed
  at ember$data$lib$adapters$errors$AdapterError.EmberError (http://localhost:8080/views/TEZ/0.7.0.2.5.3.0-136/TEZ_CLUSTER_INSTANCE/assets/vendor.js:24586:21)
  at ember$data$lib$adapters$errors$AdapterError (http://localhost:8080/views/TEZ/0.7.0.2.5.3.0-136/TEZ_CLUSTER_INSTANCE/assets/vendor.js:80222:50)
  at Class.handleResponse (http://localhost:8080/views/TEZ/0.7.0.2.5.3.0-136/TEZ_CLUSTER_INSTANCE/assets/vendor.js:81517:16)
  at Class.hash.error (http://localhost:8080/views/TEZ/0.7.0.2.5.3.0-136/TEZ_CLUSTER_INSTANCE/assets/vendor.js:81597:33)
  at fire (http://localhost:8080/views/TEZ/0.7.0.2.5.3.0-136/TEZ_CLUSTER_INSTANCE/assets/vendor.js:3320:30)
  at Object.fireWith [as rejectWith] (http://localhost:8080/views/TEZ/0.7.0.2.5.3.0-136/TEZ_CLUSTER_INSTANCE/assets/vendor.js:3432:7)
  at done (http://localhost:8080/views/TEZ/0.7.0.2.5.3.0-136/TEZ_CLUSTER_INSTANCE/assets/vendor.js:8487:14)
  at XMLHttpRequest.<anonymous> (http://localhost:8080/views/TEZ/0.7.0.2.5.3.0-136/TEZ_CLUSTER_INSTANCE/assets/vendor.js:8826:9)
I have checked the application log and it seems fine, with no errors:
yarn logs -applicationId application_1490359925913_0002
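To narrow this down, one can query the Timeline Server directly for the entity named in the error; the host and entity ID below are taken from the error message above:

curl "http://localhost:8188/ws/v1/timeline/TEZ_APPLICATION/tez_application_1490359925913_0002?user.name=admin"

If this also returns a 404, the Timeline Server never received the Tez events, which points at the Timeline Server / Tez history logging configuration rather than at the view itself.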
01-30-2017
04:46 AM
@Jasper Yes, I have checked the Knox "Advanced topology"; it seems fine, and I can see the Knox SSO login screen when landing on the Ranger Admin UI. The problem is that after entering a valid username and password, it should land on the Ranger Service Manager page, but it does not, and there is no error in the log.
01-30-2017
04:44 AM
@apappu The link you provided is about UI access through the Knox Gateway, but the question I posted is about Knox SSO, so please let me know if there is any resource about Knox SSO. I have followed the link below, but the login redirect is not working:
https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.5.0/bk_security/content/setting_up_knox_sso_for_ranger.html
Thanks, Uvaraj.S
01-27-2017
06:57 AM
I have installed and started both Ranger and Knox in our cluster. The Knox Ranger plugin is enabled in Ranger, and Knox SSO is enabled in Ranger's advanced configuration settings. Knox SSO is applied correctly on the Ranger Admin UI, but I cannot log in using the username and password from the demo LDAP server (user: guest, pass: guest-password). I have checked the Knox gateway.log and could not find any error:
2017-01-27 06:48:29,286 INFO service.knoxsso (WebSSOResource.java:getAuthenticationToken(179)) - About to redirect to original URL
I get the above message after clicking "Sign in" on the Knox SSO login page. Knox tries to redirect to the original URL (the Ranger URL), but nothing happens after that; I never see the main (Service Manager) page in Ranger via Knox SSO. Thanks, Uvaraj.S
01-23-2017
06:57 AM
After configuring NameNode HA in Ambari, I installed the Knox service via Ambari. When I try to access WebHDFS using curl:
curl -iku guest:guest-password -X GET 'https://knok-server-host:8443/gateway/default/webhdfs/v1/?op=LISTSTATUS'
I get the error below:
HTTP/1.1 403 Forbidden
Date: Mon, 23 Jan 2017 06:51:22 GMT
Set-Cookie: JSESSIONID=xpjz6kvusr321aosq4ijbgcdr; Path=/gateway/default;Secure;HttpOnly
Expires: Thu, 01 Jan 1970 00:00:00 GMT
Set-Cookie: rememberMe=deleteMe; Path=/gateway/default; Max-Age=0; Expires=Sun, 22-Jan-2017 06:51:22 GMT
Cache-Control: no-cache
Expires: Mon, 23 Jan 2017 06:51:22 GMT
Date: Mon, 23 Jan 2017 06:51:22 GMT
Pragma: no-cache
Expires: Mon, 23 Jan 2017 06:51:22 GMT
Date: Mon, 23 Jan 2017 06:51:22 GMT
Pragma: no-cache
Content-Type: application/json; charset=UTF-8
X-FRAME-OPTIONS: SAMEORIGIN
Server: Jetty(6.1.26.hwx)
Content-Length: 179

{"RemoteException":{"exception":"StandbyException","javaClassName":"org.apache.hadoop.ipc.StandbyException","message":"Operation category READ is not supported in state standby"}}

Are there any changes to be made in the HDFS configuration files? Note: I installed Knox using Ambari, and I expect Ambari to take care of the configuration updates.
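For reference, the StandbyException indicates the request reached the standby NameNode. With NameNode HA, the usual remedy is to enable Knox's HA provider for WEBHDFS and list both NameNodes in the topology; a sketch, with host names illustrative (the provider element belongs in the topology's gateway section, the service element alongside the other services):

<provider>
   <role>ha</role>
   <name>HaProvider</name>
   <enabled>true</enabled>
   <param>
      <name>WEBHDFS</name>
      <value>maxFailoverAttempts=3;failoverSleep=1000;maxRetryAttempts=300;retrySleep=1000;enabled=true</value>
   </param>
</provider>

<service>
   <role>WEBHDFS</role>
   <url>http://nn1-host:50070/webhdfs</url>
   <url>http://nn2-host:50070/webhdfs</url>
</service>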
01-21-2017
11:37 AM
I have created read-only and write-type datasources in Falcon using the MySQL database manager. When I test the connection on the datasource, it succeeds. Note: I uploaded the MySQL driver into the Oozie share path as mentioned in this link:
https://falcon.apache.org/0.9/ImportExport.html
I created the feed to import, but the job action failed in Oozie. I looked into the YARN log and found the error:
java.lang.RuntimeException: Could not load db driver class: com.mysql.jdbc.Driver
at org.apache.sqoop.manager.SqlManager.makeConnection(SqlManager.java:875)
at org.apache.sqoop.manager.GenericJdbcManager.getConnection(GenericJdbcManager.java:52)
at org.apache.sqoop.manager.SqlManager.execute(SqlManager.java:763)
at org.apache.sqoop.manager.SqlManager.execute(SqlManager.java:786)
at org.apache.sqoop.manager.SqlManager.getColumnInfoForRawQuery(SqlManager.java:289)
at org.apache.sqoop.manager.SqlManager.getColumnTypesForRawQuery(SqlManager.java:260)
at org.apache.sqoop.manager.SqlManager.getColumnTypes(SqlManager.java:246)
at org.apache.sqoop.manager.ConnManager.getColumnTypes(ConnManager.java:328)
at org.apache.sqoop.orm.ClassWriter.getColumnTypes(ClassWriter.java:1853)
at org.apache.sqoop.orm.ClassWriter.generate(ClassWriter.java:1653)
at org.apache.sqoop.tool.CodeGenTool.generateORM(CodeGenTool.java:107)
at org.apache.sqoop.tool.ImportTool.importTable(ImportTool.java:488)
at org.apache.sqoop.tool.ImportTool.run(ImportTool.java:615)
at org.apache.sqoop.Sqoop.run(Sqoop.java:147)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
at org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:183)
at org.apache.sqoop.Sqoop.runTool(Sqoop.java:225)
at org.apache.sqoop.Sqoop.runTool(Sqoop.java:234)
at org.apache.sqoop.Sqoop.main(Sqoop.java:243)
at org.apache.oozie.action.hadoop.SqoopMain.runSqoopJob(SqoopMain.java:202)
at org.apache.oozie.action.hadoop.SqoopMain.run(SqoopMain.java:182)
at org.apache.oozie.action.hadoop.LauncherMain.run(LauncherMain.java:51)
at org.apache.oozie.action.hadoop.SqoopMain.main(SqoopMain.java:48)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
I found the parameters in the YARN log. Sqoop command arguments:
import
--connect
jdbc:mysql://instance_IP/test
--table
db_raw_data
--username
sqoop_user
--password
sqoop
--num-mappers
1
--delete-target-dir
--target-dir
hdfs://cluster_URI/user/tsldp/falcon/demo/mysql-feed-import-filesystem/2017-01-21-11-00
The driver jar we configured for the MySQL datasource is not passed as an argument. It looks like Falcon Sqoop import and export won't work from the Falcon UI because of this issue. Thanks, Uvaraj.S
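For context, the usual workaround when Sqoop run from an Oozie action cannot load the MySQL driver is to place the connector jar in the Oozie sharelib's sqoop directory and refresh the sharelib; a sketch, with the jar version, sharelib timestamp, and Oozie URL illustrative:

hdfs dfs -put mysql-connector-java-5.1.40-bin.jar /user/oozie/share/lib/lib_20170101000000/sqoop/
oozie admin -oozie http://oozie-host:11000/oozie -sharelibupdate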
01-21-2017
09:30 AM
I was trying the use case from the URL below with Falcon:
https://dzone.com/articles/apache-falcon-defining-a-process-with-multiple-hiv
It uses 2 input Hive table feeds and 1 output Hive table feed. Use case: take the table name and count the number of records, based on the feed_date partition column, into the output table. Issue: once I submit the input and output Hive table feeds, data/records are truncated from both input tables. What I want is for the input table records not to be evicted/truncated. Is there any better use case like this to practice Falcon with Hive? I cannot find use cases for Falcon other than the Hortonworks tutorials. Thanks, Uvaraj
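For context, instance eviction in Falcon is driven by each feed's retention element, so if inputs are being truncated it is worth checking that the input feeds' retention limit is long enough; a sketch of the relevant feed XML, with the cluster name, dates, and limit value illustrative:

<clusters>
   <cluster name="primaryCluster" type="source">
      <validity start="2016-01-01T00:00Z" end="2099-12-31T00:00Z"/>
      <retention limit="months(9999)" action="delete"/>
   </cluster>
</clusters>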
01-21-2017
04:42 AM
@peeyush Do we need to create the _success file under the timestamped directory manually?
01-21-2017
04:19 AM
Yes, I have created the snapshot using HDFS commands and then submitted the extension, but the issue still exists in Falcon.
01-20-2017
01:13 PM
The Falcon snapshot extension does not work when I try the use case from the link below:
https://community.hortonworks.com/articles/63379/hdfs-snapshots-based-replication-using-apache-falc.html
I have enabled snapshots on the directory:
hdfs dfsadmin -allowSnapshot /tmp/falcon/HDFSSnapshot/target
Even so, Falcon says "does not allow snapshots":
FS impersonating user ambari-qa (HadoopClientFactory:244)
2017-01-20 14:03:27,307 ERROR - [139399371@qtp-790021811-2818 - 0f1adda7-4e8d-46b1-bb71-cc36532a71e8:ambari-qa:POST//extension/submit/HDFS-SNAPSHOT-MIRRORING] ~ Error when submitting extension job: (ExtensionManager:309)
org.apache.falcon.FalconException: sourceSnapshotDir /tmp/source does not allow snapshots.
at org.apache.falcon.extensions.mirroring.hdfsSnapshot.HdfsSnapshotMirroringExtension.validate(HdfsSnapshotMirroringExtension.java:90)
at org.apache.falcon.extensions.Extension.getEntities(Extension.java:73)
at org.apache.falcon.resource.extensions.ExtensionManager.generateEntities(ExtensionManager.java:487)
at org.apache.falcon.resource.extensions.ExtensionManager.submit(ExtensionManager.java:304)
at sun.reflect.GeneratedMethodAccessor196.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at com.sun.jersey.spi.container.JavaMethodInvokerFactory$1.invoke(JavaMethodInvokerFactory.java:60)
at com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$TypeOutInvoker._dispatch(AbstractResourceMethodDispatchProvider.java:185)
at com.sun.jersey.server.impl.model.method.dispatch.ResourceJavaMethodDispatcher.dispatch(ResourceJavaMethodDispatcher.java:75)
at com.sun.jersey.server.impl.uri.rules.HttpMethodRule.accept(HttpMethodRule.java:288)
at com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
at com.sun.jersey.server.impl.uri.rules.ResourceClassRule.accept(ResourceClassRule.java:108)
at com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
at com.sun.jersey.server.impl.uri.rules.RootResourceClassesRule.accept(RootResourceClassesRule.java:84)
at com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1469)
at com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1400)
at com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1349)
at com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1339)
at com.sun.jersey.spi.container.servlet.WebComponent.service(WebComponent.java:416)
at com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:537)
at com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:699)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
at org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:511)
at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1221)
at org.apache.falcon.security.FalconAuthorizationFilter.doFilter(FalconAuthorizationFilter.java:108)
at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
at org.apache.falcon.security.FalconCSRFFilter.doFilter(FalconCSRFFilter.java:78)
at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
at org.apache.falcon.security.FalconAuthenticationFilter$2.doFilter(FalconAuthenticationFilter.java:188)
at org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:614)
at org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:573)
at org.apache.falcon.security.FalconAuthenticationFilter.doFilter(FalconAuthenticationFilter.java:197)
at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
at org.apache.falcon.security.FalconAuditFilter.doFilter(FalconAuditFilter.java:64)
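Worth noting: the exception names the source directory (/tmp/source), while the command above enabled snapshots on the target path; snapshots generally need to be allowed on both ends of the mirror, e.g.:

hdfs dfsadmin -allowSnapshot /tmp/source
hdfs dfsadmin -allowSnapshot /tmp/falcon/HDFSSnapshot/target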
01-19-2017
05:19 AM
I have installed Falcon and am practicing with it. I got the following error in Oozie for one of the processes: "_success is Missing. _success, Exists? :false". I then tried to update the Availability Flag to be empty, but once it is saved I still see _success in the Availability Flag text box. The Falcon web UI does not update the Availability Flag; it always retains _success. Note: I tried editing the XML in the Falcon UI as well, but no luck.
01-19-2017
04:27 AM
The Availability Flag _success cannot be removed via the Falcon web UI from the feed I created. HDP 2.5, Falcon 0.10.0. Is this an issue with the Falcon web UI? Thanks, Uvaraj.S
01-18-2017
01:20 PM
I have tried the sample provided in the tutorial:
http://hortonworks.com/hadoop-tutorial/defining-processing-data-end-end-data-pipeline-apache-falcon/
cleanEmailProcessTest has been in the RUNNING state for a long time. The following are the log messages from the "Coord Job Log":
2017-01-18 13:15:56,700 INFO CoordOldInputDependency:520 - SERVER[l] USER[-] GROUP[-] TOKEN[-] APP[-] JOB[0000050-170118095241789-oozie-oozi-C] ACTION[0000050-170118095241789-oozie-oozi-C@2] [0000050-170118095241789-oozie-oozi-C@2]::ActionInputCheck:: In checkListOfPaths: hdfs://cluster/user/tsldp/falcon/demo/primary/input/enron/2017-01-18-11/_success is Missing.
2017-01-18 13:15:56,701 INFO CoordOldInputDependency:520 - SERVER[] USER[-] GROUP[-] TOKEN[-] APP[-] JOB[0000050-170118095241789-oozie-oozi-C] ACTION[0000050-170118095241789-oozie-oozi-C@2] [0000050-170118095241789-oozie-oozi-C@2]::ActionInputCheck:: File:hdfs://cluster/user/tsldp/falcon/demo/primary/input/enron/2017-01-18-11/_success, Exists? :false
Note: the folder hdfs://cluster/user/tsldp/falcon/demo/primary/input/enron/2017-01-18-11/ does not have a _success file. Is the _success file created automatically, or do we need to create it? I am not sure what is causing the issue.
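For reference, the coordinator is only checking for the availability flag named in the feed, so once the hour's data is in place the flag can be created manually as a zero-byte file; the path below is the one from the log:

hdfs dfs -touchz hdfs://cluster/user/tsldp/falcon/demo/primary/input/enron/2017-01-18-11/_success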
01-18-2017
12:34 PM
yarn logs -applicationId <application_ID> helped me identify the issue: it was a permission issue for the user, and I have fixed it.
01-18-2017
10:35 AM
But there is no error message in the stderr log. Is there any workaround to fix this issue?
01-18-2017
10:33 AM
Yes, I got the error below from Oozie:
2017-01-18 09:19:56,597 WARN ShellActionExecutor:523 - SERVER[] USER[ambari-qa] GROUP[-] TOKEN[] APP[shell-wf] JOB[0000055-170117070035733-oozie-oozi-W] ACTION[0000055-170117070035733-oozie-oozi-W@shell-node] Launcher ERROR, reason: Main class [org.apache.oozie.action.hadoop.ShellMain], exit code [1]