Member since: 11-04-2017
Posts: 11
Kudos Received: 1
Solutions: 2
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 838 | 05-18-2018 03:45 PM
 | 632 | 05-18-2018 02:07 PM
05-28-2018
03:24 AM
@Shaik Basha In short, yes, you can create custom NiFi processors and deploy them in your enterprise NiFi instance. You can find additional details at the link below: https://cwiki.apache.org/confluence/display/NIFI/Maven+Projects+for+Extensions
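The wiki page above walks through creating a processor bundle with the NiFi Maven archetype; a typical invocation looks like the following (the version numbers are illustrative and should match your NiFi release):

```shell
mvn archetype:generate \
  -DarchetypeGroupId=org.apache.nifi \
  -DarchetypeArtifactId=nifi-processor-bundle-archetype \
  -DarchetypeVersion=1.6.0 \
  -DnifiVersion=1.6.0
```

The generated bundle builds to a .nar file that you copy into NiFi's lib directory and pick up with a restart.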
05-18-2018
03:45 PM
1 Kudo
@Sonny Chee You can send the status message as a dynamic property; it will be added as an HTTP header. - If this answer addressed your question, please take a moment to log in and click the "accept" link on the answer. Thanks, Kiran
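As a sketch, an InvokeHTTP dynamic (user-defined) property maps one-to-one onto a request header; the property name and value here are purely illustrative:

```
# InvokeHTTP dynamic property
#   Property name      Value
    X-Status-Message   processed-ok
# InvokeHTTP then sends the request header:
#   X-Status-Message: processed-ok
```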
05-18-2018
03:01 PM
@Andy Liang The JMS specification states that the acknowledgment mode property determines when a message on the JMS server is deleted. The default is CLIENT_ACKNOWLEDGE mode, which deletes the message on the server after the NiFi session is committed. AUTO_ACKNOWLEDGE mode removes the message on the server as soon as it is delivered to NiFi, but messages can be lost if NiFi restarts before the NiFi session is committed. The last is DUPS_OK_ACKNOWLEDGE, which is similar to CLIENT_ACKNOWLEDGE but acknowledges messages lazily, which can result in duplicate messages being delivered to NiFi. In all these cases the message will be deleted from the JMS queue/topic. - If this answer addressed your question, please take a moment to log in and click the "accept" link on the answer. Thanks, Kiran.
05-18-2018
02:07 PM
@Sudheer K You can certainly consume JSON messages and write them to HDFS. HDFS doesn't impose restrictions on the type of data written to it. - If this answer addressed your question, please take a moment to log in and click the "accept" link on the answer. Thanks, Kiran
05-17-2018
06:52 PM
@Chandan Singh Check whether you can close the connection from IBM MQ; that should trigger processing of the flow files again. Thanks, Kiran.
05-17-2018
04:56 PM
@Abdul Rahim Please use the following code (note that toDF() on an RDD requires importing the Spark implicits):

import spark.implicits._

case class Person(index: Long, item: String, cost: Float, Tax: Float, Total: Float)

val peopleDs = sc.textFile("C:/hcubeapi/test-case1123.txt")
  .map(_.split(",").map(_.trim))
  .map(attributes => Person(attributes(0).toLong, attributes(1), attributes(2).toFloat, attributes(3).toFloat, attributes(4).toFloat))
  .toDF()

peopleDs.createOrReplaceTempView("people")
val res = spark.sql("SELECT * FROM people")
res.show()
05-17-2018
01:47 PM
@Ahmad Debbas You shouldn't lose FlowFiles that are in queues, because the attributes are stored in the FlowFile repository and the content in the content repository. What you see in the queue is just a reference, which will be restored after the restart. You can find additional information at the link below: https://nifi.apache.org/docs/nifi-docs/html/nifi-in-depth.html
05-16-2018
02:22 PM
@Ahmad Debbas Can you please check whether the NiFi JVM is using UTF-8 encoding? If not, here are a couple of approaches: 1. export JAVA_TOOL_OPTIONS=-Dfile.encoding=UTF-8 2. Add an encoding parameter to bootstrap.conf: "java.arg.8=-Dfile.encoding=UTF-8" Please restart NiFi after the update and test whether the FetchFile/GetFile processors are working. Thanks, Kiran.
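The bootstrap.conf change described above would look like the following (the java.arg.8 index is only an example; use the next unused java.arg.N index in your file):

```
# nifi/conf/bootstrap.conf
java.arg.8=-Dfile.encoding=UTF-8
```

After editing, restart NiFi so the JVM picks up the new argument.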
04-19-2018
08:50 PM
@Venkata Sudheer Kumar M A couple of things to note: 1. If the hive-site.xml file was manually copied to the spark2/conf folder, any Spark configuration change made from Ambari might have removed the hive-site.xml. 2. Since the deploy mode is cluster, you need to check that the hive-site.xml and hbase-site.xml files are available under the Spark conf directory on the driver machine, not just on the machine where the spark-submit command was executed.
04-17-2018
02:34 PM
@Venkata Sudheer Kumar M, I'm not sure whether SPARK_YARN_DIST_FILES is a valid spark-env value, but you can pass comma-separated files using the spark.yarn.dist.files Spark property.
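A sketch of the property-based approach (the file paths here are illustrative):

```
# spark-defaults.conf, or pass with --conf on spark-submit
spark.yarn.dist.files=/etc/spark2/conf/hive-site.xml,/etc/hbase/conf/hbase-site.xml
```

The listed files are distributed to the working directory of each YARN container.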
04-09-2018
03:29 PM
@Gayathri Devi, there is no direct method available for detecting outliers, but you can use a quantile approach to determine lower and upper bounds and filter the data. After the data is filtered, you can create ML Pipelines with all the transformations required to run machine learning models (regression, classification, etc.). Here is an example approach:
1. Convert string fields to a numeric representation using StringIndexer.
2. Assemble the string-indexed and numeric fields using VectorAssembler.
3. Create a linear/logistic regression model.
4. Create an ML Pipeline with the StringIndexer stages, the VectorAssembler, and the model, and fit it on the training data.
5. Use the trained model to make predictions on the test data.
6. Create an evaluator and evaluate the predictions made on the test data.
Please note that the above approach is based on the ML library rather than MLlib. Thanks, Kiran
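The quantile-based bound computation can be sketched in plain Scala; in Spark you would typically use DataFrame.stat.approxQuantile instead, but the logic is the same. OutlierFilter and the 1.5*IQR Tukey fence are illustrative choices, not part of the original answer:

```scala
object OutlierFilter {
  // Nearest-rank quantile of a sorted, non-empty sequence;
  // a simplified stand-in for Spark's approxQuantile.
  def quantile(sorted: Seq[Double], q: Double): Double = {
    val idx = (q * (sorted.length - 1)).round.toInt
    sorted(math.max(0, math.min(sorted.length - 1, idx)))
  }

  // Keep values inside [Q1 - 1.5*IQR, Q3 + 1.5*IQR] (Tukey's fence).
  def filterOutliers(xs: Seq[Double]): Seq[Double] = {
    val sorted = xs.sorted
    val q1  = quantile(sorted, 0.25)
    val q3  = quantile(sorted, 0.75)
    val iqr = q3 - q1
    val (lo, hi) = (q1 - 1.5 * iqr, q3 + 1.5 * iqr)
    xs.filter(x => x >= lo && x <= hi)
  }
}
```

Once the data is filtered this way, the StringIndexer, VectorAssembler, and model stages go into a single Pipeline so the same transformations are applied to both training and test data.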