Member since: 11-04-2017
Posts: 11
Kudos Received: 1
Solutions: 2
My Accepted Solutions
Title | Views | Posted |
---|---|---|
 | 1537 | 05-18-2018 03:45 PM |
 | 1295 | 05-18-2018 02:07 PM |
05-18-2018
03:45 PM
1 Kudo
@Sonny Chee You can send the status message as part of the processor's Dynamic Properties; each dynamic property is added to the HTTP headers. If you found this answer has addressed your question, please take a moment to log in and click the "accept" link on the answer. Thanks, Kiran
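As a sketch (assuming an InvokeHTTP processor; the property name and Expression Language value below are hypothetical), a dynamic property's name becomes the header name and its value becomes the header value:

```
# Hypothetical dynamic property on InvokeHTTP:
#   property name  -> HTTP header name
#   property value -> HTTP header value
X-Status-Message = ${status.message}
```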
05-18-2018
02:07 PM
@Sudheer K You can certainly consume JSON messages and write them to HDFS; HDFS doesn't impose any particular format on the data written to it. Thanks, Kiran
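The original thread isn't visible here, but as one illustration of the point, a minimal Spark sketch (assumes a running SparkSession named `spark`; the paths are hypothetical):

```scala
// Read JSON messages from a landing directory and write them to HDFS.
// HDFS stores whatever bytes it is given; the JSON structure is only
// interpreted by the reader, never enforced by HDFS itself.
val events = spark.read.json("hdfs:///tmp/landing/events")
events.write.mode("append").json("hdfs:///tmp/raw/events")
```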
05-17-2018
04:56 PM
@Abdul Rahim Please use the following code:

case class Person(index: Long, item: String, cost: Float, Tax: Float, Total: Float)

// toDF() needs the implicit conversions from the active SparkSession
import spark.implicits._

val peopleDs = sc.textFile("C:/hcubeapi/test-case1123.txt")
  .map(_.split(",").map(_.trim))
  .map(attributes => Person(attributes(0).toLong, attributes(1), attributes(2).toFloat, attributes(3).toFloat, attributes(4).toFloat))
  .toDF()

peopleDs.createOrReplaceTempView("people")
val res = spark.sql("SELECT * FROM people")
res.show()
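The per-line parsing in that job is plain Scala, so it can be checked without a Spark cluster (the sample input line below is made up):

```scala
case class Person(index: Long, item: String, cost: Float, Tax: Float, Total: Float)

// Same split / trim / convert steps as the map() in the Spark job.
def parseLine(line: String): Person = {
  val a = line.split(",").map(_.trim)
  Person(a(0).toLong, a(1), a(2).toFloat, a(3).toFloat, a(4).toFloat)
}

val p = parseLine("1, widget, 9.99, 0.80, 10.79")
println(p.Total) // 10.79
```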
05-17-2018
01:47 PM
@Ahmad Debbas You shouldn't lose FlowFiles that are sitting in queues: their attributes are stored in the FlowFile repository and their content in the content repository. What you see in the queue is just a reference, which will be restored after the restart. You can find additional information in the link below: https://nifi.apache.org/docs/nifi-docs/html/nifi-in-depth.html
05-16-2018
02:22 PM
@Ahmad Debbas Can you please check whether the NiFi JVM is using UTF-8 encoding? If it isn't, here are a couple of approaches:
1. export JAVA_TOOL_OPTIONS=-Dfile.encoding=utf8
2. Add an encoding parameter to bootstrap.conf: java.arg.8=-Dfile.encoding=UTF8
Please restart NiFi after the update and test whether the FetchFile/GetFile processors are working. Thanks, Kiran.
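To confirm what the JVM actually picked up after the restart, a quick check in plain Scala (the same calls exist in Java; no NiFi APIs involved):

```scala
// Prints the JVM's default charset; after the bootstrap.conf change
// this should report UTF-8.
println(java.nio.charset.Charset.defaultCharset().name())
println(System.getProperty("file.encoding"))
```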
04-19-2018
08:50 PM
@Venkata Sudheer Kumar M A couple of things to note:
1. If the hive-site.xml file was manually copied to the spark2/conf folder, any Spark configuration change made from Ambari might have removed that hive-site.xml.
2. Since the deploy mode is cluster, you need to check that hive-site.xml and hbase-site.xml are available under the Spark conf directory on the driver machine, not on the machine where the spark-submit command was executed.
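One common way to guarantee that both files reach the driver in cluster mode is to ship them with the job via --files (a sketch; the paths and class name below are placeholders):

```shell
spark-submit \
  --master yarn \
  --deploy-mode cluster \
  --files /etc/hive/conf/hive-site.xml,/etc/hbase/conf/hbase-site.xml \
  --class com.example.MyApp \
  myapp.jar
```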
04-17-2018
02:34 PM
@Venkata Sudheer Kumar M, I'm not sure SPARK_YARN_DIST_FILES is a valid spark-env variable, but you can pass a comma-separated list of files using the spark.yarn.dist.files Spark property.
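For example, the property can be set on the command line (a sketch; the file paths and class name are placeholders):

```shell
spark-submit \
  --master yarn \
  --conf spark.yarn.dist.files=/tmp/app.conf,/tmp/lookup.csv \
  --class com.example.MyApp \
  myapp.jar
```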