Member since: 10-09-2015
Posts: 86
Kudos Received: 179
Solutions: 8

My Accepted Solutions
Title | Views | Posted
---|---|---
| 25207 | 12-29-2016 05:19 PM
| 1861 | 12-17-2016 06:05 PM
| 14782 | 08-24-2016 03:08 PM
| 2165 | 07-14-2016 02:35 AM
| 3996 | 07-08-2016 04:29 PM
08-12-2016
06:22 PM
Hi @Deep Doradla, can you verify that the SSL certificate details are provided in step 7, and that all the tags shown in step 7 above are in place? Thanks, Jobin
08-03-2016
03:38 AM
@Timothy Spann, sorry for the late reply; I didn't get a notification since I wasn't tagged. Attaching it here: yarn-application-monitor.xml. Thanks, Jobin George
07-25-2016
06:07 AM
1 Kudo
Hi, if you think you will have challenges getting those processors to work, you have the option of a RemoteProcessGroup, using the NiFi site-to-site protocol to transfer data from your remote box to your cluster. For that you need an instance of NiFi running on the remote server, and the cluster must be accessible from it. The flow can be:

@remote-box: GetFile --> RemoteProcessGroup [with cluster URL]

@nifi-cluster: Input Port --> [transformation-you-need] --> [upload-to-s3]

You can refer to the docs below to set up the RPG:

https://nifi.apache.org/docs/nifi-docs/html/user-guide.html#Remote_Group_Transmission

https://docs.hortonworks.com/HDPDocuments/HDF1/HDF-1.1.1/bk_UserGuide/content/site-to-site.html

Also see the Site-to-Site Properties section in the same Hortonworks guide. Thanks
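On the receiving (cluster) side, site-to-site also has to be enabled in nifi.properties. A minimal sketch, assuming an unsecured cluster (the port value is just an example; pick any free port, and restart NiFi after editing):

```
# nifi.properties on each cluster node
nifi.remote.input.socket.port=10000
nifi.remote.input.secure=false
```

The RemoteProcessGroup on the remote box then points at the cluster's web UI URL, and flow files sent to it arrive at the cluster's Input Port.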
07-25-2016
05:07 AM
2 Kudos
Hi @Obaid Salikeen, you can use the SFTP or FTP processors to pull the data from your remote box: FetchSFTP, GetFTP, GetSFTP, ListSFTP. Thanks
07-14-2016
02:35 AM
2 Kudos
Hi, I don't think it matters, as the "/usr/hdp/current/spark-client" directory is a symlink to the "/usr/hdp/2.3.4.1-10/spark/" directory. You can verify it as below [in my screenshot the HDP version is slightly different from yours]. Thanks
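A quick sketch of the verification. On a real node you would simply run `readlink /usr/hdp/current/spark-client` (or `ls -l`); the snippet below mimics the HDP layout in a temp directory so it runs anywhere, and the version string is just the one from this thread:

```shell
# Mimic the HDP layout: a versioned directory plus a "current" symlink.
tmp=$(mktemp -d)
mkdir -p "$tmp/usr/hdp/2.3.4.1-10/spark" "$tmp/usr/hdp/current"
ln -s "$tmp/usr/hdp/2.3.4.1-10/spark" "$tmp/usr/hdp/current/spark-client"

# readlink prints the symlink target, i.e. the versioned spark directory,
# so both paths refer to the same files on disk.
readlink "$tmp/usr/hdp/current/spark-client"
```

Because the symlink resolves to the versioned directory, configs that reference either path end up reading the same files.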
07-11-2016
07:00 AM
1 Kudo
Hi @Prasanta Sahoo, can you try the "ConvertAvroToJSON" processor with the configuration below: for "JSON container options", try "array" instead of "none". That option determines whether the records flow out as a single JSON array or as individual objects. Thanks, Jobin
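To illustrate the difference, a sketch of the two output shapes for the same two Avro records (the record content here is made up; only the container shape matters):

```
# JSON container options = none  (one JSON object per record)
{"id": 1}
{"id": 2}

# JSON container options = array  (all records wrapped in one JSON array)
[{"id": 1}, {"id": 2}]
```

Downstream processors that expect a single well-formed JSON document (for example for splitting or SQL conversion) generally need the "array" form.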
07-08-2016
04:29 PM
3 Kudos
Hi @Prasanta Sahoo, please add a PutSQL processor (with a DBCPConnectionPool controller service for MySQL) after the ConvertJSONToSQL processor and try again. Thanks, Jobin
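A sketch of typical DBCPConnectionPool settings for MySQL. The URL, database name, user, and jar path are placeholders for your environment, and property labels may differ slightly between NiFi versions:

```
Database Connection URL:    jdbc:mysql://localhost:3306/mydb
Database Driver Class Name: com.mysql.jdbc.Driver
Database Driver Location:   /path/to/mysql-connector-java.jar
Database User:              nifi
Password:                   (set in the controller service)
```

Enable the controller service, then select it in PutSQL's "JDBC Connection Pool" property; ConvertJSONToSQL references the same pool for its schema lookups.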
07-05-2016
11:54 PM
1 Kudo
Hi @Zach Kirsch, I am assuming you need to update it from the command line. You have two options: use the Ambari API, or use Ambari's configs.sh script (this one will be easier for you, I guess). Example:

/var/lib/ambari-server/resources/scripts/configs.sh set node1 Hortonworks spark-defaults "spark.history.ui.port" "18090"

[Note: "node1" is my server and "Hortonworks" is my cluster name. You will also have to restart the service for the change to take effect.] You can find more details here: https://cwiki.apache.org/confluence/display/AMBARI/Modify+configurations Thanks, Jobin
07-05-2016
11:41 PM
2 Kudos
Hi @Zach Kirsch, I think you are adding your custom jars by editing spark-defaults.conf from the command line. Ambari will overwrite your changes when you restart the services (please note that manual edits are not preserved when the cluster is managed by Ambari). Instead, go to the Ambari UI --> select the Spark service --> open the Configs tab --> find the "Custom spark-defaults" section and add your jars there. Please refer to the sample screenshot below. Thanks, Jobin
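For reference, the entries added under "Custom spark-defaults" end up as ordinary spark-defaults.conf properties. A sketch using the standard Spark classpath properties (the jar path is hypothetical):

```
spark.driver.extraClassPath=/opt/libs/my-custom.jar
spark.executor.extraClassPath=/opt/libs/my-custom.jar
```

After saving in Ambari and restarting Spark, Ambari regenerates spark-defaults.conf with these lines included, so they survive future restarts.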