Member since
09-28-2015
14
Posts
2
Kudos Received
1
Solution
My Accepted Solutions
Title | Views | Posted |
---|---|---|
 | 2197 | 01-07-2016 07:33 PM |
03-02-2016
01:32 PM
I wanted to know a couple of things here. 1) Suppose I have a few MapReduce jobs that need to run on HDI. As I understand the HDI approach, it is build, run, and delete. If I have placed all my jars, Oozie jobs, and configurations on the cluster and I delete the cluster today, do I need to copy all the jars and reconfigure the Oozie jobs when I want to run the same batch job in the future? 2) Is it possible to configure Solr to run on HDInsight?
01-27-2016
10:59 PM
Some of the pages require SSH tunneling. See "Manage HDInsight clusters by using the Ambari Web UI": https://azure.microsoft.com/en-us/documentation/articles/hdinsight-hadoop-manage-ambari/
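The usual approach is a dynamic (SOCKS) tunnel to the cluster's SSH endpoint; a sketch, where CLUSTERNAME and sshuser are placeholders for your cluster name and SSH account:

```shell
# Open a SOCKS proxy on local port 9876 through the HDInsight SSH endpoint.
# -D 9876  dynamic port forwarding; -N no remote command; -f go to background
ssh -C2qTnNf -D 9876 sshuser@CLUSTERNAME-ssh.azurehdinsight.net
```

Then point your browser's SOCKS proxy setting at localhost:9876 so the Ambari pages that link to internal host names resolve through the tunnel.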
01-27-2016
11:02 PM
1 Kudo
Make sure you use SSL, i.e. access Ambari over HTTPS. See https://azure.microsoft.com/en-us/documentation/articles/hdinsight-hadoop-manage-ambari/
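As a quick check, the Ambari REST API on HDInsight is reachable over HTTPS with HTTP basic auth; a sketch, where CLUSTERNAME is a placeholder and curl will prompt for the admin password:

```shell
# List clusters via the Ambari REST API, over HTTPS only.
# CLUSTERNAME and the admin account are placeholders for your deployment.
curl -u admin -sS "https://CLUSTERNAME.azurehdinsight.net/api/v1/clusters"
```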
01-27-2016
10:57 PM
Give Azure Data Factory (ADF) a try.
11-06-2015
12:33 AM
There is no way to kerberize Kafka in HDP 2.2. If you are running Storm on HDP 2.2, you should still be able to run a topology that uses the storm-kafka connector from HDP 2.3, which should be able to read from a secure Kafka cluster (HDP or not).
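Pulling in the newer connector typically means pinning its version in the topology's pom.xml; a sketch, where the version string is a placeholder for the actual HDP 2.3 build of storm-kafka you use:

```xml
<!-- storm-kafka connector from the HDP 2.3 line; the version below is a
     placeholder -- substitute the exact build from your HDP repository. -->
<dependency>
  <groupId>org.apache.storm</groupId>
  <artifactId>storm-kafka</artifactId>
  <version>HDP-2.3-BUILD-VERSION</version>
</dependency>
```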
10-28-2015
08:04 AM
1 Kudo
If you use the same partitions for YARN intermediate data as for the HDFS blocks, you might also consider setting the fs.datanode.du.reserved property, which reserves some space on those partitions for non-HDFS use (such as intermediate YARN data).

One base recommendation I saw in my first Hadoop training a long time ago was to dedicate 25% of the "data disks" to that kind of intermediate data. The optimal answer should consider the maximum amount of intermediate data you can have at one time (when launching a job, do you use all the data in HDFS as input?) and size the space for yarn.nodemanager.resource.local-dirs accordingly.

I would also recommend turning on the mapreduce.map.output.compress property to reduce the size of the intermediate data.
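As a sketch, these settings live in hdfs-site.xml and mapred-site.xml; the reservation below is only an illustrative value following the 25% rule of thumb for a 4 TiB data disk:

```xml
<!-- hdfs-site.xml: reserve space per volume for non-HDFS use
     (e.g. YARN intermediate data). Value is in bytes;
     1099511627776 bytes = 1 TiB, i.e. 25% of a 4 TiB disk. -->
<property>
  <name>fs.datanode.du.reserved</name>
  <value>1099511627776</value>
</property>

<!-- mapred-site.xml: compress map outputs to shrink intermediate data -->
<property>
  <name>mapreduce.map.output.compress</name>
  <value>true</value>
</property>
```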