Member since: 09-18-2015
Posts: 3274
Kudos Received: 1159
Solutions: 426
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 2554 | 11-01-2016 05:43 PM |
| | 8466 | 11-01-2016 05:36 PM |
| | 4845 | 07-01-2016 03:20 PM |
| | 8157 | 05-25-2016 11:36 AM |
| | 4301 | 05-24-2016 05:27 PM |
10-28-2015
11:08 AM
1 Kudo
Please see this:

```xml
<property>
  <name>dfs.datanode.data.dir</name>
  <value>[DISK]file:///hddata/dn/disk0,[SSD]file:///hddata/dn/ssd0,[ARCHIVE]file:///hddata/dn/archive0</value>
</property>
```

@Sourygna Luangsay Response edited based on the comment: Ambari 2.1.1+ supports this as per AMBARI-12601.
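For reference, once the DataNode directories are tagged with storage types as above, a path can be pinned to a tier with the `hdfs storagepolicies` CLI (the `/apps/hot-data` path below is illustrative):

```shell
# List the available policies (HOT, WARM, COLD, ALL_SSD, ONE_SSD, ...)
hdfs storagepolicies -listPolicies

# Pin an illustrative path to SSD-backed storage
hdfs storagepolicies -setStoragePolicy -path /apps/hot-data -policy ALL_SSD

# Move existing blocks so they conform to the new policy
hdfs mover -p /apps/hot-data
```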
10-28-2015
10:53 AM
@rmaruthiyodan@hortonworks.com - It's supported as far as I know. You are using zookprusr (example) for ZooKeeper; as long as the ZooKeeper service is up, we are good. Kafka Kerberos Doc

```
Client { // used for the ZooKeeper connection
   com.sun.security.auth.module.Krb5LoginModule required
   useKeyTab=true
   keyTab="/etc/security/keytabs/kafka.service.keytab"
   storeKey=true
   useTicketCache=false
   serviceName="zookeeper"
   principal="kafka/c6401.ambari.apache.org@EXAMPLE.COM";
};
```
10-28-2015
10:40 AM
1 Kudo
@hosako@hortonworks.com I found this very helpful: The Oozie launcher is just another MapReduce job, so any configuration you can set for a MapReduce job is valid for the launcher. The most relevant and useful ones are usually the memory and queue settings (mapreduce.map.memory.mb and mapreduce.job.queuename). The way to set these for the launcher in an Oozie workflow action is to prefix "oozie.launcher" to the setting. For example, oozie.launcher.mapreduce.map.memory.mb controls the memory for the launcher mapper itself, as opposed to mapreduce.map.memory.mb, which only influences the memory setting for the underlying MapReduce job that the Hadoop, Hive, or Pig action runs. So, if you have a Hive query that requires you to increase the client-side heap size when you submit it using the Hive CLI, remember to increase the launcher mapper's memory when you define the Oozie action for it.
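As a sketch, the prefixed launcher settings go into the action's `<configuration>` block like any other property (the action name, script, and values here are illustrative):

```xml
<action name="hive-example">
  <hive xmlns="uri:oozie:hive-action:0.5">
    <job-tracker>${jobTracker}</job-tracker>
    <name-node>${nameNode}</name-node>
    <configuration>
      <!-- memory for the Oozie launcher mapper itself -->
      <property>
        <name>oozie.launcher.mapreduce.map.memory.mb</name>
        <value>4096</value>
      </property>
      <!-- queue the launcher job is submitted to -->
      <property>
        <name>oozie.launcher.mapreduce.job.queuename</name>
        <value>launcher</value>
      </property>
      <!-- memory for the underlying Hive MapReduce tasks -->
      <property>
        <name>mapreduce.map.memory.mb</name>
        <value>2048</value>
      </property>
    </configuration>
    <script>query.hql</script>
  </hive>
  <ok to="end"/>
  <error to="fail"/>
</action>
```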
10-28-2015
10:19 AM
@rmaruthiyodan@hortonworks.com I know there are customers doing that and, as far as I know, it's supported. Are you facing any issues?
10-27-2015
08:59 PM
Thanks @Paul Codding Link It is recommended to increase the amount of memory available to the Ambari Server when deploying multiple views. As each view requires its own memory footprint, increasing the Ambari Server's maximum allocable memory to 4096MB will help support multiple deployed views and concurrent use. Edit the /var/lib/ambari-server/ambari-env.sh file on the Ambari Server and replace the value of the -Xmx2048m argument with -Xmx4096m -XX:PermSize=128m -XX:MaxPermSize=128m. Then, restart the Ambari Server to apply this change.
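A minimal sketch of those steps on the Ambari Server host (the sed substitution mirrors the flag change described above):

```shell
# Raise the Ambari Server heap from 2 GB to 4 GB and set PermGen sizes
sudo sed -i 's/-Xmx2048m/-Xmx4096m -XX:PermSize=128m -XX:MaxPermSize=128m/' \
    /var/lib/ambari-server/ambari-env.sh

# Restart to apply the change
sudo ambari-server restart
```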
10-27-2015
08:58 PM
Where can I find the ambari views tuning guide?
Labels:
- Apache Ambari
10-27-2015
05:03 PM
Yes @Wes Floyd
10-27-2015
05:02 PM
@dgarrison@hortonworks.com Based on this, yes: Spark and HBase integration started with Spark 1.3. Link Our official blog talks about it here.
10-27-2015
12:41 PM
@Jonas Straub I found this really useful. Also, from the Apache doc: the property mapred.min.split.size is deprecated; its new name is mapreduce.input.fileinputformat.split.minsize.
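For example, to raise the minimum input split size to 256 MB using the new property name (the value is illustrative):

```xml
<property>
  <!-- replaces the deprecated mapred.min.split.size -->
  <name>mapreduce.input.fileinputformat.split.minsize</name>
  <value>268435456</value> <!-- 256 MB in bytes -->
</property>
```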