Member since: 09-15-2015
Posts: 457
Kudos Received: 507
Solutions: 90
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 15662 | 11-01-2016 08:16 AM
 | 11081 | 11-01-2016 07:45 AM
 | 8564 | 10-25-2016 09:50 AM
 | 1918 | 10-21-2016 03:50 AM
 | 3822 | 10-14-2016 03:12 PM
12-07-2015
08:18 PM
That's true, I was talking about compatibility, not support. Let me ping our colleagues from support and see what they say 🙂
12-07-2015
07:52 PM
I think I have used MySQL 5.7 for a couple of my test clusters (at least for Ambari); I can't remember any issues. @jeff might know more. Make sure 5.7 is supported by your OS; some RedHat versions ship an older MySQL version and have problems with the newest ones.
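For example, a quick way to check what is installed and what your OS repositories would give you (the package names below are just an assumption and differ between RHEL/CentOS releases):

# Check the MySQL version currently installed (if any)
mysql --version

# On RHEL/CentOS, see which MySQL/MariaDB packages the OS repos actually ship
# (package names vary by distribution and release)
yum list available | grep -iE 'mysql|mariadb'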
12-07-2015
05:39 AM
3 Kudos
@rmaruthiyodan the steps look fine to me. I actually had to do this myself recently (see this article: https://community.hortonworks.com/articles/4632/changing-dfsnameservices-value-after-hdfs-ha-has-b.html). I didn't encounter any issues after changing the logical name. Just make sure you also update the Hive Metastore, and if you use HBase, I think there are some additional changes necessary.
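As a rough sketch of the Hive Metastore part (the nameservice URIs below are placeholders; list the current roots before rewriting anything):

# Show the filesystem roots currently stored in the Hive Metastore
hive --service metatool -listFSRoot

# Rewrite the stored locations from the old logical name to the new one
# (hdfs://newnameservice and hdfs://oldnameservice are placeholder URIs)
hive --service metatool -updateLocation hdfs://newnameservice hdfs://oldnameservice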
12-04-2015
09:30 AM
2 Kudos
Hi sprasad, the documentation should be fine with regard to enabling HDFS compression, but I agree, the config params (or at least their names) are deprecated. The old config params are still supported and valid; however, you should switch to the new names. Here is a list of deprecated properties and their new names: https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-common/DeprecatedProperties.html To turn on HDFS compression using the new params, use the following configuration:

core-site.xml

<property>
  <name>io.compression.codecs</name>
  <value>org.apache.hadoop.io.compress.GzipCodec,
    org.apache.hadoop.io.compress.DefaultCodec,com.hadoop.compression.lzo.LzoCodec,
    org.apache.hadoop.io.compress.SnappyCodec</value>
  <description>A list of the compression codec classes that can be used
    for compression/decompression.</description>
</property>

mapred-site.xml

<property>
  <name>mapreduce.map.output.compress</name>
  <value>true</value>
</property>
<property>
  <name>mapreduce.map.output.compress.codec</name>
  <value>org.apache.hadoop.io.compress.GzipCodec</value>
</property>
<property>
  <name>mapreduce.output.fileoutputformat.compress.type</name>
  <value>BLOCK</value>
</property>

(Optional) Job output compression, also in mapred-site.xml

<property>
  <name>mapreduce.output.fileoutputformat.compress</name>
  <value>true</value>
</property>
<property>
  <name>mapreduce.output.fileoutputformat.compress.codec</name>
  <value>org.apache.hadoop.io.compress.GzipCodec</value>
</property>
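If you just want to test the effect for a single job instead of setting cluster-wide defaults, the same new-style properties can also be passed per job on the command line. A minimal sketch, assuming the stock MapReduce examples jar under the usual HDP path (adjust the path, codec, and input/output directories to your cluster):

# Run wordcount with map output compression enabled only for this run
hadoop jar /usr/hdp/current/hadoop-mapreduce-client/hadoop-mapreduce-examples.jar wordcount \
  -D mapreduce.map.output.compress=true \
  -D mapreduce.map.output.compress.codec=org.apache.hadoop.io.compress.SnappyCodec \
  /tmp/wordcount/input /tmp/wordcount/output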
12-03-2015
10:07 PM
@Guilherme Braccialli I reached out to Andrew regarding this issue yesterday but haven't heard back from him. I'll ping him again and open a support ticket. I need Atlas for a demo, so a patch would be nice 🙂 I'll keep you posted.
12-03-2015
09:57 PM
Interesting, according to ATLAS-16 this issue has been fixed, but according to HADOOP-11461 it's still open...
12-03-2015
09:56 PM
3 Kudos
I have seen this in my Atlas logs as well. See https://issues.apache.org/jira/browse/ATLAS-16. Apparently this issue was already fixed back in August; I don't know why the fix was not released with 2.3.2.
12-03-2015
09:07 PM
@kkane important update on the above answer. At the moment hostgroup variables are not replaced by the actual hostnames, e.g. %HOSTGROUP::hg_master_node_3% is not replaced by c6603.ambari.apache.org. @Olivier Renault pointed this out to me today (Thanks!!). An RMP ticket has already been opened for this missing feature, and the implementation is currently planned for one of the next major Ambari versions. One way to work around this missing piece is to replace the hostgroup variable with the actual hostname, as sketched below.
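A minimal sketch of that workaround, assuming the cluster creation template is a local JSON file (cluster_template.json is a placeholder filename; the variable and hostname are the ones from the example above):

# Replace the hostgroup variable with the real FQDN before POSTing the template to Ambari
sed -i 's/%HOSTGROUP::hg_master_node_3%/c6603.ambari.apache.org/g' cluster_template.json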
12-03-2015
02:57 PM
1 Kudo
You can use collection.configName; however, make sure you have already uploaded the configuration to your ZooKeeper znode. If you don't use collection.configName, Solr assumes the configuration is stored under the name of the collection.
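For example (a sketch; the Solr install path, ZooKeeper host, and collection/config names are placeholders for your environment):

# Upload the configuration directory to ZooKeeper under the name "myconf"
/opt/solr/server/scripts/cloud-scripts/zkcli.sh -zkhost zk-host:2181 \
  -cmd upconfig -confdir /path/to/solr/conf -confname myconf

# Create the collection and point it at that config via collection.configName
curl "http://solr-host:8983/solr/admin/collections?action=CREATE&name=mycollection&numShards=1&collection.configName=myconf"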