Member since: 09-24-2015
Posts: 105
Kudos Received: 82
Solutions: 9
My Accepted Solutions
Title | Views | Posted
---|---|---
| 2046 | 04-11-2016 08:30 PM
| 1691 | 03-11-2016 04:08 PM
| 1669 | 12-21-2015 09:51 PM
| 996 | 12-18-2015 10:43 PM
| 8490 | 12-08-2015 03:01 PM
02-23-2016
06:30 PM
@jsequeiros see the updated processor configuration screenshot.
01-04-2017
09:10 PM
@Predrag Minovic That Jira issue has been resolved, so would it be possible to use Knox now? Thanks!
03-08-2016
02:30 AM
Hey guys, the tutorial mentioned above has been updated and is now compatible with the latest Sandbox, HDP 2.4. It addresses the permissions issue. Here is the link: http://hortonworks.com/hadoop-tutorial/how-to-process-data-with-apache-hive/ When you get a chance, can you go through the tutorial on our new Sandbox?
04-21-2017
03:33 PM
My Shell in a Box was working yesterday, but today it will not work. Is there any way to get it working again? I prefer it because of its design and color scheme. Thanks!
12-15-2015
07:18 AM
1 Kudo
As @nmaillard said, Hive places a limit on the length of the text it writes into the metastore database; if you look at the call stack you can probably find out exactly where. The input format is therefore also a key factor in extending the Hive column count.
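If you want to see those width limits directly, here is a minimal sketch against a MySQL-backed metastore (the schema name `hive` and the exact metastore table names vary by Hive version, so treat this as illustrative only):

```bash
# Hedged sketch: inspect column-width limits in a MySQL-backed Hive
# metastore. Schema name 'hive' and table names vary by Hive version.
mysql -u hive -p hive -e "
  SELECT TABLE_NAME, COLUMN_NAME, CHARACTER_MAXIMUM_LENGTH
  FROM information_schema.COLUMNS
  WHERE TABLE_SCHEMA = 'hive'
    AND TABLE_NAME IN ('COLUMNS_V2', 'TABLE_PARAMS', 'SERDE_PARAMS');"
```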
12-22-2015
02:31 PM
As for multiple networks, you can multi-home the nodes so you have a public network and a cluster-traffic network. Reference architectures from hardware vendors, such as Cisco's, are designed expecting multi-homing to be configured. https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/HdfsMultihoming.html
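For reference, a minimal hdfs-site.xml sketch of the settings that doc describes (the 0.0.0.0 bind values are the documented approach for multi-homed NameNodes, not tuned recommendations):

```xml
<!-- Bind NameNode endpoints to all interfaces so hosts on either network
     can reach them (per the HDFS multihoming doc linked above). -->
<property>
  <name>dfs.namenode.rpc-bind-host</name>
  <value>0.0.0.0</value>
</property>
<property>
  <name>dfs.namenode.http-bind-host</name>
  <value>0.0.0.0</value>
</property>
<!-- Have clients resolve DataNodes by hostname rather than an internal IP
     that may be unreachable from the other network. -->
<property>
  <name>dfs.client.use.datanode.hostname</name>
  <value>true</value>
</property>
```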
10-16-2017
06:43 AM
This can be achieved by setting the following property in Spark: sqlContext.setConf("mapreduce.input.fileinputformat.input.dir.recursive", "true") Note that the property is set using sqlContext, not sparkContext; for DataFrames created from Hive tables, do not set it via the Spark context. I tested this in Spark 1.6.2.
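A minimal runnable sketch of the above against the Spark 1.6-era API (the table name is hypothetical):

```scala
// Spark 1.6-era sketch: set the recursive flag on the Hive-aware SQL
// context, not the SparkContext, so it applies to Hive-table DataFrames.
import org.apache.spark.sql.hive.HiveContext

val sqlContext = new HiveContext(sc) // sc is the spark-shell SparkContext
sqlContext.setConf("mapreduce.input.fileinputformat.input.dir.recursive", "true")

// Hypothetical Hive table whose partitions contain nested subdirectories.
val df = sqlContext.sql("SELECT * FROM nested_dirs_table")
df.show()
```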
12-02-2015
08:30 PM
1 Kudo
For JMX params on the Kafka broker side, you can use the kafka-env section in Ambari and restart the brokers. For the kafka-producer-perf-test.sh script, you can pass them via the shell environment.
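A minimal sketch of both approaches (JMX_PORT and KAFKA_JMX_OPTS are picked up by kafka-run-class.sh; the port numbers and perf-test flags here are illustrative and vary by Kafka version):

```bash
# Broker side: add to the kafka-env template in Ambari, then restart brokers.
export JMX_PORT=9999
export KAFKA_JMX_OPTS="-Dcom.sun.management.jmxremote=true \
  -Dcom.sun.management.jmxremote.authenticate=false \
  -Dcom.sun.management.jmxremote.ssl=false"

# Perf-test side: pass JMX settings through the invoking shell instead.
# (Flags shown are for the 0.9+ perf tool; older versions differ.)
JMX_PORT=9998 ./bin/kafka-producer-perf-test.sh \
  --topic test --num-records 100000 --record-size 100 --throughput -1 \
  --producer-props bootstrap.servers=localhost:6667
```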
06-01-2016
12:20 PM
Thanks @bsani. We have set up HA on our NameNodes because we don't want the cluster to be unavailable. So, is there a best practice for patching a cluster that is supposed to be available 24/7? How do we avoid rebalancing during patching? When upgrading DataNodes in chunks, is there a way to make sure that a replica of each data block remains available on one of the servers still alive? /Claus
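For the last point, a minimal sketch of how one might verify block health between DataNode batches (standard HDFS commands; the grep patterns just match the summary lines in the output):

```bash
# Hedged check between patching batches: confirm no blocks are missing or
# under-replicated before taking down the next set of DataNodes.
hdfs fsck / | grep -E 'Under-replicated blocks|Missing blocks'
hdfs dfsadmin -report | grep -i 'under replicated'
```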
10-03-2016
01:26 PM
3 Kudos
@Wael Emam It's never recommended to run different OS versions across hosts in the same cluster. Even if you manage to bypass this for the Ambari agent install, you will still face issues while deploying services and running applications on that host. It's better to re-install and use the same OS version. Still, you can check: https://community.hortonworks.com/questions/18479/how-to-register-host-with-different-os-to-ambari.html