Member since
09-15-2015
457
Posts
507
Kudos Received
90
Solutions
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
|  | 16850 | 11-01-2016 08:16 AM |
|  | 12474 | 11-01-2016 07:45 AM |
|  | 11407 | 10-25-2016 09:50 AM |
|  | 2443 | 10-21-2016 03:50 AM |
|  | 5107 | 10-14-2016 03:12 PM |
12-22-2015
06:35 AM
1 Kudo
Please don't do that; it's not a good idea to simply delete the entry in the hosts table. Even though modifying the Ambari database is sometimes unavoidable, in general I would avoid this approach and use the API instead.
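For reference, a sketch of removing a host through the Ambari REST API rather than editing the database directly. This is not runnable standalone: the credentials (`admin:admin`), cluster name (`mycluster`), and hostnames are placeholders you would replace with your own.

```shell
# Remove a host via the Ambari REST API (placeholders: credentials, cluster, hosts).
# The X-Requested-By header is required by Ambari for state-changing requests.
curl -u admin:admin -H "X-Requested-By: ambari" \
  -X DELETE "http://ambari-server.example.com:8080/api/v1/clusters/mycluster/hosts/node1.example.com"
```

Components on the host generally need to be stopped and deleted first; the API will reject the request otherwise.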
12-22-2015
06:28 AM
3 Kudos
@Blair Vanderlugt your NameNode is now running on a different node, and the only components left on the "old" node are Atlas Metadata Server, DRPC Server, Spark History Server, and Storm UI Server, correct? Unfortunately, I don't think you can simply delete these components and reinstall them on a different node, because they are master components. Have you tried deleting the components of this node via the API, or did you try to delete the complete services via the API (service=Spark, components=Spark JHS, Spark Client)? Could you post some of the errors you received when deleting these components/services via the API?

Here is what I would try to do:

1) Make a backup of your configuration, databases, etc. Write down all the config changes you have made to Storm, Atlas, and Spark (the master services on your terminated node); this might help: https://community.hortonworks.com/questions/4792/a...

2) Delete the whole service, including its components, from the node; see https://cwiki.apache.org/confluence/display/AMBARI...

3) Reinstall the services on a new node and put the old configuration back in place.
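Step 2 above can be sketched with the Ambari REST API as follows. This is an illustration, not runnable standalone: credentials, the cluster name `mycluster`, and the server hostname are placeholders, and a service must be stopped (state `INSTALLED`) before it can be deleted.

```shell
# Stop the service first (placeholders: credentials, cluster name, server host).
curl -u admin:admin -H "X-Requested-By: ambari" -X PUT \
  -d '{"RequestInfo":{"context":"Stop SPARK"},"Body":{"ServiceInfo":{"state":"INSTALLED"}}}' \
  "http://ambari-server.example.com:8080/api/v1/clusters/mycluster/services/SPARK"

# Then delete it; repeat for the other affected services (STORM, ATLAS).
curl -u admin:admin -H "X-Requested-By: ambari" -X DELETE \
  "http://ambari-server.example.com:8080/api/v1/clusters/mycluster/services/SPARK"
```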
12-22-2015
06:02 AM
I have had corrupted downloads with some of my Chrome versions; the latest seems to work fine, though.
12-21-2015
10:20 PM
@rich have you seen this issue before?
12-21-2015
03:31 PM
1 Kudo
The error looks familiar 🙂 The MR application does not pass any Kerberos ticket to the Solr instance, hence the SPNEGO authentication is failing on the Solr side. How do you start it up: is it a custom MapReduce application or the Hadoop job jar that is provided with HDP Search? SolrCloud or Solr standalone? Can you access the Solr admin interface with your browser (<solr host>:8983/solr)? @Łukasz Dywicki
12-21-2015
06:04 AM
@Ali Bajwa have you seen something like this before?
12-21-2015
05:41 AM
1 Kudo
What Spark version is this?
12-21-2015
05:37 AM
2 Kudos
@David Andreae if you are using Spark >= 1.4, try the following command: `risk_factor_spark.write.format("orc").save("risk_factor_spark")`
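For context, a minimal sketch of the Spark 1.4+ DataFrame ORC API. This is not runnable standalone: it assumes a running Spark cluster with an active `sqlContext` and an existing DataFrame named `risk_factor_spark` (the name comes from the thread).

```python
# Assumes a Spark 1.4+ session with Hive support and an existing
# DataFrame called risk_factor_spark (as in this thread).
risk_factor_spark.write.format("orc").save("risk_factor_spark")

# Reading the path back is a quick way to verify the ORC files were written:
df = sqlContext.read.format("orc").load("risk_factor_spark")
df.show()
```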
12-18-2015
07:19 AM
2 Kudos
It looks like you are running Hive jobs as the hive user, meaning your doAs config (hive.server2.enable.doAs) is set to false. When false, this flag ensures that jobs are always executed as the hive user instead of the user that is logged in. You can find some information here. Is your cluster Kerberized, and do you have Ranger deployed? If you change the owner of the folder to hive:hdfs, it should work.
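For reference, the relevant property in hive-site.xml (a config sketch; set it to `true` if you want queries to run as the logged-in end user instead of the hive user):

```xml
<property>
  <name>hive.server2.enable.doAs</name>
  <!-- true: queries run as the connected end user; false: as the hive user -->
  <value>true</value>
</property>
```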
12-17-2015
09:03 AM
@Alex Raj you can find the right value for peer.adr in your HBase configuration (hbase-site.xml). The property is called hbase.zookeeper.quorum.
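The property looks like this in hbase-site.xml (a sketch; the ZooKeeper hostnames below are placeholders for your own quorum):

```xml
<property>
  <name>hbase.zookeeper.quorum</name>
  <!-- comma-separated list of ZooKeeper quorum hosts (placeholders) -->
  <value>zk1.example.com,zk2.example.com,zk3.example.com</value>
</property>
```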