Member since: 02-01-2019
Posts: 650
Kudos Received: 143
Solutions: 117
My Accepted Solutions
Title | Views | Posted |
---|---|---|
 | 2836 | 04-01-2019 09:53 AM |
 | 1466 | 04-01-2019 09:34 AM |
 | 6962 | 01-28-2019 03:50 PM |
 | 1581 | 11-08-2018 09:26 AM |
 | 3832 | 11-08-2018 08:55 AM |
08-10-2018
05:51 AM
@Gaurang Shah Is your metastore up and running? If it is up and running on the same machine, you shouldn't be seeing this error. Do post the complete stack trace.
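A quick way to verify the metastore is up is to check its listening port (9083 is the stock Hive metastore port; adjust if yours differs). A minimal sketch — the canned `ss`-style line stands in for live output so the snippet is self-contained, and the real commands are in the comments:

```shell
# Check whether anything is listening on the metastore port.
listen_line="LISTEN 0 50 *:9083 *:*"
result=$(echo "$listen_line" | grep -c ':9083')
echo "sockets on 9083: $result"
# On the real machine:
#   ss -ltn | grep 9083          # or: netstat -ltn | grep 9083
#   nc -z localhost 9083 && echo "metastore reachable"
```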
08-10-2018
05:40 AM
You are right, only one node will be active at any point in time. The HA here is Active/Standby: if one node goes down, the other becomes active. If this helps address your query, do mark this as 'Accepted'. 🙂
08-09-2018
05:39 PM
1 Kudo
From the Apache documentation (http://atlas.apache.org/HighAvailability.html), under "Implementation Details of Atlas High Availability":

"The automatic selection of an Active instance, as well as automatic failover to a new Active instance, happen through a leader election algorithm. For leader election, we use the Leader Latch Recipe of Apache Curator. The Active instance is the only one which initializes, modifies or reads state in the backend stores to keep them consistent. Also, when an instance is elected as Active, it refreshes any cached information from the backend stores to get up to date. A servlet filter ensures that only the active instance services user requests. If a passive instance receives these requests, it automatically redirects them to the current active instance."

Hope this clears your doubt.
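In practice a client can tell the active Atlas instance from a passive one via the Atlas admin status endpoint, whose payload names the role. A minimal sketch — hostnames are placeholders, and canned JSON stands in for the live HTTP response:

```shell
# is_active: succeed if an Atlas admin/status payload reports ACTIVE.
is_active() {
  echo "$1" | grep -q '"Status"[[:space:]]*:[[:space:]]*"ACTIVE"'
}

# A live check would be something like:
#   curl -s "http://atlas-host1:21000/api/atlas/admin/status"
is_active '{"Status":"ACTIVE"}'  && echo "host1: active"
is_active '{"Status":"PASSIVE"}' || echo "host2: passive"
```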
08-09-2018
09:41 AM
Right. As mentioned above, the only option I see is to kill the YARN application.
08-08-2018
06:39 PM
@Takefumi Oide The Taxonomy feature is being redesigned. From the HDP docs: "The Apache Atlas Business Taxonomy feature, which was a Technical Preview in previous releases, is currently being redesigned and will be reintroduced in a future release." Ref: https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.6.2/bk_data-governance/content/ch_hdp_data_governance_overview.html
08-08-2018
06:26 PM
@saichand akella Since you were trying to start with the root user, the file permissions would belong to root. Clean up the directory /usr/local/hbase/logs and make sure your user has permission to write to it. Then start HBase.
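A quick sanity check before restarting: verify the current user can actually write to the log directory. In this sketch a temp dir stands in for /usr/local/hbase/logs so the example has no side effects:

```shell
# Verify the log directory is writable by the current user.
logdir=$(mktemp -d)   # stand-in for /usr/local/hbase/logs
if [ -w "$logdir" ]; then
  echo "writable: yes"
else
  echo "fix ownership first, e.g.: sudo chown -R \$USER: $logdir"
fi
```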
08-08-2018
05:32 PM
@Stanislav Lysikov To stop the SparkContext you can call sc.stop(). However, if your task is stuck, I don't think you will be able to execute this statement. The only way I see is to kill the YARN application if you cannot restart the interpreter.
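Killing the YARN application behind the stuck job can be sketched as below. The listing line is canned (the application id and names are made up for illustration); on a real cluster you would take it from `yarn application -list`:

```shell
# Extract the application id from a listing line, then kill it.
line="application_1533720000000_0042  Zeppelin  SPARK  zeppelin  default  RUNNING"
app_id=$(echo "$line" | awk '{print $1}')
echo "yarn application -kill $app_id"   # the command you would run
```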
08-08-2018
03:50 PM
Thanks for pointing to the doc @Rahul P. The image is a bit confusing (I will try to get it updated). However, as I said in my earlier comment, Spark on Hive doesn't work out of the box. Hope this helps.
08-07-2018
12:58 PM
@saichand akella Don't use "sudo" in the start command. Also make sure you can do passwordless ssh from the saichanda user to all other machines (including self: "ssh localhost" should not prompt for a password).
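Setting up passwordless ssh can be sketched as follows. The key is generated into a temp dir here to keep the example side-effect free; normally you would use ~/.ssh/id_rsa, and the user/host names from the post are placeholders for your machines:

```shell
# Generate a key pair (no passphrase) into a throwaway directory.
keydir=$(mktemp -d)
ssh-keygen -q -t rsa -N "" -f "$keydir/id_rsa"
ls "$keydir"
# Then, per host (not run here):
#   ssh-copy-id saichanda@localhost
#   ssh localhost true    # must not prompt for a password
```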
08-07-2018
12:40 PM
@Shesh Kumar Just add the lines below at the start of the hadoop launcher scripts (/usr/hdp/<version>/hadoop/bin/hadoop and /usr/bin/hadoop):

echo "Sorry! hadoop command is disabled."
exit 1

But as mentioned by others in earlier comments, there is no real security here: users who have access to these files can edit them and use the hadoop commands again.
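The wrapper trick above can be demonstrated on a throwaway script instead of the real /usr/bin/hadoop: the prepended lines make the command print a message and exit non-zero before doing anything else.

```shell
# Build a fake "hadoop" script with the two disabling lines prepended.
fake=$(mktemp)
cat > "$fake" <<'EOF'
#!/bin/sh
echo "Sorry! hadoop command is disabled."
exit 1
EOF
chmod +x "$fake"

status=0
msg=$("$fake") || status=$?
echo "$msg (exit $status)"
```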