Member since: 09-14-2016
Posts: 36
Kudos Received: 3
Solutions: 1

My Accepted Solutions
Title | Views | Posted |
---|---|---|
| 2274 | 02-22-2017 10:35 PM |
02-21-2017 04:13 AM
Is that on all the agents or on the Ambari server machine? I'm assuming _CLIENT components aren't needed. Thanks.
02-21-2017 01:37 AM
Encrypting passwords. I'm trying to install HDP via blueprints (with Kerberos). I need to call ambari-server setup-security and then option #2 (pass/pass). Is there a way to script this? I'm doing this so I can pass PERSISTED for the Kerberos cred
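For reference, recent Ambari versions expose non-interactive flags for setup-security, so this can be scripted without driving the interactive menu. A sketch; the flag names and example master key are assumptions to verify against `ambari-server setup-security --help` on your version:

```shell
# Non-interactive equivalent of interactive option #2 (encrypt passwords).
# Flags and the master key value here are assumptions; confirm against
# your Ambari version before putting this in a deployment script.
ambari-server setup-security \
  --security-option=encrypt-passwords \
  --master-key='MyMasterKey123' \
  --master-key-persist=true
```

With the master key persisted, the kadmin credential in the blueprint kerberos payload can then reference a persisted credential store.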
Labels:
- Apache Ambari
02-20-2017 10:57 PM
Thanks for the quick response. How does the sandbox actually start all the services? Is there an easier way than having to keep track of all the services added/removed, etc.? I read somewhere there is some option in ambari.properties, but I can't seem to find anything.
02-20-2017 10:46 PM
I have a VM with Docker containers running HDP. If I shut down the VM and restart it, the containers come up along with ambari-server and the agent, but I have to manually start each component. Is there an easy way to make all components start up in order?
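One commonly used approach is a boot-time script that asks Ambari to start everything and lets Ambari handle the component ordering. A sketch using the Ambari REST API; the host, cluster name, and credentials are placeholders:

```shell
# Ask Ambari to transition every service in the cluster to STARTED.
# Ambari's request scheduler resolves start-order dependencies between
# components, so one call covers the whole stack.
curl -u admin:admin -i \
  -H 'X-Requested-By: ambari' \
  -X PUT \
  -d '{"RequestInfo":{"context":"Start All Services"},"Body":{"ServiceInfo":{"state":"STARTED"}}}' \
  http://ambari-host:8080/api/v1/clusters/mycluster/services
```

Dropped into an init script or systemd unit that waits for ambari-server to come up, this avoids tracking individual components as services are added or removed.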
Labels:
- Apache Ambari
02-16-2017 08:12 PM
Thanks @Matt Clarke. I understand HDF is separate. I'm looking for the best approach to use an existing HDP cluster with HDF. I'll look into ambari-nifi-service again; I didn't pursue that service because of the disclaimer about it being for demo purposes. Anyone using this service: is it up to date with 2.4? Assuming I don't care about upgradability at the moment, I'd prefer to add a NiFi cluster to the existing kerberized HDP cluster. Any ideas/posts/links on how best to do that would be helpful. Is the best approach to set up a whole new Ambari system? I would like to avoid doing this. Thanks
02-16-2017 02:47 AM
We have an existing HDP cluster with the typical infrastructure: Hive/Spark/Oozie/Ranger (all kerberized). What is the best way to integrate a NiFi cluster with this cluster, keeping in mind upgrades and easy integration with the existing security setup?
01-06-2017 09:38 PM
@Shashang Sheth
I'm not sure what you mean. That error from Oozie can mean a number of things: e.g., the table actually doesn't exist; if the classpath isn't set correctly it will also report "table not found"; or if the metastore URIs aren't set correctly, Oozie may still fail with "table not found". Can you explain what "solution" worked for you? Thanks
12-09-2016 11:35 PM
Thanks @bikas. So if the spark queue capacity is 40%, max capacity is 50%, and the user limit is 1, then in theory, regardless of what options are set during spark-submit, the Spark job should never use resources above 50%... correct? Preemption is enabled; here is the config: yarn.resourcemanager.monitor.enable=true
yarn.resourcemanager.monitor.capacity.preemption.max_wait_before_kill=15000
yarn.resourcemanager.monitor.capacity.preemption.monitor_interval=3000
yarn.resourcemanager.monitor.capacity.preemption.total_preemption_per_round=0.1
yarn.resourcemanager.scheduler.monitor.policies=org.apache.hadoop.yarn.server.resourcemanager.monitor.capacity.ProportionalCapacityPreemptionPolicy
The ordering policy for both queues is 'fair' and "Enable Size Based Weight Ordering" is not enabled. I will try out a few more tests and make sure it really is utilizing resources it shouldn't. If something else seems off in our config, let me know...
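For comparison, the queue setup described above would correspond to capacity-scheduler properties along these lines (the queue name `spark` under `root` is an assumption; substitute your actual queue path):

```
yarn.scheduler.capacity.root.spark.capacity=40
yarn.scheduler.capacity.root.spark.maximum-capacity=50
yarn.scheduler.capacity.root.spark.user-limit-factor=1
yarn.scheduler.capacity.root.spark.ordering-policy=fair
```

With `maximum-capacity=50`, containers granted to the queue should be capped at 50% of cluster resources; `user-limit-factor=1` additionally caps a single user at the queue's configured capacity.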
12-09-2016 12:35 AM
Thanks Predrag. We want it so a Spark job can't take the cluster down, but it seems Spark ignores the Ambari queue properties. Even with max capacity for Spark jobs set to 50%, if a developer runs a job with many executors and high memory per executor, our HDP cluster becomes unstable. This shouldn't be possible (coming from Cloudera and fair scheduling). We can work with the Spark team, but it seems we are missing something fundamental, since no single job/person should be able to lock up resources like this. YARN memory goes to 100%+.
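For what it's worth, pinning submissions to the capped queue and bounding executor resources explicitly looks something like this; the queue name, sizes, and jar path are illustrative, not a statement of our actual settings:

```shell
# Submit against the capacity-limited 'spark' queue with explicit,
# bounded executor sizing. If the queue's maximum-capacity is enforced,
# YARN should refuse to grant containers beyond the cap even if a
# developer requests more.
spark-submit \
  --master yarn \
  --deploy-mode cluster \
  --queue spark \
  --num-executors 4 \
  --executor-memory 4g \
  --executor-cores 2 \
  my_app.jar
```

If jobs submitted this way still push YARN memory past the cap, that points at the scheduler config rather than the submit options.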