Member since: 01-25-2017
Posts: 25
Kudos Received: 4
Solutions: 1
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 4373 | 03-27-2017 07:57 AM |
09-06-2021 01:55 PM
Can you please elaborate on what values you set for these parameters?
08-31-2018 02:49 AM
Yes, you will basically have to use the same configuration as when using a combination of OpenLDAP and MIT KDC for authentication. The only difference is that you will be using AD as your LDAP server instead of OpenLDAP, and of course you will have to account for the different user/group schemas (sAMAccountName vs. uid, etc.).
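For illustration, here is a minimal sketch of the schema-specific pieces this involves, using Hadoop's LdapGroupsMapping settings in property-file form (the URL, bind user, and search base below are placeholders for your environment):

```
# Placeholder connection details -- replace with your AD environment
hadoop.security.group.mapping=org.apache.hadoop.security.LdapGroupsMapping
hadoop.security.group.mapping.ldap.url=ldaps://ad.example.com:636
hadoop.security.group.mapping.ldap.bind.user=cn=hadoop-bind,ou=ServiceAccounts,dc=example,dc=com
hadoop.security.group.mapping.ldap.base=dc=example,dc=com

# AD schema: users matched on sAMAccountName, group membership via the 'member' attribute
hadoop.security.group.mapping.ldap.search.filter.user=(&(objectClass=user)(sAMAccountName={0}))
hadoop.security.group.mapping.ldap.search.filter.group=(objectClass=group)
hadoop.security.group.mapping.ldap.search.attr.member=member
hadoop.security.group.mapping.ldap.search.attr.group.name=cn

# For comparison, an OpenLDAP posix-style schema would typically use
# (&(objectClass=posixAccount)(uid={0})) for users and (objectClass=posixGroup) for groups.
```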
04-10-2017 11:53 AM
Looks like this is a better approach. I found some clear information at http://theckang.com/2015/remote-spark-jobs-on-yarn/ that matches your solution. Thanks very much!
10-12-2017 08:18 PM
@gnovak @tuxnet Would resource sharing still work if ACLs are configured for separate tenant queues? If the ACLs are different for Q1 and Q2, will the scheduler still support elasticity and preemption? Could you also share the workload/application details you used for these experiments? I am trying to run similar experiments on elasticity and preemption with the Capacity Scheduler. I am using a simple Spark word count application on a large file, but I am not able to get a feel for resource sharing among queues with this application. Thanks in advance.
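For context, by ACLs for separate tenant queues I mean Capacity Scheduler settings along these lines (the tenant user and group names are made up for illustration):

```
# Hypothetical tenant users/groups; value format is "users groups"
yarn.scheduler.capacity.root.Q1.acl_submit_applications=tenant1_user tenant1_group
yarn.scheduler.capacity.root.Q1.acl_administer_queue=tenant1_admin
yarn.scheduler.capacity.root.Q2.acl_submit_applications=tenant2_user tenant2_group
yarn.scheduler.capacity.root.Q2.acl_administer_queue=tenant2_admin
```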
03-27-2017 09:01 AM
@ccasano I set up the queues as described above. Say I have four queues, Q1 to Q4, each with a minimum of 25% and a maximum of 100%. If I start a job on Q1 and it grows to 100% utilization, and later I launch the same job on Q2, the new job will only grow up to 25% (its absolute configured capacity) and the old one will fall back to 75%. Is there a way to distribute the resources equally here? That is, the second job should be able to grow beyond its minimum capacity until the queues are balanced equally. Thanks in advance!
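For reference, the setup described above corresponds roughly to the following capacity-scheduler properties (I am assuming the four queues sit directly under root):

```
yarn.scheduler.capacity.root.queues=Q1,Q2,Q3,Q4

# Each queue: 25% guaranteed capacity, allowed to grow up to 100% of the cluster
yarn.scheduler.capacity.root.Q1.capacity=25
yarn.scheduler.capacity.root.Q1.maximum-capacity=100
yarn.scheduler.capacity.root.Q2.capacity=25
yarn.scheduler.capacity.root.Q2.maximum-capacity=100
yarn.scheduler.capacity.root.Q3.capacity=25
yarn.scheduler.capacity.root.Q3.maximum-capacity=100
yarn.scheduler.capacity.root.Q4.capacity=25
yarn.scheduler.capacity.root.Q4.maximum-capacity=100
```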
12-06-2018 03:05 AM
I saw this error today. It turned out my HDFS file was in CSV format, but my table was defined as ORC.