Member since: 09-10-2014
Posts: 15
Kudos Received: 0
Solutions: 1

My Accepted Solutions
Title | Views | Posted
---|---|---
 | 17358 | 10-15-2015 06:40 AM
03-08-2019 05:17 AM

This worked for me; the snippet below is an example of limiting the disk space to 5 GB:

```xml
<property>
  <name>firehose_time_series_storage_bytes</name>
  <value>5368709120</value>
</property>
```
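For reference, 5368709120 is 5 × 1024³ bytes (5 GiB). A quick sketch for computing other limits; the helper name here is hypothetical, not part of any Cloudera API:

```python
# Hypothetical helper: convert an N-GiB limit to the byte value the property expects
def gib_to_bytes(n_gib):
    return n_gib * 1024 ** 3

print(gib_to_bytes(5))  # 5368709120, matching the value above
```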
02-02-2016 03:30 AM

Hi Wilfred, thanks for your response. I am using the same setup, but with standard Spark 1.3. I am seeing that even though I have set the minimum and maximum shares for a queue (both min and max memory set at 80% of available memory), if a Spark job is already running in a different queue taking 40% of memory, it is never preempted! The job in the 80% queue, asking for 70% of memory, waits until the job in the other queue finishes. Strangely, in the same setup preemption works for Hadoop MapReduce jobs but not for Spark jobs. Any idea? Thanks, Vinay
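A minimal fair-scheduler.xml sketch of the kind of setup being described; the queue names, resource figures, and timeout are illustrative assumptions, not the poster's actual config. Preemption also needs `yarn.scheduler.fair.preemption` set to `true` in yarn-site.xml:

```xml
<!-- Illustrative fair-scheduler.xml; queue names and values are assumptions -->
<allocations>
  <queue name="high">
    <!-- Min and max shares both set to ~80% of cluster memory -->
    <minResources>80000 mb, 40 vcores</minResources>
    <maxResources>80000 mb, 40 vcores</maxResources>
    <!-- Preempt other queues if this queue's min share is unmet for 60 seconds -->
    <minSharePreemptionTimeout>60</minSharePreemptionTimeout>
  </queue>
  <queue name="other"/>
</allocations>
```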
01-05-2016 05:19 AM
1 Kudo

bulmanp - The private_key parameter should be the contents of the private key file (in your case, the 2nd option should have worked). Here is the working code I use:

```python
# Read the private key contents for a passwordless certificate login
f = open("/root/.ssh/id_rsa", "r")
id_rsa = f.read()
f.close()

apicommand = cm.host_install(user_name="root", private_key=id_rsa,
                             host_names=hostIds, cm_repo_url=cm_repo_url,
                             java_install_strategy="NONE",
                             unlimited_jce=True).wait()
```
01-05-2016 04:42 AM
2 Kudos

You will need to pass the content of "/home/ec2-user/.ssh/id_rsa". Example:

```python
# Read the private key contents and pass them to host_install
id_rsa = ''
with open("/home/ec2-user/.ssh/id_rsa", 'r') as f:
    id_rsa = f.read()

cmd = cm.host_install(host_username, host_list, private_key=id_rsa,
                      cm_repo_url=cm_repo_url)
```
10-15-2015 06:40 AM

As indicated by the warning, this was down to the queue placement policy, not ACLs. I reverted the queue placement policy back to basic and now it's behaving as expected.
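For illustration, a minimal fair-scheduler queuePlacementPolicy along the lines of a basic setup; the rule choice here is an assumption, not the poster's actual configuration:

```xml
<!-- Assumed "basic" placement: use the queue named in the request,
     otherwise fall back to the default queue -->
<queuePlacementPolicy>
  <rule name="specified"/>
  <rule name="default"/>
</queuePlacementPolicy>
```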