Member since: 09-10-2014
Posts: 15
Kudos Received: 0
Solutions: 1
My Accepted Solutions
| Title | Views | Posted |
| --- | --- | --- |
|  | 17036 | 10-15-2015 06:40 AM |
01-22-2016 03:04 AM
Many thanks Wilfred, that's fixed it, and yes, we're planning an upgrade to CDH 5.5.
01-11-2016 07:03 AM
Hello, we are running Cloudera Hadoop 2.5.0-cdh5.3.0 on CentOS 6.5 and are trying to get job preemption working: first we submit a long-running Spark job with spark-submit (Spark 1.5.2) to a named queue, then we submit further Sqoop jobs to another named queue. We are using the DRF scheduling policy with 3 queues, weighted 3:1:1, and min share preemption timeouts (seconds) of 60:300:300. The YARN config is set with Admin ACL = * and Fair Scheduler Preemption = true. Jobs are submitted and accepted by the RM; however, no preemption is occurring. I am expecting to see the long-running Spark job interrupted and some resources diverted to the later-submitted Sqoop jobs; instead, all I am seeing is jobs accepted and queued up. Can anyone confirm this is the correct understanding, and if so, is there config I am missing? Thanks
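For anyone comparing configs: below is a rough sketch of the kind of allocations file (fair-scheduler.xml, which Dynamic Resource Pools generates) I'd expect behind a setup like this. The queue names, minResources values and the cluster-wide fairSharePreemptionTimeout are illustrative assumptions, not values from this cluster. The points worth checking are that min share preemption only fires for queues that actually declare a minResources share, and that yarn.scheduler.fair.preemption is enabled on the ResourceManager (the Fair Scheduler Preemption checkbox in CM).

```xml
<?xml version="1.0"?>
<!-- Illustrative allocations file; queue names and resource figures are
     placeholders, not taken from the cluster in question. -->
<allocations>
  <queue name="spark_long">
    <weight>3.0</weight>
    <!-- Min share preemption only triggers for queues with a min share defined. -->
    <minResources>8192 mb, 4 vcores</minResources>
    <minSharePreemptionTimeout>60</minSharePreemptionTimeout>
  </queue>
  <queue name="sqoop">
    <weight>1.0</weight>
    <minResources>8192 mb, 4 vcores</minResources>
    <minSharePreemptionTimeout>300</minSharePreemptionTimeout>
  </queue>
  <queue name="adhoc">
    <weight>1.0</weight>
    <minResources>4096 mb, 2 vcores</minResources>
    <minSharePreemptionTimeout>300</minSharePreemptionTimeout>
  </queue>
  <!-- Cluster-wide timeout (seconds) before preempting to reach fair share. -->
  <fairSharePreemptionTimeout>600</fairSharePreemptionTimeout>
</allocations>
```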
01-05-2016 04:07 AM
Hello, I'm having issues with the Python CM API, same as this post. I am unable to install the host using a private key instead of a password:

cmd = cm.host_install(host_username, host_list, private_key="/home/ec2-user/.ssh/id_rsa", cm_repo_url=cm_repo_url)

2016-01-05 06:55:21,364 WARN NodeConfiguratorThread-21-0:com.cloudera.server.cmf.node.NodeConfigurator: Could not authenticate to ip-170-195-1-237.eu-west-1.compute.internal
net.schmizz.sshj.common.SSHException: No provider available for Unknown key file

I have validated the keys and can connect without a password:

[ec2-user@ip-170-195-1-237 cloudera-scm-server]$ ssh ec2-user@ip-170-195-1-237.eu-west-1.compute.internal
Last login: Tue Jan 5 06:59:46 2016 from ip-170-195-1-237.eu-west-1.compute.internal
[ec2-user@ip-170-195-1-237 ~]$

Please advise, thanks
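Posting what I'm going to try next, in case it helps anyone comparing notes: a minimal sketch that passes the contents of the key file rather than its path, on the assumption that host_install hands the private_key value straight to Cloudera Manager instead of opening a local file. The CM host, credentials, host list and repo URL below are placeholders.

```python
# Sketch only; host names, credentials and repo URL are placeholders.
# Assumption: private_key expects the text of the PEM file, not a path to it.
from cm_api.api_client import ApiResource

api = ApiResource("cm-host.example.com", username="admin", password="admin")
cm = api.get_cloudera_manager()

# Read the key file and pass its contents as a string.
with open("/home/ec2-user/.ssh/id_rsa") as f:
    key_contents = f.read()

cmd = cm.host_install(
    "ec2-user",                      # host_username
    ["host1.example.com"],           # host_list (placeholder)
    private_key=key_contents,
    cm_repo_url="http://example.com/cm5/repo/",  # placeholder repo URL
)
cmd = cmd.wait()
print(cmd.success, cmd.resultMessage)
```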
Labels:
- Cloudera Manager
01-05-2016 04:06 AM
TylerHale - can you expand please on how you passed in the private key? I have tried:

cmd = cm.host_install(host_username, host_list, private_key="/home/ec2-user/.ssh/id_rsa", cm_repo_url=cm_repo_url)

I have also tried passing the private key as a string variable, i.e.:

cmd = cm.host_install(host_username, host_list, private_key="--begin rsa--.............--end-rsa key--", cm_repo_url=cm_repo_url)

Thanks
10-15-2015 06:40 AM
As indicated by the warning, this was down to the queue placement policy, not ACLs. Reverted the queue placement policy back to basic and now it's behaving as expected.
10-15-2015 03:04 AM
Hello, I am testing queue access control using ACLs for YARN. I have the configuration below:

Hadoop 2.3.0-cdh5.1.0
CentOS release 6.4 (Final)

yarn.acl.enable = true
yarn.admin.acl = yarn

Dynamic Resource Pools:
Root - submission: yarn, administration: anyone
default - submission: anyone, administration: Everyone is allowed to administer this pool because of inherited settings from the parent pools.

[root@hostname ~]# mapred queue -info root
15/10/15 10:46:00 INFO client.RMProxy: Connecting to ResourceManager at hostname:8032
======================
Queue Name : root
Queue State : running
Scheduling Info : Capacity: 0.0, MaximumCapacity: UNDEFINED, CurrentCapacity: 0.0
You have new mail in /var/spool/mail/root

[root@hostname ~]# mapred queue -showacls root.default
15/10/15 10:46:06 INFO client.RMProxy: Connecting to ResourceManager at hostname:8032
Queue acls for user : root
Queue  Operations
=====================
root  ADMINISTER_QUEUE
root.default  ADMINISTER_QUEUE,SUBMIT_APPLICATIONS
root.queue2  ADMINISTER_QUEUE
root.queue1  ADMINISTER_QUEUE
root.queue4  ADMINISTER_QUEUE
root.queue3  ADMINISTER_QUEUE
[root@hostname ~]#

However, when I run an application I am seeing the following warning and errors:

Client log:
WARN security.UserGroupInformation: PriviledgedActionException as:root (auth:SIMPLE) cause:java.io.IOException: Failed to run job : Application rejected by queue placement policy
ERROR tool.ImportTool: Encountered IOException running import job: java.io.IOException: Failed to run job : Application rejected by queue placement policy

ResourceManager log:
resourcemanager.RMAuditLogger: USER=root OPERATION=Application Finished - Failed TARGET=RMAppManager RESULT=FAILURE DESCRIPTION=App failed with state: FAILED PERMISSIONS=Application rejected by queue placement policy APPID=application_1444902104666_0001
resourcemanager.RMAppManager$ApplicationSummary: appId=application_1444902104666_0001,name=QueryResult.jar,user=root,queue=default,state=FAILED,trackingUrl=N/A,appMasterHost=N/A,startTime=1444902491251,finishTime=1444902491281,finalStatus=FAILED

Can someone tell me please why this is not working? From the config, I understand root should be able to submit jobs to the default queue. Thanks, Pete
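For anyone who lands here with the same error: as it turned out, the "Application rejected by queue placement policy" message comes from the fair scheduler's placement rules, not from the queue ACLs. The snippets below are illustrative only, not this cluster's actual configuration; they just contrast a placement policy that can reject an application with the basic policy that falls through to root.default.

```xml
<!-- Illustrative placement policies (fair-scheduler.xml), not the actual cluster config. -->

<!-- A policy along these lines can produce "Application rejected by queue
     placement policy": an app that does not name an existing queue and has no
     matching per-user queue falls through to the reject rule. -->
<queuePlacementPolicy>
  <rule name="specified" create="false"/>
  <rule name="user" create="false"/>
  <rule name="reject"/>
</queuePlacementPolicy>

<!-- The "basic" behaviour: unmatched applications land in root.default. -->
<queuePlacementPolicy>
  <rule name="specified"/>
  <rule name="default"/>
</queuePlacementPolicy>
```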
Labels:
- Apache Hadoop
- Apache YARN
- Security
07-23-2015 02:36 AM
I am facing a similar issue - is there a definitive way of clearing up old ts files? Thanks