Member since: 02-18-2016
Posts: 135
Kudos Received: 19
Solutions: 18
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 1807 | 12-18-2019 07:44 PM |
| | 1837 | 12-15-2019 07:40 PM |
| | 735 | 12-03-2019 06:29 AM |
| | 754 | 12-02-2019 06:47 AM |
| | 1702 | 11-28-2019 02:06 AM |
11-12-2019 01:53 AM

1. Did the job fail due to the above reason? If "NO", is the error appearing in the logs for other BPs (XXX) as well?
2. Can you check using fsck which nodes have copies of the BP specified above?
11-12-2019 01:04 AM

Hi Mike, Can you do the quick checks below?

**BP-484874736-172.2.45.23-8478399929292:blk_1081495827_7755233 does not exist or is not under Construction
>> 1. Are all DataNodes up and running fine within the cluster?
2. Check the NN UI and see if any DataNode is NOT reporting blocks in the Datanode tab, or if any missing blocks are reported on the NN UI.
3. You can run fsck [unless the cluster is huge and loaded with data] to check whether the block exists and which nodes hold its replicas (a sketch follows below). It might help to drill down into the issue.
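For point 3, a minimal fsck sketch; the file path is a placeholder for whichever file owns the reported block:

# List the file's blocks and the DataNodes holding each replica.
hdfs fsck /path/to/suspect/file -files -blocks -locations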
11-11-2019 11:56 PM

Hi Vinay, Do you see any errors in the logs while running the "reassign partition tool"? That might help to debug the issue. Were all the brokers healthy and the ISRs good before you ran the tool?

***When I ran this tool, it was stuck with one partition and it hung there for more than a day. The Cluster performance was severely impacted, and we had to restart the entire cluster.
>> If there is a lot of data/topics, I can suggest reassigning a subset of topics at a time to avoid load on the cluster. You can provide a list of topics that should be moved to the new set of brokers and a target list of new brokers.

***I don't see a way even to stop the tool when it's taking a long time.
>> You can abort the assignment by deleting the "/admin/reassign_partitions" znode on your ZooKeeper cluster using the ZooKeeper shell, and then move the partitions that were assigned to the dead broker to new nodes (see the sketch below). Thanks Sagar S
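A rough sketch of both suggestions, assuming stock Kafka tooling and a placeholder ZooKeeper address (zk1:2181):

# Abort the stuck reassignment by deleting the coordination znode.
zookeeper-shell.sh zk1:2181 delete /admin/reassign_partitions

# Reassign only a subset of topics: topics-to-move.json (hypothetical file) lists the
# topics, --broker-list names the target brokers. Review the generated plan, then
# re-run with --reassignment-json-file <plan.json> --execute.
kafka-reassign-partitions.sh --zookeeper zk1:2181 \
  --topics-to-move-json-file topics-to-move.json --broker-list "4,5,6" --generate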
09-26-2018 04:21 PM

I tried the process below and it worked:
1. Stop AMS.
2. Move the contents of the AMS "tmp.dir" to a backup location.
3. Move the contents of the AMS "root.dir" to a backup location.
4. Remove the ams znode from ZooKeeper.
5. Start AMS.
AMS is working fine now.
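A rough shell sketch of those steps; the directory paths and znode name are assumptions — take the real values from ams-hbase-site (hbase.tmp.dir, hbase.rootdir, zookeeper.znode.parent) on your cluster:

# Stop AMS first (via Ambari), then back up the data directories (assumed paths):
mv /var/lib/ambari-metrics-collector/hbase-tmp /var/lib/ambari-metrics-collector/hbase-tmp.bak
mv /var/lib/ambari-metrics-collector/hbase /var/lib/ambari-metrics-collector/hbase.bak
# Remove the AMS znode (assumed name) using the ZooKeeper CLI:
/usr/hdp/current/zookeeper-client/bin/zkCli.sh -server localhost:2181 rmr /ams-hbase-unsecure
# Finally, start AMS again from Ambari.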
09-04-2018 03:24 PM

Problem Statement: We recently upgraded our Ambari and HDP to the latest version. As part of the Ambari upgrade prerequisites we missed upgrading the ambari-infra rpm/package. We did the HDP upgrade and then realized that ambari-infra had not been upgraded, so we upgraded the ambari-infra package on the respective node. When checking in the Ranger UI, I am not able to see Ranger audits and it gives this error -

2018-09-03 12:47:06,891 [http-bio-6080-exec-18] ERROR org.apache.ranger.solr.SolrUtil (SolrUtil.java:161) - Error running solr query. Query = q=*:*&fq=evtTime:[2018-09-02T16:00:00Z+TO+NOW]&sort=evtTime+desc&start=0&rows=25&_stateVer_=ranger_audits:542, response = null
2018-09-03 12:47:06,892 [http-bio-6080-exec-18] INFO org.apache.ranger.common.RESTErrorUtil (RESTErrorUtil.java:63) - Request failed. loginId=admin, logMessage=Error running solr query, please check solr configs. Could not find a healthy node to handle the request.
javax.ws.rs.WebApplicationException

Can you help to resolve this issue? Attached: xa_portal.log (xa-portal.txt)
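As a first check, a sketch of querying Solr directly for the state of the ranger_audits collection, assuming a default Ambari Infra Solr on port 8886 ("infra-solr-host" is a placeholder):

# "Could not find a healthy node" usually means no live replica; check cluster state:
curl "http://infra-solr-host:8886/solr/admin/collections?action=CLUSTERSTATUS&wt=json"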
Labels:
- Apache Ranger
- Apache Solr
08-24-2018 01:22 PM

@pjoseph @Nanda Kumar Please share your views.
08-24-2018 09:49 AM

Problem Statement: A few NodeManagers in the cluster are shutting down/crashing with the error below -

2018-08-24 09:37:31,583 INFO nodemanager.LinuxContainerExecutor (LinuxContainerExecutor.java:deleteAsUser(537)) - Deleting absolute path : /data07/hadoop/yarn/local/usercache/XXX/appcache/application_1533656250055_31336
2018-08-24 09:37:31,583 INFO nodemanager.LinuxContainerExecutor (LinuxContainerExecutor.java:deleteAsUser(537)) - Deleting absolute path : /data08/hadoop/yarn/local/usercache/XXX/appcache/application_1533656250055_31336
2018-08-24 09:37:31,583 INFO nodemanager.LinuxContainerExecutor (LinuxContainerExecutor.java:deleteAsUser(537)) - Deleting absolute path : /data10/hadoop/yarn/local/usercache/XXX/appcache/application_1533656250055_31336
2018-08-24 09:37:31,583 INFO nodemanager.LinuxContainerExecutor (LinuxContainerExecutor.java:deleteAsUser(537)) - Deleting absolute path : /data09/hadoop/yarn/local/usercache/XXX/appcache/application_1533656250055_31336
2018-08-24 09:37:33,138 FATAL yarn.YarnUncaughtExceptionHandler (YarnUncaughtExceptionHandler.java:uncaughtException(51)) - Thread Thread[Container Monitor,5,main] threw an Error. Shutting down now
...
java.lang.OutOfMemoryError: GC overhead limit exceeded
at java.io.BufferedReader.<init>(BufferedReader.java:105)
at java.io.BufferedReader.<init>(BufferedReader.java:116)
at org.apache.hadoop.yarn.util.ProcfsBasedProcessTree.constructProcessInfo(ProcfsBasedProcessTree.java:554)
at org.apache.hadoop.yarn.util.ProcfsBasedProcessTree.updateProcessTree(ProcfsBasedProcessTree.java:225)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl$MonitoringThread.run(ContainersMonitorImpl.java:445)
2018-08-24 09:37:33,145 INFO nodemanager.LinuxContainerExecutor (LinuxContainerExecutor.java:deleteAsUser(542)) - Deleting path : /data01/hadoop/yarn/log/application_1533656250055_31336/container_e92_1533656250055_31336_01_000001/directory.info

Ambari Version: 2.4.2.0
HDP Version: 2.5.3.0

Analysis: From the Ambari YARN configs I see that the NodeManager heap is set to 1GB. I see a few links which say increasing the heap to 2GB resolves the issue. Eg - http://www-01.ibm.com/support/docview.wss?uid=swg22002422

Suggestions/Help expected:
1. Can you guide me on how to debug this GC error further for RCA? Do you think that by enabling GC logging and using the "JConsole" tool we can debug the jobs - why and where they use more heap/memory? (A GC-logging sketch follows this post.)
2. How can we confirm that a 1GB heap is not the correct size for the cluster before I proceed to increase it to 2GB?
3. Also, how can I make sure that after increasing to 2GB I am not going to hit the GC issue again? Is there any forecasting I can do here to prevent the issue from happening in the future?

Please do let me know if you need any more details.
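A minimal sketch of enabling GC logging for the NodeManager, assuming the flags are appended to YARN_NODEMANAGER_OPTS in yarn-env (the log path below is a placeholder):

# In yarn-env.sh (Ambari: YARN > Configs > Advanced yarn-env), append JDK8 GC-logging
# flags to the NodeManager JVM options; adjust the placeholder log location.
export YARN_NODEMANAGER_OPTS="$YARN_NODEMANAGER_OPTS \
  -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps \
  -Xloggc:/var/log/hadoop-yarn/nodemanager-gc.log \
  -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=5 -XX:GCLogFileSize=10M"

The resulting log shows heap occupancy before and after each collection, which helps judge whether 1GB is genuinely undersized (point 2) before bumping it to 2GB.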
Labels:
- Apache YARN
08-23-2018 12:56 PM

Nice and very useful article, @Rajkumar Singh.
03-23-2018 06:36 AM

Environment: Ambari 2.5.2, HDP 2.6.2-14. I am facing an issue while running a shell action from Oozie. The shell action contains a sample Hive LLAP query - [insert <something> on table...] - but the job gets stuck in PREP and the YARN UI says "Waiting for allocating container". I tried running the sample shell action as per the URL below. https://github.com/dbist/oozie-examples/tree/master/apps/shell Can you help with debug steps?
Labels:
- Apache Hive
- Apache Oozie
03-01-2018 08:28 PM

1 Kudo

@Veerendra Nath Jasthi From the above error it seems the issue is with the tag "versionxxxxx": tag 'version1519845495539' exists for 'capacity-scheduler'. Just update the "tag" "version1519845495539" in the curl command to some random number, e.g. version151984546666. Please retry and let me know if there is still any issue.
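A small sketch of generating a fresh, collision-free tag instead of hand-editing it (assumes GNU date, where %3N yields milliseconds):

# Build a unique config tag from the current epoch time in milliseconds.
tag="version$(date +%s%3N)"
echo "$tag"   # e.g. version1519845495539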
03-01-2018 06:25 AM

@Tim Veil Please refer to the link below for a working command to achieve queue addition using a script / in an automated way - https://community.hortonworks.com/questions/155903/how-to-add-new-yarn-queue-using-rest-api-ambari-co.html?childToView=174665#answer-174665
03-01-2018 06:22 AM

@pjoseph I was able to achieve this using the Ambari API by updating the service configs. Below is the working command - I have added the queue name "MaxiqQueue": curl -u $ambari_user:$ambari_password -H 'X-Requested-By:admin' -X PUT "http://$ambari_server_host:8080/api/v1/clusters/$CLUSTER_NAME" -d '{
"Clusters": {
"desired_config": {
"type": "capacity-scheduler",
"tag": "version'$date'",
"properties": {
"yarn.scheduler.capacity.maximum-am-resource-percent" : "0.2",
"yarn.scheduler.capacity.maximum-applications" : "10000",
"yarn.scheduler.capacity.node-locality-delay" : "40",
"yarn.scheduler.capacity.queue-mappings-override.enable" : "false",
"yarn.scheduler.capacity.resource-calculator" : "org.apache.hadoop.yarn.util.resource.DefaultResourceCalculator",
"yarn.scheduler.capacity.root.MaxiqQueue.acl_administer_queue" : "*",
"yarn.scheduler.capacity.root.MaxiqQueue.acl_submit_applications" : "*",
"yarn.scheduler.capacity.root.MaxiqQueue.capacity" : "90",
"yarn.scheduler.capacity.root.MaxiqQueue.maximum-capacity" : "90",
"yarn.scheduler.capacity.root.MaxiqQueue.minimum-user-limit-percent" : "100",
"yarn.scheduler.capacity.root.MaxiqQueue.ordering-policy" : "fifo",
"yarn.scheduler.capacity.root.MaxiqQueue.state" : "RUNNING",
"yarn.scheduler.capacity.root.MaxiqQueue.user-limit-factor" : "1",
"yarn.scheduler.capacity.root.accessible-node-labels" : "*",
"yarn.scheduler.capacity.root.acl_administer_queue" : "yarn",
"yarn.scheduler.capacity.root.capacity" : "100",
"yarn.scheduler.capacity.root.default.acl_administer_queue" : "yarn",
"yarn.scheduler.capacity.root.default.acl_submit_applications" : "yarn",
"yarn.scheduler.capacity.root.default.capacity" : "10",
"yarn.scheduler.capacity.root.default.maximum-capacity" : "100",
"yarn.scheduler.capacity.root.default.state" : "RUNNING",
"yarn.scheduler.capacity.root.default.user-limit-factor" : "1",
"yarn.scheduler.capacity.root.queues" : "MaxiqQueue,default"
}
}
}
}'
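For reference, the shell variables used above must be defined beforehand; a minimal sketch with placeholder values:

# Placeholder values - adjust for your cluster before running the PUT above.
ambari_user=admin
ambari_password=admin
ambari_server_host=localhost
CLUSTER_NAME=hdptest
date=$(date +%s)   # used to build the unique "version$date" tag

After the PUT succeeds, Ambari marks YARN as requiring a restart to apply the new capacity-scheduler version.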
03-01-2018 12:30 AM

@Jay Kumar SenSharma It worked. Thanks a lot for the quick help.
02-28-2018 01:14 PM

1 Kudo

Hi Team, This is resolved. I corrected the command, which resolved the issue. Below is a working example - curl -u admin:admin -H 'X-Requested-By:admin' -X PUT 'http://localhost:8080/api/v1/clusters/hdptest' -d '{
"Clusters": {
"desired_config": {
"type": "topology",
"tag": "version2480211312386666",
"properties": {
"content" : " <topology>\n\n <gateway>\n\n <provider>\n <role>authentication</role>\n <name>ShiroProvider</name>\n <enabled>true</enabled>\n <param>\n <name>sessionTimeout</name>\n <value>30</value>\n </param>\n <param>\n <name>main.ldapRealm</name>\n <value>org.apache.shiro.realm.ldap.JndiLdapRealm</value>\n </param>\n <param>\n <name>main.ldapRealm.userDnTemplate</name>\n <value>uid={0},ou=People,dc=example,dc=com</value>\n </param>\n <param>\n <name>main.ldapRealm.contextFactory.url</name>\n <value>ldap://192.168.56.111:389</value>\n </param>\n <param>\n <name>main.ldapRealm.contextFactory.authenticationMechanism</name>\n <value>simple</value>\n </param>\n <param>\n <name>urls./**</name>\n <value>authcBasic</value>\n </param>\n </provider>\n\n <provider>\n <role>identity-assertion</role>\n <name>Default</name>\n <enabled>true</enabled>\n </provider>\n\n <provider>\n <role>authorization</role>\n <name>AclsAuthz</name>\n <enabled>true</enabled>\n </provider>\n\n </gateway>\n\n <service>\n <role>NAMENODE</role>\n <url>hdfs://{{namenode_host}}:{{namenode_rpc_port}}</url>\n </service>\n\n <service>\n <role>JOBTRACKER</role>\n <url>rpc://{{rm_host}}:{{jt_rpc_port}}</url>\n </service>\n\n <service>\n <role>WEBHDFS</role>\n {{webhdfs_service_urls}}\n </service>\n\n <service>\n <role>WEBHCAT</role>\n <url>http://{{webhcat_server_host}}:{{templeton_port}}/templeton</url>\n </service>\n\n <service>\n <role>OOZIE</role>\n <url>http://{{oozie_server_host}}:{{oozie_server_port}}/oozie</url>\n </service>\n\n <service>\n <role>WEBHBASE</role>\n <url>http://{{hbase_master_host}}:{{hbase_master_port}}</url>\n </service>\n\n <service>\n <role>HIVE</role>\n <url>http://{{hive_server_host}}:{{hive_http_port}}/{{hive_http_path}}</url>\n </service>\n\n <service>\n <role>RESOURCEMANAGER</role>\n <url>http://{{rm_host}}:{{rm_port}}/ws</url>\n </service>\n </topology>"
}
}
}
}'
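One way to build that long "content" string, sketched here, is to read the current topology config back from Ambari and edit it (replace <current_tag> with the tag returned by the first call):

# Find the current tag for the "topology" config type...
curl -s -u admin:admin 'http://localhost:8080/api/v1/clusters/hdptest?fields=Clusters/desired_configs'
# ...then dump that version's properties to use as the base for the new content value.
curl -s -u admin:admin 'http://localhost:8080/api/v1/clusters/hdptest/configurations?type=topology&tag=<current_tag>'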
02-28-2018 12:40 PM

@rguruvannagari @Jay Kumar SenSharma Can you please help?
02-28-2018 12:39 PM

Hi Team, I am trying to modify Knox -> Configs -> Advanced Topology using the link below, but with no luck. https://cwiki.apache.org/confluence/display/AMBARI/Modify+configurations I tried creating the command below as per that link - $ curl -u admin:admin -H "X-Requested-By: ambari" -X PUT -d '[{"Clusters":{
"desired_config":[{
"type":"toppology",
"tag":"version1519800743769",
"properties":{
"content" : " <topology>\n\n <gateway>\n\n <provider>\n <role>authentication</role>\n <name>ShiroProvider</name>\n <enabled>true</enabled>\n <param>\n <name>sessionTimeout</name>\n <value>30</value>\n </param>\n <param>\n <name>main.ldapRealm</name>\n <value>org.apache.shiro.realm.ldap.JndiLdapRealm</value>\n </param>\n <param>\n <name>main.ldapRealm.userDnTemplate</name>\n <value>uid={0},ou=People,dc=lti,dc=com</value>\n </param>\n <param>\n <name>main.ldapRealm.contextFactory.url</name>\n <value>ldap://192.168.56.111:389</value>\n </param>\n <param>\n <name>main.ldapRealm.contextFactory.authenticationMechanism</name>\n <value>simple</value>\n </param>\n <param>\n <name>urls./**</name>\n <value>authcBasic</value>\n </param>\n </provider>\n\n <provider>\n <role>identity-assertion</role>\n <name>Default</name>\n <enabled>true</enabled>\n </provider>\n\n <provider>\n <role>authorization</role>\n <name>AclsAuthz</name>\n <enabled>true</enabled>\n </provider>\n\n </gateway>\n\n <service>\n <role>NAMENODE</role>\n <url>hdfs://{{namenode_host}}:{{namenode_rpc_port}}</url>\n </service>\n\n <service>\n <role>JOBTRACKER</role>\n <url>rpc://{{rm_host}}:{{jt_rpc_port}}</url>\n </service>\n\n <service>\n <role>WEBHDFS</role>\n {{webhdfs_service_urls}}\n </service>\n\n <service>\n <role>WEBHCAT</role>\n <url>http://{{webhcat_server_host}}:{{templeton_port}}/templeton</url>\n </service>\n\n <service>\n <role>OOZIE</role>\n <url>http://{{oozie_server_host}}:{{oozie_server_port}}/oozie</url>\n </service>\n\n <service>\n <role>WEBHBASE</role>\n <url>http://{{hbase_master_host}}:{{hbase_master_port}}</url>\n </service>\n\n <service>\n <role>HIVE</role>\n <url>http://{{hive_server_host}}:{{hive_http_port}}/{{hive_http_path}}</url>\n </service>\n\n <service>\n <role>RESOURCEMANAGER</role>\n <url>http://{{rm_host}}:{{rm_port}}/ws</url>\n </service>\n </topology>"
},
"service_config_version_note":"New config version"}]}}]' 'http://localhost:8080/api/v1/clusters/hdptest'
I also tried creating a test.json and PUTting it - [{"Clusters":{
"desired_config":[{
"type":"toppology",
"tag":"version2480557386666",
"properties":{
"content" : " <topology>\n\n <gateway>\n\n <provider>\n <role>authentication</role>\n <name>ShiroProvider</name>\n <enabled>true</enabled>\n <param>\n <name>sessionTimeout</name>\n <value>30</value>\n </param>\n <param>\n <name>main.ldapRealm</name>\n <value>org.apache.shiro.realm.ldap.JndiLdapRealm</value>\n </param>\n <param>\n <name>main.ldapRealm.userDnTemplate</name>\n <value>uid={0},ou=People,dc=lti,dc=com</value>\n </param>\n <param>\n <name>main.ldapRealm.contextFactory.url</name>\n <value>ldap://192.168.56.111:389</value>\n </param>\n <param>\n <name>main.ldapRealm.contextFactory.authenticationMechanism</name>\n <value>simple</value>\n </param>\n <param>\n <name>urls./**</name>\n <value>authcBasic</value>\n </param>\n </provider>\n\n <provider>\n <role>identity-assertion</role>\n <name>Default</name>\n <enabled>true</enabled>\n </provider>\n\n <provider>\n <role>authorization</role>\n <name>AclsAuthz</name>\n <enabled>true</enabled>\n </provider>\n\n </gateway>\n\n <service>\n <role>NAMENODE</role>\n <url>hdfs://{{namenode_host}}:{{namenode_rpc_port}}</url>\n </service>\n\n <service>\n <role>JOBTRACKER</role>\n <url>rpc://{{rm_host}}:{{jt_rpc_port}}</url>\n </service>\n\n <service>\n <role>WEBHDFS</role>\n {{webhdfs_service_urls}}\n </service>\n\n <service>\n <role>WEBHCAT</role>\n <url>http://{{webhcat_server_host}}:{{templeton_port}}/templeton</url>\n </service>\n\n <service>\n <role>OOZIE</role>\n <url>http://{{oozie_server_host}}:{{oozie_server_port}}/oozie</url>\n </service>\n\n <service>\n <role>WEBHBASE</role>\n <url>http://{{hbase_master_host}}:{{hbase_master_port}}</url>\n </service>\n\n <service>\n <role>HIVE</role>\n <url>http://{{hive_server_host}}:{{hive_http_port}}/{{hive_http_path}}</url>\n </service>\n\n <service>\n <role>RESOURCEMANAGER</role>\n <url>http://{{rm_host}}:{{rm_port}}/ws</url>\n </service>\n </topology>"
},
"service_config_version_note":"New config version"}]}}]
and executed the command below - curl -H 'X-Requested-By:ambari' -u admin:admin -X PUT --data @test.json http://localhost:8080/api/v1/clusters/hdptest/ But no luck. Can you please let me know how I can achieve this?
Labels:
- Apache Ambari
- Apache Knox
02-28-2018 06:12 AM

@Deepak Sharma Thanks. Will try and report back.
02-28-2018 05:54 AM
@rguruvannagari @Jay Kumar SenSharma
02-28-2018 05:54 AM

I am trying to automate Ranger LDAP integration. I am stuck on how to set the "ranger.usersync.ldap.ldapbindpassword" value using the Ambari "configs.sh" script. Can you please guide me on how to achieve this? I checked a working cluster which has Ranger LDAP integrated, and the output is as below - # ./configs.sh -u admin -p admin -port 8080 get `hostname` hdptest ranger-ugsync-site |grep ldapbind
"ranger.usersync.ldap.ldapbindpassword" : "SECRET:ranger-ugsync-site:11:ranger.usersync.ldap.ldapbindpassword",
Not sure how the password is stored or what value should be passed in.
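A sketch of pushing the value with configs.sh, assuming its standard get/set syntax ('MyBindPassword' is a placeholder; Ambari should store it back as the SECRET:... reference shown above):

# Set the bind password in ranger-ugsync-site.
./configs.sh -u admin -p admin -port 8080 set `hostname` hdptest ranger-ugsync-site "ranger.usersync.ldap.ldapbindpassword" "MyBindPassword"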
Labels:
- Apache Ambari
- Apache Ranger
02-23-2018 11:50 AM

@Kuldeep Kulkarni Please add the "deploy JCE policies" steps as a prerequisite. I tried without JCE and it failed for me. Let me know if I am missing anything.
02-23-2018 04:31 AM

@spolavarapu Found this as a BUG - https://issues.apache.org/jira/browse/RANGER-1615?page=com.atlassian.jira.plugin.system.issuetabpanels%3Aall-tabpanel Can you confirm if this is fixed in the latest version of Ranger 0.7?
02-20-2018 05:18 AM

@GN_Exp Can you share the details below -
1. Ranger install.properties
2. Ranger ugsync install.properties
3. Output of: $ ldapsearch -x -b "dc=example,dc=com" [replace example with your domain name]
02-08-2018 07:16 AM

I see this error is the same as the BUG reported in https://issues.apache.org/jira/browse/AMBARI-20264, but I am using Ambari 2.6.0.1.
02-08-2018 06:50 AM

Current settings (Interactive Query panel):
- Enable Interactive Query (requires YARN pre-emption): Yes
- Interactive Query Queue:
- Number of nodes used by Hive's LLAP: 1
- Maximum Total Concurrent Queries: 2
- Memory per Daemon: 20480
- In-Memory Cache per Daemon: 4096
- Number of executors per LLAP Daemon: 9
02-08-2018 06:47 AM

@cravani @Sindhu 2018-02-08 06:22:00,735 - LLAP status command : /usr/hdp/current/hive-server2-hive2/bin/hive --service llapstatus -w -r 0.8 -i 2 -t 500
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/hdp/2.6.2.14-5/hive2/lib/log4j-slf4j-impl-2.6.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/hdp/2.6.2.14-5/hive2/auxlib/phoenix-4.7.0.2.6.2.14-5-client.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/hdp/2.6.2.14-5/hive2/auxlib/phoenix-4.7.0.2.6.2.14-5-hive.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/hdp/2.6.2.14-5/hadoop/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
LLAPSTATUS WatchMode with timeout=500 s
--------------------------------------------------------------------------------
LLAP Starting up with AppId=application_1518066639207_0009.
--------------------------------------------------------------------------------
LLAP Starting up with AppId=application_1518066639207_0009. Started 0/1 instances
--------------------------------------------------------------------------------
[... the two status lines above repeat at each polling interval while LLAP waits for its instance to start ...]
--------------------------------------------------------------------------------
LLAP Starting up with AppId=application_1518066639207_0009. Started 0/1 instances
FAILED container: container_e08_1518066639207_0009_01_000002, Logs at: http://ip-192-168-180-102.ca-central-1.compute.internal:8042/node/containerlogs/container_e08_1518066639207_0009_01_000002/hive
--------------------------------------------------------------------------------
LLAP Starting up with AppId=application_1518066639207_0009. Started 0/1 instances
FAILED container: container_e08_1518066639207_0009_01_000002, Logs at: http://ip-192-168-180-102.ca-central-1.compute.internal:8042/node/containerlogs/container_e08_1518066639207_0009_01_000002/hive
--------------------------------------------------------------------------------
{
"amInfo" : {
"appName" : "llap0",
"appType" : "org-apache-slider",
"appId" : "application_1518066639207_0009",
"containerId" : "container_e08_1518066639207_0009_01_000001",
"hostname" : "ip-192-168-180-102.ca-central-1.compute.internal",
"amWebUrl" : "http://ip-192-168-180-102.ca-central-1.compute.internal:40207/"
},
"state" : "LAUNCHING",
"originalConfigurationPath" : "hdfs://ip-192-168-180-189.ca-central-1.compute.internal:8020/user/hive/.slider/cluster/llap0/snapshot",
"generatedConfigurationPath" : "hdfs://ip-192-168-180-189.ca-central-1.compute.internal:8020/user/hive/.slider/cluster/llap0/generated",
"desiredInstances" : 1,
"liveInstances" : 0,
"appStartTime" : 1518070925860,
"runningThresholdAchieved" : false,
"completedInstances" : [ {
"hostname" : "ip-192-168-180-102.ca-central-1.compute.internal",
"containerId" : "container_e08_1518066639207_0009_01_000002",
"logUrl" : "http://ip-192-168-180-102.ca-central-1.compute.internal:8042/node/containerlogs/container_e08_1518066639207_0009_01_000002/hive",
"yarnContainerExitStatus" : 0
} ]
}
WARN cli.LlapStatusServiceDriver: Watch timeout 500s exhausted before desired state RUNNING is attained.
2018-02-08 06:30:27,360 - LLAP app 'llap0' current state is LAUNCHING.
2018-02-08 06:30:27,360 - LLAP app 'llap0' current state is LAUNCHING.
2018-02-08 06:30:27,360 - LLAP app 'llap0' deployment unsuccessful.
2018-02-08 06:30:27,360 - Stopping LLAP
2018-02-08 06:30:27,360 - call[['slider', 'stop', u'llap0']] {'logoutput': True, 'user': 'hive', 'stderr': -1}
2018-02-08 06:30:28,469 [main] WARN shortcircuit.DomainSocketFactory - The short-circuit local reads feature cannot be used because libhadoop cannot be loaded.
2018-02-08 06:30:28,476 [main] INFO client.RMProxy - Connecting to ResourceManager at ip-192-168-180-228.ca-central-1.compute.internal/192.168.180.228:8050
2018-02-08 06:30:28,568 [main] INFO client.AHSProxy - Connecting to Application History server at ip-192-168-180-228.ca-central-1.compute.internal/192.168.180.228:10200
2018-02-08 06:30:28,703 [main] INFO util.ExitUtil - Exiting with status 0
2018-02-08 06:30:29,462 - call returned (0, '2018-02-08 06:30:28,469 [main] WARN shortcircuit.DomainSocketFactory - The short-circuit local reads feature cannot be used because libhadoop cannot be loaded.\n2018-02-08 06:30:28,476 [main] INFO client.RMProxy - Connecting to ResourceManager at ip-192-168-180-228.ca-central-1.compute.internal/192.168.180.228:8050\n2018-02-08 06:30:28,568 [main] INFO client.AHSProxy - Connecting to Application History server at ip-192-168-180-228.ca-central-1.compute.internal/192.168.180.228:10200\n2018-02-08 06:30:28,703 [main] INFO util.ExitUtil - Exiting with status 0', '')
2018-02-08 06:30:29,462 - Stopped llap0 application on Slider successfully
2018-02-08 06:30:29,463 - Execute[('slider', 'destroy', u'llap0', '--force')] {'ignore_failures': True, 'user': 'hive', 'timeout': 30}
Command failed after 1 tries
02-06-2018 11:39 AM

While starting Hive LLAP using Ambari it fails. Below are the logs for the respective application - 2018-02-06 11:18:07,935 [main] INFO impl.AMRMClientImpl - Waiting for application to be successfully unregistered.
2018-02-06 11:18:08,038 [main] INFO appmaster.SliderAppMaster - Exiting AM; final exit code = 0
2018-02-06 11:18:08,039 [main] INFO util.ExitUtil - Exiting with status 0
2018-02-06 11:18:08,039 [Shutdown] INFO mortbay.log - Shutdown hook executing
2018-02-06 11:18:08,039 [Shutdown] INFO mortbay.log - Graceful shutdown SslSelectChannelConnector@0.0.0.0:39816
2018-02-06 11:18:08,043 [Shutdown] INFO mortbay.log - Graceful shutdown SslSelectChannelConnector@0.0.0.0:41802
2018-02-06 11:18:08,045 [Shutdown] INFO mortbay.log - Graceful shutdown org.mortbay.jetty.servlet.Context@37b8fa89{/,null}
2018-02-06 11:18:08,047 [pool-1-thread-1] INFO mortbay.log - Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@0.0.0.0:41980
2018-02-06 11:18:09,046 [Shutdown] INFO mortbay.log - Stopped SslSelectChannelConnector@0.0.0.0:39816
2018-02-06 11:18:09,046 [Shutdown] INFO mortbay.log - Stopped SslSelectChannelConnector@0.0.0.0:41802
2018-02-06 11:18:09,146 [Shutdown] INFO mortbay.log - Shutdown hook complete
2018-02-06 11:18:09,150 [pool-1-thread-1] INFO ipc.Server - Stopping server on 36834
2018-02-06 11:18:09,150 [IPC Server listener on 36834] INFO ipc.Server - Stopping IPC Server listener on 36834
2018-02-06 11:18:09,150 [pool-1-thread-1] INFO impl.NMClientAsyncImpl - NM Client is being stopped.
2018-02-06 11:18:09,150 [pool-1-thread-1] INFO impl.NMClientAsyncImpl - Waiting for eventDispatcherThread to be interrupted.
2018-02-06 11:18:09,150 [IPC Server Responder] INFO ipc.Server - Stopping IPC Server Responder
2018-02-06 11:18:09,150 [pool-1-thread-1] INFO impl.NMClientAsyncImpl - eventDispatcherThread exited.
2018-02-06 11:18:09,150 [pool-1-thread-1] INFO impl.NMClientAsyncImpl - Stopping NM client.
2018-02-06 11:18:09,151 [pool-1-thread-1] INFO impl.NMClientImpl - Clean up running containers on stop.
2018-02-06 11:18:09,151 [pool-1-thread-1] INFO impl.NMClientImpl - Stopping container_e31_1517907648176_0016_01_000002
2018-02-06 11:18:09,151 [pool-1-thread-1] INFO impl.NMClientImpl - ok, stopContainerInternal.. container_e31_1517907648176_0016_01_000002
2018-02-06 11:18:09,152 [pool-1-thread-1] INFO impl.ContainerManagementProtocolProxy - Opening proxy : ip-192-168-180-102.ca-central-1.compute.internal:45454
2018-02-06 11:18:09,262 [pool-1-thread-1] INFO impl.NMClientImpl - Running containers cleaned up. Stopping NM proxies.
2018-02-06 11:18:09,262 [pool-1-thread-1] INFO impl.NMClientImpl - Stopped all proxies.
2018-02-06 11:18:09,262 [pool-1-thread-1] INFO impl.NMClientAsyncImpl - NMClient stopped.
2018-02-06 11:18:09,263 [AmExecutor-005] INFO actions.QueueService - QueueService processor terminated
2018-02-06 11:18:09,263 [AmExecutor-006] WARN actions.ActionStopQueue - STOP
2018-02-06 11:18:09,263 [AmExecutor-006] INFO actions.QueueExecutor - Queue Executor run() stopped
2018-02-06 11:18:09,263 [AMRM Callback Handler Thread] INFO impl.AMRMClientAsyncImpl - Interrupted while waiting for queue
java.lang.InterruptedException
at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2014)
at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2048)
at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
at org.apache.hadoop.yarn.client.api.async.impl.AMRMClientAsyncImpl$CallbackHandlerThread.run(AMRMClientAsyncImpl.java:276)
Attaching the Ambari operation startup logs and application logs here: logszip.zip. Any idea how to debug this?
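One way to dig further, sketched here, is to pull the aggregated container logs for the LLAP application from YARN (the application ID is derived from the container name in the output above):

# Fetch all container logs for the failed LLAP app; run as the hive user.
yarn logs -applicationId application_1517907648176_0016 | less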
Labels:
- Apache Hive
- Cloudera DataFlow (CDF)
02-04-2018 01:01 PM

@rguruvannagari Hi, what should the default value of "<keystore set with ranger.truststore.file>" be, since I haven't set any keystore/truststore for Ranger?
02-02-2018 07:00 PM

Cluster: HDP 2.5.3. I set up a new cluster and enabled Kerberos. I also enabled the Knox Ranger plugin and tried the test connection, which fails with the error below -

2018-02-02 18:55:53,821 [timed-executor-pool-0] ERROR org.apache.ranger.plugin.util.PasswordUtils (PasswordUtils.java:127) - Unable to decrypt password due to error
javax.crypto.IllegalBlockSizeException: Input length must be multiple of 8 when decrypting with padded cipher
at com.sun.crypto.provider.CipherCore.doFinal(CipherCore.java:936)
at com.sun.crypto.provider.CipherCore.doFinal(CipherCore.java:847)
at com.sun.crypto.provider.PBES1Core.doFinal(PBES1Core.java:416)
at com.sun.crypto.provider.PBEWithMD5AndDESCipher.engineDoFinal(PBEWithMD5AndDESCipher.java:316)
at javax.crypto.Cipher.doFinal(Cipher.java:2165)
at org.apache.ranger.plugin.util.PasswordUtils.decryptPassword(PasswordUtils.java:112)
at org.apache.ranger.services.knox.client.KnoxClient.getTopologyList(KnoxClient.java:79)
at org.apache.ranger.services.knox.client.KnoxClient$2.call(KnoxClient.java:397)
at org.apache.ranger.services.knox.client.KnoxClient$2.call(KnoxClient.java:394)
at org.apache.ranger.services.knox.client.KnoxClient.timedTask(KnoxClient.java:423)
at org.apache.ranger.services.knox.client.KnoxClient.getKnoxResources(KnoxClient.java:402)
at org.apache.ranger.services.knox.client.KnoxClient.connectionTest(KnoxClient.java:311)
at org.apache.ranger.services.knox.client.KnoxResourceMgr.validateConfig(KnoxResourceMgr.java:43)
at org.apache.ranger.services.knox.RangerServiceKnox.validateConfig(RangerServiceKnox.java:56)
at org.apache.ranger.biz.ServiceMgr$ValidateCallable.actualCall(ServiceMgr.java:560)
at org.apache.ranger.biz.ServiceMgr$ValidateCallable.actualCall(ServiceMgr.java:547)
at org.apache.ranger.biz.ServiceMgr$TimedCallable.call(ServiceMgr.java:508)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
2018-02-02 18:55:53,822 [timed-executor-pool-0] INFO apache.ranger.services.knox.client.KnoxClient (KnoxClient.java:81) - Password decryption failed; trying knox connection with received password string
2018-02-02 18:55:53,906 [timed-executor-pool-0] ERROR apache.ranger.services.knox.client.KnoxClient (KnoxClient.java:158) - Exception on REST call to KnoxUrl : https://ip-10-0-1-157.ec2.internal:8443/gateway/admin/api/v1/topologies.
com.sun.jersey.api.client.ClientHandlerException: javax.net.ssl.SSLException: java.lang.RuntimeException: Unexpected error: java.security.InvalidAlgorithmParameterException: the trustAnchors parameter must be non-empty
at com.sun.jersey.client.urlconnection.URLConnectionClientHandler.handle(URLConnectionClientHandler.java:131)
at com.sun.jersey.api.client.filter.HTTPBasicAuthFilter.handle(HTTPBasicAuthFilter.java:81)
at com.sun.jersey.api.client.Client.handle(Client.java:616)
at com.sun.jersey.api.client.WebResource.handle(WebResource.java:559)
at com.sun.jersey.api.client.WebResource.access$200(WebResource.java:72)
at com.sun.jersey.api.client.WebResource$Builder.get(WebResource.java:454)
at org.apache.ranger.services.knox.client.KnoxClient.getTopologyList(KnoxClient.java:98)
at org.apache.ranger.services.knox.client.KnoxClient$2.call(KnoxClient.java:397)
at org.apache.ranger.services.knox.client.KnoxClient$2.call(KnoxClient.java:394)
at org.apache.ranger.services.knox.client.KnoxClient.timedTask(KnoxClient.java:423)
at org.apache.ranger.services.knox.client.KnoxClient.getKnoxResources(KnoxClient.java:402)
at org.apache.ranger.services.knox.client.KnoxClient.connectionTest(KnoxClient.java:311)
at org.apache.ranger.services.knox.client.KnoxResourceMgr.validateConfig(KnoxResourceMgr.java:43)
at org.apache.ranger.services.knox.RangerServiceKnox.validateConfig(RangerServiceKnox.java:56)
at org.apache.ranger.biz.ServiceMgr$ValidateCallable.actualCall(ServiceMgr.java:560)
at org.apache.ranger.biz.ServiceMgr$ValidateCallable.actualCall(ServiceMgr.java:547)
at org.apache.ranger.biz.ServiceMgr$TimedCallable.call(ServiceMgr.java:508)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: javax.net.ssl.SSLException: java.lang.RuntimeException: Unexpected error: java.security.InvalidAlgorithmParameterException: the trustAnchors parameter must be non-empty
at sun.security.ssl.Alerts.getSSLException(Alerts.java:208)
at sun.security.ssl.SSLSocketImpl.fatal(SSLSocketImpl.java:1959)
at sun.security.ssl.SSLSocketImpl.fatal(SSLSocketImpl.java:1916)
at sun.security.ssl.SSLSocketImpl.handleException(SSLSocketImpl.java:1899)
at sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1420)
at sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1397)
at sun.net.www.protocol.https.HttpsClient.afterConnect(HttpsClient.java:559)
at sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.connect(AbstractDelegateHttpsURLConnection.java:185)
at sun.net.www.protocol.http.HttpURLConnection.getInputStream0(HttpURLConnection.java:1564)
at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1492)
at java.net.HttpURLConnection.getResponseCode(HttpURLConnection.java:480)
at sun.net.www.protocol.https.HttpsURLConnectionImpl.getResponseCode(HttpsURLConnectionImpl.java:347)
at com.sun.jersey.client.urlconnection.URLConnectionClientHandler._invoke(URLConnectionClientHandler.java:218)
at com.sun.jersey.client.urlconnection.URLConnectionClientHandler.handle(URLConnectionClientHandler.java:129)
... 20 more
Caused by: java.lang.RuntimeException: Unexpected error: java.security.InvalidAlgorithmParameterException: the trustAnchors parameter must be non-empty
at sun.security.validator.PKIXValidator.<init>(PKIXValidator.java:91)
at sun.security.validator.Validator.getInstance(Validator.java:179)
at sun.security.ssl.X509TrustManagerImpl.getValidator(X509TrustManagerImpl.java:312)
at sun.security.ssl.X509TrustManagerImpl.checkTrustedInit(X509TrustManagerImpl.java:171)
at sun.security.ssl.X509TrustManagerImpl.checkTrusted(X509TrustManagerImpl.java:184)
at sun.security.ssl.X509TrustManagerImpl.checkServerTrusted(X509TrustManagerImpl.java:124)
at sun.security.ssl.ClientHandshaker.serverCertificate(ClientHandshaker.java:1496)
at sun.security.ssl.ClientHandshaker.processMessage(ClientHandshaker.java:216)
at sun.security.ssl.Handshaker.processLoop(Handshaker.java:1026)
at sun.security.ssl.Handshaker.process_record(Handshaker.java:961)
at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:1072)
at sun.security.ssl.SSLSocketImpl.performInitialHandshake(SSLSocketImpl.java:1385)
at sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1413)
... 29 more
Caused by: java.security.InvalidAlgorithmParameterException: the trustAnchors parameter must be non-empty
at java.security.cert.PKIXParameters.setTrustAnchors(PKIXParameters.java:200)
at java.security.cert.PKIXParameters.<init>(PKIXParameters.java:120)
at java.security.cert.PKIXBuilderParameters.<init>(PKIXBuilderParameters.java:104)
at sun.security.validator.PKIXValidator.<init>(PKIXValidator.java:89)
... 41 more
2018-02-02 18:55:53,907 [timed-executor-pool-0] ERROR apache.ranger.services.knox.client.KnoxResourceMgr (KnoxResourceMgr.java:45) - <== KnoxResourceMgr.connectionTest Error: org.apache.ranger.plugin.client.HadoopException: Exception on REST call to KnoxUrl : https://ip-10-0-1-157.ec2.internal:8443/gateway/admin/api/v1/topologies.
2018-02-02 18:55:53,907 [timed-executor-pool-0] ERROR org.apache.ranger.services.knox.RangerServiceKnox (RangerServiceKnox.java:58) - <== RangerServiceKnox.validateConfig Error:org.apache.ranger.plugin.client.HadoopException: Exception on REST call to KnoxUrl : https://ip-10-0-1-157.ec2.internal:8443/gateway/admin/api/v1/topologies.
2018-02-02 18:55:53,907 [timed-executor-pool-0] ERROR org.apache.ranger.biz.ServiceMgr$TimedCallable (ServiceMgr.java:510) - TimedCallable.call: Error:org.apache.ranger.plugin.client.HadoopException: Exception on REST call to KnoxUrl : https://ip-10-0-1-157.ec2.internal:8443/gateway/admin/api/v1/topologies.
2018-02-02 18:55:53,908 [http-bio-6080-exec-10] ERROR org.apache.ranger.biz.ServiceMgr (ServiceMgr.java:188) - ==> ServiceMgr.validateConfig Error:org.apache.ranger.plugin.client.HadoopException: org.apache.ranger.plugin.client.HadoopException: Exception on REST call to KnoxUrl : https://ip-10-0-1-157.ec2.internal:8443/gateway/admin/api/v1/topologies.
Is this the default behaviour?
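The "trustAnchors parameter must be non-empty" part usually means the truststore Ranger uses is empty or missing. A sketch of importing the Knox gateway certificate into a truststore (paths, alias, and password are placeholders) that ranger.truststore.file can then point at:

# Import the Knox gateway certificate into the truststore Ranger will use.
keytool -import -alias knox-gateway -file /tmp/knox-gateway.crt \
  -keystore /etc/ranger/admin/conf/ranger-truststore.jks -storepass changeit -noprompt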
Labels:
- Apache Knox
- Apache Ranger
12-25-2017 01:31 PM

Hi Team, Is there any way to add a new queue using the YARN REST API or the Ambari configs.sh script? I tried the links below but no luck - https://community.hortonworks.com/questions/33578/api-to-manage-yarn-capacity-queue.html http://<ambari_server>:8080/api/v1/views/CAPACITY-SCHEDULER/versions/1.0.0/instances/AUTO_CS_INSTANCE/resources/scheduler/configuration
[root@node1 ~]# curl -u admin:admin -H "X-Requested-By:ambari" -iX PUT -d @cs.json http://192.168.56.111:8080/api/v1/clusters/hdp_mosaic
HTTP/1.1 100 Continue
HTTP/1.1 400 Bad Request
X-Frame-Options: DENY
X-XSS-Protection: 1; mode=block
X-Content-Type-Options: nosniff
Cache-Control: no-store
Pragma: no-cache
Set-Cookie: AMBARISESSIONID=6a3wvwuvrhhu3e5258edqhjt;Path=/;HttpOnly
Expires: Thu, 01 Jan 1970 00:00:00 GMT
User: admin
Content-Type: text/plain
Content-Length: 217
{
"status" : 400,
"message" : "org.apache.ambari.server.controller.spi.UnsupportedPropertyException: The properties [items] specified in the request or predicate are not supported for the resource type Cluster."
When I tried enabling Developer Tools in the Ambari View -> YARN Queue Manager, I saw the call below - http://192.168.56.111:8080/api/v1/clusters/hdp_mosaic/configurations?type=capacity-scheduler&tag=version1514209101120
=======
{
"href" : "http://192.168.56.111:8080/api/v1/clusters/hdp_mosaic/configurations?type=capacity-scheduler&tag=version1514209101120",
"items" : [
{
"href" : "http://192.168.56.111:8080/api/v1/clusters/hdp_mosaic/configurations?type=capacity-scheduler&tag=version1514209101120",
"tag" : "version1514209101120",
"type" : "capacity-scheduler",
"version" : 5,
"Config" : {
"cluster_name" : "hdp_mosaic",
"stack_id" : "HDP-2.5"
},
"properties" : {
"yarn.scheduler.capacity.maximum-am-resource-percent" : "0.2",
"yarn.scheduler.capacity.maximum-applications" : "10000",
"yarn.scheduler.capacity.node-locality-delay" : "40",
"yarn.scheduler.capacity.queue-mappings-override.enable" : "false",
"yarn.scheduler.capacity.resource-calculator" : "org.apache.hadoop.yarn.util.resource.DefaultResourceCalculator",
"yarn.scheduler.capacity.root.MosaiqQueue.acl_administer_queue" : "*",
"yarn.scheduler.capacity.root.MosaiqQueue.acl_submit_applications" : "*",
"yarn.scheduler.capacity.root.MosaiqQueue.capacity" : "90",
"yarn.scheduler.capacity.root.MosaiqQueue.maximum-capacity" : "90",
"yarn.scheduler.capacity.root.MosaiqQueue.minimum-user-limit-percent" : "100",
"yarn.scheduler.capacity.root.MosaiqQueue.ordering-policy" : "fifo",
"yarn.scheduler.capacity.root.MosaiqQueue.state" : "RUNNING",
"yarn.scheduler.capacity.root.MosaiqQueue.user-limit-factor" : "1",
"yarn.scheduler.capacity.root.accessible-node-labels" : "*",
"yarn.scheduler.capacity.root.acl_administer_queue" : "*",
"yarn.scheduler.capacity.root.capacity" : "100",
"yarn.scheduler.capacity.root.default.acl_submit_applications" : "*",
"yarn.scheduler.capacity.root.default.capacity" : "10",
"yarn.scheduler.capacity.root.default.maximum-capacity" : "100",
"yarn.scheduler.capacity.root.default.state" : "RUNNING",
"yarn.scheduler.capacity.root.default.user-limit-factor" : "1",
"yarn.scheduler.capacity.root.queues" : "MosaiqQueue,default"
}
}
]
}
Tags:
- Hadoop Core
- YARN
Labels:
- Apache YARN