Member since
12-14-2015
89
Posts
7
Kudos Received
7
Solutions
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 3197 | 08-20-2019 04:30 AM
 | 3370 | 08-20-2019 12:29 AM
 | 2213 | 10-18-2018 05:32 AM
 | 3498 | 12-15-2016 10:52 AM
 | 972 | 11-10-2016 09:21 AM
10-10-2016
01:50 PM
Thank you!
10-07-2016
05:31 PM
This will work but is definitely not the solution! The documentation does list all required rights, precisely so that I am not required to give the ambari user full rights: https://docs.hortonworks.com/HDPDocuments/Ambari-2.4.1.0/bk_ambari-security/content/configuring_ambari_for_non-root.html However, that list is incorrect, as mentioned in the initial post.
10-07-2016
01:43 PM
Okay, I will try this out. The only other workaround I can think of is getting certificates that include the IP address. Could this work, from your point of view, or is the error caused by Kerberos? The remaining alternative would be to disable SSL for the ATS, which effectively means for all of YARN and HDFS (which is kind of bad).
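For reference, a rough sketch of how such a keystore could be created with the IP from my log added as a SAN (hostname, keystore path and passwords are placeholders; in a real setup the certificate would of course come from our CA via a CSR, this is only meant to show the SAN extension):

keytool -genkeypair -alias ats -keyalg RSA -keysize 2048 -validity 365 \
  -dname "CN=timelineserver.example.com" \
  -ext "SAN=dns:timelineserver.example.com,ip:10.40.11.42" \
  -keystore /etc/security/serverKeys/keystore.jks -storepass changeit -keypass changeit

# verify which names the certificate actually carries
keytool -list -v -keystore /etc/security/serverKeys/keystore.jks -storepass changeit | grep -A3 SubjectAlternativeName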
10-07-2016
01:03 PM
@Felix Albani Do you know how to disable ATS usage by Oozie as a workaround?
10-07-2016
12:47 PM
Hi @Felix Albani, thanks for your answer. However, I recently updated to HDP 2.5 and the error still persists. Edit: I just found the issue mentioned: https://issues.apache.org/jira/browse/OOZIE-2490. HDP 2.5 only includes Oozie 4.2.0.2.5, so the solution is to wait for the next release ...
10-07-2016
12:32 PM
1 Kudo
Hi Community, I am experiencing weird errors with Oozie, YARN and the Application Timeline Server. When I run the Ambari service check for Oozie, Oozie is not able to get a delegation token for the ATS via the ResourceManager. In the ResourceManager log:

2016-10-07 13:26:43,460 WARN security.DelegationTokenRenewer (DelegationTokenRenewer.java:handleDTRenewerAppSubmitEvent(908)) - Unable to add the application to the delegation token renewer.
java.io.IOException: Failed to renew token: Kind: TIMELINE_DELEGATION_TOKEN, Service: 10.40.11.42:8190, Ident: (owner=ambari-qa, renewer=yarn, realUser=oozie, issueDate=1475839603327, maxDate=1476444403327, sequenceNumber=122, masterKeyId=102)
at org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer.handleAppSubmitEvent(DelegationTokenRenewer.java:475)
at org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer.access$800(DelegationTokenRenewer.java:78)
at org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer$DelegationTokenRenewerRunnable.handleDTRenewerAppSubmitEvent(DelegationTokenRenewer.java:904)
at org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer$DelegationTokenRenewerRunnable.run(DelegationTokenRenewer.java:881)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.IOException: HTTPS hostname wrong: should be <10.40.11.42>
at sun.net.www.protocol.https.HttpsClient.checkURLSpoofing(HttpsClient.java:649)
at sun.net.www.protocol.https.HttpsClient.afterConnect(HttpsClient.java:573)
at sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.connect(AbstractDelegateHttpsURLConnection.java:185)
at sun.net.www.protocol.https.HttpsURLConnectionImpl.connect(HttpsURLConnectionImpl.java:153)
at org.apache.hadoop.security.authentication.client.KerberosAuthenticator.authenticate(KerberosAuthenticator.java:188)
at org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.authenticate(DelegationTokenAuthenticator.java:132)
at org.apache.hadoop.security.authentication.client.AuthenticatedURL.openConnection(AuthenticatedURL.java:216)
at org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.doDelegationTokenOperation(DelegationTokenAuthenticator.java:298)
at org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.renewDelegationToken(DelegationTokenAuthenticator.java:216)
at org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticatedURL.renewDelegationToken(DelegationTokenAuthenticatedURL.java:414)
at org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl$2.run(TimelineClientImpl.java:405)
at org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl$2.run(TimelineClientImpl.java:387)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1724)
at org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl$TimelineClientRetryOpForOperateDelegationToken.run(TimelineClientImpl.java:699)
at org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl$TimelineClientConnectionRetry.retryOn(TimelineClientImpl.java:185)
at org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl.operateDelegationToken(TimelineClientImpl.java:462)
at org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl.renewDelegationToken(TimelineClientImpl.java:409)
at org.apache.hadoop.yarn.security.client.TimelineDelegationTokenIdentifier$Renewer.renew(TimelineDelegationTokenIdentifier.java:81)
at org.apache.hadoop.security.token.Token.renew(Token.java:385)
at org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer$1.run(DelegationTokenRenewer.java:597)
at org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer$1.run(DelegationTokenRenewer.java:594)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1724)
at org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer.renewToken(DelegationTokenRenewer.java:592)
at org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer.handleAppSubmitEvent(DelegationTokenRenewer.java:461)
... 6 more

I believe the interesting part is:

Caused by: java.io.IOException: HTTPS hostname wrong: should be <10.40.11.42>
at sun.net.www.protocol.https.HttpsClient.checkURLSpoofing(HttpsClient.java:649)

The setup is:
- HDP 2.5.0.0 and Ambari 2.4.0.1
- HTTPS is enabled for both Hadoop (HDFS, YARN, ATS) and Oozie; the certificates include the hostname(s) of the servers
- Kerberos is enabled
- The cluster is multihomed, but this communication happens only internally
- hadoop.security.token.service.use_ip is already set to false

Do you have any idea which config I could adjust to fix this? Or is it a bug?
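Two sanity checks that can be used to double-check the setup (not a fix; the IP and port are the ones from the token's Service field above):

# should print "false" if the client configuration picks up the setting
hdfs getconf -confKey hadoop.security.token.service.use_ip

# shows which names the ATS certificate actually presents on that address
openssl s_client -connect 10.40.11.42:8190 </dev/null 2>/dev/null | openssl x509 -noout -text | grep -A1 'Subject Alternative Name'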
Labels:
- Apache Oozie
- Apache YARN
10-07-2016
11:07 AM
Hi community, it seems like the docs are incorrect with regard to the non-root configuration for Ambari: sudo rights for "/usr/bin/ambari-python-wrap" are missing. I am running HDP 2.5.0 and Ambari 2.4.0.1 on RHEL 6.7. Without that entry, Ambari reports:

resource_management.core.exceptions.Fail: Execution of 'ambari-python-wrap /usr/bin/conf-select set-conf-dir --package hadoop --stack-version 2.5.0.0-1245 --conf-version 0' returned 1. Sorry, user ambari is not allowed to execute '/usr/bin/ambari-python-wrap /usr/bin/conf-select set-conf-dir --package hadoop --stack-version 2.5.0.0-1245 --conf-version 0' as root on fsdebsup0053.d-fs01.d-vwf.d-vwfs-ad.
Error: Error: Unable to run the custom hook script ['/usr/bin/python', '/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/hooks/before-START/scripts/hook.py', 'START', '/var/lib/ambari-agent/data/command-3158.json', '/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/hooks/before-START', '/var/lib/ambari-agent/data/structured-out-3158.json', 'INFO', '/var/lib/ambari-agent/tmp']
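For reference, the missing entry would presumably look like the other command entries in that document (a sketch only; "ambari" is the run-as user from the error message above, adjust as needed and keep the documented entries in place):

# /etc/sudoers.d/ambari-agent (alongside the commands listed in the non-root doc)
ambari ALL=(ALL) NOPASSWD:SETENV: /usr/bin/ambari-python-wrap *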
Labels:
- Apache Ambari
08-17-2016
06:42 PM
1 Kudo
Just to follow up on this one. You can recommission DataNodes and NodeManagers with the following calls. Replace the variables as needed; you may need to remove the line breaks before running the commands. A short usage example follows after the DataNode call.

NodeManager:
curl -u admin:admin -H 'X-Requested-By:ambari' -X POST -d
'{
"RequestInfo":{
"context":"Recomissioning of host '$host': Recomission NodeManger via REST-API",
"operation_level":{
"level":"HOST",
"cluster_name":"'$clustername'",
"host_name":"'$host'"
},
"command":"DECOMMISSION",
"parameters":{
"slave_type":"NODEMANAGER",
"included_hosts":"'$host'"
}
},
"Requests/resource_filters":[
{
"service_name":"YARN",
"component_name":"RESOURCEMANAGER"
}
]
}' "http://$ambariHost:8080/api/v1/clusters/$clustername/requests"
DataNode:
curl -u admin:admin -H 'X-Requested-By:ambari' -X POST -d
'{
"RequestInfo":{
"context":"Scipt-based Recomissioning of host '$host': Recomission DataNode via REST-API",
"operation_level":{
"level":"HOST",
"cluster_name":"'$clustername'",
"host_name":"'$host'"
},
"command":"DECOMMISSION",
"parameters":{
"slave_type":"DATANODE",
"included_hosts":"'$host'"
}
},
"Requests/resource_filters":[
{
"service_name":"HDFS",
"component_name":"NAMENODE"
}
]
}' "http://$ambariHost:8080/api/v1/clusters/$clustername/requests"
07-20-2016
01:56 PM
@Alejandro Fernandez @smohanty Has there been any development during the past 9 months? Is HA for Ambari (maybe even active-active) supported in Ambari 2.2?
06-30-2016
10:46 AM
Oh well, I think I found the answer in the community: "If your cluster is kerberized you'll need one more account usually called "rangerlookup" to facilitate autocompletion of databases, tables etc, with a headless principal and a password (keytab unsupported). The docs talk about a rangerlookup account per service (hdfs, hbase, etc.) but I use only one." (Source: https://community.hortonworks.com/questions/21818/can-proxyuser-group-be-redefined-as-something-else.html)

Other helpful entries:
- https://community.hortonworks.com/questions/12039/ranger-ui-for-hive-plug-in-auto-complete-of-tables.html
- https://community.hortonworks.com/questions/21145/autocompletion-of-names-not-working-in-ranger.html
- https://community.hortonworks.com/questions/432/permissions-necessary-for-the-user-required-to-con.html
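In case someone needs it, creating such a headless principal with a password is roughly (realm, admin principal and password are placeholders; afterwards the same user/password go into the lookup settings of the Ranger service repository):

kadmin -p admin/admin@EXAMPLE.COM -q "addprinc -pw <password> rangerlookup@EXAMPLE.COM"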