Member since: 09-19-2013
Posts: 38
Kudos Received: 1
Solutions: 0
03-16-2018 09:02 AM
Thank you very much for replying with such great answers.
03-16-2018 08:34 AM
Hi all, I have a question: is it possible to install a new Ambari server and then add an existing, working HDP 2.6 cluster to it, in case the existing Ambari is damaged or completely lost? If it is possible, will it change or reset the existing HDP service configurations? Thank you
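As far as I know, there is no supported way to attach an already-running cluster to a blank Ambari server; the usual safeguard is to keep a dump of the Ambari database, which holds all cluster state and service configurations, and restore it onto a fresh Ambari server of the same version. A minimal sketch, assuming the default embedded PostgreSQL setup (the database name, user, and restore step are assumptions worth verifying against your installation):

# On the working Ambari host: stop the server and dump its database
ambari-server stop
pg_dump -U ambari ambari > /tmp/ambari.sql

# On a fresh Ambari host of the same version: load the dump instead of re-adding the cluster
ambari-server stop
psql -U ambari -d ambari -f /tmp/ambari.sql
ambari-server start

Restoring the database this way preserves the existing service configurations, since they live in that database.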
02-16-2018 03:26 PM
Hi all, I have HDP 2.6.3 with Ranger security, SSL enabled, and the HDFS, YARN, and Hive plugins enabled. The Hive plugin does not work. Here is hiveserver2.log:

2018-02-16 17:34:00,920 WARN [Thread-14]: client.RangerAdminRESTClient (RangerAdminRESTClient.java:getServicePoliciesIfUpdated(162)) - Error getting policies. secureMode=false, user=hive (auth:SIMPLE), response={"httpStatusCode":400,"statusCode":0}, serviceName=hive

and /var/log/ranger/admin/xa_portal.log:

2018-02-16 08:27:59,754 [http-bio-6182-exec-28] ERROR org.apache.ranger.common.ServiceUtil (ServiceUtil.java:1359) - Requested Service not found. serviceName=hive

I am almost 99% sure that all configurations were done correctly from Ambari (the other plugins work properly). I also searched on Google for what I might have missed, but could not find any useful information.

P.S. I have configured ranger.plugin.hive.policy.rest.ssl.config.file = /usr/hdp/current/hive-client/conf/conf.server/ranger-policymgr-ssl.xml, which contains all the information about the keystores and truststores. I am also sure that the keystore file passwords are correct (checked many times). Here is ranger-policymgr-ssl.xml:

<configuration>
<property>
<name>xasecure.policymgr.clientssl.keystore</name>
<value>/etc/security/key.jks</value>
</property>
<property>
<name>xasecure.policymgr.clientssl.keystore.credential.file</name>
<value>jceks://file/etc/ranger/hive/cred.jceks</value>
</property>
<property>
<name>xasecure.policymgr.clientssl.keystore.password</name>
<value>crypted</value>
</property>
<property>
<name>xasecure.policymgr.clientssl.truststore</name>
<value>/etc/security/trust.jks</value>
</property>
<property>
<name>xasecure.policymgr.clientssl.truststore.credential.file</name>
<value>jceks://file/etc/ranger/hive/cred.jceks</value>
</property>
<property>
<name>xasecure.policymgr.clientssl.truststore.password</name>
<value>crypted</value>
</property>
</configuration>

Do you have any idea what I am missing, and how can I fix it? Thank you
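In case it helps with debugging, here is how I would double-check the pieces the plugin depends on from the command line; a minimal sketch using the paths from the file above (the Ranger admin URL and credentials are placeholders):

# Verify the keystore and truststore passwords directly (keytool prompts for each)
keytool -list -keystore /etc/security/key.jks
keytool -list -keystore /etc/security/trust.jks

# List the aliases stored in the credential file the XML points to
hadoop credential list -provider jceks://file/etc/ranger/hive/cred.jceks

# "Requested Service not found. serviceName=hive" suggests a repo-name mismatch:
# compare the services defined in Ranger admin against
# ranger.plugin.hive.service.name on the HiveServer2 side
curl -k -u admin:password 'https://rangeradmin.example.com:6182/service/public/v2/api/service'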
Labels:
- Apache Hive
- Apache Ranger
01-06-2018 07:50 AM
The userSearchBase and the system username and password are correct; I copied them from the working shiro.ini of the Zeppelin service.
01-05-2018 08:40 AM
@mvaradkar thank you, I tried that, but I get the same 401 status in the logs. By the way, after I enter the URL in a browser (https://knox.ragaca.com:8443/gateway/default/webhdfs/v1), I get a 401 not only when I enter my real, existing AD username and password, but also when I enter random symbols at the login prompt; the same "Response status: 401" appears in gateway-audit.log every time.
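One way to take the browser out of the picture is Knox's built-in authentication test, which exercises the same Shiro/LDAP chain the topology uses; a minimal sketch, assuming the default HDP install path and a topology named "default" (the username and password are placeholders):

# Run on the Knox host; authenticates the given user through the topology's providers
/usr/hdp/current/knox-server/bin/knoxcli.sh user-auth-test --cluster default --u myaduser --p 'MyPassword'

# A failure here for every credential usually points at the system user DN,
# the userSearchBase, or the userSearchAttributeName rather than at WebHDFS itself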
12-28-2017 05:39 PM
Hi all, I am trying to figure out the Knox gateway, but I have a problem when I access services like WEBHDFS. This is the error log from /var/log/knox/gateway-audit.log:

17/12/28 21:30:30 ||de5c4e70-c89c-487e-8fea-6260c6701efb|audit|IPADDR|WEBHDFS||||access|uri|/gateway/default/webhdfs/v1|unavailable|Request method: GET
17/12/28 21:30:30 ||de5c4e70-c89c-487e-8fea-6260c6701efb|audit|IPADDR|WEBHDFS||||access|uri|/gateway/default/webhdfs/v1|success|Response status: 401
This is my topology configuration:

<topology>
<gateway>
<provider>
<role>authentication</role>
<name>ShiroProvider</name>
<enabled>true</enabled>
<param>
<name>sessionTimeout</name>
<value>15</value>
</param>
<param>
<name>main.ldapRealm</name>
<value>org.apache.hadoop.gateway.shirorealm.KnoxLdapRealm</value>
</param>
<param>
<name>main.ldapContextFactory</name>
<value>org.apache.hadoop.gateway.shirorealm.KnoxLdapContextFactory</value>
</param>
<param>
<name>main.ldapRealm.contextFactory</name>
<value>$ldapContextFactory</value>
</param>
<param>
<name>main.ldapRealm.contextFactory.url</name>
<value>ldap://ragaca.com:389</value>
</param>
<param>
<name>main.ldapRealm.authorizationEnabled</name>
<value>true</value>
</param>
<param>
<name>main.ldapRealm.contextFactory.authenticationMechanism</name>
<value>simple</value>
</param>
<param>
<name>main.ldapRealm.userDnTemplate</name>
<value>sAMAccountName={0}</value>
</param>
<param>
<name>main.ldapRealm.userSearchAttributeName</name>
<value>sAMAccountName</value>
</param>
<param>
<name>main.ldapRealm.userObjectClass</name>
<value>person</value>
</param>
<param>
<name>main.ldapRealm.contextFactory.systemUsername</name>
<value>CN=testUser,OU=testUsers,DC=ragaca,DC=com</value>
</param>
<param>
<name>main.ldapRealm.contextFactory.systemPassword</name>
<value>*********</value>
</param>
<param>
<name>main.ldapRealm.searchBase</name>
<value>OU=Domain Users & Groups,DC=ragaca,DC=com</value>
</param>
<param>
<name>main.ldapRealm.userSearchBase</name>
<value>Users,OU=Domain Users & Groups,DC=ragaca,DC=com</value>
</param>
<param>
<name>main.ldapRealm.userSearchScope</name>
<value>subtree</value>
</param>
<param>
<name>main.ldapRealm.groupSearchBase</name>
<value>OU=Groups,OU=Domain Users & Groups,DC=ragaca,DC=com</value>
</param>
<param>
<name>main.ldapRealm.groupObjectClass</name>
<value>group</value>
</param>
<param>
<name>main.ldapRealm.memberAttribute</name>
<value>member</value>
</param>
<param>
<name>urls./**</name>
<value>authcBasic</value>
</param>
</provider>
<provider>
<role>identity-assertion</role>
<name>Default</name>
<enabled>true</enabled>
</provider>
<provider>
<role>authorization</role>
<name>AclsAuthz</name>
<enabled>true</enabled>
</provider>
</gateway>
<service>
<role>NAMENODE</role>
<url>hdfs://namenode1.ragaca.com:8020</url>
</service>
<service>
<role>JOBTRACKER</role>
<url>rpc://jt.ragaca.com:8050</url>
</service>
<service>
<role>WEBHDFS</role>
<url>http://namenode1.ragaca.com:50070/</url>
<url>http://namenode2.ragaca.com:50070/</url>
</service>
</topology>
I also have hadoop.proxyuser.knox.hosts=* and hadoop.proxyuser.knox.groups=* in the core-site of the HDFS configuration. Could anyone guess what I am missing? Thank you very much, and happy new year!
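For reference, two command-line checks that would narrow this down; a minimal sketch using the host names from this thread (the AD account is a placeholder, and -k only skips certificate validation for testing):

# Call WebHDFS through the gateway with basic auth
curl -iku 'myaduser:MyPassword' 'https://knox.ragaca.com:8443/gateway/default/webhdfs/v1/?op=LISTSTATUS'

# Bind to AD with the topology's system account to rule out the LDAP side
ldapsearch -H ldap://ragaca.com:389 \
  -D 'CN=testUser,OU=testUsers,DC=ragaca,DC=com' -W \
  -b 'OU=Domain Users & Groups,DC=ragaca,DC=com' '(sAMAccountName=myaduser)' dn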
Labels:
- Apache Knox
10-16-2017 10:33 AM
Problem solved after updating the HDP stack from 2.6.1 to the latest 2.6.2 version. Thank you
10-13-2017 01:35 PM
Hi,
I am trying to enable SSL for Ranger using this link. I have Java keystore and truststore files; I use only these 2 files, and they work properly for other services. I also checked the password with the Java keytool and it is correct, and I tested several passwords for the keystore file, from simple to hard ones, but ranger-admin gives an error in /var/log/ranger/admin/catalina.out during start:

INFO: Initializing ProtocolHandler ["http-bio-6182"]
Oct 13, 2017 1:11:14 PM org.apache.coyote.AbstractProtocol init
SEVERE: Failed to initialize end point associated with ProtocolHandler ["http-bio-6182"]
java.io.IOException: Keystore was tampered with, or password was incorrect
at sun.security.provider.JavaKeyStore.engineLoad(JavaKeyStore.java:780)
at sun.security.provider.JavaKeyStore$JKS.engineLoad(JavaKeyStore.java:56)
at sun.security.provider.KeyStoreDelegator.engineLoad(KeyStoreDelegator.java:224)
at sun.security.provider.JavaKeyStore$DualFormatJKS.engineLoad(JavaKeyStore.java:70)
at java.security.KeyStore.load(KeyStore.java:1445)
The configuration was done from Ambari. I then checked ranger-admin-site.xml and found:

<property>
<name>ranger.service.https.attrib.keystore.pass</name>
<value>_</value>
</property>
Here I can't see any password; there is only an "_" symbol (from Ambari I set the actual password, and I then tried to edit this XML file manually, but after a restart the Ranger service resets it and there is "_" again anyway). These are the permissions of the files (I tried different permissions too):

-rw------- 1 ranger ranger 1586 Oct 11 14:29 truststore.jks
-rw-r----- 1 ranger ranger 2872 Oct 12 14:03 keystore.jks
Any idea? Thank you
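If someone else hits this: the "_" is apparently expected, since Ambari masks the value and Ranger admin reads the real password from a JCEKS credential store at startup. A minimal sketch of how I would verify both sides (the credential store path below is a typical HDP default, not necessarily yours; the real one is in ranger.credential.provider.path in ranger-admin-site.xml):

# Confirm the keystore password and that the file is a valid JKS (prompts for the password)
keytool -list -keystore keystore.jks

# Inspect the credential store Ranger admin actually reads the password from
hadoop credential list -provider jceks://file/etc/ranger/admin/rangeradmin.jceks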
Labels:
- Apache Ranger
09-23-2017 08:19 AM
1 Kudo
Placing /** = authc at the end of the [urls] section makes sense. I also made small changes to ldapRealm.rolesByGroup (its syntax was incorrect before), and now everything is working properly. Placing the urls in the correct order was the key, thank you very much
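For anyone landing here later, this is roughly what the fixed shiro.ini pieces look like; a minimal sketch with placeholder group and role names (the group names are assumptions):

[main]
# rolesByGroup maps LDAP/AD group names to Shiro roles: GROUP: role, GROUP: role
ldapRealm.rolesByGroup = HADOOP_ADMINS: admin, HADOOP_USERS: user

[urls]
# Anonymous endpoints first; the catch-all must come last or it shadows everything above
/api/version = anon
/api/login = anon
/** = authc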
09-22-2017 12:37 PM
P.S. There are also some warnings in /var/log/zeppelin/zeppelin-zeppelin-zeppelin.node.log:

WARN [2017-09-22 16:29:38,301] ({qtp760563749-56} JAXRSUtils.java[findTargetMethod]:499) - No operation matching request path "/api/login" is found, Relative Path: /, HTTP Method: GET, ContentType: */*, Accept: application/json,text/plain,*/*,. Please enable FINE/TRACE log level for more details.
WARN [2017-09-22 16:29:38,302] ({qtp760563749-56} WebApplicationExceptionMapper.java[toResponse]:73) - javax.ws.rs.ClientErrorException
at org.apache.cxf.jaxrs.utils.JAXRSUtils.findTargetMethod(JAXRSUtils.java:503)
at org.apache.cxf.jaxrs.interceptor.JAXRSInInterceptor.processRequest(JAXRSInInterceptor.java:218)
at org.apache.cxf.jaxrs.interceptor.JAXRSInInterceptor.handleMessage(JAXR
etc ... -----------------------------
WARN [2017-09-22 16:29:47,865] ({qtp760563749-26} JAXRSUtils.java[findTargetMethod]:499) - No operation matching request path "/api/login;JSESSIONID=a26c09a0-e86d-4e56-97ae-ac3e8d45a057" is found, Relative Path: /, HTTP Method: GET, ContentType: */*, Accept: application/json,text/plain,*/*,. Please enable FINE/TRACE log level for more details.
WARN [2017-09-22 16:29:47,866] ({qtp760563749-26} WebApplicationExceptionMapper.java[toResponse]:73) - javax.ws.rs.ClientErrorException
at org.apache.cxf.jaxrs.utils.JAXRSUtils.findTargetMethod(JAXRSUtils.java:503)
at org.apache.cxf.jaxrs.interceptor.JAXRSInInterceptor.processRequest(JAXRSInInterceptor.java:218)
etc... ----------------------------- The warnings occur when a user logs in to the Zeppelin UI. Maybe something is wrong with the paths that start with "api"? Where are the path configs for Zeppelin?
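For what it's worth, the path itself looks fine; the warning just says no operation is mapped to GET on /api/login, which a direct call shows. A minimal sketch, assuming the default HDP Zeppelin port (the host and credentials are placeholders):

# POST is the operation mapped to /api/login; this should return 200 and a session cookie
curl -i -X POST 'http://zeppelin.example.com:9995/api/login' --data 'userName=admin&password=admin'

# A plain GET on the same path reproduces the harmless "No operation matching request path" warning
curl -i 'http://zeppelin.example.com:9995/api/login'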