Member since: 05-22-2017
Posts: 56
Kudos Received: 12
Solutions: 2
My Accepted Solutions
Title | Views | Posted |
---|---|---|
| 233 | 11-29-2021 01:52 AM |
| 1988 | 12-06-2018 06:37 PM |
01-20-2022
03:35 AM
If SSL is enabled on the cluster, make sure you import the NameNode certificate or the Root CA certificate into the Ambari truststore.
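For reference, a minimal sketch of the import, assuming the Ambari truststore is a JKS file; the path, alias and password below are placeholders for whatever you configured with ambari-server setup-security:
# import the NameNode / Root CA certificate into the truststore Ambari Server uses
keytool -import -trustcacerts -alias cluster-rootca -file /tmp/rootca.crt -keystore /etc/ambari-server/conf/ambari-truststore.jks -storepass <truststore-password>
# restart Ambari Server so it picks up the new entry
ambari-server restart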
12-01-2021
03:01 AM
Increase the Solr heap and restart the service; see if that fixes the issue.
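A rough sketch of what that change could look like; exact file locations and property names vary by version, so treat these as placeholders. On a plain Solr install the heap is set in solr.in.sh:
# solr.in.sh (path varies by install, e.g. /opt/solr/bin/solr.in.sh)
SOLR_HEAP="4g"
# or, equivalently:
# SOLR_JAVA_MEM="-Xms4g -Xmx4g"
On an Ambari-managed Infra Solr, the equivalent heap settings live under the Infra Solr configs in Ambari; change them there and restart the service so the new heap takes effect.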
12-01-2021
02:55 AM
Hi @mike_bronson7, Please go through the link https://www.youtube.com/watch?v=GjswCzMaW9k Let us know if you have any concerns.
12-01-2021
02:50 AM
1 Kudo
Currently, there is no such authentication feature for Ranger and Atlas.
11-29-2021
01:52 AM
1 Kudo
Yes, you need to add one user at a time; you cannot add multiple users in a single JSON file.
08-20-2019
03:25 AM
1 Kudo
Hi, We don't share personal information like contacts. As you are facing an Ambari Server issue and the agent issue is resolved, please open a new question for the Ambari Server issue. Also, check the Ambari Server logs; if you see some exceptions, attach those exceptions.
08-20-2019
01:50 AM
Kill the process which is using port 8670:
netstat -tulpn | grep 8670
kill -9 <process pid>
08-20-2019
01:44 AM
Use the below cmds (PID means the pid of the process):
kill 12998
kill 23758
Then restart the Ambari agent.
08-20-2019
12:25 AM
Hi @Manoj690 , Try to find the Ambari agent pid and kill it manually. The below cmds will be useful:
# ps aux | grep main.py | grep -v grep
# kill <PID>
Once the process is killed, you can start the agent. Please accept the answer once the issue is resolved.
02-21-2019
05:05 AM
Hi @rajendra, Disabling SPNEGO for Ambari Infra will affect Atlas startup from the Ambari UI, because Atlas runs a curl cmd against Infra using the --negotiate option; when it doesn't get the expected output, startup fails. Do a kinit with an admin user and try to check, or try setting up [domain_realm] in the krb5.conf of the Infra server.
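For the [domain_realm] option, a minimal sketch of what I mean; the domain and realm below are placeholders for your environment:
# /etc/krb5.conf on the Infra Solr host
[domain_realm]
 .openstacklocal = EXAMPLE.COM
 openstacklocal = EXAMPLE.COM
This maps hostnames in that DNS domain to the Kerberos realm, so the curl --negotiate call from Atlas can pick the right realm for the Infra host.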
12-08-2018
06:19 AM
Please share the correct cmd to avoid confusion. Please use the below cmd:
keytool -v -importkeystore -srckeystore eneCert.pkcs12 -srcstoretype PKCS12 -destkeystore keystore.jks -deststoretype JKS
Earlier you were passing an incorrect -deststoretype, and there is no need for -alias in this cmd. Or try the below:
keytool -importkeystore -srckeystore [MY_FILE.p12] -srcstoretype pkcs12 -srcalias [ALIAS_SRC] -destkeystore [MY_KEYSTORE.jks] -deststoretype jks -deststorepass [PASSWORD_JKS] -destalias [ALIAS_DEST]
12-07-2018
11:08 AM
Which doc did you follow to configure HDFS SSL? Regarding the other error, you should see something like the below when you run the cmd (here using the Java that keytool comes from):
[root@alpha ~]# /usr/jdk64/jdk1.8.0_112/bin/keytool -genkey -keyalg RSA -alias rangerHdfsAgent -keystore ranger-plugin-keystore.jks -storepass myKeyFilePassword -validity 360 -keysize 2048
What is your first and last name?
[Unknown]:
What is the name of your organizational unit?
[Unknown]:
What is the name of your organization?
[Unknown]:
What is the name of your City or Locality?
[Unknown]:
What is the name of your State or Province?
[Unknown]:
What is the two-letter country code for this unit?
[Unknown]:
Is CN=Unknown, OU=Unknown, O=Unknown, L=Unknown, ST=Unknown, C=Unknown correct?
[no]:
12-06-2018
06:42 PM
Hi @Harish More , I see you have two issues here. You mentioned the RM UI is not reachable; is it just the RM UI that is not accessible? Can you access the HDFS UI? Do you see any exceptions in the RM logs? Regarding policy sync: once you enabled Ranger SSL, did you configure Ranger Plugin SSL for each component for which you have enabled the plugin? For example, for HDFS: Configuring the Ranger HDFS Plugin for SSL
12-06-2018
06:37 PM
1 Kudo
Hi @Sajesh PP Currently, I see that the hadoop_logs collection which Log Search uses is in a down state and not recovering, due to which a leader is not assigned to the collection. To fix this issue, you can drop the collection. If the cluster is Kerberized, follow the below steps: kinit with the ambari-infra keytab, then
# curl -i -v --negotiate -u : "http://<SOLR_HOST>:8886/solr/admin/collections?action=DELETE&name=hadoop_logs"
Restart Log Search, which will re-create hadoop_logs. If the cluster is non-Kerberos, just hitting the same URL in a browser will also work. The same method can be used for other collections if they are in a DOWN state. You can check the status of the collections in the Solr UI -> Cloud.
01-20-2018
06:53 AM
Please see the below articles: Ranger Admin SSL Self Signed and Ranger Admin CA Signed.
01-19-2018
11:33 AM
I have the hostname mentioned in the SAN field; still, the service is not coming up.
01-19-2018
05:17 AM
1 Kudo
Issue with Ranger Admin SSL with an internal CA with SAN entries, where the CN is not the FQDN of the Ranger host but the SAN entry contains the Ranger Admin host entry. Ranger Admin is stuck at:
2018-01-19 05:05:15,951 [alpha1.openstacklocal-startStop-1] DEBUG org.apache.ranger.biz.ServiceDBStore (ServiceDBStore.java:341) - <== ServiceDefDBStore.initStore()
2018-01-19 05:05:16,244 [alpha1.openstacklocal-startStop-1] DEBUG apache.ranger.security.web.authentication.RangerAuthenticationEntryPoint (RangerAuthenticationEntryPoint.java:66) - AjaxAwareAuthenticationEntryPoint(): constructor
2018-01-19 05:05:16,350 [alpha1.openstacklocal-startStop-1] INFO apache.ranger.security.web.filter.RangerCSRFPreventionFilter (RangerCSRFPreventionFilter.java:81) - Adding cross-site request forgery (CSRF) protection
So, for Ranger Admin, is it necessary to have the CN as the FQDN of the Ranger Admin host? SAN entries are checked first before the CN entry, right?
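For reference, the CN and SAN entries in the cert can be checked with openssl; the file name below is just a placeholder:
# print the subject (CN) and the Subject Alternative Name entries of the PEM certificate
openssl x509 -in ranger-admin.crt -noout -subject
openssl x509 -in ranger-admin.crt -noout -text | grep -A1 'Subject Alternative Name'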
01-19-2018
05:13 AM
Hello, is the cluster Kerberized? Do you want to use Self-signed or CA-signed? In non-Kerberos, Ranger SSL with CA-signed will have two-way SSL. While creating the client certs, make sure you provide the extension as "usr_cert" and the server cert as "server_cert"; otherwise the two-way SSL communication would fail.
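A rough sketch of what I mean by the extensions, assuming you sign the CSRs with your own openssl CA and your openssl.cnf has the usual usr_cert / server_cert extension sections; the file names are placeholders:
# client certificate for the plugin side, signed with the usr_cert extensions
openssl ca -config openssl.cnf -extensions usr_cert -in ranger-plugin.csr -out ranger-plugin.crt
# server certificate for Ranger Admin, signed with the server_cert extensions
openssl ca -config openssl.cnf -extensions server_cert -in ranger-admin.csr -out ranger-admin.crt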
01-06-2018
07:57 AM
Is it possible to share the ldapsearch output for the specific user with which you're trying to access WebHDFS? Or use main.ldapRealm.userSearchBase=OU=Domain Users & Groups,DC=ragaca,DC=com and let me know if it works.
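Something along these lines would be enough; the host, bind DN and user below are placeholders:
ldapsearch -x -H ldap://<ad-host>:389 -D "<bind-user>@ragaca.com" -W -b "OU=Domain Users & Groups,DC=ragaca,DC=com" "(sAMAccountName=<user>)"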
01-06-2018
03:33 AM
Can you correct the user search base? It seems to be incorrect. Refer: Using Apache Knox with Active Directory
<param>
<name>main.ldapRealm.userSearchBase</name>
<value>Users,OU=Domain Users & Groups,DC=ragaca,DC=com</value>
</param>
11-10-2017
01:21 PM
Hi All, I have HDP 2.6.3, Ambari 2.6.0, SS 1.4.3.2.6.0.0-267. Fresh installation. Getting the below errors. HDFS Dashboard : ERROR 1012 (42M03): Table undefined. tableName=ACTIVITY.HDFS_USER_FILE_SUMMARY
MapReduce & Tez Dashboard : ERROR 1012 (42M03): Table undefined. tableName=ACTIVITY.JOB
YARN Dashboard : ERROR 1012 (42M03): Table undefined. tableName=ACTIVITY.YARN_APPLICATION
Labels:
- Apache Spark
- Hortonworks SmartSense
11-10-2017
12:46 PM
Hi, According to the log:
<name>knoxsso.redirect.whitelist.regex</name>
<value>^https?:\/\/(localhost|127\.0\.0\.1|0:0:0:0:0:0:0:1|::1):[0-9].*$</value>
Make sure you have the correct knoxsso.redirect.whitelist.regex. Description: A semicolon-separated list of regex expressions. The incoming originalUrl must match one of the expressions in order for KnoxSSO to redirect to it after authentication. The default allows only relative paths and localhost with or without SSL for development use cases. This needs to be opened up for production use and the actual participating applications. Note that cookie use is still constrained to redirect destinations in the same domain as the KnoxSSO service, regardless of the expressions specified here. NOTE: LDAP authentication for Ambari must be enabled for Knox SSO, and the LDAP server's certificate needs to be imported into the Ambari truststore. Ex:
<param>
<name>knoxsso.redirect.whitelist.regex</name>
<value>^https?:\/\/(node1\.openstacklocal|172\.26\.113\.193|localhost|127\.0\.0\.1|0:0:0:0:0:0:0:1|::1):[0-9].*$</value>
</param>
11-08-2017
02:09 PM
1 Kudo
The audit events are caused by the Ambari alert that checks the Resource Manager UI status, so disabling the Ambari alert for the Resource Manager UI will resolve the issue. Or, to reduce the frequency of these audit events, increase the "Resource Manager Web UI" alert interval to a higher value.
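If you prefer the REST API over the Ambari web UI, something like the below should work; the cluster name, credentials and alert definition id are placeholders, and the exact definition name may differ by stack version:
# find the alert definition id for the Resource Manager Web UI alert
curl -u admin:<password> -H 'X-Requested-By: ambari' 'http://<ambari-host>:8080/api/v1/clusters/<cluster>/alert_definitions?fields=AlertDefinition/name'
# raise its check interval (in minutes) for that definition id
curl -u admin:<password> -H 'X-Requested-By: ambari' -X PUT -d '{"AlertDefinition":{"interval":30}}' 'http://<ambari-host>:8080/api/v1/clusters/<cluster>/alert_definitions/<definition-id>'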
11-05-2017
01:55 PM
Hi @GN_Exp Knox SSO provides WebSSO capabilities to the Hadoop cluster. For now, HDP supports SSO for Ambari, Ranger, and Atlas. This means that when you log into Ambari using Knox credentials, you can log into the Ranger UI and Atlas UI without credentials on the web console. In the next major release, probably HDP 3.0, we will be supporting SSO for all components. https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.6.2/bk_security/content/knox_sso.html Please let me know if you require any more info on Knox SSO.
11-05-2017
07:15 AM
Hi @Pit Err, are you managing your admin topology via the Ambari UI? In the Knox service repo, we configured the service URL as "https://pravin1.openstacklocal:8443/gateway/admin/api/v1/topologies", which is the admin topology, explicitly used to list the topology names in Knox. Authentication for this is usually configured for LDAP, which you can verify from the admin topology like below. ===
<provider>
<role>authentication</role>
<name>ShiroProvider</name>
<enabled>true</enabled>
<param>
=== So the authentication credentials should be set based on the admin topology "authentication" module in admin.xml (or from Ambari's Advanced admin-topology). Also, the URL doesn't seem correct to me: https://hdphost03.mydomain.local@MYDOMAIN.LOCAL/gateway/admin/api/v1/topologies, responseStatus: 401. Please check the knox.url* in the service repo. Also, are you using Ranger ACLs for Knox? One more thing: Ranger Test Connection is an additional feature to test; to confirm the Ranger ACLs are working properly for Knox, you need to check the access logs of Ranger. If you want to use "rangerlookup" for the Ranger Knox plugin, you need to specify it accordingly. Lookup is handled by the user which is configured in the service repository in the Ranger UI; also check that the user has the policy to do the hive query. It has to be a hadoop user for the user to get authorized. In a secure cluster it has to be a principal with a password from the KDC, i.e. hive@EXAMPLE.COM, or in your case rangerlookup. Prepare Ranger Lookup
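As a quick sanity check, you can hit the admin topology directly with curl and see whether the 401 comes from the Shiro/LDAP authentication itself; the credentials below are placeholders:
curl -iku <admin-user>:<admin-password> "https://pravin1.openstacklocal:8443/gateway/admin/api/v1/topologies"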
10-18-2017
07:42 AM
HDP 2.6.1. I have enabled Ranger security for the YARN policy using the below steps, to configure YARN to use only Ranger ACLs (i.e. ignore YARN ACLs): Ambari > YARN > Custom ranger-yarn-security > add the below property and restart YARN:
ranger.add-yarn-authorization = false
I have tested two scenarios:
Scenario I:
yarn.acl.enable = true
ranger.add-yarn-authorization = false
--> Only Ranger ACLs are applied
Scenario II:
yarn.acl.enable = false
ranger.add-yarn-authorization = false
--> Both YARN ACLs & Ranger ACLs are invalid
When we set yarn.acl.enable = false, both the YARN ACLs and the Ranger ACLs become invalid (stop taking effect). I don't know why.
10-11-2017
06:13 PM
Hi @skothari, From where do we get -srcalias <src-alias> in Step 3?
10-05-2017
01:06 PM
2 Kudos
Cloudbreak contains a mini Knox which is not managed by Ambari. Below are the steps to replace the self-signed certificate with CA-signed certificates. Step 1: Remove the below two entries from /usr/hdp/current/knox-server/conf/gateway-site.xml and save it:
<property>
<name>gateway.signing.keystore.name</name>
<value>signing.jks</value>
</property>
<property>
<name>gateway.signing.key.alias</name>
<value>signing-identity</value>
</property>
Step 2: Take a backup of the original configuration:
[~]$ cd /usr/hdp/current/knox-server/data/security/keystores/
[~]$ mkdir backup
[~]$ mv __gateway-credentials.jceks gateway.jks backup/
Step 3: Create a keystore in PKCS12 format from your private key file, certificate, intermediate certificate and root certificate:
[~]$ openssl pkcs12 -export -out corp_cert_chain.pfx -inkey <private-key>.key -in <cert.cer> -certfile <root_intermediate>.cer -certfile <root_ca>.cer
Step 4: Regenerate the master key. Use the same password for the master key and the keystore.
# rm -rf /usr/hdp/current/knox-server/data/security/master
# ls -l /usr/hdp/current/knox-server/data/security/master
# /usr/hdp/current/knox-server/bin/knoxcli.sh create-master
Step 5: Generate the Knox keystore:
[~]$ cp corp_cert_chain.pfx /usr/hdp/current/knox-server/data/security/keystores/
[~]$ cd /usr/hdp/current/knox-server/data/security/keystores/
[~]$ keytool -importkeystore -srckeystore corp_cert_chain.pfx -srcstoretype pkcs12 -destkeystore gateway.jks -deststoretype jks -srcstorepass <src-keystore-password> -deststorepass <knox-master-secret> -destkeypass <knox-master-secret>
Step 6: Replace the alias of the keystore:
[~]$ keytool -changealias -alias "1" -destalias "gateway-identity" -keypass <keypass> -keystore gateway.jks -storepass <storepass>
Step 7: Store the keystore password in the jceks file:
[~]$ /usr/hdp/current/knox-server/bin/knoxcli.sh create-alias gateway-identity-passphrase --value <knox-master-secret>
Step 8: Restart Knox; you should see the below-highlighted lines in your Knox logs:
[~]$ tail -f /var/log/knox/gateway.log
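Optionally, once Knox is back up you can confirm that the gateway is now presenting the CA-signed chain; this is just a generic openssl check, and the host/port are placeholders for your Knox endpoint:
# print the subject and issuer of the certificate served on the Knox gateway port
openssl s_client -connect <knox-host>:8443 -showcerts </dev/null | openssl x509 -noout -subject -issuer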
- Find more articles tagged with:
- certificate
- Cloudbreak
- How-To/Tutorial
- Knox
- knox-gateway
- Security
- ssl