Member since: 05-22-2019
Posts: 58
Kudos Received: 31
Solutions: 7
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 1628 | 05-31-2018 07:49 AM
 | 1676 | 04-05-2018 07:30 AM
 | 2551 | 07-24-2017 03:08 PM
 | 2630 | 02-22-2017 09:43 AM
 | 3553 | 10-18-2016 02:48 PM
05-31-2018
03:19 PM
Yes, that's required for PAM authentication to work. Happy to help.
05-31-2018
12:47 PM
Can you provide more information? Please mask any sensitive info and share the 404 error details; a 404 normally means the topology is not deployed. For authentication-related issues you would generally get an HTTP 401 error instead.
05-31-2018
07:49 AM
1 Kudo
@Sparsh Singhal You need to configure the Authentication Provider in your Knox topology to use the KnoxPamRealm class for setting up PAM authentication. Follow the link here. You can find a Ubuntu-specific example of PAM configuration (/etc/pam.d/login) here. After successful configuration, you can use existing Unix users to authenticate via Knox.
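For reference, a minimal sketch of what an Ubuntu-style /etc/pam.d/login service file often looks like (illustrative only; your distribution's defaults may differ):

```
# /etc/pam.d/login (illustrative Ubuntu-style example)
auth     optional   pam_faildelay.so  delay=3000000
@include common-auth
@include common-account
@include common-session
@include common-password
```

The pamRealm in the Knox topology then references this PAM service by name, e.g. main.pamRealm.service = login.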
04-05-2018
07:30 AM
1 Kudo
@Anurag Mishra LDAP authentication is configured by adding a "ShiroProvider" authentication provider to the cluster's topology file. When enabled, the Knox Gateway uses Apache Shiro (org.apache.shiro.realm.ldap.JndiLdapRealm) to authenticate users against the configured LDAP store. Please go through this document link. 1. The Shiro Provider is Knox-side code and already integrated; you need not worry about its internals. Change admin.xml (the Admin topology, i.e. for Knox administrators) to the proper LDAP/AD values. For general usage, use the default topology for services integration. 2. Read the above documentation. 3. Read the above documentation. 4. Make a group of the users you want to give access to and whitelist them using an ACL.
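As a sketch, a ShiroProvider block for LDAP typically follows the pattern in the Knox user guide; the userDnTemplate and contextFactory URL below are placeholders you must adapt to your own LDAP/AD tree:

```xml
<provider>
    <role>authentication</role>
    <name>ShiroProvider</name>
    <enabled>true</enabled>
    <param>
        <name>main.ldapRealm</name>
        <value>org.apache.shiro.realm.ldap.JndiLdapRealm</value>
    </param>
    <param>
        <name>main.ldapRealm.userDnTemplate</name>
        <value>uid={0},ou=people,dc=example,dc=com</value>
    </param>
    <param>
        <name>main.ldapRealm.contextFactory.url</name>
        <value>ldap://ldap.example.com:389</value>
    </param>
    <param>
        <name>main.ldapRealm.contextFactory.authenticationMechanism</name>
        <value>simple</value>
    </param>
    <param>
        <name>urls./**</name>
        <value>authcBasic</value>
    </param>
</provider>
```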
04-02-2018
01:22 PM
@Anurag Mishra Also you can explore Apache Ranger for much finer access and authorization control.
04-02-2018
10:59 AM
@Anurag Mishra You can achieve that by configuring service-level authorization in the Knox topology. Read about it here.
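For illustration, service-level authorization is enabled through the AclsAuthz provider; the ACL value below (format: users;groups;IP addresses) is a placeholder:

```xml
<provider>
    <role>authorization</role>
    <name>AclsAuthz</name>
    <enabled>true</enabled>
    <param>
        <name>webhdfs.acl</name>
        <value>guest;analyst-group;*</value>
    </param>
</provider>
```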
04-02-2018
07:02 AM
@Anurag Mishra Yes, you figured it out right: the users.ldif file contains users for Knox's inbuilt Demo LDAP service. In case you want to add new users to a real-world setup, you need to add the users in your Active Directory or LDAP solution itself and integrate the same in Knox.
03-21-2018
09:31 AM
@Aishwarya Dixit I didn't realize you are talking about bypassing built-in authentication completely for Ambari and allowing the user to access Ambari because they authenticated to the host where the service is installed. In the above approach with Knox, you need to authenticate at least once with Knox using any local user credentials. Please see if setting up a HeaderPreAuth Federation Provider can help in this regard. I don't have any other suggestion.
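A rough sketch of a HeaderPreAuth federation provider (parameter names follow the Knox user guide; the header name and IP list are placeholders to adapt):

```xml
<provider>
    <role>federation</role>
    <name>HeaderPreAuth</name>
    <enabled>true</enabled>
    <param>
        <name>preauth.validation.method</name>
        <value>preauth.ip.validation</value>
    </param>
    <param>
        <name>preauth.ip.addresses</name>
        <value>127.0.0.2,127.0.0.3</value>
    </param>
    <param>
        <name>preauth.custom.header</name>
        <value>SM_USER</value>
    </param>
</provider>
```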
03-21-2018
06:37 AM
@Aishwarya Dixit You can use a large value for the below parameters in the Knox-SSO topology to configure how long the cookie/token remains valid, but I won't recommend setting it to a big number for security reasons: knoxsso.token.ttl, knoxsso.cookie.max.age
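For example, these parameters live in the KNOXSSO service of the knoxsso topology; the values below are placeholders (the token TTL is in milliseconds):

```xml
<service>
    <role>KNOXSSO</role>
    <param>
        <name>knoxsso.token.ttl</name>
        <value>36000000</value>
    </param>
    <param>
        <name>knoxsso.cookie.max.age</name>
        <value>36000</value>
    </param>
</service>
```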
03-20-2018
06:58 AM
1 Kudo
@Aishwarya Dixit You can achieve this using Knox as an SSO mechanism for Ambari. Follow the documentation here for enabling Ambari for KnoxSSO, and configure the Form-based Identity Provider in Knox for SSO by following this link. Knox provides a way to configure PAM-based authentication for Unix-based systems. Follow the documentation here to configure the Knox topology accordingly. Use ShiroProvider with the below config: <provider>
<role>authentication</role>
<name>ShiroProvider</name>
<enabled>true</enabled>
<param>
<name>sessionTimeout</name>
<value>30</value>
</param>
<param>
<name>main.pamRealm</name>
<value>org.apache.hadoop.gateway.shirorealm.KnoxPamRealm</value>
</param>
<param>
<name>main.pamRealm.service</name>
<value>login</value>
</param>
<param>
<name>urls./**</name>
<value>authcBasic</value>
</param>
</provider>
01-03-2018
07:20 AM
@Vijay Mishra Can you remove the authorization provider from the default topology and see if it's due to Ranger policies preventing access?
01-03-2018
05:45 AM
Does your default.xml have just 2 lines, as given in the cat command output?
12-14-2017
08:19 AM
1 Kudo
@Nara g Please try adding a valid FQDN as the host name in your /etc/hosts entry so that a cookie can be set for a valid domain in your browser. For example: <ip_address_of_your_host1> host1.narag.com. Make similar entries for your other hosts. Later, try accessing your Ranger host with the complete URL, for example https://ranger.narag.com:6080
12-14-2017
07:17 AM
3 Kudos
Modern web browsers come with a few inbuilt defenses against common web attacks, but we need to enable our web applications to use them.
Recently, support for many such HTTP response headers was added to Zeppelin to thwart common attacks like cross-site scripting, clickjacking, man-in-the-middle and SSL downgrade attacks, which browsers can use to enable client-side security features. We need to configure the properties in zeppelin-site.xml listed below to enable the supported security headers. 1. The "zeppelin.server.xxss.protection" property needs to be updated in zeppelin-site.xml in order to set the X-XSS-Protection header.
The HTTP X-XSS-Protection response header is a feature of Internet Explorer, Chrome and Safari that stops pages from loading when they detect reflected cross-site scripting (XSS) attacks. When the value is set to "1; mode=block", the browser will enable XSS filtering and prevent rendering of the page if an attack is detected. When the value is set to "0", it turns off protection against XSS attacks and disables XSS filtering by the browser. When the value is set to "1" and a cross-site scripting attack is detected, the browser will sanitize the page (remove the unsafe parts). See the example config below: <property>
<name>zeppelin.server.xxss.protection</name>
<value>1; mode=block</value>
</property>
2. The "zeppelin.server.xframe.options" property needs to be updated in zeppelin-site.xml in order to set the X-Frame-Options header. The X-Frame-Options HTTP response header can indicate to the browser to avoid clickjacking attacks, by ensuring that the content is not embedded into other sites in a <frame>, <iframe> or <object>.
When the value is set to "DENY", the web page cannot be displayed in a frame, regardless of the site attempting to do so. When the value is set to "SAMEORIGIN", the web page can only be displayed in a frame on the same origin as the page itself. When the value is set to "ALLOW-FROM <uri>", the web page can only be displayed in a frame on the specified origin, i.e. the given URI value. See the example config below: <property>
<name>zeppelin.server.xframe.options</name>
<value>SAMEORIGIN</value>
</property>
3. The "zeppelin.server.strict.transport" property needs to be updated in zeppelin-site.xml in order to enable HSTS.
Enabling the HSTS response header prevents man-in-the-middle attacks by automatically redirecting HTTP requests to HTTPS when Zeppelin Server is running on SSL. Even if the web page contains a resource that gets served over HTTP, or any HTTP links, they will automatically be redirected to HTTPS for the target domain. It also prevents MITM attacks by not allowing the user to override the invalid-certificate message when an attacker presents an invalid SSL certificate to the user.
The REQUIRED "max-age" directive specifies the number of seconds, after the reception of the STS header field, during which the user agent (web browser) regards the host (from whom the message was received) as a known HSTS host. Please set the "max-age" value as per your requirement.
max-age=<expire-time> - The 'expire-time', in seconds, that the browser should remember that the site is only to be accessed using HTTPS. max-age=<expire-time>; includeSubDomains - The 'includeSubDomains' flag is useful if all present and future subdomains will be HTTPS. Please be aware that this will block access to certain pages that can only be served over HTTP. max-age=<expire-time>; preload - The 'preload' flag indicates the site owner's consent to have their domain preloaded. The site owner still needs to submit the domain to the HSTS preload list maintained by Google Chrome (and used by Firefox and Safari). See the example config below: <property>
<name>zeppelin.server.strict.transport</name>
<value>max-age=31536000; includeSubDomains</value>
</property>
09-14-2017
05:55 AM
This is certainly a much-needed feature for Knox and is going to save ample time while configuring topologies for multiple services. Also, as pointed out, it rules out manual errors while editing the topology XML file.
08-18-2017
07:17 AM
@Uvaraj Seerangan Three things can go wrong here. 1) KnoxSSO expects a valid hostname with a domain name, as the cookie will be set for that specific domain. So your hostname needs to be in the format "{somehost}.{someorganisation}.{someTLD}", e.g. knoxhost.example.com. You can achieve this by making an extra entry in the /etc/hosts file on all the nodes participating in the SSO, e.g. Ambari, Ranger, Knox, etc. 2) You need to provide the Knox SSL certificate as the "SSO Public Key" value in the Ranger config. The easiest way to get it is the command below; paste the content between "-----BEGIN CERTIFICATE-----" and "-----END CERTIFICATE-----" as the "SSO Public Key" value. openssl s_client -connect knoxhost.example.com:8443 </dev/null | sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' > knoxssoRanger.crt 3) Increase the value of the "knoxsso.token.ttl" property inside Advanced Knoxsso Topology sufficiently.
08-18-2017
07:04 AM
1 Kudo
@soumya swain Two things can go wrong here. 1) KnoxSSO expects a valid hostname with a domain name, as the cookie will be set for that specific domain. So your hostname needs to be in the format "{somehost}.{someorganisation}.{someTLD}", e.g. knoxhost.example.com. You can achieve this by making an extra entry in the /etc/hosts file on both nodes. 2) You need to provide the Knox SSL certificate as the "Public Certificate pem" value when executing the "ambari-server setup-sso" command. The easiest way to get it is the command below; paste the content between "-----BEGIN CERTIFICATE-----" and "-----END CERTIFICATE-----" as the "Public Certificate pem" value. openssl s_client -connect knoxhost.example.com:8443 < /dev/null | sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' > knoxssoAmbari.crt
08-14-2017
06:25 AM
@Warius Unnlauf If you want to change Knox port in order to resolve this, please change the property "gateway.port" in gateway-site.xml under Knox's conf directory. Sample config below: <property>
<name>gateway.port</name>
<value>8883</value>
</property>
08-02-2017
09:07 AM
We need another service entry with role "YARN" as well, with the same URL. Also, in the URL scheme, "hdc" is not the cluster name but the topology name.
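As a sketch, the extra entry would look like the following (the hostname and port are placeholders; 8088 is the usual ResourceManager HTTP port):

```xml
<service>
    <role>YARN</role>
    <url>http://<YARN_HOSTNAME>:8088</url>
</service>
```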
07-24-2017
03:08 PM
2 Kudos
@Prasad T Please use the XML code below to create a topology file in the /etc/knox/conf/topologies directory, and replace YARN_HOSTNAME and YARN_PORT with the relevant values. If your newly created topology is named ui.xml, you can access the YARN UI using the web URL https://KNOX_HOST:KNOX_PORT/gateway/ui/yarn/ <topology>
<gateway>
<provider>
<role>authentication</role>
<name>Anonymous</name>
<enabled>true</enabled>
</provider>
<provider>
<role>identity-assertion</role>
<name>Default</name>
<enabled>false</enabled>
</provider>
</gateway>
<service>
<role>YARN</role>
<url>http://<YARN_HOSTNAME>:<YARN_PORT></url>
</service>
<service>
<role>YARNUI</role>
<url>http://<YARN_HOSTNAME>:<YARN_PORT></url>
</service>
</topology>
05-23-2017
11:53 AM
Can you provide gateway.log and gateway-audit.log files from /var/log/knox directory for further debugging?
05-23-2017
11:52 AM
In case the Knox topology is pointing to the Standby NameNode in an HA scenario, you will get a 403 error and the logs will show something like: "message": "Operation category WRITE is not supported in state standby". In the above case, the OP is getting a 404 error, yet the directory is getting created in HDFS.
03-11-2017
07:43 PM
Are you trying to use Active Directory or OpenLDAP over SSL? Can you list the steps you took to configure LDAPS and the error you got?
02-25-2017
06:24 PM
1 Kudo
This is a really nice feature to have, given the rising security concerns recently. Nicely illustrated.
02-22-2017
09:43 AM
1 Kudo
I think the problem is your hostname, which does not have an FQDN (e.g. somehost.abc.com). Try putting /etc/hosts entries with FQDNs for your "bigdata[0-9]" hosts.
KnoxSSO requires the host to have a proper domain so it can set cookies for that domain.
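An illustrative /etc/hosts entry (the IP addresses and domain are placeholders):

```
192.168.56.101   bigdata1.example.com   bigdata1
192.168.56.102   bigdata2.example.com   bigdata2
```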
02-22-2017
08:52 AM
Can you provide the KnoxSSO topology from the Knox configuration? Also, try to authenticate using a user in Knox, as you are getting a 401.
02-02-2017
06:27 PM
@Samet Karadag Can you try the below config? Mind the "&amp;" escaping and any autocorrection to "&".
02-02-2017
06:36 AM
@Samet Karadag Can you refer to http://knox.apache.org/books/knox-0-11-0/user-guide.html#Hadoop+Configuration+Example, try adding the below config, and see if it helps.
<property>
<name>hadoop.http.authentication.type</name>
<value>org.apache.hadoop.security.authentication.server.JWTRedirectAuthenticationHandler</value>
</property>
02-01-2017
11:11 AM
1 Kudo
This may be happening because the generated SSL certificate has your VM hostname as the CN. I would suggest making a hostname-to-IP-address mapping entry in your remote machine's /etc/hosts file and accessing it using the hostname only.
Also, you can export the Knox certificate using the below command: $<JAVA_HOME>/bin/keytool -export -alias gateway-identity -rfc -file <cert.pem> -keystore /usr/hdp/current/knox-server/data/security/keystores/gateway.jks
and import the same on your remote host using the below command: $<JAVA_HOME>/bin/keytool -import -alias knoxsso -keystore <JAVA_HOME>/jre/lib/security/cacerts -storepass changeit -file <cert.pem>
01-30-2017
12:04 PM
1 Kudo
@J. D. Bacolod You can use Unix users by configuring the topology to use PAM-based authentication. Refer to http://knox.apache.org/books/knox-0-11-0/user-guide.html#PAM+based+Authentication
About Hive: the JDBC connection string is wrong. You don't have to specify the database name (i.e. microservice) with the Knox URL. Replace <PATH_TO_KNOX_KEYSTORE> with the location of gateway.jks on your Knox host and try something like below:
beeline --silent=true -u "jdbc:hive2://localhost:8443/;ssl=true;sslTrustStore=<PATH_TO_KNOX_KEYSTORE>/gateway.jks;trustStorePassword=knoxsecret;transportMode=http;httpPath=gateway/default/hive" -d org.apache.hive.jdbc.HiveDriver -n guest -p guest-password -e "show databases;"