Member since: 05-22-2019
Posts: 58
Kudos Received: 31
Solutions: 7
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 3221 | 05-31-2018 07:49 AM |
| | 2827 | 04-05-2018 07:30 AM |
| | 4003 | 07-24-2017 03:08 PM |
| | 4752 | 02-22-2017 09:43 AM |
| | 5178 | 10-18-2016 02:48 PM |
05-31-2018
03:19 PM
Yes, that's required for PAM authentication to work. Happy to help.
05-31-2018
12:47 PM
Can you provide more information? Mask any sensitive data and share the details of the 404 error; a 404 normally means the topology is not deployed. For authentication-related issues you would generally get an HTTP 401 error instead.
05-31-2018
07:49 AM
1 Kudo
@Sparsh Singhal You need to configure the authentication provider in your Knox topology to use the KnoxPamRealm class to set up PAM authentication; follow the link here. An Ubuntu-specific example of the PAM configuration (/etc/pam.d/login) is available here. After successful configuration, you can authenticate via Knox with existing Unix users.
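For reference, a minimal sketch of what the authentication provider section of the topology might look like for PAM. The package name of KnoxPamRealm varies by Knox version (shown here as the pre-1.0 `org.apache.hadoop.gateway` package), and the PAM service name `login` is an assumption matching the /etc/pam.d/login example above; adapt both to your installation.

```xml
<provider>
    <role>authentication</role>
    <name>ShiroProvider</name>
    <enabled>true</enabled>
    <param>
        <!-- Realm class; in Knox 1.0+ the package is org.apache.knox.gateway.shirorealm -->
        <name>main.pamRealm</name>
        <value>org.apache.hadoop.gateway.shirorealm.KnoxPamRealm</value>
    </param>
    <param>
        <!-- PAM service to authenticate against, i.e. /etc/pam.d/login -->
        <name>main.pamRealm.service</name>
        <value>login</value>
    </param>
    <param>
        <!-- Require HTTP Basic authentication on all gateway URLs -->
        <name>urls./**</name>
        <value>authcBasic</value>
    </param>
</provider>
```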
04-05-2018
07:30 AM
1 Kudo
@Anurag Mishra LDAP authentication is configured by adding a "ShiroProvider" authentication provider to the cluster's topology file. When enabled, the Knox Gateway uses Apache Shiro ( org.apache.shiro.realm.ldap.JndiLdapRealm ) to authenticate users against the configured LDAP store. Please go through this document link. 1. The Shiro Provider is Knox-side code and already integrated; you need not worry about its internals. Change admin.xml (the Admin topology, i.e. for Knox administrators) to the proper LDAP/AD values. For general usage, use the default topology for services integration. 2. See the documentation above. 3. See the documentation above. 4. Make a group of the users you want to give access to and whitelist them using ACLs.
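As a sketch, the ShiroProvider section of the topology typically looks like the following; the LDAP URL, the user DN template, and the example.com names are placeholders you must replace with your own LDAP/AD values.

```xml
<provider>
    <role>authentication</role>
    <name>ShiroProvider</name>
    <enabled>true</enabled>
    <param>
        <!-- Shiro realm that binds against LDAP -->
        <name>main.ldapRealm</name>
        <value>org.apache.shiro.realm.ldap.JndiLdapRealm</value>
    </param>
    <param>
        <!-- Template used to build the user DN from the login name; placeholder values -->
        <name>main.ldapRealm.userDnTemplate</name>
        <value>uid={0},ou=people,dc=example,dc=com</value>
    </param>
    <param>
        <!-- URL of your LDAP/AD server; placeholder value -->
        <name>main.ldapRealm.contextFactory.url</name>
        <value>ldap://ldap.example.com:389</value>
    </param>
    <param>
        <name>main.ldapRealm.contextFactory.authenticationMechanism</name>
        <value>simple</value>
    </param>
    <param>
        <!-- Require HTTP Basic authentication on all gateway URLs -->
        <name>urls./**</name>
        <value>authcBasic</value>
    </param>
</provider>
```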
01-03-2018
07:20 AM
@Vijay Mishra Can you remove the authorization provider from the default topology and see whether access is being blocked by Ranger policies?
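For reference, the authorization provider block in question usually looks like the sketch below; the provider name `XASecurePDPKnox` is an assumption based on the Ranger Knox plugin name used in HDP-era deployments. Rather than deleting the block, setting `enabled` to `false` is the least invasive way to test.

```xml
<provider>
    <role>authorization</role>
    <!-- Ranger plugin name is deployment-specific; XASecurePDPKnox assumed here -->
    <name>XASecurePDPKnox</name>
    <!-- Set to false to bypass Ranger authorization while testing -->
    <enabled>false</enabled>
</provider>
```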
01-03-2018
05:45 AM
Does your default.xml have just the 2 lines shown in the cat command output?
12-14-2017
07:17 AM
3 Kudos
Modern web browsers come with a few built-in defenses against common web attacks, but we need to enable our web applications to use them.
Recently, support for several such HTTP response headers was added to Zeppelin to thwart common attacks like cross-site scripting (XSS), clickjacking, man-in-the-middle (MITM), and SSL downgrade attacks, which browsers can use to enable client-side security features. We need to configure the properties in zeppelin-site.xml listed below to enable the supported security headers. 1. The "zeppelin.server.xxss.protection" property needs to be set in zeppelin-site.xml in order to send the X-XSS-Protection header.
The HTTP X-XSS-Protection response header is a feature of Internet Explorer, Chrome, and Safari that stops pages from loading when they detect reflected cross-site scripting (XSS) attacks. When the value is set to "1; mode=block", the browser will enable XSS filtering and prevent rendering of the page if an attack is detected. When the value is set to "0", protection against XSS attacks is turned off and the browser's XSS filtering is disabled. When the value is set to "1" and a cross-site scripting attack is detected, the browser will sanitize the page (remove the unsafe parts). See example config below:
<property>
<name>zeppelin.server.xxss.protection</name>
<value>1; mode=block</value>
</property>
2. The "zeppelin.server.xframe.options" property needs to be set in zeppelin-site.xml in order to send the X-Frame-Options header. The X-Frame-Options HTTP response header tells the browser whether the page may be embedded in other sites in a <frame>, <iframe>, or <object>, which helps prevent clickjacking attacks.
When the value is set to "DENY", the web page cannot be displayed in a frame, regardless of the site attempting to do so. When the value is set to "SAMEORIGIN", the web page can only be displayed in a frame on the same origin as the page itself. When the value is set to "ALLOW-FROM <uri>", the web page can only be displayed in a frame on the specified origin, i.e. the given URI. See example config below:
<property>
<name>zeppelin.server.xframe.options</name>
<value>SAMEORIGIN</value>
</property>
3. The "zeppelin.server.strict.transport" property needs to be set in zeppelin-site.xml in order to enable HSTS (HTTP Strict Transport Security).
Enabling the HSTS response header prevents man-in-the-middle attacks by automatically redirecting HTTP requests to HTTPS when the Zeppelin server is running on SSL. Even if a web page contains resources served over HTTP, or any HTTP links, they will automatically be redirected to HTTPS for the target domain. It also prevents MITM attacks by not allowing the user to override the invalid-certificate warning when an attacker presents an invalid SSL certificate.
The REQUIRED "max-age" directive specifies the number of seconds, after receipt of the STS header field, during which the user agent (web browser) regards the host the message was received from as a known HSTS host. Set the "max-age" value as per your requirements.
max-age=<expire-time> - The time, in seconds, that the browser should remember that the site is only to be accessed using HTTPS.
max-age=<expire-time>; includeSubDomains - The 'includeSubDomains' flag is useful if all present and future subdomains will be HTTPS. Please be aware that this will block access to certain pages that can only be served over HTTP.
max-age=<expire-time>; preload - The 'preload' flag indicates the site owner's consent to have their domain preloaded. The site owner still needs to submit the domain to the HSTS preload list maintained by Google Chrome (and used by Firefox and Safari).
See example config below:
<property>
<name>zeppelin.server.strict.transport</name>
<value>max-age=31536000; includeSubDomains</value>
</property>
09-14-2017
05:55 AM
This is certainly a much-needed feature for Knox and is going to save ample time when configuring topologies for multiple services. Also, as pointed out, it rules out manual error while editing the topology XML file.
08-18-2017
07:04 AM
1 Kudo
@soumya swain Two things can go wrong here. 1) KnoxSSO expects a valid hostname with a domain name, as the cookie will be set for that specific domain. So your hostname needs to be in the format "{somehost}.{someorganisation}.{someTLD}", e.g. knoxhost.example.com. You can achieve this by adding an extra entry to the /etc/hosts file on both nodes. 2) You need to provide the Knox SSL certificate as the "Public Certificate pem" value when executing the "ambari-server setup-sso" command. The easiest way to get it is the command below; paste the content between "-----BEGIN CERTIFICATE-----" and "-----END CERTIFICATE-----" as the "Public Certificate pem" value.
openssl s_client -connect knoxhost.example.com:8443 < /dev/null | sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' > knoxssoAmbari.crt
08-14-2017
06:25 AM
@Warius Unnlauf If you want to change the Knox port in order to resolve this, please change the "gateway.port" property in gateway-site.xml under Knox's conf directory. Sample config below:
<property>
<name>gateway.port</name>
<value>8883</value>
</property>