Member since: 10-23-2019
Posts: 11
Kudos Received: 2
Solutions: 0
05-19-2020
09:46 AM
I'm trying to run all paragraphs of a note with a specific noteId and get a valid response, with the following call:
curl -s -i --negotiate -u : \
-H "Content-Type: application/json" \
-H "Accept: application/json" \
"https://localhost:9995/api/notebook/job/<noteid>"
Based on the Zeppelin API documentation, this is the correct URL for the POST method that runs all paragraphs of a note.
Since this is a Kerberized environment, I'm also passing
--negotiate -u :
so that curl uses the appropriate keytab for authentication.
Like other cases I've found online, I get an HTTP 302 response, and the redirect is to /api/login:
HTTP/1.1 302 Found
Date: Tuesday, May 19, 2020 12:35:03 PM EDT
Access-Control-Allow-Credentials: true
Access-Control-Allow-Headers: authorization,Content-Type
Access-Control-Allow-Methods: POST, GET, OPTIONS, PUT, HEAD, DELETE
X-FRAME-OPTIONS: SAMEORIGIN
Strict-Transport-Security: max-age=631138519
X-XSS-Protection: 1
Set-Cookie: JSESSIONID=776ba082-58e9-47bd-ad55-b85768c50a75; Path=/; HttpOnly
Location: https://localhost:9995/api/login;JSESSIONID=776ba082-58e9-47bd-ad55-b85768c50a75
When trying to authenticate to /api/login using just Kerberos credentials, as suggested in https://community.cloudera.com/t5/Support-Questions/Authentication-with-the-Zeppelin-REST-API/td-p/115170:
curl -i --negotiate -u : -X POST "https://localhost:9995/api/login"
I get the following response:
HTTP/1.1 500 Request failed.
Date: Tuesday, May 19, 2020 12:19:12 PM EDT
Access-Control-Allow-Credentials: true
Access-Control-Allow-Headers: authorization,Content-Type
Access-Control-Allow-Methods: POST, GET, OPTIONS, PUT, HEAD, DELETE
X-FRAME-OPTIONS: SAMEORIGIN
Strict-Transport-Security: max-age=631138519
X-XSS-Protection: 1
Set-Cookie: rememberMe=deleteMe; Path=/; Max-Age=0; Expires=Mon, 18-May-2020 16:19:12 GMT
Content-Type: text/html; charset=ISO-8859-1
Cache-Control: must-revalidate,no-cache,no-store
Content-Length: 305
Server: Jetty(9.2.15.v20160210)
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8"/>
<title>Error 500 Request failed.</title>
</head>
<body><h2>HTTP ERROR 500</h2>
<p>Problem accessing /api/login. Reason:
<pre> Request failed.</pre></p><hr><i><small>Powered by Jetty://</small></i><hr/>
</body>
</html>
What am I doing wrong here? Is it even possible to access the Zeppelin REST API with just Kerberos credentials? If so, how?
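For reference, a pattern that sometimes works with Shiro-protected Zeppelin endpoints is to authenticate once against /api/login, keep the session cookie, and replay it on the job call. This is only a sketch, not a confirmed fix; the host, port, and cookie-jar path are illustrative assumptions:

```shell
# Sketch: authenticate once, keep the session cookie, then reuse it.
# ZEPPELIN_URL is a placeholder for your environment; <noteid> stays
# whatever note ID you are targeting.
ZEPPELIN_URL="https://localhost:9995"
NOTE_ID="<noteid>"

# Step 1: hit the login endpoint with Kerberos and store the cookie jar.
curl -s -i --negotiate -u : -c /tmp/zeppelin-cookies.txt \
  -X POST "${ZEPPELIN_URL}/api/login"

# Step 2: replay the session cookie when running all paragraphs.
curl -s -i --negotiate -u : -b /tmp/zeppelin-cookies.txt \
  -H "Content-Type: application/json" \
  -X POST "${ZEPPELIN_URL}/api/notebook/job/${NOTE_ID}"
```

Note the explicit -X POST on the job call: without it, curl issues a GET, which does not match the documented method for running all paragraphs.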
01-29-2020
01:15 PM
I recently renewed the certificates for an HDP cluster, including the certificates for the Ranger plugin (correctly keeping the same CN throughout the cluster).
The issue is that Atlas is set up with two-way SSL, meaning the Ranger admin node needs (and currently has) the root CA and intermediate CA certs in its truststore, as does the Atlas node (specified correctly under the property xasecure.policymgr.clientssl.truststore).
Despite this, Ranger is still using the default Java truststore when allowing Atlas to authenticate.
I don't want to keep re-importing into the default Java truststore every time it gets overwritten, for example during OS patching.
How can I make Ranger point to the separate truststore we've created?
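One approach, sketched below under the assumption that your Ranger version picks up extra JVM options from its environment script, is to pass the standard javax.net.ssl truststore properties to the Ranger Admin process. The file path, password, and variable name are illustrative assumptions, not confirmed values for any particular HDP release:

```shell
# Sketch: point the Ranger Admin JVM at a dedicated truststore instead of
# the default cacerts. Path and password below are placeholders -- check
# how your Ranger version builds its JVM options before relying on this.
TRUSTSTORE=/etc/security/serverKeys/ranger-truststore.jks

# Append the truststore flags to the options Ranger Admin starts with.
export JAVA_OPTS="${JAVA_OPTS} \
  -Djavax.net.ssl.trustStore=${TRUSTSTORE} \
  -Djavax.net.ssl.trustStorePassword=changeit"
```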
01-24-2020
07:08 AM
I'm in the process of renewing certificates in my HDP cluster and the last thing on the agenda is the Knox server.
I understand that the new node cert is to be imported into gateway.jks, the Knox keystore, with an alias "gateway-identity".
The problem is that no one knows the password to the keystore. I thought it was the master secret, which, to my understanding, is stored in the master file, typically under /var/lib/knox/data/security/master.
So I copied the password string inside and tried it against the keystore, and it doesn't work.
I really want to avoid creating a new master secret password. (What exactly would be the implications or steps required after such an action, anyway?)
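One thing worth checking first: the master file stores the secret in an obfuscated form, so pasting the file's contents as a password is expected to fail even when the master secret itself is correct. A minimal sketch to test whether the original master secret (typed as it was originally entered, not copied from the file) opens the keystore, assuming default HDP paths:

```shell
# Typical HDP location for the Knox gateway keystore (may differ).
KEYSTORE=/var/lib/knox/data/security/keystores/gateway.jks

# keytool prompts for the keystore password interactively; enter the
# master secret as originally typed, not the obfuscated file contents.
keytool -list -v -keystore "$KEYSTORE" | head -n 20
```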
01-22-2020
07:37 AM
1 Kudo
@lyubomirangelo Thank you! Going through the wizard (ambari-server setup-security) fixed my issue. I just needed to point to the new key and certificate chain file, then restart.
01-21-2020
08:39 AM
The node certificates on my cluster are expiring soon, so I have installed new ones, including on the node that hosts ambari-server. However, after restarting the Ambari server, the Ambari agent, and even the node itself, the old certificate still shows. I've also tried clearing my browser's cache and cookies for all time, but it doesn't help, and the old cert even shows up in IE. I've used the same procedure on other nodes in the cluster and it worked, so why isn't it working for the Ambari node? (ambari-server is set up with an HTTPS port.)
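A quick way to rule out browser caching entirely is to ask the Ambari HTTPS port directly which certificate it is serving. This sketch assumes openssl is available on a node that can reach Ambari; the host and port below are placeholders:

```shell
# Placeholders: substitute your Ambari host and HTTPS port.
AMBARI_HOST=ambari.example.com
AMBARI_PORT=8443

# Fetch the served certificate and print its subject and validity dates,
# bypassing any browser-side caching.
echo | openssl s_client -connect "${AMBARI_HOST}:${AMBARI_PORT}" 2>/dev/null \
  | openssl x509 -noout -subject -dates
```

If this still shows the old certificate, the server is genuinely serving it and the browser is not at fault.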
Tags: Ambari, certificate
01-17-2020
03:39 PM
1 Kudo
I just recently imported a certificate chain into the keystore that NiFi points to, on 3 NiFi nodes; call them node1, node2 and node3. The truststore.jks file has so far been left unedited. Testing the SSL handshakes between nodes, I get:

SSL handshake has read 4537 bytes and written 495 bytes
...
return code: 0 (ok)

This was executed from node2 requesting node1 (using the same port configured in the NiFi SSL settings in Ambari). Other combinations were similarly successful (node1 -> node2, node3 -> node1, etc.). However, after the certificate import and a NiFi restart, the NiFi UI shows that the cluster has been disconnected. Furthermore, it shows that the SSL handshakes are failing:

Attempt to contact NiFi Node https://node2:port/nifi did not complete due to exception: javax.net.ssl.SSLHandshakeException: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target

Attempt to contact NiFi Node https://node3:port/nifi did not complete due to exception: javax.net.ssl.SSLHandshakeException: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target

Attempt to contact NiFi Node https://node1:port/nifi did not complete due to exception: javax.net.ssl.SSLHandshakeException: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target

What is going on here? Why isn't the SSL handshake working through NiFi?
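That PKIX error is raised by the connecting side of the handshake when the peer's certificate chain cannot be validated against the local truststore. If the new node certificates were signed by a CA that truststore.jks does not yet trust, one hedged sketch is to import the signing chain into the truststore on every node; the path, aliases, and file names below are assumptions:

```shell
# Sketch: add the CA chain that signed the new node certs to the NiFi
# truststore. Path and file names are placeholders for your environment.
TRUSTSTORE=/etc/nifi/conf/truststore.jks

keytool -importcert -alias rootca -trustcacerts \
  -file /tmp/rootca.crt -keystore "$TRUSTSTORE"
keytool -importcert -alias intermediateca -trustcacerts \
  -file /tmp/intermediateca.crt -keystore "$TRUSTSTORE"
# Repeat on node1, node2 and node3, then restart NiFi on all nodes.
```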
01-15-2020
08:14 AM
@EricL After obtaining the root and intermediate CA certificates, I used the following command, where cacerts is the truststore file:

sudo keytool -importcert -alias rootca -keystore cacerts -file /tmp/rootca.crt

I get the following message:

Certificate already exists in keystore under alias <...>

So the rootca and intermediateca certs are already in my cacerts truststore. Why, then, is keytool not allowing me to import the new certificate into the keystore? (Note: I'm trying to import the server certificate into a different file than cacerts.)
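For what it's worth, that message compares certificate *content*, not just aliases: keytool is saying the same certificate is already present under another alias and is asking whether to add a duplicate. A sketch of how one might inspect the existing entries and then import the server certificate into the intended keystore under a fresh alias; the alias and file names are illustrative:

```shell
# List what is already in the truststore (prompts for its password).
keytool -list -keystore cacerts

# Import the server certificate into its own keystore rather than
# cacerts, under an alias that is not already in use.
# "myserver" and the file names are placeholders.
keytool -importcert -alias myserver \
  -file /tmp/server.crt -keystore server-keystore.jks
```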
01-14-2020
01:50 PM
I'm in the process of renewing the certificates for each node in my Hadoop cluster. I obtained a certificate file for each of my nodes, but when running the following command:

sudo keytool -importcert -alias node1 -file node1.cer -keystore keystore.jks

I get the error:

keytool error: java.lang.Exception: Failed to establish chain from reply

From what I've gathered, this happens because I haven't loaded the root and intermediate CA certificates into the truststore yet. Looking into the truststore.jks file itself, I can see that I already have root and intermediate CA certificates that won't expire for a long while, so they've already been loaded. Is it possible to use these existing root and intermediate CA certificates while importing my new Hadoop node certificate into the keystore? (I've also tried this command variation but still got the same error:)

sudo keytool -import -alias node1 -trustcacerts -storetype jceks -file node1.cer -keystore keystore.jks
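"Failed to establish chain from reply" means keytool could not link the imported certificate back to a trusted root using the entries in keystore.jks itself (plus the JVM cacerts when -trustcacerts is given); CAs sitting only in a separate truststore.jks are not consulted. Two hedged sketches, with all file names as placeholders:

```shell
# Sketch 1: import the signing CAs into the *same* keystore first,
# then the node certificate will chain correctly.
keytool -importcert -alias rootca -file rootca.crt -keystore keystore.jks
keytool -importcert -alias intermediateca -file intermediateca.crt -keystore keystore.jks
keytool -importcert -alias node1 -file node1.cer -keystore keystore.jks

# Sketch 2: import a full chain in one file (leaf first, then
# intermediates, then root, concatenated as PEM).
cat node1.cer intermediateca.crt rootca.crt > node1-chain.pem
keytool -importcert -alias node1 -file node1-chain.pem -keystore keystore.jks
```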
12-04-2019
06:18 PM
I'm trying to get a properly delimited listing of all the files output by the
hdfs dfs -ls -R /
command in HDFS.
When I run the command, I get inconsistent spacing between columns in the output. Is there a way to use a consistent delimiter, such as a tab or comma?
The aim is to paste the output into an Excel file with no overlap between columns.
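One hedged sketch: since the ls output is space-aligned rather than delimited, pipe it through awk and rejoin the columns with tabs. This assumes the standard eight-field ls line (permissions, replication, owner, group, size, date, time, path); paths containing spaces are reassembled from the trailing fields:

```shell
# Collapse variable-width spacing from `hdfs dfs -ls -R` into tabs.
hdfs dfs -ls -R / \
  | awk 'NF >= 8 {
      # Rebuild the path so file names containing spaces stay intact.
      path = $8
      for (i = 9; i <= NF; i++) path = path " " $i
      print $1 "\t" $2 "\t" $3 "\t" $4 "\t" $5 "\t" $6 "\t" $7 "\t" path
    }'
```

A tab-separated file pastes cleanly into Excel with each value in its own column.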
10-24-2019
07:17 AM
[postgres@testvm1 ~]$ psql
psql (9.2.24)
Type "help" for help.
postgres=#

@Scharan Confirmed postgres is running. Again, I don't seem to have a /var/lib/pgsql folder at all, despite postgres being installed... It might help to mention that I've placed Hive Metastore and HiveServer2 on a node that's not the postgres/Ambari server node (from what I've heard, it should not matter).

UPDATE: I did find the pg_hba.conf file, just in a different location. It shows the following:

# "local" is for Unix domain socket connections only
local all all trust
# IPv4 local connections:
host all all 127.0.0.1/32 trust
# IPv6 local connections:
host all all ::1/128 trust
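Worth noting: the rules above only cover Unix-socket and loopback connections, so a Hive Metastore on a different node would be rejected outright. A sketch of adding a rule for that node, with the pg_hba.conf path and the CIDR range as placeholders:

```shell
# Allow the Hive/Ambari subnet to reach postgres.
# Both the file path and the 10.0.0.0/24 range are placeholders.
echo 'host  all  all  10.0.0.0/24  md5' | sudo tee -a /path/to/pg_hba.conf

# Reload so the new rule takes effect without a full restart.
sudo -u postgres psql -c 'SELECT pg_reload_conf();'
```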
10-23-2019
02:04 PM
2019-10-23 20:42:17,691 - Check db_connection_check was unsuccessful. Exit code: 1. Message: ERROR: Unable to connect to the DB. Please check DB connection properties.
org.postgresql.util.PSQLException: Connection refused. Check that the hostname and port are correct and that the postmaster is accepting TCP/IP connections.
Traceback (most recent call last):
File "/var/lib/ambari-agent/cache/custom_actions/scripts/check_host.py", line 546, in <module>
CheckHost().execute()
File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", line 375, in execute
method(env)
File "/var/lib/ambari-agent/cache/custom_actions/scripts/check_host.py", line 207, in actionexecute
raise Fail(error_message)
resource_management.core.exceptions.Fail: Check db_connection_check was unsuccessful. Exit code: 1. Message: ERROR: Unable to connect to the DB. Please check DB connection properties.
org.postgresql.util.PSQLException: Connection refused. Check that the hostname and port are correct and that the postmaster is accepting TCP/IP connections.

I followed the steps outlined when you select "Existing PostgreSQL database". Namely, I used:

sudo yum install postgresql-jdbc

to install the package, and then:

ambari-server setup --jdbc-db=postgres --jdbc-driver=/usr/share/java/postgresql-jdbc.jar

Setup completed successfully. I'm building a test cluster with 3 machines on Azure and cannot set up Hive. Also, I seem to not have a /var/lib/pgsql folder on my Ambari server like people reference in similar problems on the web, if that makes a difference.
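"Connection refused" at the TCP level (as opposed to an authentication failure) suggests nothing is accepting connections on that host and port at all, which usually points at postgres listening only on localhost. A hedged sketch of the usual checks; file locations are placeholders and vary by distribution:

```shell
# 1. Check whether postgres is listening on a TCP socket (default 5432)
#    and on which address it is bound.
ss -tlnp | grep 5432

# 2. If it is bound only to 127.0.0.1, set this in postgresql.conf
#    and restart:
#      listen_addresses = '*'
# 3. Then add a matching host rule to pg_hba.conf for the Hive node, e.g.:
#      host  all  all  10.0.0.0/24  md5
sudo systemctl restart postgresql
```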