Member since: 01-05-2015
Posts: 235
Kudos Received: 19
Solutions: 13
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 2052 | 01-16-2019 04:11 PM |
| | 5097 | 01-16-2019 01:35 PM |
| | 3072 | 01-16-2019 07:36 AM |
| | 21482 | 11-19-2018 08:08 AM |
| | 3296 | 10-26-2018 04:17 PM |
12-17-2018
10:24 AM
Hello, Can you try commenting out this line in your Nginx configuration? > proxy_set_header X-Forwarded-Proto https; The error that you are reporting now is being returned directly by Nginx. It means that something is trying to use plain text instead of TLS. You may also have to alter the way the server is configured. At the moment you have set Nginx to accept TLS requests only, which may impact your ability to proxy to the backend since the backend does not use TLS. In the server block you may need to comment out: > ssl on; Then alter the listen parameter in Nginx like so: > listen 8001 ssl; https://docs.nginx.com/nginx/admin-guide/security-controls/terminating-ssl-tcp/
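Putting those pieces together, a minimal sketch of the server block might look like the following. This is only an illustration: the server name, certificate paths, and upstream address are assumptions, so adjust them to your environment.

```nginx
server {
    # Terminate TLS on the listen directive rather than with "ssl on;"
    listen 8001 ssl;
    server_name example.com;                         # assumed hostname

    ssl_certificate     /etc/nginx/ssl/server.crt;   # assumed cert paths
    ssl_certificate_key /etc/nginx/ssl/server.key;

    location / {
        # The Hue backend speaks plain HTTP in this scenario
        proxy_pass http://127.0.0.1:8888;            # assumed upstream
        proxy_set_header Host $host;
        # proxy_set_header X-Forwarded-Proto https;  # commented out per above
    }
}
```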
12-14-2018
09:43 AM
Hi Dennis, This thread is pretty old and marked solved; it might be a good idea to start a new thread instead of adding more to this one. The stack trace you provided appears to be a partial one, though I could be wrong. The stack trace we do have appears to point to a problem with TLS. Have you already configured TLS in CM or on the Agent? Are you attempting to use Auto-TLS? If you are, have you properly set up all of the hosts?
12-14-2018
09:17 AM
Outside of the items noted below I am not seeing anything else that might be wrong with your configuration. Are you certain that TLS is not already enabled on Hue? You seem to have proxy_set_header Host twice in the first location block. Can you please remove the one shown below? > proxy_set_header Host $host; Also, please uncomment the following line under the static location. This alias is required if you deployed using parcels; if you deployed using packages the path will be slightly different. > #alias /opt/cloudera/parcels/CDH/lib/hue/build/static/; If Hue was installed with a package install, use instead: > alias /usr/lib/hue/build/static/;
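For reference, a sketch of what the static location could end up looking like, using the parcel path from above (swap in the package path if that matches your install):

```nginx
location /static/ {
    # Parcel-based install:
    alias /opt/cloudera/parcels/CDH/lib/hue/build/static/;
    # Package-based install would instead use:
    # alias /usr/lib/hue/build/static/;
}
```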
12-13-2018
06:56 AM
Hello, While we do not provide support directly for Nginx, reviewing the log data you have posted it would appear that the Hue backend you are attempting to proxy to on Node 1 is not accepting incoming requests. Are you sure that there are no firewalls between the proxy and the node you are connecting to? Are you sure that Hue is available at the address you have configured? > 2018/12/11 20:04:18 [error] 19352#19352: *14 connect() failed (111: Connection refused) while connecting to upstream, client: <client_ip>, server: gravalytics.com, request: "GET /favicon.ico HTTP/1.1", upstream: "http://<node1_ip>:8888/favicon.ico", host: "gravalytics.com:8001", referrer: "https://gravalytics.com:8001/"
12-11-2018
09:16 AM
Hello, In order to help you with this issue we will need to see more log data, or you will need to review the log data yourself. A 404 is a generic error code which may originate from one of two places in your scenario: directly from Nginx, or from Hue. You will need to review the Nginx logs and Hue logs to determine what is returning the 404 and for which resource. One way to make this easier is to remove one of your upstream servers from the server group so that Nginx only proxies to one Hue instance while you investigate the 404 condition.
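As a sketch, temporarily narrowing the upstream group to a single Hue instance could look like this (the group name and addresses here are assumptions, not taken from your configuration):

```nginx
# Sketch only: upstream name and addresses are placeholders.
upstream hue_servers {
    server 10.0.0.1:8888;      # keep a single Hue instance while debugging
    # server 10.0.0.2:8888;    # temporarily commented out of the group
}
```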
12-11-2018
09:04 AM
Hello, Changes to the YARN configuration should not have any direct impact on the Management services or CM. The error you have reported is surfacing through the JDBC driver. Communication link failures, where packets have not been received over a period of time, are generally caused by a problem on the backing database system. > com.mchange.v2.resourcepool.BasicResourcePool$AcquireTask@20bf4111 -- Acquisition Attempt Failed!!! Clearing pending acquires. While trying to acquire a needed new resource, we failed to succeed more than the maximum number of allowed acquisition attempts (5). Last acquisition attempt exception: com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server. We would suggest reviewing the database logging data to ensure that the connection is not being rejected for any particular reason. Certain versions of MySQL track so-called "bad" clients and store them in a table after a certain number of failed communication attempts.
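On that last point, the MySQL-side limit involved is `max_connect_errors`; a hedged sketch of what raising it in the server configuration might look like is below (file location and value are assumptions, and `FLUSH HOSTS;` run as an admin clears hosts that are already blocked):

```ini
# /etc/my.cnf — sketch only; the config file location varies by distribution.
[mysqld]
# MySQL blocks a host after this many interrupted connection attempts
# (the "bad client" tracking mentioned above). The default is 100 in
# MySQL 5.7; raising it reduces the chance of the CM host being blocked.
max_connect_errors = 10000
```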
11-19-2018
08:08 AM
> java.security.cert.CertificateException: No subject alternative DNS name matching abc found. Hi, This error is important to note, as it means that the certificate presented is not acceptable to the client. The balancing algorithm really has no bearing on this particular issue; you must address the certificate itself. By RFC standard, if you use Subject Alt Names (SAN) together with a CN, the very first entry in the DNS Alt Name field must be the CN of the certificate. The error tells us that abc is not the first entry in the DNS Alt Names (SAN). You need to review the CN and Subject/DNS Alt Names on the certificates in use by HiveServer2.
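To illustrate the first-SAN-entry point, the sketch below creates a throwaway certificate whose first DNS Alt Name matches the CN and then prints the SAN list; you would run the same inspection step against the certificate HiveServer2 actually serves. The hostname "abc" is taken from the error above purely for illustration, and the `-addext` flag assumes OpenSSL 1.1.1 or newer.

```shell
# Create a self-signed cert with CN=abc and "abc" as the first SAN entry
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout /tmp/hs2-key.pem -out /tmp/hs2-cert.pem -days 1 \
  -subj "/CN=abc" -addext "subjectAltName=DNS:abc,DNS:abc.example.com"

# Print the Subject Alternative Name extension; the client's hostname
# verification must find a matching DNS entry in this list
openssl x509 -in /tmp/hs2-cert.pem -noout -ext subjectAltName
```

Against a live HiveServer2 you could feed `openssl s_client -connect <host>:<port>` output into the same `openssl x509` inspection instead of a local file.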
11-15-2018
08:32 AM
1 Kudo
@balusu If you are a licensed customer using Key Trustee, please open a case immediately. While we would like to help you with this on the community, parts of the diagnostic process on Key Trustee Server and its clients may expose sensitive information from your environment. DO NOT arbitrarily attempt to sync the KMS client data again without diagnostics performed by Cloudera! Syncing the client data again without working with us may result in unexpected data loss. There may be less risk if this is a POC, dev, or new environment, but at this point in time that is not visible to us. When we are working with the Key Trustee KMS component, and not the Key Trustee Server, there are no active or passive delegations; all configured Key Management Server Proxies are used within the cluster.
11-15-2018
07:49 AM
1 Kudo
Hi, From the post it is not clear what type of KMS is deployed on this cluster. As this is a community post I will assume that you are using the JCEKS-backed KMS. Please note that the ASF and Cloudera both suggest that this KMS implementation not be used in production deployments. https://hadoop.apache.org/docs/current/hadoop-kms/index.html > Caused by: java.lang.NullPointerException: No KeyVersion exists for key 'testTLS1' The error above occurs when a KMS instance makes a call to getKeyVersion. This call only occurs when the KMS is actually attempting to retrieve a key from the backend key datastore. When you see this error it quite literally means that the requested key cannot be found in the backing key datastore. If you have more than one KMS, for example, it likely means that the backing keystore on each KMS is not in sync. The KMS core does not in any way replicate key material between individual instances at this time; n+1 capabilities are offered through other means in other providers that use external methods to synchronize this data. If, however, you believe you have performed a synchronization of the keystores on your own in a safe manner, it is possible that one or more backing keystores are corrupt. The KMS will attempt to automatically create a new JCE keystore when this occurs and export the keys it can. Unfortunately the KMS stores information in these keystores in a format that cannot be manipulated with keytool. If the automated recovery fails, then all data in the JCE keystore is lost, and by proxy all keys, as well as the data they protect, are lost. For the core JCE-backed KMS the logging information will appear in /var/log/hadoop-kms. In most cases, in order to have a meaningful idea of what is wrong, you will need to review kms.log, kms-audit.log, and the Catalina logs. At this time, if you are on 6.x, Tomcat has been replaced by Jetty, so the Catalina logs will not exist.
The behavior you are seeing suggests that you have more than one KMS instance and the request fails over to a working instance, which allows the write to occur. Actual data encryption and decryption occurs on the DFS client. This means that in order for a read or write to occur the client must have a key and the information required to open the read or write pipe.
10-26-2018
04:17 PM
1 Kudo
Hi, Unfortunately service migrations from platform to platform are not easy to complete; this type of migration is normally handled by our services teams. The process typically requires a number of steps, including but not limited to understanding your active use cases and which services you have in your existing cluster. Please reach out to our sales team, or to your account team if you are an actively licensed customer, for guidance.