Member since: 01-05-2015
Posts: 235
Kudos Received: 19
Solutions: 12
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 1246 | 01-16-2019 04:11 PM |
| | 3362 | 01-16-2019 01:35 PM |
| | 1782 | 01-16-2019 07:36 AM |
| | 9583 | 11-19-2018 08:08 AM |
| | 2598 | 10-26-2018 04:17 PM |
03-20-2020
07:53 AM
The symbolic link is intentional but less important; its primary purpose is to avoid the need for changes in configured/operating software. The navencrypt-move tool creates a specific storage layout in the encrypted "container" that it uses to identify the monitored spaces to which the kernel module applies controls. If you are not using this structure, the ACLs will not work properly unless you are using the Universal ACL, which applies little to no control over data access.
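For reference, data is normally moved into that structure with the navencrypt-move tool rather than by copying files manually. A minimal sketch, where the category name and paths are purely hypothetical examples:
# Encrypt an existing directory into the Navigator Encrypt mount point.
# @mydata is a hypothetical category; /var/lib/mydata and /navencrypt/mount are example paths.
navencrypt-move encrypt @mydata /var/lib/mydata /navencrypt/mount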
03-06-2020
10:52 AM
This can mean one of a few things, but ultimately the error code you are seeing is being returned by your KDC.
Client not found in database means that the requested SPN cannot be found by the KDC. This most often occurs when forward and reverse DNS are not properly configured in your environment. If you find that both forward and reverse DNS are in working order, then you should review the credentials on the KDC and ensure that the credential we are trying to use exists for this host.
You can enable Kerberos debugging to get additional information from the JVM on the Kerberos interaction, but generally speaking ZooKeeper uses one of two credentials. Either
HTTP/<fqdn>@realm
or
zookeeper/<fqdn>@realm.
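If it helps, Kerberos debugging in the JVM is usually enabled with the standard Sun security system properties; a minimal sketch of the flags you would add to the ZooKeeper Java options (where exactly you set them depends on your CM version):
# Standard JVM flags for Kerberos/GSSAPI debugging output.
-Dsun.security.krb5.debug=true -Dsun.security.jgss.debug=true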
12-19-2019
07:04 AM
Hello, Take a look at the Agent configuration on the reported host and make sure that use_tls is set to 1 and not 0. This error usually happens when Cloudera Manager expects TLS from the agent (based on the enabled options) but the agent is continuing to send data using clear text. If the parameter is set to 1, please restart the agent to ensure that it is applied properly. The agent performs multiple tasks; some of these tasks are performed with data transmitted through the heartbeat, while others are handled by a pull method through the agent.
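As a quick reference, the setting lives in the agent's config.ini; a minimal sketch, assuming the default packaged paths and service name:
# /etc/cloudera-scm-agent/config.ini
[Security]
use_tls=1

# Restart the agent so the change takes effect.
service cloudera-scm-agent restart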
12-18-2019
02:07 PM
1 Kudo
Hi, We no longer refer to levels in our documentation, but based on what is posted here it would appear as though you are on Step 4 of our current documentation for manual certificate configuration. While we understand that you might not be able to provide the full raw contents of your certificate in this forum, please ensure that the certificates you are attempting to use have the following two x509 Extended Key Usage attributes. These two EKUs must be present in order to use Client/Server Authentication, which is the final step of deployment should you choose to go that far. They are noted in our documentation.
X509v3 Extended Key Usage:
TLS Web Client Authentication, TLS Web Server Authentication
https://docs.cloudera.com/documentation/enterprise/6/6.3/topics/how_to_configure_cm_tls.html#concept_gkg_xs3_lx
https://docs.cloudera.com/documentation/enterprise/6/6.3/topics/how_to_configure_cm_tls.html#topic_3
https://docs.cloudera.com/documentation/enterprise/6/6.3/topics/how_to_configure_cm_tls.html
Also please make sure that there are no leading or trailing spaces on any configuration parameters within the agent configuration file. If your certificate is not single-root, that is, it is signed by an intermediate, you may have better success using the verify_cert_dir parameter so that all of your CA certificates, including the root certificate, are present. When using the verify_cert_dir parameter you must use c_rehash, provided by openssl-perl.
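A quick way to confirm the EKUs and to prepare a verify_cert_dir; a minimal sketch, assuming the certificate and CA chain are in PEM format (the file and directory names are examples only):
# Show the Extended Key Usage attributes of the agent certificate.
openssl x509 -in agent-cert.pem -noout -text | grep -A1 "Extended Key Usage"

# Hash the CA certificates in the directory referenced by verify_cert_dir (c_rehash comes from openssl-perl).
c_rehash /opt/cloudera/security/CAcerts/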
12-18-2019
01:31 PM
Your recent update indicates that an upgrade was performed from 5.14 to 6.2. The 500 error indicates a server-side failure in the response. Have you properly followed the migration and upgrade documentation found here?
https://docs.cloudera.com/documentation/enterprise/upgrade/topics/search_prepare_upgrade_cdh_6.html
08-07-2019
10:32 AM
Key Trustee ships as a self-contained product. While the front end of Key Trustee can and does support TLS 1.2, the PostgreSQL version that presently ships with Key Trustee has no support for cipher enforcement. Even though it is capable of supporting TLS 1.2, other cipher specs cannot be forcefully disabled. If your goal is to ensure that all of the components on your system use only TLS 1.2, be advised that security scanners will continue to report other TLS versions for the release of PostgreSQL that currently ships with Key Trustee, even with the available workarounds. We also do not provide any UI-based methods for altering the TLS configuration of the backing Key Trustee database. You will need to reach out through normal support channels to obtain the workaround we can provide. The KT service exports the following parameters, along with others, to the environment during startup. You should not attempt to update any part of the parcel content. Note where the Python home is located.
export KEYTRUSTEE_PYTHONHOME=$KEYTRUSTEE_SERVER_DIRNAME/lib/python2.6
export KEYTRUSTEE_PYTHONPATH=$KEYTRUSTEE_PYTHONHOME/lib:$KEYTRUSTEE_PYTHONHOME/site-packages
export PYTHONPATH=$KEYTRUSTEE_PYTHONPATH:$KEYTRUSTEE_SERVER_DIRNAME
export BIN=${KEYTRUSTEE_SERVER_DIRNAME}/bin
export KEYTRUSTEE_PYTHON=${BIN}/python
export CRYPTOGRAPHY_ALLOW_OPENSSL_100=True
The parcel presently ships with pyOpenSSL-0.14:
KEYTRUSTEE_PYTHONHOME/site-packages/pyOpenSSL-0.14-py2.6.egg-info
Please show us the documentation you are referencing, and also be advised that various other components throughout the platform also use Python.
03-01-2019
09:01 AM
Hello, Please review the Hortonworks community documentation. It covers rack awareness better than our documentation currently does, and it is accurate. The behaviour you are describing is exactly how rack awareness works. When HDFS is made rack-aware it will place two replicas within the same rack and a third in a remote rack. That is because local nodes within the same rack are preferable both for the HDFS framework and for most job schedulers. With a replication factor of 3, HDFS will not place a block on every rack.
02-28-2019
09:03 AM
Hello, Can you please tell us what documentation you have reviewed? Setting the rack locations of hosts is normally what is used to determine block placement. If HDFS, for example, is aware of your topology, it should ensure that at least one replica is on another rack.
https://www.cloudera.com/documentation/enterprise/5-15-x/topics/cm_mc_specify_rack.html
https://community.hortonworks.com/articles/43057/rack-awareness-1.html
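Outside of Cloudera Manager, plain HDFS learns rack locations from a topology script referenced by net.topology.script.file.name. A minimal sketch with a purely hypothetical host-to-rack mapping (hostnames, subnets, and rack names are examples only):
#!/bin/bash
# Hypothetical topology script: print a rack path for each host or IP HDFS asks about.
for host in "$@"; do
  case "$host" in
    node1*|10.0.1.*) echo "/rack1" ;;
    node2*|10.0.2.*) echo "/rack2" ;;
    *)               echo "/default-rack" ;;
  esac
done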
01-24-2019
08:13 AM
Hello, According to the error information it appears as though you are attempting to use MySQL as a database backend. The stack trace reports that the MySQL JDBC driver cannot be located or loaded.
HHH010003: JDBC Driver class not found: com.mysql.jdbc.Driver
... Caused by: java.lang.ClassNotFoundException: Could not load requested class : com.mysql.jdbc.Driver
Please review the following documentation.
https://www.cloudera.com/documentation/enterprise/6/6.1/topics/cm_ig_mysql.html#cmig_topic_5_5_3
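In practice this usually means the MySQL Connector/J jar is not in the location the installation expects. A minimal sketch of staging it, assuming you have already downloaded Connector/J (the version in the file name is only an example):
# Copy the Connector/J jar to the shared Java directory under the expected name.
mkdir -p /usr/share/java/
cp mysql-connector-java-5.1.46-bin.jar /usr/share/java/mysql-connector-java.jar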
01-24-2019
08:09 AM
Hello, I've reviewed the errors you have provided and this error appears to be coming from the underlying crypto library. According to the RFC 5280 standard, the CN and Description fields each must not exceed 64 characters in length. This character limit is hard-coded in the OpenSSL framework and cannot be altered without changing the OpenSSL source code. The Subject Alt Name field has a much longer character limit. Unfortunately the log data you provided is truncated, and it is difficult to tell precisely what was being performed, and with what options, when the failure occurred. It would appear as though we use the following call to obtain the hostname for the CN field during the init process.
hostname = socket.gethostname()
It's fairly trivial to create a short one-line Python command to see what this returns. Can you please use something like this and count the characters in the output?
python -c "import socket; hostname = socket.gethostname(); print hostname;"
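A quick way to check the result against the 64-character limit, a minimal sketch that simply prints the hostname the same Python call returns along with its length:
# Print the hostname as Python sees it, then its character count.
python -c "import socket; h = socket.gethostname(); print(h); print(len(h))"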
01-18-2019
06:46 AM
Hi, It's not uncommon for our documentation team to cross-link pages to avoid duplication within our documentation. That said, I'll pass your feedback along to our documentation team.
01-16-2019
04:11 PM
Hello, The KMS service within the Hadoop framework is responsible for handling key material. The KMS is not responsible for encrypting or decrypting data. The KTS is not connected to access control over data. All encrypted data handling occurs within the DFS client framework.
1.) You will need to review and understand the concepts laid out in our documentation and upstream related to securing the KMS. Cloudera ships a secure-by-default ACL configuration. New keys are not automatically allotted any access controls; no users are authorized to access new keys which have undefined Access Controls. The KMS ACL engine is designed to control key release, and it is not in any way connected to the underlying HDFS POSIX controls. The ACL engine indirectly controls access to encrypted data by controlling access to key material. https://www.cloudera.com/documentation/enterprise/5-16-x/topics/cdh_sg_kms_acl_config.html
2.) Your question here is moderately confusing. HDFS encryption is transparent to the DFS client. If a user is authorized to perform decrypt EEK operations, they may view the encrypted data. Raw encrypted data is not normally visible to clients in the way I believe you are attempting to describe, outside the context of the raw endpoint exposed to supergroup users.
3.) You can access the raw data endpoint as a superuser if you would like to verify that the data is encrypted. This is documented publicly both upstream and in our documentation.
hdfs dfs -ls /.reserved/raw/
4.) The generate EEK operation is handled internally by the HDFS service user and is not normally exposed to operators. If you are a Cloudera customer, you should reach out to your account team for additional training and details.
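For illustration of point 1, per-key release in the Hadoop KMS is governed by entries in kms-acls.xml (in Cloudera Manager these are typically set through the KMS ACL configuration). A minimal sketch, where the key name mykey, the user hive, and the group hive-users are hypothetical:
<!-- Format is "user1,user2 group1,group2": here the hive user and hive-users group may decrypt EEKs for key "mykey". -->
<property>
  <name>key.acl.mykey.DECRYPT_EEK</name>
  <value>hive hive-users</value>
</property>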
01-16-2019
03:56 PM
1 Kudo
We have started the process of certifying certain OpenJDK versions for some releases of CDH. However, full JDK feature convergence between the Oracle JDK and OpenJDK is not planned to occur until OpenJDK 11.
01-16-2019
01:35 PM
Hello, You are correct, we have removed public access for archive-primary.cloudera.com. Customers and users should now access archive.cloudera.com, which is part of a content delivery network (CDN). This change was made effective Jan 9, 2019.
01-16-2019
07:36 AM
The page you've referenced cross-links to the one shown below. At the time of this writing we have only certified 1.8u181. Newer versions may work without issue, and we generally support them, but we have not certified any releases beyond what is shown on the page at this time.
https://www.cloudera.com/documentation/enterprise/6/release-notes/topics/rg_java_requirements.html#java_requirements
01-11-2019
08:54 AM
1 Kudo
Issues with the links in /etc/alternatives are normally caused by corruption of the alternatives subsystem in the operating system. We do not own this component, and we've filed requests with Red Hat in the past in an effort to address it. The corruption can occur for a variety of reasons, but the solution is usually fairly simple. The troubleshooting steps you have taken, however, may make the situation worse, specifically steps 3 and 4. The information in /etc typically impacts clients and not service daemons. If the upgrade completed without error, the downgrade may have sweeping impacts that can break other services unless you have also rolled back all of the metadata services manually using backups. I'd recommend that you move back to the newer release of CDH. On each host where you are having the problem, the typical steps to resolve this issue are as follows, assuming nothing else is broken.
1. Stop all roles on the host.
2. Stop the agent.
3. Ensure that the agent and supervisord are fully stopped.
4. Verify the output of the following command and ensure that it only captures information related to Cloudera.
\ls -l /etc/alternatives/ | grep "\/opt\/cloudera"
5. Run the following shell script to clean up and remove all alternatives related to Cloudera parcels.
$ \ls -l /etc/alternatives/ | grep "\/opt\/cloudera" | awk {'print $9'} | \
while read m; do if [[ -e /var/lib/alternatives/${m} ]] ;then echo "Removing ${m}"; \
rm -fv /var/lib/alternatives/${m} ; fi; rm -fv /etc/alternatives/${m}; done
6. Start the agent and review the log data in /var/log/cloudera-scm-agent/cloudera-scm-agent.log. You should see the agent attempt to create new alternatives for your parcels.
7. Verify where the alternatives point and restart the roles on the host.
01-08-2019
08:39 AM
Hi, We are sorry you are still encountering this issue. Since we know that the problem is isolated to TLS and that the agent is reporting a null certificate chain, you will need to isolate why the certificate chain is null.
1.) Ensure that the certificates are in a standard x509 format for the agent.
2.) Ensure that the truststores/keystores on the CM host are in JCEKS format and not PKCS12.
3.) Make sure that the cloudera-scm user can read the private key, certificates, truststores, and password files.
4.) Make sure that the certificate on the failing agent contains a proper CN, and DNS Alt Names if Alt Names are in use.
5.) Are you using self-signed certificates or certificates signed by a CA?
6.) If all else fails, you can obtain a tcpdump of the attempted communication with the server; see the sketch after this list. The port that we normally heartbeat to is 7182. You can then review the conversation between the server and agent to identify at what point the error is returned and, potentially, what error is being observed at the protocol level. You can identify and restrict your tcpdump information by tcp.stream.
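A minimal capture sketch for step 6, run on the agent host (the interface choice and output file name are examples):
# Capture the agent-to-server heartbeat traffic on TCP port 7182 for later review in Wireshark.
tcpdump -i any -s 0 -w /tmp/cm-heartbeat-7182.pcap port 7182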
12-17-2018
10:24 AM
Hello, Can you try commenting out this line in your nginx configuration?
> proxy_set_header X-Forwarded-Proto https;
The error that you are reporting now is being returned directly by Nginx. It means that something is trying to use plain text instead of TLS. You may also have to alter the way the server is configured. At the moment you have set Nginx to accept TLS requests only, which may impact your ability to proxy to the backend since it does not use TLS. You may need to alter the server block; for example, you may need to comment out:
> ssl on
Then alter the listen parameter on Nginx like so:
> listen 8001 ssl;
https://docs.nginx.com/nginx/admin-guide/security-controls/terminating-ssl-tcp/
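Put together, a minimal server-block sketch along those lines might look like the following; the certificate paths and the upstream name hue_backend are examples for illustration, not a definitive configuration:
server {
    # Terminate TLS on this port instead of using a separate "ssl on" directive.
    listen 8001 ssl;
    ssl_certificate     /etc/nginx/ssl/server.crt;
    ssl_certificate_key /etc/nginx/ssl/server.key;

    location / {
        # Proxy to the plain-HTTP Hue backend; hue_backend is an upstream block defined elsewhere.
        proxy_pass http://hue_backend;
        proxy_set_header Host $host;
    }
}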
12-14-2018
12:46 PM
Hi, This is quite unusual; the configuration normally doesn't change like this. Can you please log in to Cloudera Manager, go to the following location, and check whether it has been set to something unexpected?
CM -> Administration -> Settings -> Cloudera Manager Hostname Override
If the value is blank, which is the default, it may indicate that the result of InetAddress.getLocalHost() is incorrect, which can be caused by a number of things including entries in /etc/hosts. If you are certain that DNS works properly and that there are no erroneous entries in /etc/hosts, you can try setting the Hostname Override. Then restart both Cloudera Manager and the Management services.
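To double-check local name resolution on the Cloudera Manager host, a quick sketch using standard commands (mismatches between these results and DNS often explain an unexpected getLocalHost() value):
# What the host believes its fully qualified name is, and how that name resolves locally.
hostname -f
getent hosts $(hostname -f)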
12-14-2018
09:43 AM
Hi Dennis, This thread is pretty old and marked solved. It might be a good idea to start a new thread instead of adding more to this one. The stack trace you provided appears to be a partial one, though I could be wrong. The stack trace we do have appears to point to a problem with TLS. Have you already configured TLS in CM or on the Agent? Are you attempting to use Auto-TLS? If you are, have you properly set up all of the hosts?
12-14-2018
09:25 AM
If your production environment will lack internet access, I highly recommend that you work with members of your account team and/or infrastructure teams to set up an internal mirror/repository which you can point your installation to. Setting this up now will allow you to be better prepared for later deployments. You should review this set of documentation, more specifically the documentation related to setting up and using an internal parcel or package repository. Please be advised that the links below are for 6.x and may or may not match the release you intend to deploy.
Custom Installations: https://www.cloudera.com/documentation/enterprise/latest/topics/cm_ig_custom_installation.html
Internal Parcel: https://www.cloudera.com/documentation/enterprise/latest/topics/cm_ig_create_local_parcel_repo.html
Internal Package: https://www.cloudera.com/documentation/enterprise/latest/topics/cm_ig_create_local_package_repo.html
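As a rough illustration of the internal parcel repository approach from those docs, a minimal sketch (the directory, port, and parcel file names are examples; the manifest.json must accompany the parcels):
# Stage the parcel files and manifest, then serve them over HTTP so the URL can be added
# to Cloudera Manager as a Remote Parcel Repository URL.
mkdir -p /opt/parcel-repo/cdh6
cp CDH-6.2.0-*.parcel CDH-6.2.0-*.parcel.sha manifest.json /opt/parcel-repo/cdh6/
cd /opt/parcel-repo && python -m SimpleHTTPServer 8900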
12-14-2018
09:17 AM
Outside of the items below, I am not seeing anything else that might be wrong with your configuration. Are you certain that TLS is not already enabled on Hue? You seem to have proxy_set_header Host twice in the first location path. Can you please remove the one shown below?
> proxy_set_header Host $host;
Also please uncomment the following line under the static location. This alias is required if you deployed using parcels; if you deployed using packages, the path will be slightly different.
> #alias /opt/cloudera/parcels/CDH/lib/hue/build/static/;
If you used packages:
# If Hue was installed with packaging install:
## alias /usr/lib/hue/build/static/;
12-13-2018
06:56 AM
Hello, While we do not provide support directly for Nginx, reviewing the log data you have posted it would appear as though the Hue backend you are attempting to proxy to on Node 1 is not accepting incoming requests. Are you sure that there are no firewalls between the proxy and the node you are connecting to? Are you sure that Hue is available at the address you have configured?
2018/12/11 20:04:18 [error] 19352#19352: *14 connect() failed (111: Connection refused) while connecting to upstream, client: <client_ip>, server: gravalytics.com, request: "GET /favicon.ico HTTP/1.1", upstream: "http://<node1_ip>:8888/favicon.ico", host: "gravalytics.com:8001", referrer: "https://gravalytics.com:8001/"
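A quick way to test that from the proxy host itself, a minimal sketch (substitute the real Hue host for the <node1_ip> placeholder):
# Confirm the Hue web server is reachable and answering on port 8888 from the proxy host.
curl -v http://<node1_ip>:8888/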
12-11-2018
09:16 AM
Hello, In order to help you with this issue we will need to see more log data, or you will need to review the log data yourself. A 404 is a generic error code which, in your scenario, may originate from one of two places: directly from Nginx, or from Hue. You will need to review the Nginx logs and Hue logs to determine what is returning the 404 and for what resource. One way to make this easier is to remove one of your upstream servers from the server group so that it proxies to only one Hue instance while you investigate the 404 condition.
12-11-2018
09:04 AM
Hello, Changes to the YARN configuration should not have direct impacts on the Management services or CM. The error you have reported is routed through the JDBC driver. Communication link failures, where packets have not been received over a period of time, are generally caused by a problem on the backing database system.
com.mchange.v2.resourcepool.BasicResourcePool$AcquireTask@20bf4111 -- Acquisition Attempt Failed!!! Clearing pending acquires. While trying to acquire a needed new resource, we failed to succeed more than the maximum number of allowed acquisition attempts (5). Last acquisition attempt exception: com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure
The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.
We would suggest reviewing the database logging data to ensure that the connection is not being rejected for any particular reason. Certain versions of MySQL track so-called "bad" clients and store them in a table after a certain number of failed communication attempts.
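If you suspect that MySQL host-blocking behavior, a minimal sketch of what to check on the database side (FLUSH HOSTS clears the blocked-host cache; credentials here are examples):
# Check how many failed connections MySQL tolerates before blocking a host, then clear the host cache.
mysql -u root -p -e "SHOW VARIABLES LIKE 'max_connect_errors'; FLUSH HOSTS;"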
12-05-2018
07:13 AM
1 Kudo
Hello, CDH 6 is classified as a major upgrade. CDH 5 is presently based on Hadoop 2.x, whereas CDH 6 has moved forward to Hadoop 3.x. There are many feature enhancements and changes across the platform related to enhanced capabilities, performance, and security. If you desire features only available in newer releases of Hadoop and its components, then CDH 6 may be the version for you, though we do intend to maintain CDH 5 in accordance with our EOL policy. The overall life of CDH 5 is subject to change based on current external activities. If you are planning a new cluster deployment that does not yet have any data, it may be a good idea to make this decision early. While there is an upgrade path from CDH 5 to CDH 6, we have restricted those paths to certain releases, though we expect to address additional releases as time moves forward.
12-04-2018
07:03 AM
Hello AKB, Unfortunately the answer to your question is no. It will not be easier or better to rely solely on TLS termination on a reverse proxy. For most balancing/proxying algorithms, hardware, and software we recommend TCP passthrough, which means that all Hadoop services must still have TLS properly deployed as well as enabled. If your cluster is accessible from any external network, we would advise that you properly deploy both Kerberos and TLS on your cluster.
11-20-2018
09:05 AM
Hello, Is this something you are still actively seeing? The 5.16 release has not yet been made available. If you are still having trouble obtaining the proper repo information please update this post so that we can work with our teams internally.
11-19-2018
08:08 AM
> java.security.cert.CertificateException: No subject alternative DNS name matching abc found.
Hi, This error is important to note, as it would appear to mean that a certificate is now available to the client. The balancing algorithm really has no bearing on this particular issue, and you must address it. By RFC standard, if you use Subject Alt Names (SAN) and a CN, the very first entry in the DNS Alt Name field must be the CN of the certificate. The error tells us that abc is not the first entry in the DNS Alt Names (SAN). You need to review the CN and Subject/DNS Alt Names on the certificates in use by HiveServer2.
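One way to inspect which names the HiveServer2 certificate actually presents, a minimal sketch (the hostname is an example, and 10000 is the usual HiveServer2 binary port; adjust for your deployment):
# Pull the certificate presented by HiveServer2 and print its Subject and Subject Alternative Names.
echo | openssl s_client -connect hs2.example.com:10000 2>/dev/null | \
  openssl x509 -noout -text | grep -E -A1 "Subject:|Subject Alternative Name"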
11-15-2018
08:32 AM
1 Kudo
@balusu If you are a licensed customer using Key Trustee, please open a case immediately. While we would like to help you with this on the community, parts of the diagnostic process on Key Trustee Server and its clients may expose sensitive information from your environment. DO NOT arbitrarily attempt to sync the KMS client data again without diagnostics performed by Cloudera! Syncing the client data again without working with us may result in unexpected data loss. There may be less risk if this is a POC, DEV, or a new environment, but at this point in time that is not visible to us. When we are working with the Key Trustee KMS component, and not the Key Server, there are no active or passive delegations; all configured Key Management Server Proxies are used within the cluster.