Member since: 10-01-2018
Posts: 802
Kudos Received: 143
Solutions: 130
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 3072 | 04-15-2022 09:39 AM |
| | 2475 | 03-16-2022 06:22 AM |
| | 6554 | 03-02-2022 09:44 PM |
| | 2907 | 03-02-2022 08:40 PM |
| | 1916 | 01-05-2022 07:01 AM |
11-29-2020 09:25 AM
@GregDol Please follow the documentation below and make sure the repository is updated to use Red Hat Satellite; that should fix the issue. https://docs.cloudera.com/HDPDocuments/Ambari-2.6.1.0/bk_ambari-installation/content/using_a_local_redHat_satellite_spacewalk_repo.html
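Once the host is subscribed to the Satellite channel, a quick sanity check before retrying the Ambari install could look like this (a minimal sketch; the repo id grepped for is a placeholder, not a name from the doc):

```bash
# Confirm the host is registered to Satellite and the channel is enabled.
subscription-manager repos --list-enabled

# The Ambari repo should now be visible to yum ("ambari" is an assumed repo id).
yum repolist enabled | grep -i ambari

# Verify that packages actually resolve from the new repository.
yum info ambari-server
```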
11-29-2020 09:20 AM
1 Kudo
@bvishal Once you have installed the software suite you no longer need the /var/www/ content, so it is safe to remove it and reuse the space for whatever you install next. You are good to go. The method you described (moving the content to a different directory and serving it from there with an HTTP server) is also fine, but it is not needed at this point since the cluster is already built; the local repository is only a temporary aid for installation. For reference: https://docs.cloudera.com/cdp-private-cloud-base/7.1.3/installation/topics/cdpdc-using-internally-hosted-remote-parcel-repo.html
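If you do need to re-host the content later, a minimal sketch of the approach from that doc follows (the directory and port are illustrative values, not requirements):

```bash
# Serve an internally hosted parcel directory over HTTP.
# /opt/parcel-repo and port 8900 are example values only.
mkdir -p /opt/parcel-repo
cp /var/www/html/cloudera-repos/*.parcel* /opt/parcel-repo/   # example source path

# Any simple HTTP server works; CM only needs to fetch files from a URL.
cd /opt/parcel-repo
python3 -m http.server 8900
# Then add http://<this-host>:8900/ to CM's "Remote Parcel Repository URLs".
```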
11-27-2020 09:43 AM
@WayneWang This thread might help you: https://community.cloudera.com/t5/Support-Questions/Changing-rack-awareness-in-a-running-Hadoop-cluster-in/m-p/56473/highlight/true#M48693
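As a starting point, you can print the rack each DataNode is currently assigned to before and after making any change (standard HDFS admin command):

```bash
# Show the current rack topology as seen by the NameNode.
hdfs dfsadmin -printTopology
```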
11-27-2020 09:41 AM
@backlashhardik I am wondering if this is due to a RHEL version difference, since you are trying to add RHEL 6.9 hosts while CM is running on RHEL 7. Have you tried installing the agent manually on the host and then adding it to CM?
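A rough sketch of the manual agent route (the package names are Cloudera's standard agent packages; the CM hostname is a placeholder):

```bash
# On the RHEL 6.9 host: install the agent packages from a matching CM repo.
yum install -y cloudera-manager-agent cloudera-manager-daemons

# Point the agent at the CM server (cm-host.example.com is a placeholder).
sed -i 's/^server_host=.*/server_host=cm-host.example.com/' \
    /etc/cloudera-scm-agent/config.ini

# RHEL 6 uses SysV init, not systemctl.
service cloudera-scm-agent start
```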
11-24-2020 05:22 AM
@md186036 Yes, that is possible. Please follow the steps in the documentation below; it describes the rollback procedure for your use case. https://docs.cloudera.com/documentation/enterprise/upgrade/topics/ug_cm_downgrade.html
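At a very high level the procedure boils down to something like the following (a sketch only; the doc above is authoritative, and 6.2.0 is a placeholder target version):

```bash
# Stop Cloudera Manager, roll the server packages back, restore the
# pre-upgrade database backup, then start CM again.
service cloudera-scm-server stop
yum downgrade cloudera-manager-server-6.2.0 cloudera-manager-daemons-6.2.0
# ...restore the CM database from the backup taken before the upgrade...
service cloudera-scm-server start
```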
11-23-2020 11:54 PM
@vijaypabothu You have to make sure the firewall is disabled and the port is open and listening; only then will this work. Try opening port 7180 and disabling the firewall first, then hard-restart the agent. https://creodias.eu/-/how-to-open-ports-in-linux-
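On a firewalld-based system the checks could look like this (a sketch; adjust to your distro's firewall tooling):

```bash
# Check whether anything is listening on the CM web UI port.
ss -tlnp | grep 7180

# Either open the port...
firewall-cmd --permanent --add-port=7180/tcp && firewall-cmd --reload
# ...or disable the firewall entirely while testing.
systemctl stop firewalld

# Hard-restart the CM agent so it re-registers with the server.
service cloudera-scm-agent hard_restart
```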
11-23-2020 11:39 PM
@vijaypabothu The full log file would give a better picture of this issue. However, it can be a symptom of a few configuration problems, e.g.:
1. The /tmp/hive directory in HDFS does not have 755 permissions.
2. The /tmp/hive/hive directory is full or running out of space, in which case you have to move files out of it.
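Both conditions are quick to check from the HDFS CLI (standard commands; the paths come from the two points above):

```bash
# 1. Check the permissions on the Hive scratch directory in HDFS.
hdfs dfs -ls -d /tmp/hive
hdfs dfs -chmod 755 /tmp/hive        # fix them if needed

# 2. Check how much space the per-user scratch dirs are consuming.
hdfs dfs -du -h /tmp/hive
```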
11-23-2020 11:29 PM
@mayank1996 You could try the below:
`./bin/hadoop namenode -recover`
Then initialise the shared edits:
`./bin/hdfs namenode -initializeSharedEdits`
I would encourage you to go through the Cloudera blog posts https://blog.cloudera.com/understanding-hdfs-recovery-processes-part-1/ and https://blog.cloudera.com/understanding-hdfs-recovery-processes-part-2/ for a better understanding. The community discussion will also help you resolve the issue. Check out this one as well: http://www.augmentedintel.com/wordpress/index.php/recover-corrupt-hdfs-namenode/
11-23-2020 10:06 PM
@avengers Yes, only the SSL port will work for an SSL connection; that is expected. So first rectify the ports and see whether the error comes back. Then testing the SSL connection with the command below (or something similar) will help you determine the issue:
`openssl s_client -verify 100 -showcerts -CAfile /etc/tls-certs/certificate.pem -connect <Impalahost>:25000`
11-23-2020 10:00 PM
@avengers The parameter disables the certificate check; it is independent of whether the certificate is self-signed or CA-signed: ssl_cert_ca_verify=False
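Assuming this is Hue's setting of the same name (an assumption based on the parameter name), it would go in hue.ini, for example:

```ini
[desktop]
# Assumption: this is Hue's hue.ini option (or the equivalent CM safety valve).
# False skips CA verification regardless of who signed the certificate.
ssl_cert_ca_verify=False
```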