Member since: 04-22-2014
Posts: 1218
Kudos Received: 341
Solutions: 157
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 19668 | 03-03-2020 08:12 AM |
| | 10668 | 02-28-2020 10:43 AM |
| | 3203 | 12-16-2019 12:59 PM |
| | 2550 | 11-12-2019 03:28 PM |
| | 4343 | 11-01-2019 09:01 AM |
08-22-2018
04:06 PM
@kedarw, Please open a new topic when you hit an issue similar to one that was already resolved, so we can reduce confusion in our community posts. To your issue: how are you trying to install? http://archive.cloudera.com/cm5/redhat/7/x86_64/cm/5/RPMS/x86_64/ resolves to the latest release. If you want 5.15.0, please use: http://archive.cloudera.com/cm5/redhat/7/x86_64/cm/5.15.0/RPMS/x86_64/ If you can clarify what you are trying to do, we will be able to assist.
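One way to stay on the 5.15.0 tree is to point yum at the versioned path above via a repo file. This is a sketch: the repo filename and gpgkey path follow the usual Cloudera archive layout but should be checked against your environment.

```shell
# Write a yum repo file pinned to the 5.15.0 path from the archive
# (filename and gpgkey location are typical, not taken from the post).
cat > /tmp/cloudera-manager.repo <<'EOF'
[cloudera-manager]
name=Cloudera Manager 5.15.0
baseurl=http://archive.cloudera.com/cm5/redhat/7/x86_64/cm/5.15.0/
gpgkey=http://archive.cloudera.com/cm5/redhat/7/x86_64/cm/RPM-GPG-KEY-cloudera
gpgcheck=1
EOF
grep '^baseurl=' /tmp/cloudera-manager.repo
# Then: cp /tmp/cloudera-manager.repo /etc/yum.repos.d/
#       yum install cloudera-manager-server
```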
08-22-2018
09:02 AM
@AKB, No. Your client will communicate with the NameNode directly over the network. It does not need to authenticate to the host.
08-22-2018
08:07 AM
1 Kudo
@saikrishnamante, Ubuntu 16 has been supported for some time and is indeed supported for CDH 5.15.x: https://www.cloudera.com/documentation/enterprise/release-notes/topics/rn_consolidated_pcm.html#concept_jpd_hpz_jdb Also, it appears you fixed your own issue and the update was successful. The warning is just a warning; Cloudera will be updating signature keys to SHA-256 as of CM/CDH 6. Based on your output there are no errors, so we expect the update succeeded. Verify with dpkg. I believe "apt-get update" is otherwise silent, so you would not see any messages below warning severity.
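A quick way to do that dpkg verification might look like the following sketch; it assumes the standard cloudera-manager-* package naming on Ubuntu.

```shell
# List installed Cloudera Manager packages and their versions;
# the version column should show the release you just updated to.
dpkg -l 'cloudera-manager-*'

# Compare installed vs. candidate version for one package.
apt-cache policy cloudera-manager-server
```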
08-06-2018
08:26 AM
1 Kudo
10002 is the HiveServer2 web UI port, and it should be freed when HiveServer2 shuts down. The netstat output shows that some client is connected to your HiveServer2 UI port. Try to figure out what that client is and what it is doing, since it is a bit unusual for a connection to the HiveServer2 UI to last very long; finding out what is running on 87.92.98.123 would be a good start.

How are you stopping HiveServer2? If you are stopping it from Cloudera Manager and it fails to stop completely, that indicates HiveServer2 itself may be unable to shut down because the client is holding the port open. If that happens, you can try collecting jstack output by going to the HiveServer2 page in the Cloudera Manager UI and choosing Collect Stack Traces from the "Actions" menu. That should help explain why HiveServer2 isn't stopping.

Also, get "ps aux" output for the PID found in the "netstat" output (22735 in the one you shared), so we can verify that the process is indeed HiveServer2.
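Extracting the PID from a netstat line can be scripted as below. This is a sketch: the sample line mimics the format of the output discussed above, and the addresses are illustrative.

```shell
# Sample netstat -nap line (format only; addresses are made up, PID from the post).
line='tcp 0 0 10.0.0.5:10002 87.92.98.123:49158 ESTABLISHED 22735/java'

pid="${line##* }"   # last field, e.g. "22735/java"
pid="${pid%%/*}"    # strip the program name, leaving "22735"
echo "$pid"

# On the live host, check what that PID actually is:
#   ps aux | awk -v p="$pid" '$2 == p'
```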
08-06-2018
08:09 AM
@hadoopNoob, "Address already in use" means that HiveServer2 cannot bind to a port to listen because the port is already in use by another process. HiveServer2 uses ports 10000 and 10002 by default. You can run "netstat -nap | grep <port>" for each if your HiveServer2 uses the defaults. Once you find out what process is using the port, you can stop or kill it (using the proper caution depending on what it is).
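A small loop over both default ports can be sketched like this; it only reports what is bound, it does not kill anything.

```shell
# Check what, if anything, holds HiveServer2's default ports
# (10000 and 10002, per the defaults mentioned above).
for port in 10000 10002; do
  echo "== port $port =="
  netstat -nap 2>/dev/null | grep -w ":$port" || echo "nothing found on $port"
done
```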
07-31-2018
12:13 PM
@krb, Make sure your /etc/krb5.conf is configured correctly so that zookeeper sends its AS_REQ to the right KDC. If you have just changed from one KDC to another, /etc/krb5.conf also needs to be updated; if it is not managed by Cloudera Manager, it needs to be changed manually. Either way, you can run a tcpdump on port 88 and check whether outgoing requests are actually going to the new KDC.
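Checking which KDC the client libraries will contact can be done by reading the realm stanza out of krb5.conf, then watching the wire. This sketch uses a sample file; the realm and hostname are placeholders.

```shell
# Example realm stanza (placeholders, written to a temp copy for illustration).
cat > /tmp/krb5.conf <<'EOF'
[realms]
 EXAMPLE.COM = {
  kdc = new-kdc.example.com
  admin_server = new-kdc.example.com
 }
EOF

# Pull out the configured KDC hostname.
awk '/kdc =/ {print $3}' /tmp/krb5.conf

# On the zookeeper host, confirm AS_REQs actually reach that KDC:
#   tcpdump -i any -nn port 88
```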
07-25-2018
12:59 PM
@Riteshk, In the logs you provided, there are links to view the application logs. The error 143 return code does not necessarily mean there was a memory problem. In order to tell what happened to the containers, you'll need to look more closely at the logs for that job.
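Pulling the aggregated container logs from the command line might look like this sketch; the application id is a placeholder, so substitute the id shown in your ResourceManager UI or job output.

```shell
# Fetch aggregated logs for the finished job (application id is a placeholder).
yarn logs -applicationId application_1532000000000_0001 > app_logs.txt

# Scan for the container exit reason.
grep -i -A2 'killed\|error' app_logs.txt | head -50
```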
07-25-2018
12:49 PM
@Huriye, I could be wrong, but now it appears to be breaking earlier (before a connection is attempted). Also, I note that you are using mysql-connector-java-8.0.11.jar, which we do not currently support and which we know will cause problems even if you get past this issue. What command line did you use to generate this error? The error appears to say there is a problem with the syntax of the query, and the query is generated from the options you supplied. Is it possible you have mismatched quotes around your password?
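On the quoting point, one pattern that avoids shell surprises is single-quoting the password. This is only a sketch: the connect string, username, password, and table below are placeholders, not taken from the original post.

```shell
# Single quotes stop the shell from expanding or splitting the password,
# so the query sqoop generates matches what you typed.
sqoop import \
  --connect 'jdbc:mysql://db-host:3306/sales' \
  --username etl \
  --password 'p@ss$word!' \
  --table orders
```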
07-25-2018
12:17 PM
1 Kudo
@Anudas, Migration from the embedded database to mysql is a bit of an undertaking, and Cloudera does not currently have public documentation explaining how to do it. If you are OK with migrating to an external postgres instead, consider doing so, as it is much simpler and documented: https://www.cloudera.com/documentation/enterprise/latest/topics/cm_ag_migrate_postgres_db.html

If you are using the embedded database for CM, it is likely you are also using it for your other roles like Hive Metastore, Sentry, Hue, Oozie, etc. Each has its own method of migrating. For CM, the basics are to:
- export your deployment descriptor from your current database (via CM)
- prepare the mysql db (as you have done)
- import the exported deployment descriptor into CM backed by your mysql db
- remove the cloudera-manager-server-db-2 package

Have you considered whether you need the existing database information for other services? My suggestion is to identify what database information you can do without and start clean if possible. If not, this message board can help with the CM migration, but for the other services you should consult other community boards to get the best tips. If you are still interested in trying the mysql migration, I can post some basic steps to try.
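The export/import steps above can be driven through the CM REST API. This is a sketch under assumptions: the admin credentials, hostname, and API version (check /api/version on your CM) are placeholders.

```shell
# Export the deployment descriptor from the current CM database
# (cm-host, admin/admin, and v19 are placeholders for your environment).
curl -u admin:admin \
  http://cm-host:7180/api/v19/cm/deployment > deployment.json

# ... after re-pointing CM at the prepared mysql database ...

# Import the descriptor into the new database-backed CM.
curl -u admin:admin -X PUT -H 'Content-Type: application/json' \
  --upload-file deployment.json \
  'http://cm-host:7180/api/v19/cm/deployment?deleteCurrentDeployment=true'
```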
07-24-2018
12:15 PM
@yongie, The Permission Denied message indicates that your hadoop command is authenticating as the user "admin". As you can see, the user "admin" does not have privileges to write to the /user directory. To have a non-hdfs user write to /user with the permissions as they are, that "admin" user will need to be a superuser. If you are not interested in making other users superusers, the other option is to kinit as hdfs. Basically, you need to create a user in your KDC with the name "hdfs" and with the user principal name hdfs@realm. See this page for details on all of the above: https://www.cloudera.com/documentation/enterprise/latest/topics/cm_sg_s5_hdfs_principal.html Ben
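Once the hdfs principal exists, the retry might look like this sketch; the realm, directory, and username are placeholders.

```shell
# Authenticate as the HDFS superuser (realm is a placeholder for your realm).
kinit hdfs@EXAMPLE.COM
klist                                   # confirm the ticket was obtained

# Retry the write that previously failed, then hand the directory
# back to the intended owner (names are illustrative).
hadoop fs -mkdir -p /user/newuser
hadoop fs -chown newuser /user/newuser
```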