Member since: 02-23-2018
Posts: 108
Kudos Received: 12
Solutions: 9
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 1228 | 04-11-2019 12:32 AM |
| | 2032 | 03-28-2019 04:14 AM |
| | 1357 | 01-22-2019 12:20 AM |
| | 1629 | 09-25-2018 03:26 AM |
| | 3396 | 09-14-2018 04:58 AM |
03-28-2024
01:01 PM
3 Kudos
You might want to make sure /etc/cloudera-scm-server/db.properties is pointing to the right host, database, and port.
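A minimal, self-contained sketch of how to check those values. The com.cloudera.cmf.db.* property names are the standard keys in that file, but verify against your own installation; the sample values below are hypothetical and only stand in for a real db.properties:

```shell
# Sketch only: the property names are Cloudera Manager's standard db.properties
# keys, but the values here are made-up sample data for illustration.
# On a real host, inspect /etc/cloudera-scm-server/db.properties directly.
cat > /tmp/db.properties <<'EOF'
com.cloudera.cmf.db.type=postgresql
com.cloudera.cmf.db.host=db-host.example.com:7432
com.cloudera.cmf.db.name=scm
com.cloudera.cmf.db.user=scm
EOF
# Show the host, port, and database the SCM server will try to connect to:
grep -E '^com\.cloudera\.cmf\.db\.(type|host|name)=' /tmp/db.properties
```

If the host or port shown there does not match where your database actually runs, the SCM server will fail to start.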
04-06-2022
11:30 AM
Hi @sarkarsm, can you please show me how you did the Hive integration? I am relatively new, so I am struggling with the configurations.
12-22-2020
02:29 PM
Hello @tjangid, thanks for your response. I was using Google Cloud and reaching the Cloudera Manager port through my external URL plus the port number: http://<external url>:7180. How do I run the curl -u command? Thanks,
06-23-2019
08:37 PM
1 Kudo
This looks like a case of edit logs getting reordered. As @bgooley noted, it is similar to HDFS-12369, where the OP_CLOSE appears after OP_DELETE, causing the file to be absent when replaying the edits.

The simplest fix, if this is the only file affected by the reordering in your edit logs, is to run the NameNode manually in edits-recovery mode and "skip" this edit when it catches the error. The rest of the edits should apply normally and let you start up your NameNode. The NameNode's recovery mode is detailed at https://blog.cloudera.com/blog/2012/05/namenode-recovery-tools-for-the-hadoop-distributed-file-system/

If you're using CM, you'll need to use the NameNode's most recently generated configuration directory under /var/run/cloudera-scm-agent/process/ on the NameNode host as the HADOOP_CONF_DIR, while logged in as the 'hdfs' user, before invoking the manual NameNode startup command. Once you've followed the prompts and the NameNode appears to start up, quit/kill it and restart it from Cloudera Manager normally.

If you have a Support subscription, I'd recommend filing a case for this, as the process could get more involved depending on how widespread the issue is.
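The steps above can be sketched roughly as follows. The process-directory naming pattern is an assumption based on a typical CM-managed host, and a temporary directory stands in for /var/run/cloudera-scm-agent/process so the config-dir lookup is runnable here; adjust the paths on your actual NameNode host:

```shell
# Hypothetical sketch of the recovery procedure; paths assume a CM-managed host.
# Stand-in for /var/run/cloudera-scm-agent/process on a real NameNode host:
mkdir -p /tmp/process/5678-hdfs-NAMENODE
# Pick the most recently generated NameNode config directory as HADOOP_CONF_DIR:
export HADOOP_CONF_DIR=$(ls -td /tmp/process/*NAMENODE* | head -1)
echo "Using HADOOP_CONF_DIR=$HADOOP_CONF_DIR"
# As the 'hdfs' user, start the NameNode in recovery mode and follow the prompts
# to skip the bad edit (commented out here; requires a real cluster):
#   sudo -u hdfs hdfs namenode -recover
# Once recovery finishes and the NameNode starts cleanly, stop it and restart
# the role from Cloudera Manager as usual.
```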
06-18-2019
06:36 PM
The idle session timeout didn't help me. It seems Hue keeps the Impala connection open and keeps communicating with the Impala coordinator while the query is in flight. The only way to close an open connection is to log off from Hue or close the Hue session from http://<<Query-Coordinator>>:25000/sessions.
05-23-2019
01:56 AM
Thanks @bgooley! That confirms that we are not totally off track. Best regards, rdbb
04-25-2019
10:26 AM
This is a Hive Metastore health test that checks that a client can connect and perform basic operations. The operations include:
1. creating a database,
2. creating a table within that database with several types of columns and two partition keys,
3. creating a number of partitions, and
4. dropping both the table and the database.

The database is created under /user/hue/.cloudera_manager_hive_metastore_canary/<Hive Metastore role name>/ and is named "cloudera_manager_metastore_canary_test_db". The test returns "Bad" health if any of these operations fail, and "Concerning" health if an unknown failure happens.

The canary publishes a metric 'canary_duration' for the time it took the canary to complete. Here is an example of a trigger, defined for the Hive Metastore role configuration group, that changes the health to "Bad" when the canary takes longer than 5 seconds:

"IF (SELECT canary_duration WHERE entityName=$ROLENAME AND category = ROLE and last(canary_duration) > 5s) DO health:bad"

A failure of this health test may indicate that the Hive Metastore is failing basic operations. Check the logs of the Hive Metastore and the Cloudera Manager Service Monitor for more details. This test can be enabled or disabled using the Hive Metastore Canary Health Test monitoring setting for the Hive Metastore.

Ref: https://www.cloudera.com/documentation/enterprise/5-7-x/topics/cm_ht_hive_metastore_server.html#concept_p03_hon_yk
04-11-2019
12:32 AM
Hi @DataMike, you need to restart all affected components; otherwise the change will not take effect. Regards, Manu.
03-28-2019
08:53 AM
Happy to hear the issue is resolved! Just for completeness, did you delete /var/lib/cloudera-scm-agent/cm_guid on the cluster nodes?