Member since
08-29-2013
79
Posts
37
Kudos Received
20
Solutions
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 7924 | 07-28-2015 12:39 PM |
| | 3160 | 07-22-2015 10:35 AM |
| | 4407 | 03-09-2015 06:45 AM |
| | 6229 | 10-28-2014 09:05 AM |
| | 15147 | 10-24-2014 11:49 AM |
06-09-2016
05:53 AM
Hello, while I cannot see your attached screenshots yet, this type of condition is usually caused when one (or both) of the Service Monitor and Host Monitor roles are not running. These roles are responsible for gathering and displaying the state of everything else in the cluster, so when you see no charts or tables, these two roles should be the first thing you review. You mention that restarting the Cloudera Management Service sometimes resolves the issue temporarily; Host Monitor and Service Monitor are roles under the Cloudera Management Service, which would explain why a full restart clears the issue for a time. On a smaller cluster like yours, it is likely that Host Monitor and Service Monitor hit an Out of Memory exception and exited unexpectedly. Increasing their heap configuration could help there. I will check again soon to see if I can view your screenshots and provide a fuller answer.
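One quick way to check the Out of Memory theory is to grep the monitor role logs. This is only a sketch: the log directory shown in the comment is the usual Management Service default in my experience (verify on your cluster), and the log file here is a local stand-in so the commands can be run anywhere:

```shell
#!/bin/sh
# Stand-in for a Host/Service Monitor log; on a real node, look under a path
# like /var/log/cloudera-scm-firehose/ for the SERVICEMONITOR/HOSTMONITOR logs.
LOG=$(mktemp)
cat > "$LOG" <<'EOF'
2016-06-09 05:40:12 INFO  Polling for metrics...
2016-06-09 05:41:03 ERROR java.lang.OutOfMemoryError: Java heap space
EOF

# Count OOM occurrences; a nonzero count supports the heap-exhaustion theory.
OOM_COUNT=$(grep -c 'OutOfMemoryError' "$LOG")
echo "OutOfMemoryError occurrences: $OOM_COUNT"
```

If the count is nonzero, raising the role's heap in its configuration is the next step.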
02-17-2016
12:34 PM
1 Kudo
@Spyros P. - just bringing this to closure:
With the release of Cloudera Manager 5.5.0+, the feature you asked about is now available.
http://www.cloudera.com/documentation/enterprise/latest/topics/cm_rn_new_changed_features.html#concept_z5m_1pt_nt_unique_2
"Suppression of notifications - You can suppress the warnings that Cloudera Manager issues when a configuration value is outside the recommended range or is invalid. See Suppressing Configuration and Parameter Validation Warnings. - You can suppress health test warnings. See Suppressing Health Test Results. - Suppression can be useful if a warning does not apply to your deployment and you no longer want to see the notification. Suppressed warnings are still retained by Cloudera Manager, and you can unsuppress the warnings at any time."
Hope this helps,
--
Mark
07-28-2015
12:39 PM
2 Kudos
Hi Spyros,

These are called Validations, and there is not currently a way to suppress them. We're working on a feature to let you acknowledge configuration issues such as these, so that you can confirm you *know* about them but, say, are running a tiny demo cluster where they are not relevant to you. For now, you can simply ignore the validation warnings, since it sounds like you are intentionally demoing a very small cluster and are aware of the configuration you've built.

Regards,
--
Mark
07-22-2015
10:35 AM
1 Kudo
Hello TS,

MySQL is a perfectly good choice as the metadata store for the entities you mention. CM doesn't have a "default RDBMS" per se, but certain installation methods can pull PostgreSQL in for you. It's perfectly fine to elect to use MySQL instead, and I'd encourage it (our documentation, which you can cite to your firm, confirms it is supported [1]). RDBMS choice aside, the most important consideration is making sure that you have planned for and allocated sufficient space (or the ability to easily grow the available space) for the entities that will use the RDBMS. That's the absolute key. Some people love PostgreSQL, others are very savvy with MySQL, and still others may have a mandate to use Oracle 11g in an environment. Great - Cloudera Manager and CDH support any of these options!

As for your questions:

Q1: How often does CM talk to its backend metadata store?
A1: Cloudera Manager remains in constant contact with its metadata store.

Q2: Is it possible to quantify the amount of data traffic from the data store to the apps?
A2: I've not done this recently, but it would be an interesting exercise. Beyond the Cloudera Manager Server itself, several of the other Cloudera Management Services also use an RDBMS (which you may consider placing on the same instance as the one CM uses).

Q3: Should (or must) the database live on the same server as CM? (In my view yes, but I am looking for strong justifications!)
A3: The database instance is not required to be located on the same node as Cloudera Manager Server, but if that's what makes sense in your deployment, it's fine to do so. A colocated database can, in some cases, remove network latency from the picture. On the other hand, if you have access to a dedicated database admin team that can deploy and manage MySQL (while also making sure it is backed by reliable, fast storage), it can make more sense to use that rather than a non-dedicated disk local to the Cloudera Manager Server. Your circumstances will dictate what's best.

Refer to the document 'Storage Space Planning for Cloudera Manager'; it also notes the various services that use an RDBMS and the considerations you should weigh before deploying them.

Regards,
--
Mark S.

[1] - http://www.cloudera.com/content/cloudera/en/documentation/core/latest/topics/cm_ig_cm_requirements.html#cmig_topic_4_4_unique_1
03-09-2015
06:45 AM
1 Kudo
Hello,

Thanks for the log snippet; from the reply, I see a couple of things that need comment.

A) It looks like you attempted to perform an HTTP GET against that /stop endpoint. This endpoint expects an HTTP POST, not GET. Retry with POST in order for this command to succeed. [1]

B) The endpoint you cite (/api/v9/cluster%201/commands/stop) is a high-level command to issue STOP to an entire named cluster. If you want to stop individual services, you must use /clusters/{clusterName}/services/{serviceName}/commands/stop and specify the exact service you wish to stop. [2]

Example using curl, to stop a MapReduce service (meaning ALL of its roles get stopped):

$ curl -X POST -u adminuser:password -v 'http://cluster.example.com:7180/api/v9/clusters/Cluster%201/services/MAPREDUCE-1/commands/stop'

Example using curl, to stop only a specific role (JobTracker) under a MapReduce service:

$ curl -X POST -u "adminuser:password" -i -v -H "content-type:application/json" -d '{"items" : ["mapreduce-JOBTRACKER-1234123412341234123412341234"] }' 'http://cluster.example.com:7180/api/v9/clusters/Cluster%201/services/MAPREDUCE-1/roleCommands/stop'

[1] - http://cloudera.github.io/cm_api/apidocs/v9/path__clusters_-clusterName-_commands_stop.html
[2] - http://cloudera.github.io/cm_api/apidocs/v9/path__clusters_-clusterName-_services_-serviceName-_commands_stop.html
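One detail that trips people up with these URLs is that a cluster name containing a space (like "Cluster 1") must be percent-encoded in the path. A minimal sketch that builds the encoded stop URL before issuing the POST (the host, credentials, and service name here are hypothetical placeholders):

```shell
#!/bin/sh
# Hypothetical values -- substitute your own CM host, credentials, and names.
CM_HOST="cluster.example.com:7180"
CLUSTER="Cluster 1"
SERVICE="MAPREDUCE-1"

# Percent-encode the space in the cluster name for use in the URL path.
CLUSTER_ENC=$(printf '%s' "$CLUSTER" | sed 's/ /%20/g')

STOP_URL="http://${CM_HOST}/api/v9/clusters/${CLUSTER_ENC}/services/${SERVICE}/commands/stop"
echo "$STOP_URL"

# Uncomment to actually issue the stop command:
# curl -X POST -u adminuser:password -v "$STOP_URL"
```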
10-28-2014
09:05 AM
Hello Jy,

Per our other thread, these monitors continually gather metrics regardless of the level of activity in the cluster. Let's consider:

Host Monitor (min. 10GiB) - gathers metrics around node-level entities of interest (characteristics like disk space usage, RAM, CPU, etc.)
-- Remember that these kinds of metrics are gathered and persisted regardless of the level of activity in the cluster. They are just as useful in idle periods as in times of heavy load.

Service Monitor (min. 10GiB) - gathers metrics around the configured roles and services in your cluster.
-- Similarly, these kinds of metrics are gathered and persisted regardless of the level of activity in the cluster.
-- These include the metrics that inform and power health checks like the HDFS, HBase, ZooKeeper and Hive canary functions, to determine and notify early of any problems with same. Those run constantly regardless of idle/use periods, as they're always relevant.

The Service Monitor is also responsible for gathering metrics around YARN applications being run and Impala queries issued. There is dedicated space for these, apart from the 10GiB above: by default, the YARN Application and Impala Query segments each use and require a minimum of 1GiB. These segments do grow and recycle depending on the rate of activity within the cluster, unlike the core Host and Service Monitor functionality. That said, depending on how long you'd like to keep detailed metrics around YARN jobs or Impala queries, do adjust that dedicated storage space upward if appropriate, and ensure it's located on a filesystem with adequate space to accommodate the size you specify.

Regards,
--
Mark S.
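Putting the defaults above together gives a simple floor for space planning. A back-of-the-envelope sketch (the 10GiB and 1GiB figures are the documented minimums discussed above; remember to size upward from here for your host/service counts and retention needs):

```shell
#!/bin/sh
# Minimum dedicated storage, in GiB, per the defaults described above.
HOST_MON=10       # Host Monitor timeseries store
SERVICE_MON=10    # Service Monitor timeseries store
YARN_APPS=1       # YARN Applications segment (grows with job rate)
IMPALA_QUERIES=1  # Impala Queries segment (grows with query rate)

TOTAL=$((HOST_MON + SERVICE_MON + YARN_APPS + IMPALA_QUERIES))
echo "Minimum monitor storage to plan for: ${TOTAL} GiB"
```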
10-24-2014
11:49 AM
1 Kudo
Well, the data in /var/lib/cloudera-[host|service]-monitor is the sum total of the working data for these respective services. If you delete it, yes, you can reclaim the space, but the data in those locations will grow again, up to the 10GB-per-service maximum, unless you shift its location.

I don't advocate fully deleting the data for these services in any normal scenario (it's quite a drastic option), but if you simply must reclaim the space, it's possible to choose to lose this data and still have your cluster's core functionality remain OK. Your health statuses will be Unknown or Bad for a short time, and you will lose all charts in the UI while the timeseries store is rebuilt and repopulated (because you are deleting ALL the historical metrics). If those conditions are OK in your small dev cluster, then you can make your choice accordingly, sure.

Regards,
--
Mark S.
10-24-2014
11:32 AM
1 Kudo
Hi Jy,

In Cloudera Manager 5.0 and above, the Service and Host Monitors use this on-disk storage space (local to the node where these respective processes run) instead of an RDBMS. The Service and Host Monitors each require a minimum of 10GB for their storage.

In Cloudera Manager 4.x, the configuration concept was to describe how many hours/days worth of metrics you wanted to keep, and the applications would self-purge to remain within that bound. As you can imagine, though, 14 days worth of metrics for a cluster with 1000 hosts could require a dramatically different amount of space than for a cluster with 3 nodes! Space planning with that model is possible, but difficult.

Now, with Cloudera Manager 5.x+, these two (Service Monitor and Host Monitor only) each use a dedicated amount of space. They will not exceed this amount, but they do have a minimum of 10GB each. You absolutely should adjust this amount of space upward depending on 1) the number of hosts in your cluster (relevant when thinking of what the Host Monitor does) and 2) the number of configured services and roles (relevant when thinking about what the Service Monitor does).

Likewise, their default data directory locations in /var/lib/cloudera-[host|service]-monitor/ are just that - a default. Do feel free to move these to a location that's more appropriate for your environment.

Please let me know if I can help clarify further.

Regards,
--
Mark S.
10-02-2014
09:49 AM
Hello Jan,

If you use the Cloudera Manager "Path A" installer file (cloudera-manager-installer.bin) on a node that already has a cloudera-manager.repo yum configuration - notably, a yum config that pointed at the CM /5/ root instead of a specific version:

"http://archive-primary.cloudera.com/cm5/redhat/6/x86_64/cm/5/RPMS/x86_64/"
instead of
"http://archive-primary.cloudera.com/cm5/redhat/6/x86_64/cm/5.1.2/RPMS/x86_64/"

... then you will end up pulling in the very latest Cloudera Manager version. This is probably what happened in your case. Previously, when CM 5.1.2 was the latest version, the URL for /5/ pointed at /5.1.2/. Once 5.1.3 became the newest, /5/ likewise became a symlink to it.

For your purposes, to install a version other than the very newest on a node that already has a Cloudera Manager repo configured in yum, make sure that before you run cloudera-manager-installer.bin you # [rm/mv] /etc/yum.repos.d/cloudera-manager.repo. By doing this, the cloudera-manager-installer.bin for CM 5.1.2 (http://archive-primary.cloudera.com/cm5/installer/5.1.2/cloudera-manager-installer.bin) will be able to lay down its own contained Cloudera Manager repo file, which will point at:

http://archive-primary.cloudera.com/cm5/redhat/6/x86_64/cm/5.1.2/RPMS/x86_64/

================

Alternatively, you could just edit /etc/yum.repos.d/cloudera-manager.repo to change the baseurl from
"http://archive-primary.cloudera.com/cm5/redhat/6/x86_64/cm/5/RPMS/x86_64/"
to
"http://archive-primary.cloudera.com/cm5/redhat/6/x86_64/cm/5.1.2/RPMS/x86_64/"

Let me know if this lets you proceed. Adjust the instructions if you're using apt-get, but the concept is the same. Or, switch to the Path B installation method, where you specifically configure your yum/apt-get repos and manually call yum install / apt-get install to specify exactly the files you want.

Regards,
--
Mark Schnegelberger
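The baseurl edit described above can be scripted with sed. This sketch works on a stand-in copy of the repo file (the sample contents are assumed; run the same sed against /etc/yum.repos.d/cloudera-manager.repo on the real node):

```shell
#!/bin/sh
# Create a stand-in copy of cloudera-manager.repo (sample contents assumed).
REPO=$(mktemp)
cat > "$REPO" <<'EOF'
[cloudera-manager]
name=Cloudera Manager
baseurl=http://archive-primary.cloudera.com/cm5/redhat/6/x86_64/cm/5/RPMS/x86_64/
gpgcheck=1
EOF

# Pin the repo to 5.1.2 instead of the floating /5/ symlink.
sed -i 's|/cm/5/RPMS|/cm/5.1.2/RPMS|' "$REPO"
grep '^baseurl' "$REPO"
```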
09-30-2014
07:10 AM
It sounds most likely that something's just out of sequence. I could see this working fine (and have performed it numerous times) as long as the sequence is maintained. So:

1. Install CM on node1.
2. Upgrade CM on node1.
3. Run CM through all upgrade wizards, which upgrades all hosts in cluster1 heartbeating to the node1 CM Server.
4. Stop all agents.
5. Stop the CM Server.
6. Take a [pg_dump|mysqldump] of the CM database.
7. Copy the [pg_dump|mysqldump] of the CM database to node2.
8. Reimport the consistent database export into the same version of the RDBMS on node2. Ensure the GRANT statements also apply.
9. scp /etc/cloudera-scm-server/db.properties from node1 to node2 (assuming you've set appropriate GRANTs on the db import on node2, using the same name/password but allowing the new hostname node2), OR alter /etc/cloudera-scm-server/db.properties on node2 to ensure the correct user/password/hostname are specified, depending on how you've issued GRANTs/permissions.
10. service cloudera-scm-server start && tail -f /var/log/cloudera-scm-server/cloudera-scm-server.log
11. Sanity-check that the server process comes up; once port :7180 is bound, attempt login on node2:7180. With the UI up, configuration should all be in place for all clusters/services/roles, but no agents will be heartbeating.
12. Alter /etc/cloudera-scm-agent/config.ini on all managed hosts. Change "server_host=node1" to "server_host=node2", then save and apply across all nodes.
13. Restart all agents: # for i in $(cat file_listing_all_managed_cluster_nodes); do ssh root@$i 'service cloudera-scm-agent restart'; done

At this point all agents should be heartbeating to node2, which contains an exact clone of the configuration from node1.

Given a consistent backup from this mythical node1, I don't see how a column can go missing unless some specific [pg_dump|mysqldump] export options were applied, or some sort of import exclusion happened. What syntax did you use on both the export and the import side for the RDBMS? Also, which RDBMS are you using specifically?

Thanks,
--
Mark Schnegelberger
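The server_host change to /etc/cloudera-scm-agent/config.ini described above can also be scripted with sed rather than edited by hand on every node. A sketch against a stand-in copy (the sample config contents are assumed; on real hosts, run the sed against the actual file, e.g. via the same ssh loop used for the agent restarts):

```shell
#!/bin/sh
# Create a stand-in copy of /etc/cloudera-scm-agent/config.ini (sample contents assumed).
CONF=$(mktemp)
cat > "$CONF" <<'EOF'
[General]
server_host=node1
server_port=7182
EOF

# Repoint the agent at the new CM Server host.
sed -i 's/^server_host=node1$/server_host=node2/' "$CONF"
grep '^server_host' "$CONF"
```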