Member since: 08-16-2016
Posts: 642
Kudos Received: 131
Solutions: 68
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 3926 | 10-13-2017 09:42 PM |
| | 7341 | 09-14-2017 11:15 AM |
| | 3733 | 09-13-2017 10:35 PM |
| | 5919 | 09-13-2017 10:25 PM |
| | 6498 | 09-13-2017 10:05 PM |
07-14-2017
09:36 AM
1 Kudo
Service Monitor and Activity Monitor are the two heavy hitters. They both write to a time-series database (TSDB) on the local filesystem, in a directory that you specify in the configs. I would split as I mentioned above:

Host 1: CM, DB, Activity Monitor (Amon), Reports Manager (Rman)
Host 2: Service Monitor (Smon), Host Monitor (Hmon), Event Server, Alert Publisher

Service Monitor - resource usage depends on the number of services being run. It will grow as new nodes are added, since the new nodes will be running new service roles.
Activity Monitor - collects information on MapReduce activity.
Host Monitor - resource requirements depend on the number of hosts and will grow as new hosts are added.
Reports Manager - is all about the reports. This is usually light if only the pre-built reports are in place and generated. I have had to bump this up as cluster usage and size grew; I haven't looked into it, but that is likely because the reports run on a schedule even if they are not used.
Event Server - events can be generated from any metric or log entry. You can add more, or more can be generated if they are triggered by changes to the system.
Alert Publisher - resource requirements depend on the number of alerts. This will grow with new hosts and services, and more alerts can be added.
07-13-2017
07:56 PM
CM also uses the backend DB. So you could have CM, Amon, Rman, and the DB on one host, and Smon, Hmon, the Event Server, and the Alert Publisher on another.
07-13-2017
07:55 PM
1 Kudo
Activity Monitor and Reports Manager use a backend database to store their data; the rest use local directories. That makes these two the prime candidates to move to a second host. You could move the others as well, but you would have to migrate their data over manually.
07-13-2017
10:26 AM
1 Kudo
If I understand this correctly, you set up a PowerBI gateway on a different machine, you are using the Cloudera ODBC driver, and you have set up a DSN on that machine. If so, did you enable SSL and set the PEM file location in that DSN? And did you upload the PEM cert to that location on that host?
07-13-2017
10:10 AM
1 Kudo
This is an error from the Cloudera ODBC driver. It typically means that you set the location of the SSL cert or truststore but the driver can't find the file at that location, or that no location was set at all. You mentioned Kerberos but not SSL. Are you using SSL with Impala?
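As a hedged illustration (the DSN name, host, and file paths below are placeholders, not values from this thread), a DSN entry in odbc.ini with SSL enabled for the Cloudera Impala ODBC driver might look like this:

```ini
[Impala-SSL]
; Adjust the driver path to match your driver install location.
Driver=/opt/cloudera/impalaodbc/lib/64/libclouderaimpalaodbc64.so
Host=impala-host.example.com
Port=21050
AuthMech=1
; Enable SSL and point the driver at the PEM cert. This file must
; exist at this exact path on the machine running the gateway,
; or you will see the "can't find cert" style error above.
SSL=1
TrustedCerts=/opt/cloudera/certs/cacerts.pem
```

The key point for the error above is the TrustedCerts path: the driver resolves it locally on the gateway machine, so the PEM file has to be copied there, not just exist on the cluster.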
07-13-2017
10:08 AM
I don't know how you would do it in your tool, but have you tried changing the HTTP header to use the type 'application/octet-stream'? The message "Expected mime type application/octet-stream but got text/html" means the client asked for a binary payload but the server returned an HTML page instead.
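A minimal sketch of the idea, assuming a plain HTTP client (the URL is a placeholder, not from the thread): build the request with the expected binary content type in its headers.

```python
import urllib.request

# Hypothetical endpoint -- substitute the real service URL.
URL = "http://service.example.com/data"

def build_request(url: str) -> urllib.request.Request:
    """Build (but do not send) a GET that asks for a binary payload
    by advertising application/octet-stream in the Accept header."""
    return urllib.request.Request(
        url,
        headers={"Accept": "application/octet-stream"},
    )

req = build_request(URL)
```

Whether this fixes it depends on the server; if it keeps returning text/html, inspect that HTML body, since it is often an error or login page rather than the data you asked for.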
07-13-2017
08:03 AM
I don't know if I can say for certain without testing them out. I believe /cm/deployment has always been there, while importClusterTemplate was added when they added blueprints. Both take a JSON object that describes a full cluster. The deployment endpoint will fail if any portion of it already exists and will roll back all changes. I don't know whether the templates are more flexible or support conditions.
07-11-2017
07:14 PM
The CM API is available at that point. Try the /cm/importClusterTemplate endpoint. You may need to create an empty cluster first with /clusters. To go through the wizard, try /api/v12/clusters/{clusterName}/commands/firstRun. https://cloudera.github.io/cm_api/apidocs/v12/
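As a sketch of how those calls are issued (the CM host, port, and cluster name are placeholders; the endpoint paths come from the v12 API docs linked above), they are plain JSON POSTs:

```python
import json
import urllib.request

# Placeholder CM address -- substitute your Cloudera Manager host.
CM_BASE = "http://cm-host.example.com:7180/api/v12"

def post_json(path: str, payload: dict) -> urllib.request.Request:
    """Build (but do not send) a JSON POST to a CM API endpoint.
    A real call would also need HTTP basic auth for a CM admin user."""
    return urllib.request.Request(
        CM_BASE + path,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Import a full cluster template (template body elided -- see the docs).
import_req = post_json("/cm/importClusterTemplate", {})

# Kick off the first-run wizard step for a named cluster.
first_run = post_json("/clusters/MyCluster/commands/firstRun", {})
```

Sending these with a real template body against a live CM instance is left to the API docs; the sketch only shows the endpoint shapes.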
07-11-2017
12:51 PM
Get into the job and container logs via the Spark History UI. Try to get more context or a full stack trace.
07-11-2017
11:58 AM
1 Kudo
CDH does not come with the Kafka parcel; you need to add that separately. Check the list of available parcels. If it is not present, add the remote repo for CDH Kafka and check for new parcels again. Once it is available: download, distribute, and activate. From the CM Add a Service wizard: "Apache Kafka is publish-subscribe messaging rethought as a distributed commit log. Before adding this service, ensure that either the Kafka parcel is activated or the Kafka package is installed."
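The download/distribute/activate steps can also be driven through the CM API's parcel commands. As a hedged sketch (the CM host, cluster name, and parcel version string are placeholders you would read from your own parcel list, not real values):

```python
import urllib.request

# Placeholders -- substitute your CM host, cluster name, and the
# exact KAFKA parcel version shown in your CM parcel list.
CM_BASE = "http://cm-host.example.com:7180/api/v12"
CLUSTER = "MyCluster"
VERSION = "2.1.0-1.2.1.0.p0.115"  # example placeholder version string

def parcel_command(command: str) -> urllib.request.Request:
    """Build (but do not send) a POST for one parcel lifecycle step.
    A real call would also need HTTP basic auth for a CM admin user."""
    path = (f"/clusters/{CLUSTER}/parcels/products/KAFKA"
            f"/versions/{VERSION}/commands/{command}")
    return urllib.request.Request(CM_BASE + path, data=b"", method="POST")

# The three steps, in order: download, distribute, activate.
steps = [parcel_command(c)
         for c in ("startDownload", "startDistribution", "activate")]
```

Each command returns immediately in CM and runs asynchronously, so in practice you poll the parcel's stage between steps rather than firing all three back to back.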