Member since: 07-30-2013
Posts: 509
Kudos Received: 113
Solutions: 123

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 3592 | 07-09-2018 11:54 AM |
| | 2911 | 05-03-2017 11:03 AM |
| | 7422 | 03-28-2017 02:27 PM |
| | 2904 | 03-27-2017 03:17 PM |
| | 2430 | 03-13-2017 04:30 PM |
05-14-2014 09:45 AM
Does your Management Service exist? Does the Host Monitor Role in the Management Service exist? Is it started? Is there an error in the Host Monitor logs? Normally the initial wizard in CM will guide you through setting up the Management service. Did you skip this wizard?
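If you want to check this outside the UI, the CM API exposes the Management Service directly. A minimal sketch, assuming the default port 7180 and admin:admin credentials (adjust host, credentials, and API version for your install):

curl -u "admin:admin" 'http://<cm-host>:7180/api/v6/cm/service'
curl -u "admin:admin" 'http://<cm-host>:7180/api/v6/cm/service/roles'

The first call shows whether the Management Service exists and its health; the second lists its roles (Host Monitor, Service Monitor, and so on) and their states.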
05-09-2014 04:20 PM
1 Kudo
It should be available via the endpoint documented here: http://cloudera.github.io/cm_api/apidocs/v6/path__cm_service_roleConfigGroups_-roleConfigGroupName-_config.html
Keep in mind that the management service doesn't live under a cluster; it has its own endpoint.
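For example, a sketch with placeholder host, credentials, and role config group name (list the groups first to get the exact name):

curl -u "admin:admin" 'http://<cm-host>:7180/api/v6/cm/service/roleConfigGroups'
curl -u "admin:admin" 'http://<cm-host>:7180/api/v6/cm/service/roleConfigGroups/<roleConfigGroupName>/config'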
05-07-2014 09:32 AM
1 Kudo
When I tried this out (removing the WebHDFS configuration in Hue), I was still able to navigate to the Hue configuration page and set it to a legal value, and I didn't get these stack traces. Do you have HA or security enabled? I'm trying to figure out how to reproduce your issue.

You should be able to use the REST API to set this value. First, use your web browser to find the name of your NameNode or HTTPFS role (assuming your cluster name is "Cluster 1" and the service name is "hdfs"):

http://<host>:7180/api/v6/clusters/Cluster%201/services/hdfs/roles

Then set the configuration in Hue via command-line curl (or any other REST tool you like). Replace admin:admin with your username and password, replace the value with the role name from the previous step, and replace the host, cluster name, and service name in the URL as needed:

curl -X PUT -u "admin:admin" -i \
  -H "content-type:application/json" \
  -d '{ "items": [
    {
      "name": "hue_webhdfs",
      "value": "HDFS-1-NAMENODE-8f4f1f5bac54d4060e695300fac67949"
    }
  ] }' \
  http://<host>:7180/api/v6/clusters/Cluster%201/services/hue/config
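To confirm the change took effect, you can read the same endpoint back (same placeholders as above):

curl -u "admin:admin" http://<host>:7180/api/v6/clusters/Cluster%201/services/hue/config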
04-28-2014 02:02 PM
Hi Nishan,

This is also fine, so long as you configure the database appropriately to handle the load from 3 CM servers and whatever else you point at it.

When you're concerned about handling database load from multiple applications, it's generally better to run a single database instance on a host than several instances on the same host, since a single instance has lower overhead. Multiple instances on the same host can help with isolation, but not with handling more load.

Monitoring databases tend to use a lot of resources if your cluster has a lot of activity, so be cautious about sharing monitoring databases. The database for CM itself doesn't generally get as much load. Depending on your cluster size and activity and the hardware and tuning you have for your Oracle, you may find it better to run multiple instances on different machines to ensure each daemon performs well.

Thanks,
Darren
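For reference, a sketch of the CM server database settings when several CM servers share one Oracle instance; typically each CM server points at its own schema/user within the shared instance. Property names follow /etc/cloudera-scm-server/db.properties, and the host, SID, and credentials below are placeholders:

# /etc/cloudera-scm-server/db.properties on each CM server (illustrative values)
com.cloudera.cmf.db.type=oracle
com.cloudera.cmf.db.host=shared-oracle-host:1521
com.cloudera.cmf.db.name=ORCL
com.cloudera.cmf.db.user=cm_server_1
com.cloudera.cmf.db.password=changeme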
04-28-2014 12:53 PM
Hi Nishan,

Yes, this is supported. The database for CM does not need to be dedicated to CM. Just be sure to configure it appropriately for multiple CM servers to run against it.

Thanks,
Darren
04-22-2014 11:58 AM
1 Kudo
cssh is handy for accessing all machines, assuming your cluster security is set up in a convenient fashion. The goal of the restart is to fix symlinks; you should not restart the OS. I'm not 100% sure whether restarting the cluster is necessary, and I'd only do that if your problem doesn't go away after you've restarted all agents.

To roll back to prior parcels, just activate a parcel of an older version. This is fine for minor version changes, such as 4.6.0 back to 4.2.1.

If you need to run a custom binary for Pig, this will probably be tricky. The best thing to do would be to make your own custom parcel with the Pig binaries swapped out. Let's hope that isn't needed: it is both difficult and could expose you to unexpected compatibility issues, since Cloudera only tests the combinations of binaries that we ship in a CDH release.
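If you prefer the API over the UI for the rollback, activation is a single call. A sketch assuming the CM API parcel endpoints, admin:admin credentials, and a cluster named "Cluster 1"; the parcel version is a placeholder, so list the parcels first to get the exact string:

# List parcels known to the cluster (product, version, stage)
curl -u "admin:admin" 'http://<host>:7180/api/v6/clusters/Cluster%201/parcels'

# Activate an already-distributed older parcel to roll back
curl -X POST -u "admin:admin" \
  'http://<host>:7180/api/v6/clusters/Cluster%201/parcels/products/CDH/versions/<older-parcel-version>/commands/activate'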
04-21-2014 03:37 PM
CM Agents manage other daemons and are not restarted when you restart the cluster. I would try restarting the agent on all hosts, but this sounds like an issue with the underlying Pig code you're running rather than with Cloudera Manager. It's still worth doing the agent restart to be safe. Try posting in the Pig forum: http://community.cloudera.com/t5/Data-Ingestion-Integration/bd-p/SqoopFlume
04-21-2014 02:14 PM
Can you include the failure message?

Also, try restarting all of your CM agents (on each host: service cloudera-scm-agent restart). This is helpful whenever you have had packages and parcels installed at the same time and then removed the packages: restarting the agents will update your symlinks to point to the new parcel binaries.

Thanks,
Darren
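A quick way to do that across the cluster, assuming passwordless SSH and a hosts.txt listing your cluster hosts (cssh or any parallel-ssh tool works just as well):

for h in $(cat hosts.txt); do
  ssh "$h" 'sudo service cloudera-scm-agent restart'
done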
04-18-2014 09:08 AM
Hi,

Those errors won't cause any actual problems with your runtime; they appear on perfectly functional NodeManagers. We need to find the real error. Is there some kind of fatal error at the end of the stderr log? Or in the NodeManager role logs?

Thanks,
Darren
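If you're not sure where to look, grepping the NodeManager logs for fatal entries usually surfaces the real failure. A sketch only; the log directory varies by install, with /var/log/hadoop-yarn/ being typical for CM-managed clusters:

grep -iE 'FATAL|ERROR' /var/log/hadoop-yarn/*.log* | tail -n 50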
04-16-2014 10:21 AM
1 Kudo
Parcels are normally downloaded first to the CM server and then distributed to each agent; agents always download parcels from the CM server. It looks like your configuration is preventing this from working.
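One quick sanity check, assuming the default CM web port 7180: from an agent host, confirm the CM server is reachable, since parcel distribution goes through the CM server:

curl -I http://<cm-server-host>:7180/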