Member since: 07-30-2013
Posts: 509
Kudos Received: 113
Solutions: 123
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 2990 | 07-09-2018 11:54 AM |
| | 2488 | 05-03-2017 11:03 AM |
| | 6092 | 03-28-2017 02:27 PM |
| | 2328 | 03-27-2017 03:17 PM |
| | 2038 | 03-13-2017 04:30 PM |
12-18-2013
12:49 PM
1 Kudo
It looks like you tried to put up a picture, but it isn't displaying. Did you try using the search box on the left? I'm guessing you are just not clicking in the right group / category on the left, or not using the search box.
12-18-2013
11:47 AM
1 Kudo
Hi BC,

Click on HDFS, then on Configuration -> View and Edit, then look for the Secondary NameNode heap. The easiest way to find it is to enter "heap" or maybe "secondary heap" into the search box on the left. You can also navigate the config groups and categories on the left to find it under Secondary NameNode -> Resource Management.

Thanks,
Darren
12-13-2013
11:31 AM
1 Kudo
Hi Pankaj,

Recommission is only available in API version v2 and up. You have "/api/v1" in your URL, so the endpoint isn't there. The API documentation usually says when an endpoint was introduced: http://cloudera.github.io/cm_api/apidocs/v6/index.html (the API documentation is also available in the menus in the upper right of your CM server).

If you already decommissioned it, it's a good idea to recommission your HDFS DataNode as well, since otherwise you'll have very uneven DataNode utilization.

Thanks,
Darren
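The v1-versus-v2 mix-up above is easy to fix programmatically. A minimal sketch (the version numbers in the URL are the only assumption here) that bumps an "/api/vN" path up to the minimum version an endpoint requires:

```python
import re

def bump_api_version(url, minimum=2):
    """Rewrite /api/vN in a Cloudera Manager endpoint URL to at least v<minimum>.

    Leaves URLs that already use a new-enough API version untouched.
    """
    def repl(m):
        return "/api/v%d" % max(int(m.group(1)), minimum)
    return re.sub(r"/api/v(\d+)", repl, url)
```

For example, `bump_api_version("http://cm:7180/api/v1/hosts")` rewrites the path to use `/api/v2/`, while a URL already on v6 is returned unchanged.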
12-11-2013
10:41 PM
1 Kudo
The test connection page won't let you proceed unless your connection is valid. Did you modify your Service Monitor database settings manually, after setting up the service? What do the log messages say for the Service Monitor? Click on your Management Service, click on Service Monitor, click on the Processes tab, and look for any interesting messages in stderr, stdout, and the role log.
12-11-2013
10:22 AM
Hi Pankaj,

For your use case, you want to use the host decommission command. This will move / re-replicate any data from that node to the rest of your cluster while stopping all roles, which will let you perform maintenance while the rest of the cluster is fully operational. In the Python bindings, the class ClouderaManager has the method hosts_decommission, which will do exactly what you want. Make sure to call the corresponding recommission command when you want the host to re-join the cluster.

See the tutorial for general usage of the Python bindings, including how to interact with commands: http://cloudera.github.io/cm_api/

Thanks,
Darren
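If you would rather call the REST API directly than use the bindings, the same commands can be reached over HTTP. A hedged sketch that only builds the request without sending it; the endpoint names `hostsDecommission` / `hostsRecommission` under `/cm/commands`, the port 7180, and the credentials are assumptions you should check against your CM version's API docs:

```python
import base64
import json
import urllib.request

def host_command_request(cm_host, command, host_names, api_version="v2",
                         user="admin", password="admin"):
    """Build (but do not send) a POST request for a CM host command.

    command is e.g. "hostsDecommission" or "hostsRecommission" (assumed
    endpoint names; verify against your CM API docs).
    """
    url = "http://%s:7180/api/%s/cm/commands/%s" % (cm_host, api_version, command)
    body = json.dumps({"items": host_names}).encode("utf-8")
    req = urllib.request.Request(url, data=body, method="POST")
    req.add_header("Content-Type", "application/json")
    creds = base64.b64encode(("%s:%s" % (user, password)).encode()).decode()
    req.add_header("Authorization", "Basic " + creds)
    return req
```

To actually run the command you would pass the request to `urllib.request.urlopen(req)` and then poll the returned command object until it finishes, mirroring what the Python bindings do for you.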
12-09-2013
10:57 AM
1 Kudo
It sounds like you only re-deployed your HDFS client configuration, but you also need to re-deploy your MapReduce client configuration, since that has higher alternatives priority. It's usually best to use the cluster-wide command to deploy client configuration to make sure you don't miss something.
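You can see which deployed config actually wins by inspecting the output of `alternatives --display hadoop-conf` on a host: the entry with the highest priority number is the one clients use. A small sketch that parses that style of output (the sample text and priority values below are illustrative, not taken from a real host):

```python
import re

def winning_alternative(display_output):
    """Given `alternatives --display hadoop-conf` style output, return a
    (path, priority) tuple for the highest-priority entry, i.e. the config
    directory clients will actually resolve to."""
    best = None
    for path, prio in re.findall(r"^(\S+) - priority (\d+)", display_output, re.M):
        if best is None or int(prio) > best[1]:
            best = (path, int(prio))
    return best

# Illustrative output: the MapReduce client config outranks the HDFS one,
# which is why re-deploying only the HDFS config changes nothing.
sample = """hadoop-conf - status is auto.
/etc/hadoop/conf.cloudera.hdfs1 - priority 90
/etc/hadoop/conf.cloudera.mapreduce1 - priority 92
"""
```

Running `winning_alternative(sample)` picks out the MapReduce config directory, matching the behavior described above.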
12-04-2013
09:39 PM
Hi Oleksiy,

Can you share the exact error message (i.e. copy and paste it here)? There are a few error messages that can sound similar, so it's good to verify which one we're talking about.

If you click on the Hosts tab, then run the Host Inspector, it will generate a report that will list the software versions on each host, along with errors such as version mismatches. Please share this page with us.

Thanks,
Darren
12-03-2013
03:17 PM
1 Kudo
Click on your MapReduce service, go to the Instances tab, and add a Gateway to the host where you want configs. Then deploy client configuration.

In general, you can use gateway roles to deploy client configs to hosts. This works for services like Hive and HBase as well.

Thanks,
Darren
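The same gateway-role setup can be scripted against the API by POSTing a role list to the service's roles endpoint. A minimal sketch that only builds the JSON payload; the role name and host id are hypothetical, and the target endpoint (`/api/vN/clusters/<cluster>/services/<service>/roles`) should be checked against your CM version's API docs:

```python
import json

def gateway_role_payload(role_name, host_id):
    """Build the JSON body for creating a Gateway role on a given host.

    POST this to the service's roles endpoint to get client configs
    deployed to that host on the next deploy-client-config run.
    """
    return json.dumps({
        "items": [{
            "name": role_name,          # hypothetical role name
            "type": "GATEWAY",
            "hostRef": {"hostId": host_id},  # host id from the Hosts API
        }]
    })
```

After creating the role, you would still trigger the deploy client configuration command, just as in the UI steps above.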
11-26-2013
01:55 PM
1 Kudo
Hi Dave,

This is usually caused by a bad safety valve for the MapReduce or YARN client environment, usually because you installed LZO and made a mistake when setting the safety valve for MR to pick up the parcel. Here are the docs for using the LZO parcel: http://www.cloudera.com/content/cloudera-content/cloudera-docs/CM4Ent/4.6.2/Cloudera-Manager-Installation-Guide/cmig_install_LZO_Compression.html

The mistake people often make is forgetting to append to the existing value of HADOOP_CLASSPATH or any other variable. Since Hive uses MR client configs, when it sources hadoop-env.sh it will have its classpath overwritten by your MR client env safety valve.

So this is bad for client environment safety valves:

HADOOP_CLASSPATH=/my/new/stuff

and this is good:

HADOOP_CLASSPATH=$HADOOP_CLASSPATH:/my/new/stuff

Thanks,
Darren
11-15-2013
09:15 PM
Your restart command for agents is correct. Did your agents change host names / IP addresses as well? It sounds like your Hosts page has two entries for each host, one with the new name and one with the old. In this case, you need to tell your agents to use the old host id; you don't want to use the new host id because you'd have to re-configure all of your role assignments.

Edit /etc/default/cloudera-scm-agent and set CMF_AGENT_ARGS="--host_id xxx" where xxx is your old host id. You can find the old host id by clicking on your old host in the Hosts tab and looking at what is listed for Host ID. Restart your agents after changing /etc/default/cloudera-scm-agent.

On the Hosts page, you should see that all of your old hosts (the ones with roles assigned to them) have recent heartbeats and good health. You can delete the copies that have no heartbeats and no roles. If you don't see this, let us know what you see so we can fix it.

Once Cloudera Manager is correctly in contact with all of your hosts again, you can run the restart command through the Cloudera Manager UI. You can restart an individual service or the whole cluster by clicking on the appropriate dropdown menu (the small button with a triangle).

Thanks,
Darren