Member since: 07-30-2013
Posts: 509
Kudos Received: 113
Solutions: 123
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 3578 | 07-09-2018 11:54 AM |
| | 2906 | 05-03-2017 11:03 AM |
| | 7413 | 03-28-2017 02:27 PM |
| | 2895 | 03-27-2017 03:17 PM |
| | 2428 | 03-13-2017 04:30 PM |
03-21-2014
11:28 AM
Are you looking at this guide? http://www.cloudera.com/content/cloudera-content/cloudera-docs/CM4Ent/latest/Cloudera-Manager-Managing-Clusters/cmmc_HDFS_hi_avail.html

That guide covers configuring HDFS for High Availability; it does not cover installation. Did you already add a cluster with services like HDFS and ZooKeeper configured? I assumed from your post that you had an existing cluster, since you enable High Availability after you've created a cluster. If you don't have such a cluster, add one and configure the desired services, then follow the High Availability guide.
03-20-2014
12:41 PM
Figured this out. These machines were subscribed to a Satellite yum repo I had configured earlier. Even though I had set up the new local yum repos, these machines were still pointing to the old ones. I cleared the yum cache and it worked. Thanks all.
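For anyone hitting the same thing, the fix above amounts to something like this (a sketch assuming a RHEL/CentOS host with yum; paths are the standard defaults):

```shell
# List the repos the machine actually resolves. Stale Satellite repos
# will still show up here until the cached metadata is cleared.
yum repolist enabled

# Drop all cached repo metadata so the new local repo definitions take effect.
yum clean all

# Check that the old Satellite repo files are removed or disabled
# (enabled=0) before retrying the install.
ls /etc/yum.repos.d/
```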
03-19-2014
03:29 PM
1 Kudo
Hi Nishan, During the install wizard, there's a prompt for you to specify where you'd like the CM agent binaries to come from. It's on the same page that prompts you to select CDH version. By default, it will pick up the corresponding build from our public repos, but you can override that. Thanks, Darren
03-19-2014
08:54 AM
1 Kudo
It was a problem with specifying the TNS name. Got that resolved. Thank you all.
03-17-2014
07:17 AM
Cloudera Support was able to solve the case. The Thrift server is set to die via kill -9 upon error. The error was a Java out-of-memory error while being scanned. We upped the Thrift server heap size to 4 GB and all is well.
03-16-2014
10:09 AM
If you can get to the web UI, then you can modify this setting under Administration > Settings > Ports and Addresses. If you can't, then run a curl command to update it via the API, while ssh'd in to the host (so your firewall doesn't get in the way):

```shell
curl -X PUT -u "admin:admin" -i \
  -H "content-type:application/json" \
  -d '{ "items": [ { "name": "HTTP_PORT", "value": 8080 } ] }' \
  http://<cm_host>:7180/api/v3/cm/config
```
03-14-2014
10:56 AM
Hi cor, Thanks for letting us know. This issue has since been fixed. Thanks, Darren
02-18-2014
05:30 PM
You are correct. I had an older version bin file. Thanks
02-18-2014
01:56 PM
Unfortunately, host names with capital letters will always hit this problem: CM respects the original host name capitalization, but Hadoop converts it all to lowercase. You'll have to pick lowercase host names. You might also be able to change your agent.ini on each host to override the hostname to the lowercase name, though I haven't tried that. Thanks, Darren
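If you do try the agent-side override, the relevant knob is probably `listening_hostname` in the agent's config file. Both the path and the key name here are assumptions based on the standard CM agent layout, and this workaround is untested:

```ini
# /etc/cloudera-scm-agent/config.ini  (assumed path; untested workaround)
[General]
# Force the agent to report the all-lowercase name to CM:
listening_hostname=myhost01.example.com
```

Restart the agent afterwards for the change to take effect.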