Custom configuration in core-site.xml
Labels: Apache Hadoop, Cloudera Manager
Created on 12-08-2013 10:24 PM - edited 09-16-2022 01:51 AM
I want to add some custom configuration to core-site.xml.
So I followed these steps:
1. Cloudera Manager > Services > hdfs1 > Configuration
2. Find "Cluster-wide Configuration Safety Valve for core-site.xml" and add my custom properties
3. Deploy Client Configuration
OK, I found that the configuration in /etc/hadoop/conf.cloudera.hdfs1/core-site.xml has changed.
But there is no change in /etc/hadoop/conf.cloudera.mapreduce1/core-site.xml.
Can somebody tell me why, and how to solve it?
Many thanks.
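For reference, a safety-valve entry is a snippet of Hadoop XML pasted into that field; a minimal sketch, where the property name and value are placeholders rather than the actual settings from this thread:

```xml
<!-- Placeholder property; substitute your real name/value pair -->
<property>
  <name>hadoop.example.custom.setting</name>
  <value>some-value</value>
</property>
```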
Created 12-09-2013 06:50 AM
Cloudera Manager applies some logic when deciding where it places "safety valve" configurations like the ones you added. It will only ship new config files to the roles where that configuration will actually be used; in other words, it won't place configurations that only apply to the NameNode into the client configs, and vice versa. It looks like you applied your change to the HDFS safety valve, so CM will not modify the MapReduce-specific configs as part of that. You can add your properties to the MapReduce service if you'd like them to apply there.
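One quick way to see this behavior on a host is to compare the two deployed client configurations directly. A sketch, reusing the placeholder property name from above:

```sh
# Search both deployed client configs for the custom property;
# with CM behaving as described, only the hdfs1 copy will match
grep -A1 "hadoop.example.custom.setting" \
    /etc/hadoop/conf.cloudera.hdfs1/core-site.xml \
    /etc/hadoop/conf.cloudera.mapreduce1/core-site.xml
```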
Created 12-09-2013 10:57 AM
It sounds like you only re-deployed your HDFS client configuration, but you also need to re-deploy your MapReduce client configuration, since that has a higher alternatives priority. It's usually best to use the cluster-wide Deploy Client Configuration command to make sure you don't miss anything.
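To see which deployed client configuration a given host actually resolves to, you can inspect the alternatives entry, which on CDH is typically named hadoop-conf. For example, on RHEL/CentOS (Debian-based systems use update-alternatives):

```sh
# Show the current target of /etc/hadoop/conf and the priorities
# of all registered candidates
alternatives --display hadoop-conf
```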
Created 12-09-2013 06:05 PM
It works.
Created 03-08-2015 06:52 AM
I know this question was answered long ago, but I'd like to use it for a quick follow-up:
If my cluster has an edge server where I only run Hue, how can I force the push of client configuration to that server (e.g. for fs.defaultFS), so that the hdfs command line picks up the intended value? When I deploy at the cluster level, I see it only pushed to servers with YARN, Hive and HDFS roles...
Thanks
Created 03-08-2015 10:49 AM
If you want your edge node to talk to YARN, add a YARN Gateway role to that host. If you want it to talk to Hive, add a Hive Gateway, and so on.
When you deploy client configuration, it'll go to all roles of that service, including gateways.
Thanks,
Darren
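As a quick sanity check after adding the Gateway role and re-deploying, the edge node's client configuration can be queried directly; a minimal sketch, assuming the hdfs CLI is installed there:

```sh
# Ask the locally deployed client configuration for the default
# filesystem URI; it should print the intended fs.defaultFS value
hdfs getconf -confKey fs.defaultFS
```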
