
Custom configuration in core-site.xml

Explorer

I want to add some custom configuration to core-site.xml.

 

So I followed these steps:

 

1. Cloudera Manager > Services > hdfs1 > Configuration

 

2. Find "Cluster-wide Configuration Safety Valve for core-site.xml" and add my custom properties

 

3. Deploy Client Configuration

 

OK, I found that the configuration in /etc/hadoop/conf.cloudera.hdfs1/core-site.xml has changed.

 

But there is no change in /etc/hadoop/conf.cloudera.mapreduce1/core-site.xml.
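
A quick way to compare the two deployed files (Python, standard library only; "my.custom.property" below is just a placeholder for whatever property you add in the safety valve):

```python
# Compare a single property across the two deployed client configs.
# "my.custom.property" is a placeholder -- substitute the property you added.
import xml.etree.ElementTree as ET

def get_property(conf_path, name):
    """Return the value of a named property in a Hadoop *-site.xml, or None."""
    root = ET.parse(conf_path).getroot()
    for prop in root.findall("property"):
        if prop.findtext("name") == name:
            return prop.findtext("value")
    return None

for conf in ("/etc/hadoop/conf.cloudera.hdfs1/core-site.xml",
             "/etc/hadoop/conf.cloudera.mapreduce1/core-site.xml"):
    print(conf, "->", get_property(conf, "my.custom.property"))
```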

 

Can somebody tell me why this happens and how to solve it?

 

Many thanks.

 

1 ACCEPTED SOLUTION


It sounds like you only re-deployed your HDFS client configuration, but you also need to re-deploy your MapReduce client configuration, since that has higher alternatives priority. It's usually best to use the cluster-wide command to deploy client configuration to make sure you don't miss something.
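
If you prefer to script it, the cluster-wide deploy can also be triggered through the CM REST API. A rough, untested sketch (the host, credentials, API version and cluster name below are placeholders for your environment):

```python
# Trigger the cluster-wide "Deploy Client Configuration" command via the
# Cloudera Manager REST API. All names and credentials here are placeholders.
import requests

CM = "http://cm-host.example.com:7180/api/v11"   # CM host and API version (placeholder)
AUTH = ("admin", "admin")                        # CM admin credentials (placeholder)
CLUSTER = "cluster1"                             # cluster name as known to CM (placeholder)

resp = requests.post(f"{CM}/clusters/{CLUSTER}/commands/deployClientConfig", auth=AUTH)
resp.raise_for_status()
print(resp.json())   # the command runs asynchronously; poll the returned command if needed
```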


Guru

Cloudera Manager applies some logic to decide where it places "safety valve" configurations like the ones you added. It only ships new config files to the roles where that configuration will be used; in other words, it won't place configurations that only apply to the NameNode into the client configs, and vice versa. It looks like you applied your change to the HDFS safety valve, so CM will not modify the MapReduce-specific configs as part of that. You can add the same properties to the MapReduce service if you'd like them to apply there.
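
If you want to confirm which service actually carries the override, you can list each service's non-default settings through the CM REST API. A rough, untested sketch (host, credentials, API version, cluster and service names are placeholders, and the exact safety-valve key names vary by CM version):

```python
# List non-default service-level settings for both services and show any
# safety-valve overrides. All names and credentials are placeholders.
import requests

CM = "http://cm-host.example.com:7180/api/v11"
AUTH = ("admin", "admin")
CLUSTER = "cluster1"

for service in ("hdfs1", "mapreduce1"):
    config = requests.get(f"{CM}/clusters/{CLUSTER}/services/{service}/config",
                          auth=AUTH).json()
    for item in config.get("items", []):
        if "safety_valve" in item["name"]:
            print(service, item["name"], "=", item.get("value"))
```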


Explorer
Great!

It works.

Contributor

I know this question was answered long ago, but I'd like to use it for a quick follow-up:

 

If my cluster has an edge server where I only run Hue, how can I force the push of client configuration to that server (e.g. for defaultFS), so that the hdfs command line has the intended value? When I deploy at the cluster level, I see it only pushed to servers with YARN, Hive and HDFS roles...

 

Thanks

Hi Balta,

If you want your edge node to talk to YARN, add a YARN Gateway role to that host. If you want it to talk to Hive, add a Hive Gateway, and so on.

When you deploy client configuration, it'll go to all roles of that service, including gateways.
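
If you'd rather script it, something along these lines should work against the CM REST API (untested sketch; the host names, credentials, API version, cluster and service names are placeholders):

```python
# Add an HDFS Gateway role to the edge host, then re-deploy client configuration
# so the host receives /etc/hadoop/conf. All names and credentials are placeholders.
import requests

CM = "http://cm-host.example.com:7180/api/v11"
AUTH = ("admin", "admin")
CLUSTER = "cluster1"

# 1. Look up the edge node's hostId (CM refers to hosts by hostId, not hostname).
hosts = requests.get(f"{CM}/hosts", auth=AUTH).json()["items"]
edge = next(h for h in hosts if h["hostname"] == "edge01.example.com")

# 2. Create a GATEWAY role for the HDFS service on that host.
new_role = {"items": [{"type": "GATEWAY", "hostRef": {"hostId": edge["hostId"]}}]}
requests.post(f"{CM}/clusters/{CLUSTER}/services/hdfs1/roles",
              auth=AUTH, json=new_role).raise_for_status()

# 3. Re-deploy client configuration so the new gateway gets core-site.xml etc.
requests.post(f"{CM}/clusters/{CLUSTER}/commands/deployClientConfig",
              auth=AUTH).raise_for_status()
```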

Thanks,
Darren