HDFS client configuration is not updated by 'Deploy Client Configuration'
Labels: Apache Hadoop, Cloudera Manager, HDFS
Created on 06-09-2014 03:31 AM - edited 09-16-2022 01:59 AM
Hi, we have NameNode HA (NN HA) using the Quorum Journal Manager. Running 'Deploy Client Configuration' for the HDFS service does not update the client configuration under /etc/hadoop/conf.
Created 06-09-2014 10:51 PM
Vikram Srivastava helped me on Google Groups. Here is the explanation:
The alternatives priority for the HDFS client configuration is by default lower than the MapReduce one, so deploying only the HDFS client configs will not change what /etc/hadoop/conf points to.
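One way to see this in action is to inspect the `hadoop-conf` alternative on a cluster host. The directory names and priority numbers below are illustrative assumptions; the exact values depend on your CDH installation:

```shell
# Show which directory /etc/hadoop/conf currently resolves to,
# along with the registered candidates and their priorities.
update-alternatives --display hadoop-conf

# Illustrative output (paths and priorities vary per installation):
#   hadoop-conf - status is auto.
#   link currently points to /etc/hadoop/conf.cloudera.mapreduce
#   /etc/hadoop/conf.cloudera.hdfs - priority 90
#   /etc/hadoop/conf.cloudera.mapreduce - priority 92

# In auto mode the highest-priority candidate wins, so deploying only
# the lower-priority HDFS config leaves the link unchanged. To pin the
# link to the HDFS client configuration directory instead:
update-alternatives --set hadoop-conf /etc/hadoop/conf.cloudera.hdfs
```

Note that `--set` switches the alternative to manual mode, so later automatic priority changes will no longer move the link until it is reset with `update-alternatives --auto hadoop-conf`.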
Created 06-09-2014 04:08 AM
This problem is specific to the HDFS service.
When I deploy the client configuration for the MapReduce service, it updates the client configs mapred-site.xml and hdfs-site.xml, and I do see the updated hdfs-site.xml.
The other problem with the HDFS service is that I can't delete any role (DataNode, Gateway, JournalNode): Cloudera Manager starts consuming 100% CPU, and jstack reports a thread deadlock...
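For reference, a deadlock like the one described can be confirmed by dumping the thread stacks of the Cloudera Manager server JVM. The process name matched below is an assumption; locate the actual PID on your host:

```shell
# Find the PID of the Cloudera Manager server JVM
# (the process name is an assumption; adjust for your host).
CM_PID=$(pgrep -f cloudera-scm-server | head -n 1)

# Dump all thread stacks with lock information. When jstack detects a
# deadlock, it appends a "Found one Java-level deadlock" section that
# lists the threads involved and the monitors each one is waiting on.
jstack -l "$CM_PID" > /tmp/cm-threads.txt
grep -A 10 "Found one Java-level deadlock" /tmp/cm-threads.txt
```

Attaching that jstack output to a support case or forum post makes it much easier to identify which threads are blocking each other.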
