Explorer
Posts: 9
Registered: ‎05-12-2016
Accepted Solution

Re-Installation of Cloudera manager and recovering existing parcel and HDFS

Hi All,

I'm in serious trouble. I was working on the following cluster:

Operating System: CentOS 6.7 (Final)

Cluster: 9 nodes

Hadoop Distribution: Cloudera

Hadoop Distribution Version: CDH 5.4.2, Parcels

HDFS Capacity: 2.7 TB

YARN Configuration: 56 vcores and 70 GB of memory

Today I tried to upgrade the parcel. Here in my organization there is a proxy; I made the change in Cloudera Manager, then downloaded and successfully upgraded to CDH 5.4.10 [CDH-5.4.10-1.cdh5.4.10.p0.16].

Everything was working fine. But then I tried to upgrade Cloudera Manager itself. Unfortunately, after that I failed to install/upgrade the Cloudera agent on the nodes. So I somewhat blindly followed this post

So I ran 

rm -vRf /etc/yum.repos.d/cloudera* /etc/cloudera-*
rm -vRf /usr/share/cmf /var/lib/cloudera* /var/cache/yum/cloudera*
rm -vRf /var/log/cloudera-*
yum remove cloudera*
yum clean all

After that I'm now completely lost: I was not able to install or upgrade Cloudera Manager. I have set up the proxy but it is still failing.

I want to know whether I'll be able to recover existing HDFS data.

Kindly tell me. I have not deleted the NameNode, Secondary NameNode, or DataNode files from the local disks.
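As long as the NameNode and DataNode directories survived, the HDFS metadata should still be intact. A quick sanity check is to look at the NameNode's VERSION file; the path /dfs/nn below is an assumption (check dfs.namenode.name.dir in your hdfs-site.xml), and the sketch works against a mock file with illustrative values:

```shell
# On the NameNode host you would inspect the real file:
#   cat /dfs/nn/current/VERSION
# The mock below stands in for it; all IDs are illustrative.
nn_version=$(mktemp)
cat > "$nn_version" <<'EOF'
namespaceID=1063240992
clusterID=CID-6d521d4e-0a2f-4d43-b5a7-7a22e6612d29
cTime=0
storageType=NAME_NODE
blockpoolID=BP-1265419265-10.0.0.1-1431234567890
layoutVersion=-60
EOF

# The clusterID here must match the clusterID in every DataNode's VERSION
# file (under dfs.datanode.data.dir, e.g. /dfs/dn/current/VERSION).
cid=$(grep '^clusterID=' "$nn_version")
echo "$cid"
rm -f "$nn_version"
```

If those files are present and the cluster IDs agree, re-installing Cloudera Manager should not touch the data itself.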

Cloudera Employee
Posts: 275
Registered: ‎07-08-2013

Re: Re-Installation of Cloudera manager and recovering existing parcel and HDFS

[ Edited ]

To reiterate what you've run:

# Remove the Cloudera yum repo -- see [1] for how to reinstall it
rm -vRf /etc/yum.repos.d/cloudera* /etc/cloudera-*

# Remove Cloudera shared libraries and all the Cloudera yum cached package files.
# /var/lib/cloudera-scm-agent also contains the agent's UUID [3], so you have
# likely cleared this as well. This effectively means that once you re-install
# the agent packages, the agent/host will not be associated with any cluster.
# We could fix this, but you'll need your UUID from the database.
rm -vRf /usr/share/cmf /var/lib/cloudera* /var/cache/yum/cloudera*

# Remove Cloudera logs
rm -vRf /var/log/cloudera-*

# Remove Cloudera yum/rpm packages
yum remove cloudera*

# Clean up the various things which accumulate in the yum cache directory
# over time
yum clean all
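On the UUID point: if you can recover the host's identifier from the Cloudera Manager database, you can write it back before restarting the agent. The sketch below uses a temporary directory in place of /var/lib/cloudera-scm-agent, and both the example query and the UUID value are assumptions (verify the table and column names against your CM version's schema):

```shell
# Sketch only: the agent's UUID normally lives in
# /var/lib/cloudera-scm-agent/uuid and was wiped by the rm above.
# You would first look it up in the CM database, e.g. (assumed schema):
#   psql -U scm -d scm -c "SELECT host_id, name FROM hosts;"
agent_dir=$(mktemp -d)   # stands in for /var/lib/cloudera-scm-agent
uuid="6b9f7a2c-0d3e-4f5a-9c1b-2e8d4a6f0c13"   # recovered UUID (illustrative)
printf '%s' "$uuid" > "$agent_dir/uuid"       # no trailing newline
cat "$agent_dir/uuid"
```

After writing the real file, restarting the agent (service cloudera-scm-agent restart) should let the host re-associate with the cluster; failing that, re-adding the host through the CM wizard also works.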

 

But then I tried to upgrade Cloudera Manager itself. Unfortunately, after that I failed to install/upgrade the Cloudera agent on the nodes.

Did you fail to upgrade Cloudera Manager Server and Agents? 

What version of Cloudera Manager are you looking to upgrade to?

What is the value of 'com.cloudera.cmf.db.type' in the /etc/cloudera-scm-server/db.properties file?
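For reference, a quick way to pull that value out of the config. The sketch runs against a mock file with illustrative contents; on a real server you would point the grep at /etc/cloudera-scm-server/db.properties:

```shell
# Mock db.properties; the real file is /etc/cloudera-scm-server/db.properties.
db_props=$(mktemp)
cat > "$db_props" <<'EOF'
com.cloudera.cmf.db.type=postgresql
com.cloudera.cmf.db.host=localhost:7432
EOF

# Extract just the database type.
db_type=$(grep '^com.cloudera.cmf.db.type=' "$db_props" | cut -d= -f2)
echo "$db_type"
rm -f "$db_props"
```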

 

From the supplied information, you can still re-install Cloudera Manager following the installation instructions [1], and since you're behind a proxy you could download the .rpms [2] and install them manually.
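The manual route in [2] can be scripted along these lines. The proxy host/port and the RPM filename below are placeholders, not real values: browse the archive directory in [2] to get the exact package names for your target version.

```shell
# Placeholder proxy -- substitute your organization's proxy host:port.
export http_proxy="http://proxy.example.com:3128"
export https_proxy="$http_proxy"

# CM packages for RHEL/CentOS 6 (see [2]); the filename is illustrative,
# list the archive directory in a browser to get the exact names.
base="http://archive.cloudera.com/cm5/redhat/6/x86_64/cm/5/RPMS/x86_64"
pkg="cloudera-manager-agent-5.7.0-1.cm570.p0.76.el6.x86_64.rpm"
echo "$base/$pkg"

# Then, on each node:
#   wget "$base/$pkg"
#   yum --nogpgcheck localinstall "$pkg"
```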

 

 

 

[1] http://www.cloudera.com/documentation/enterprise/latest/topics/cm_ig_install_path_b.html

[2] i.e. RedHat 6 variant CM packages - http://archive.cloudera.com/cm5/redhat/6/x86_64/cm/5/RPMS/x86_64/

[3] Directory that store Cloudera Manager Agent state that persists across instances of the agent process and system reboots. The agent's UUID is stored here. Default: /var/lib/cloudera-scm-agent.

 

Explorer
Posts: 9
Registered: ‎05-12-2016

Re: Re-Installation of Cloudera manager and recovering existing parcel and HDFS

[ Edited ]

Michalis wrote:

 

Did you fail to upgrade Cloudera Manager Server and Agents? 

Yes

What version of Cloudera Manager you are looking to upgrade to?

The latest [I'm now on Version: Cloudera Express 5.7.0 (#76 built by jenkins on 20160401-1334 git: ec0e7e69444280aa311511998bd83e8e6572f61c)]

What is the value for your 'com.cloudera.cmf.db.type' in the /etc/cloudera-scm-server/db.properties file.

It is com.cloudera.cmf.db.type=postgresql

 

Explorer
Posts: 9
Registered: ‎05-12-2016

Re: Re-Installation of Cloudera manager and recovering existing parcel and HDFS

I was able to recover by changing the cluster ID in the VERSION file stored locally on the disk of each node.
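For anyone hitting the same issue, the fix above can be sketched as follows. The paths and IDs are illustrative: the real files live under dfs.datanode.data.dir (e.g. /dfs/dn/current/VERSION) on each node, and the authoritative clusterID comes from the NameNode's own VERSION file:

```shell
# Mock a DataNode VERSION file with a stale clusterID; in reality this
# is e.g. /dfs/dn/current/VERSION on each node.
dn_version=$(mktemp)
cat > "$dn_version" <<'EOF'
storageID=DS-1234abcd
clusterID=CID-old-0000-0000-000000000000
cTime=0
storageType=DATA_NODE
layoutVersion=-56
EOF

# clusterID taken from the NameNode's VERSION file (illustrative value).
nn_cluster_id="CID-6d521d4e-0a2f-4d43-b5a7-7a22e6612d29"

# Rewrite the stale ID in place, then restart the DataNode role.
sed -i "s/^clusterID=.*/clusterID=${nn_cluster_id}/" "$dn_version"
grep '^clusterID=' "$dn_version"
```

With the IDs matching again, the DataNodes can rejoin the NameNode's cluster after a restart.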
