
Mixed parcels and packages - cluster unable to restart

Explorer

Hi all

 

So I managed to hose up my cluster. Here is what I did:

  • Installed CDH 5.11.1 on my Centos7 cluster using RPMs via CM and everything is working
  • One of my devs asked for Spark2 on the cluster
  • Attempted to install the Spark2 parcel on the cluster but it complained about missing CDH5 parcel dependency
  • Installed the CDH 5.11.1 parcel (big mistake!)
  • Installed Spark2 parcel
  • Spark2 didn't seem to be working right due to library issues, probably because both the parcels and the RPMs were installed
  • Removed the CDH parcel which left the cluster in a funky state
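A quick way to confirm this mixed state on a node is to check for both the RPM packages and an extracted parcel directory. This is only a sketch: the package names and the `/opt/cloudera/parcels` path are assumed from a standard CDH layout, and `install_state` is a hypothetical helper, not a Cloudera tool.

```shell
#!/bin/sh
# Sketch: detect whether a node has an RPM install, a parcel install, or both.
# The "mixed" state (RPMs plus parcels) is what tends to confuse CM.

has_rpms() {
  # CDH RPM installs register hadoop packages with rpm
  rpm -qa 2>/dev/null | grep -q '^hadoop' && echo yes || echo no
}

has_parcels() {
  # Parcel installs extract under /opt/cloudera/parcels
  [ -d /opt/cloudera/parcels ] && [ -n "$(ls -A /opt/cloudera/parcels 2>/dev/null)" ] \
    && echo yes || echo no
}

install_state() {
  # Classify the node from the two yes/no answers
  rpms=$1; parcels=$2
  if   [ "$rpms" = yes ] && [ "$parcels" = yes ]; then echo mixed
  elif [ "$parcels" = yes ]; then echo parcels
  elif [ "$rpms" = yes ]; then echo rpms
  else echo none
  fi
}

install_state "$(has_rpms)" "$(has_parcels)"
```

If this prints `mixed` on your nodes, you are in the same both-installed situation described above.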

 

So all my original services are still listed in CM but are stopped. Attempting to start any of them pops up an error about requiring additional parcels. The originally installed CDH5 RPMs are still installed. From what I can tell, CM is confused and thinks it should still be using the parcels to provide the services.

 

How can I get CM to use the installed RPMs?

 

Help!

1 ACCEPTED SOLUTION

Explorer

OK, I fixed the issue. Noting it here for any other poor soul who does the same thing.

 

I only removed the CDH parcel and left the Spark2 one. This caused all the issues with CM being unable to restart the cluster. Once I deactivated the Spark2 parcel, CM was happy: it pushed the new configs out and was able to start the cluster.
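For anyone who would rather script the deactivation than click through the CM UI, the same operation is exposed through the Cloudera Manager REST API. This is a hedged sketch: the host, credentials, cluster name, API version (`v16`), and parcel version below are placeholders you would substitute from your own CM instance.

```shell
#!/bin/sh
# Sketch: deactivate a parcel through the CM REST API.
# Host, credentials, cluster name, API version and parcel version are placeholders.

deactivate_url() {
  # Build the CM API endpoint for deactivating one parcel version
  cm=$1; cluster=$2; product=$3; version=$4
  echo "$cm/api/v16/clusters/$cluster/parcels/products/$product/versions/$version/commands/deactivate"
}

CM_HOST="http://cm-host.example.com:7180"   # placeholder CM server
CLUSTER="Cluster1"                          # URL-encode spaces if your cluster name has them
PARCEL_VERSION="<spark2-parcel-version>"    # exact version string from CM's Parcels page

# Uncomment to actually call CM (requires admin credentials):
# curl -u admin:admin -X POST "$(deactivate_url "$CM_HOST" "$CLUSTER" SPARK2 "$PARCEL_VERSION")"
echo "$(deactivate_url "$CM_HOST" "$CLUSTER" SPARK2 "$PARCEL_VERSION")"
```

After deactivating, redeploying client configuration from CM (as the fix above describes) is what pushes out configs that no longer reference the parcels.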


2 REPLIES 2

Explorer

In /etc/hadoop/conf.cloudera.hdfs/__cloudera_metadata__ I found a configuration item named "parcels_in_use".

 

There are also references to the parcel install (/opt/cloudera/parcels) in hadoop-env.sh environment variables.
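Those references can be listed with a recursive grep. A small sketch (the config directory name is the one from this thread; yours may differ per service):

```shell
#!/bin/sh
# Sketch: list the client-config files that still point at the parcel install.
CONF_DIR=/etc/hadoop/conf.cloudera.hdfs
grep -rl '/opt/cloudera/parcels' "$CONF_DIR" 2>/dev/null
```

Note that CM regenerates these files when it deploys client configuration, so hand edits are likely to be overwritten; this is only useful for seeing how far the parcel paths have spread.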

 

Can I change these settings on all the servers to fix the issue?
