Created on 02-05-2015 01:35 PM - edited 09-16-2022 02:20 AM
I am trying to automatically deploy CDH version 4.7.1 using the Python API for Cloudera Manager 5.3.1. I am following this example:
https://github.com/cloudera/cm_api/blob/master/python/examples/auto-deploy/deploycloudera.py
Once I initialize the cluster, create all the services I need (ZooKeeper, HDFS, MapReduce, and HBase), and start the cluster, everything is OK except for one detail: all the services show a Concerning status because of the following issue:
Mismatched CDH versions: host has NONE but role expects 4
I've tried to update the CDH version of the cluster manually with:
cluster.update_cdh_version("4.7.1")
but that did not update the CDH version of the hosts. Does anyone know how to fix this problem?
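For reference, a minimal sketch of what I would expect to work instead, pinning the version when the cluster is created rather than updating it afterwards. The helper name is my own, and I am assuming `create_cluster` accepts a `fullVersion` argument in this cm_api release:

```python
def create_cdh4_cluster(api, name="cluster1"):
    """Create a cluster pinned to CDH 4.7.1 at creation time
    (api is a cm_api ApiResource; helper name is hypothetical)."""
    # version selects the CDH major line; fullVersion pins the exact release
    return api.create_cluster(name, version="CDH4", fullVersion="4.7.1")
```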
Thanks
Created 02-06-2015 09:57 AM
Thanks Guatam and Darren for your suggestions. I got things working.
I am guessing that my troubles appeared because I was debugging step by step, creating and destroying the cluster and services many times. Once I ran the entire process from beginning to end (after debugging each step), everything works and I can see the CDH 4 version on each host.
Cheers
Javier
Created 02-06-2015 09:54 AM
Thanks Darren,
I am downloading, distributing, and activating the parcel for CDH 4.7.1 using the API commands:
myparcel.start_download()
myparcel.start_distribution()
myparcel.activate()
and I follow the progress in Cloudera Manager 5.3.1, in the Parcels section. I can't see any errors.
As in the example mentioned in my post, I check the state of the parcel using:
if myparcel.state is not None and myparcel.state.errors:
    raise IOError(str(myparcel.state.errors))
No exception is thrown.
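One thing that may matter, sketched below: the three parcel calls return immediately, so I poll between stages until the parcel actually reaches the expected stage before issuing the next call. I am assuming the cm_api parcel object exposes `stage` and `state` as in the linked example; `get_parcel` here is any callable that re-fetches the parcel from the cluster:

```python
import time

def wait_for_parcel_stage(get_parcel, target_stage, timeout=600, poll=5):
    """Poll until the parcel reaches target_stage (e.g. 'DOWNLOADED',
    'DISTRIBUTED', 'ACTIVATED'), re-raising any reported errors."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        parcel = get_parcel()  # re-fetch the current parcel object
        if parcel.state is not None and parcel.state.errors:
            raise IOError(str(parcel.state.errors))
        if parcel.stage == target_stage:
            return parcel
        time.sleep(poll)
    raise RuntimeError("timed out waiting for parcel stage %s" % target_stage)
```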
Created 02-06-2015 09:49 AM
Thanks Guatam
I am installing my hosts using the command:
cluster.host_install(user, hostnames, private_key)
And it works fine. The output that I get on stdout is below. I can't see anything regarding packages, though:
{
"agentUserId" : 0,
"allHostDnsErrors" : [ ],
"allHostDnsSuccesses" : [ 1, 2 ],
"allHostsDnsAvgDurationMillis" : 0,
"allHostsDnsCount" : 4,
"allHostsDnsMaxDurationMillis" : 1,
"etcHostsError" : null,
"etcHostsMessages" : [ ],
"etcKrbConfMessages" : [ ],
"extantInitdErrors" : [ ],
"groupData" : "hdfs:x:492:\nmapred:x:489:\nzookeeper:x:491:\noozie:x:485:\nhbase:x:484:\nhue:x:483:\ncloudera-scm:x:496:\nhadoop:x:495:hdfs,mapred,yarn\nhive:x:487:\nsqoop:x:494:sqoop2\nsqoop2:x:486:\nyarn:x:488:\nhttpfs:x:490:\n",
"hostDnsErrors" : [ ],
"hostname" : "myhostname.com",
"jceStrength" : 0,
"kernelVersion" : "2.6.32-431.29.2.el6.x86_64",
"kernelVersionException" : null,
"localHostIpError" : null,
"localhostIp" : "127.0.0.1",
"nowMillis" : 1423244510387,
"pythonVersionException" : null,
"pythonVersionOk" : true,
"rhelRelease" : "CentOS release 6.5 (Final)",
"runExceptions" : [ ],
"swappiness" : "0",
"swappinessException" : null,
"timeZone" : "UTC+00:00",
"transparentHugePagesDefrag" : null,
"transparentHugePagesEnabled" : null,
"transparentHugePagesException" : null,
"userData" : "hdfs:x:495:492:Hadoop HDFS:/var/lib/hadoop-hdfs:/bin/bash\nmapred:x:492:489:Hadoop MapReduce:/var/lib/hadoop-mapreduce:/bin/bash\nzookeeper:x:494:491:ZooKeeper:/var/run/zookeeper:/bin/false\noozie:x:487:485:Oozie User:/var/lib/oozie:/bin/false\nhbase:x:486:484:HBase:/var/run/hbase:/bin/false\nhue:x:485:483:Hue:/usr/share/hue:/bin/false\ncloudera-scm:x:497:496:Cloudera Manager:/var/lib/cloudera-scm-server:/sbin/nologin\nsqoop:x:491:494:Sqoop:/var/lib/sqoop:/bin/false\nsqoop2:x:488:486:Sqoop 2 User:/var/run/sqoop2:/sbin/nologin\nyarn:x:490:488:Hadoop Yarn:/var/lib/hadoop-yarn:/bin/bash\nhttpfs:x:493:490:Hadoop HTTPFS:/var/run/hadoop-httpfs:/bin/bash\n"
}
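For completeness, after kicking off host_install I block on the returned command before moving on. A minimal sketch, assuming the command object behaves like other cm_api ApiCommand results (a `wait()` that returns the finished command, plus `success` and `resultMessage` fields); the helper name is my own:

```python
def ensure_command_success(cmd, timeout=None):
    """Block until an API command finishes and raise if it failed."""
    cmd = cmd.wait(timeout)  # wait() returns the updated command object
    if not cmd.success:
        raise RuntimeError("command failed: %s" % cmd.resultMessage)
    return cmd
```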