Member since: 05-21-2016
Posts: 5
Kudos Received: 0
Solutions: 0
05-26-2016
03:38 AM
Spark cluster also built successfully with cbd 1.2.3!
05-26-2016
03:14 AM
Thanks @Janos Matyas! I updated cbd to 1.2.3 and issued the following commands: cbd kill && cbd regenerate && cbd start. It worked perfectly and built the cluster. Thanks for the help! I am now building a Spark cluster with the sample blueprint; I will post my experience with the Spark cluster here as well. A recap of the steps is below.
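For anyone landing here with the same problem, a quick annotated recap of what worked for me (my own reading of what each step does; a sketch, not official docs):

cbd update          # upgrade the deployer binary to the latest release (1.2.3 here)
cbd kill            # stop the running deployer containers
cbd regenerate      # regenerate the deployer's generated config files for the new version
cbd start           # bring Cloudbreak back up on the new version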
05-25-2016
05:28 AM
Thanks Janos! I was on CBD 1.2.2; to update to 1.2.3 I did the below in the image:

cbd doctor
===> Deployer doctor: Checks your environment, and reports a diagnose.
uname: Linux ip-xxxxxxx.ec2.internal xxxxxxxxxx.x86_64 #1 SMP Sat Jan 23 04:54:55 EST 2016 x86_64 x86_64 x86_64 GNU/Linux
local version:1.2.2
latest release:1.2.3
[WARN] !!!!!!!!!!!!!!!!!!!!!!!!!!!!
[WARN] Your version is outdated
[WARN] !!!!!!!!!!!!!!!!!!!!!!!!!!!!
[WARN] Please update it by: cbd update
docker command exists: OK
docker client version: 1.9.1
docker server version: 1.9.1
ping 8.8.8.8 on host: OK
ping github.com on host: OK
ping 8.8.8.8 in container: OK
ping github.com in container: OK

[cloudbreak@ip-xxxxxxxxxxx cloudbreak-deployment]$ cbd update
Updating /usr/bin/cbd from url: https://github.com/sequenceiq/cloudbreak-deployer/releases/download/v1.2.3/cloudbreak-deployer_1.2.3_Linux_x86_64.tgz
mv: try to overwrite '/usr/bin/cbd', overriding mode 0755 (rwxr-xr-x)? y
[cloudbreak@ip-xxxxxxx cloudbreak-deployment]$ cbd update

I have done cbd regenerate as well. I will try in the order you advised (cbd kill && cbd regenerate && cbd start) and see what happens. I will post my findings here. Thanks!
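As a sanity check (my own habit, not something from the docs): rerunning cbd doctor after the update should confirm it took effect, since it prints the installed version:

cbd doctor          # "local version" should now read 1.2.3 and the outdated warning should be gone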
05-21-2016
04:59 AM
I used the us-east-1 public cloud image ami-5e20c033 to provision the Cloudbreak deployer container, and performed a cbd update too.
05-21-2016
04:57 AM
I created the role-based credentials and tried to create a cluster of m3.large EC2 instances with the template (miniviable-aws) and the hdp-small-default blueprint. I got the below event logs:

5/20/2016 11:08:33 PM finaltry - create in progress: Creating infrastructure
5/20/2016 11:10:41 PM finaltry - update in progress: Infrastructure creation took 127 seconds
5/20/2016 11:10:43 PM finaltry - available: Infrastructure metadata collection finished
5/20/2016 11:12:40 PM finaltry - update in progress: Bootstrapping infrastructure cluster
5/20/2016 11:13:19 PM finaltry - update in progress: Setting up infrastructure metadata
5/20/2016 11:13:19 PM finaltry - update in progress: Starting Ambari cluster containers
5/20/2016 11:13:50 PM finaltry - update in progress: Starting Ambari cluster
5/20/2016 11:17:02 PM finaltry - update in progress: Building Ambari cluster; Ambari ip:xx.xx.xx.xxxx
5/20/2016 11:44:53 PM finaltry - create failed: Ambari cluster could not be created. Reason: com.sequenceiq.cloudbreak.service.cluster.AmbariOperationFailedException: Cluster installation failed to complete, please check the Ambari UI for more details. You can try to reinstall the cluster with a different blueprint or fix the failures in Ambari and sync the cluster with Cloudbreak later.

When I checked the error log in Ambari, it said: "Python script has been killed due to timeout after waiting 1800 secs". So I changed the timeout to 3600 (agent.package.install.task.timeout=3600) in /etc/ambari-server/conf/ambari.properties in the container (steps sketched below) and restarted the installations individually in Ambari, but the MySQL Server installation takes forever. Is there a workaround, please? Any help is appreciated. Thanks!
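For reference, this is roughly how I applied the timeout change inside the container (a sketch: the container name below is a placeholder for illustration, check docker ps for the real one; I believe the Ambari server needs a restart for the property to take effect):

docker ps | grep ambari                          # find the Ambari server container name
docker exec -it <ambari-server-container> bash   # placeholder name, replace with the real one

# inside the container: raise the package install timeout from 1800 to 3600 seconds
sed -i 's/^agent.package.install.task.timeout=.*/agent.package.install.task.timeout=3600/' /etc/ambari-server/conf/ambari.properties

# restart the Ambari server so the new timeout takes effect
ambari-server restart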
Labels: Hortonworks Cloudbreak