Cloudbreak not successfully launching clusters on AWS

Super Guru

I have been using accounts.sequenceiq.com for some time now and have launched hundreds of clusters. I tried today and many of the services are failing to install. I have a wide-open VPC with wide-open security (basically no security at all). In the last day I have seen several others on HCC report that they cannot launch clusters for a similar reason. Here is the Ambari log from the DataNode install:

2016-05-17 01:10:22,916 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2016-05-17 01:10:22,917 - Group['hadoop'] {}
2016-05-17 01:10:22,920 - Adding group Group['hadoop']
2016-05-17 01:10:23,126 - Group['users'] {}
2016-05-17 01:10:23,126 - Group['knox'] {}
2016-05-17 01:10:23,126 - Adding group Group['knox']
2016-05-17 01:10:23,243 - User['hive'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2016-05-17 01:10:23,243 - Adding user User['hive']
2016-05-17 01:10:23,541 - User['oozie'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users']}
2016-05-17 01:10:23,541 - Adding user User['oozie']
2016-05-17 01:10:23,666 - User['ambari-qa'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users']}
2016-05-17 01:10:23,667 - Adding user User['ambari-qa']
2016-05-17 01:10:23,787 - User['hdfs'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2016-05-17 01:10:23,787 - Adding user User['hdfs']
2016-05-17 01:10:23,908 - User['knox'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2016-05-17 01:10:23,908 - Adding user User['knox']
2016-05-17 01:10:24,032 - User['mapred'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2016-05-17 01:10:24,033 - Adding user User['mapred']
2016-05-17 01:10:24,226 - User['hbase'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2016-05-17 01:10:24,227 - Adding user User['hbase']
2016-05-17 01:10:24,347 - User['tez'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users']}
2016-05-17 01:10:24,348 - Adding user User['tez']
2016-05-17 01:10:24,478 - User['zookeeper'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2016-05-17 01:10:24,478 - Adding user User['zookeeper']
2016-05-17 01:10:24,605 - User['falcon'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users']}
2016-05-17 01:10:24,606 - Adding user User['falcon']
2016-05-17 01:10:24,730 - User['sqoop'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2016-05-17 01:10:24,730 - Adding user User['sqoop']
2016-05-17 01:10:24,855 - User['yarn'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2016-05-17 01:10:24,855 - Adding user User['yarn']
2016-05-17 01:10:24,979 - User['hcat'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2016-05-17 01:10:24,979 - Adding user User['hcat']
2016-05-17 01:10:25,106 - User['ams'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2016-05-17 01:10:25,106 - Adding user User['ams']
2016-05-17 01:10:25,230 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2016-05-17 01:10:25,236 - Writing File['/var/lib/ambari-agent/tmp/changeUid.sh'] because it doesn't exist
2016-05-17 01:10:25,236 - Changing permission for /var/lib/ambari-agent/tmp/changeUid.sh from 644 to 555
2016-05-17 01:10:25,237 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'}
2016-05-17 01:10:25,342 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] due to not_if
2016-05-17 01:10:25,342 - Directory['/tmp/hbase-hbase'] {'owner': 'hbase', 'recursive': True, 'mode': 0775, 'cd_access': 'a'}
2016-05-17 01:10:25,343 - Creating directory Directory['/tmp/hbase-hbase'] since it doesn't exist.
2016-05-17 01:10:25,343 - Changing owner for /tmp/hbase-hbase from 0 to hbase
2016-05-17 01:10:25,343 - Changing permission for /tmp/hbase-hbase from 755 to 775
2016-05-17 01:10:25,344 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2016-05-17 01:10:25,344 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase'] {'not_if': '(test $(id -u hbase) -gt 1000) || (false)'}
2016-05-17 01:10:25,453 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase'] due to not_if
2016-05-17 01:10:25,453 - Group['hdfs'] {}
2016-05-17 01:10:25,453 - Adding group Group['hdfs']
2016-05-17 01:10:25,568 - User['hdfs'] {'fetch_nonlocal_groups': True, 'groups': [u'hadoop', u'hdfs']}
2016-05-17 01:10:25,569 - Modifying user hdfs
2016-05-17 01:10:25,686 - Directory['/etc/hadoop'] {'mode': 0755}
2016-05-17 01:10:25,686 - Creating directory Directory['/etc/hadoop'] since it doesn't exist.
2016-05-17 01:10:25,687 - Directory['/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 0777}
2016-05-17 01:10:25,687 - Creating directory Directory['/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir'] since it doesn't exist.
2016-05-17 01:10:25,687 - Changing owner for /var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir from 0 to hdfs
2016-05-17 01:10:25,687 - Changing group for /var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir from 0 to hadoop
2016-05-17 01:10:25,687 - Changing permission for /var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir from 755 to 777
2016-05-17 01:10:25,698 - Repository['HDP-2.4'] {'base_url': 'http://public-repo-1.hortonworks.com/HDP/centos7/2.x/updates/2.4.2.0', 'action': ['create'], 'components': [u'HDP', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'HDP', 'mirror_list': None}
2016-05-17 01:10:25,708 - File['/etc/yum.repos.d/HDP.repo'] {'content': InlineTemplate(...)}
2016-05-17 01:10:25,709 - Writing File['/etc/yum.repos.d/HDP.repo'] because it doesn't exist
2016-05-17 01:10:25,710 - Repository['HDP-UTILS-1.1.0.20'] {'base_url': 'http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.20/repos/centos7', 'action': ['create'], 'components': [u'HDP-UTILS', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'HDP-UTILS', 'mirror_list': None}
2016-05-17 01:10:25,713 - File['/etc/yum.repos.d/HDP-UTILS.repo'] {'content': InlineTemplate(...)}
2016-05-17 01:10:25,713 - Writing File['/etc/yum.repos.d/HDP-UTILS.repo'] because it doesn't exist
2016-05-17 01:10:25,713 - Package['unzip'] {}
2016-05-17 01:10:33,898 - Skipping installation of existing package unzip
2016-05-17 01:10:33,898 - Package['curl'] {}
2016-05-17 01:10:33,922 - Skipping installation of existing package curl
2016-05-17 01:10:33,922 - Package['hdp-select'] {}
2016-05-17 01:10:33,946 - Installing package hdp-select ('/usr/bin/yum -d 0 -e 0 -y install hdp-select')
2016-05-17 01:11:12,564 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2016-05-17 01:11:12,565 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2016-05-17 01:11:12,570 - Package['rpcbind'] {}
2016-05-17 01:11:12,647 - Installing package rpcbind ('/usr/bin/yum -d 0 -e 0 -y install rpcbind')
2016-05-17 01:12:34,774 - Package['hadoop_2_4_*'] {}
2016-05-17 01:12:34,785 - Installing package hadoop_2_4_* ('/usr/bin/yum -d 0 -e 0 -y install 'hadoop_2_4_*'')
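
For what it's worth, that is where the install stalls: at the hadoop_2_4_* package step. To rule out repository reachability, a quick check from a failing node is to fetch the yum metadata for the repo (base_url taken from the Repository['HDP-2.4'] entry in the log above; repodata/repomd.xml is the standard yum metadata path):

# check that the node can reach the public HDP repo
curl -I http://public-repo-1.hortonworks.com/HDP/centos7/2.x/updates/2.4.2.0/repodata/repomd.xml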

1 Reply

Re: Cloudbreak not successfully launching clusters on AWS

Contributor

Can you retry now on the hosted version? It should work now. Also, if you are using CBD (Cloudbreak Deployer), you should do the following: in your Profile, export DOCKER_TAG_CLOUDBREAK=1.2.6-rc.3
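
For example (assuming the usual setup, where the Profile file sits in the directory you run cbd from), your Profile would end up containing a line like:

# Profile - sourced by cbd at startup
# pin the Cloudbreak image to the fixed release
export DOCKER_TAG_CLOUDBREAK=1.2.6-rc.3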

And then restart CBD with:

cbd kill && cbd regenerate && cbd start
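
The regenerate step is what makes CBD rebuild its generated configuration from the Profile (including the docker-compose.yml it launches the containers from), so the new DOCKER_TAG_CLOUDBREAK value actually takes effect on the next start.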
