Member since: 09-29-2015
Posts: 29
Kudos Received: 20
Solutions: 10
My Accepted Solutions
Title | Views | Posted |
---|---|---|
 | 282 | 08-18-2016 07:51 PM |
 | 427 | 08-18-2016 07:15 AM |
 | 5018 | 06-10-2016 04:20 PM |
 | 228 | 06-10-2016 04:17 PM |
 | 849 | 06-10-2016 02:27 PM |
02-15-2017
11:59 AM
As per the documentation, it's not supported yet.
OpenStack Supported Versions: Cloudbreak was tested against the following versions of the Red Hat Distribution of OpenStack (RDO): Juno, Kilo, Liberty, Mitaka.
01-10-2017
06:24 PM
1 Kudo
EMR is doing HBase on S3 with the WAL stored on local HDFS; a better approach is to keep the WAL on EBS or EFS. WAL on local HDFS is not really recommended, as you can lose data if the node goes away.
11-09-2016
02:59 PM
1 Kudo
You have more information in the logs and the CloudFormation template. Nevertheless, see this: http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Appendix_Limits.html - "The maximum number of addresses has been reached" means that on AWS you are hitting your account limits. You should ask AWS to increase them.
11-04-2016
04:14 PM
3 Kudos
Yes, we did it, and we will come up with an example next week; stay tuned. Feel free to go ahead with your thoughts on this, however this is what is coming:
1. Authentication from LDAP/AD.
2. LDAP/AD group mappings to UAA scopes. As you know, in Cloudbreak every resource has a scope, thus based on your LDAP/AD settings you can allow or restrict users' operations on resources.
3. Visibility of resources based on LDAP/AD groups. This requires some code changes on our side and we are currently working on it.
For 1-2 you can already do that by creating a UAA mapping table and generating the group-to-UAA-scope mappings. Copy the group names to a file (groups.txt):
select displayname from groups;
(remove the leading space characters and empty lines)
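The cleanup step just mentioned (removing leading spaces and empty lines from the query output) can be sketched in plain shell. This is only an illustration: raw_groups.txt is a hypothetical name for the raw psql dump, and the group names below are made up.

```shell
# Hypothetical raw output of "select displayname from groups;"
# (made-up names, created here just so the example is runnable).
printf '   cloudbreak.blueprints\n\n   cloudbreak.templates\n' > raw_groups.txt

# Strip leading whitespace and drop empty lines to get groups.txt.
sed 's/^[[:space:]]*//' raw_groups.txt | grep -v '^$' > groups.txt

cat groups.txt
```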
Then you can use sigil to generate the SQL INSERT statements for the external group mapping table. Save this into a file (sigil.template):
{{ range $k, $v := stdin|split "\n"}}
INSERT INTO external_group_mapping (group_id, external_group, added, origin) VALUES ((select id from groups where displayname='{{$v}}'), 'cn=admin,ou=scopes,dc=ad,dc=seq,dc=com', '2016-09-30 19:28:24.255', 'ldap');{{end}}
Then run: cat groups.txt | sigil -f sigil.template
You can get the sigil tool from https://github.com/gliderlabs/sigil.
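If sigil is not at hand, a plain shell loop over groups.txt produces equivalent INSERT statements. This is a sketch: the DN and timestamp are copied verbatim from the template above, and a one-line sample groups.txt is created here just to make it runnable.

```shell
# Sample input; in practice groups.txt comes from the query described above.
printf 'cloudbreak.blueprints\n' > groups.txt

# Emit one INSERT per group name, mirroring the sigil template above.
while IFS= read -r g; do
  [ -z "$g" ] && continue   # skip empty lines
  printf "INSERT INTO external_group_mapping (group_id, external_group, added, origin) VALUES ((select id from groups where displayname='%s'), 'cn=admin,ou=scopes,dc=ad,dc=seq,dc=com', '2016-09-30 19:28:24.255', 'ldap');\n" "$g"
done < groups.txt > inserts.sql

cat inserts.sql
```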
08-18-2016
07:51 PM
First of all, the latest release that used Docker on public clouds (AWS, GCP and Azure) was 1.2.3. Versions 1.3.0 and newer do not use Docker to run the Hadoop services. Anyway, for 1.2.3 the answers are:
1. Containers were started with net=host, thus there was one container per VM; Docker was mostly used for packaging and distribution. Every node ran one container, so you needed as many nodes as the size of the cluster.
2. You can, but the container was getting the full VM resources (see #1).
3. You need to install the Cloudbreak application (anywhere; that can be an EC2 instance, for example, but on-prem as well). The Cloudbreak application (note, it's not the cluster) is composed of several micro-services, and these run inside containers. It can be driven via GUI, CLI or API, and every hostgroup can have different instance types, so the cluster can be heterogeneous.
4. http://sequenceiq.com/cloudbreak-docs/
5. It depends on the number of nodes you'd like to provision. There are no additional costs on top of the EC2 price, thus you can do fairly easy math: multiply the number of nodes you think your cluster will have by the number of hours ... In Cloudbreak you can fully track usage costs on the Accounts tab.
08-18-2016
07:15 AM
1 Kudo
Hi Constantin,
Currently this is a tech preview due to some limitations. There are some changes underway within Mesos (unified containerizer, IP per container, reverse lookups, storage drivers, etc., to name a few), so expect large changes here.
Nevertheless, if someone is happy with the current limitations (namely running with net=host), the code is stable and works well (we and a few others have test/dev clusters).
Should you want more details, drop me a mail. Janos
08-17-2016
07:05 PM
1 Kudo
http://sequenceiq.com/cloudbreak-docs/latest/mesos/
06-21-2016
10:06 AM
1 Kudo
There is no one-config-fits-all setup. As on any public provider where you have pre-set configs, in OpenStack you can create your own instance configs. Since Cloudbreak supports heterogeneous clusters, I suggest creating different Nova instance flavors that best fit your different HDP components or workloads (e.g. high memory for Spark, multi-core for compute-heavy workloads, etc.).
Re the OpenStack version: for now we support stock Kilo and Juno. I suggest checking the docs: http://sequenceiq.com/cloudbreak-docs/latest/openstack/ If you don't have an OpenStack cluster yet, you might consider checking these Ansible scripts: https://github.com/sequenceiq/openstack-ansible
06-15-2016
08:59 AM
In general, if you check the default blueprints in Cloudbreak you will see that none of the master hostgroups do any work when you submit a job, query, etc.; they only run master/server and UI components. All the work is done by the slave/worker hostgroups, where you have the DataNodes or, for example, the HBase RegionServers.
06-15-2016
08:56 AM
I assume you'd like to upscale a worker hostgroup, not a hostgroup which contains master components (like HIVE_SERVER in 'host_group_master_2'). Master hostgroups can't be upscaled; if you'd like to do HA (for components which support that) then it's a different story.
Bottom line: you can upscale worker hostgroups, not master ones (it does not make much sense to upscale master hostgroups anyway).
06-13-2016
03:38 PM
It seems like your account does not have the right to create a VPC (you can check this on the AWS console by trying to create a new VPC). Alternatively, you can reuse your company's VPC as well: see http://sequenceiq.com/cloudbreak-docs/latest/aws/#infrastructure-templates and check "Create a new subnet in an existing VPC" or "Use an existing subnet in an existing VPC".
06-10-2016
06:12 PM
It's not easy to read through the formatting, but apparently a few containers did not start for you (e.g. uaadb, pcdb, cbdb). Can you do a cbd kill, then a cbd regenerate and a cbd start? You should have 12 containers (microservices), among them the databases.
06-10-2016
04:26 PM
1 Kudo
For a size like this it's fine. You can check these Ansible scripts in order to set up OpenStack, then use Cloudbreak.
06-10-2016
04:20 PM
Hi guys. This is due to an old SQL driver. As stated above, you can either change the driver or wait until the next release, where the driver will be updated. Sorry for the inconvenience.
06-10-2016
04:17 PM
The container should not be updated. Do you mean the Docker version? Containers can be updated with docker pull, but as an end user or operator you should not be concerned about this.
Did you face some issues, or is this an observation?
06-10-2016
04:09 PM
Can you please SSH to the box, run docker ps and send us the results? Also, can you check whether you see any exceptions in cbd logs cloudbreak?
06-10-2016
02:27 PM
2 Kudos
You should have these ports open (as per the documentation). Also, what's in your Profile (PUBLIC_IP/PRIVATE_IP)?
- SSH (22)
- Cloudbreak API (8080)
- Identity server (8089)
- Cloudbreak GUI (3000)
- User authentication (3001)
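A quick way to verify those ports from a client machine is a bash /dev/tcp probe. This is only a sketch: it assumes bash and the coreutils timeout utility are available, and CB_HOST is a placeholder you should set to your Cloudbreak host (it defaults to 127.0.0.1 here just so the example runs).

```shell
# Probe the Cloudbreak ports listed above. CB_HOST is a placeholder;
# replace it with your Cloudbreak instance's address.
CB_HOST=${CB_HOST:-127.0.0.1}
for port in 22 8080 8089 3000 3001; do
  if timeout 2 bash -c "exec 3<>/dev/tcp/$CB_HOST/$port" 2>/dev/null; then
    echo "$CB_HOST:$port open"
  else
    echo "$CB_HOST:$port closed"
  fi
done > portcheck.txt

cat portcheck.txt
```

A closed or filtered port shows up as "closed"; the probe does not distinguish between a firewall drop and nothing listening.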
06-07-2016
08:26 AM
1 Kudo
What do you mean by local machine: physical clusters? If yes, you can't; for that you can use Ambari. Cloudbreak is for provisioning clusters in the cloud. If you use OpenStack for virtualization within your datacenter/physical boxes, then you can use Cloudbreak.
05-24-2016
07:53 PM
1 Kudo
Which version of CBD are you using (cbd version)? There was an issue in 1.2.2 (due to an updated systemd version) which has been fixed; download the latest 1.2.3 binary or do a cbd update. After that you should do a cbd kill && cbd regenerate && cbd start.
05-24-2016
07:48 PM
2 Kudos
They should not be opened globally; they only need to be accessible by the Cloudbreak application. You can deploy the Cloudbreak application (with CBD) in the same VPC (and/or subnet) where the HDP clusters are provisioned; in that case they do not need to be opened globally. If you use the hosted Cloudbreak application (cloudbreak.sequenceiq.com), then you will have to open them globally, but that's not really recommended, as you should use your own Cloudbreak instance.
05-24-2016
03:22 PM
1 Kudo
All resources created by Cloudbreak can be modified on the AWS side; for AWS it makes no difference where the API call or CloudFormation template comes from, nor does it differentiate in any way. So either you are using a different account or IAM role, as security groups created by Cloudbreak can be modified afterwards. People often do this and open or close ports on long-running clusters as needed. It's in the documentation. IMPORTANT: ports 443 and 22 need to be open in every security group, otherwise Cloudbreak won't be able to communicate with the provisioned cluster; see http://sequenceiq.com/cloudbreak-docs/latest/aws/#infrastructure-templates under "Security groups".
05-23-2016
03:03 PM
1 Kudo
For the networks we don't really care; we support all these scenarios:
- Create a new network and a new subnet
- Create a new subnet in an existing network
- Use an existing subnet in an existing network
How you configure your existing network is up to you.
For volumes we do it through the API or Heat templates, and there is no out-of-the-box customization available yet. You might want to do the LUKS encryption through Cinder volume type creation and modify the template/code in Cloudbreak to reuse that encrypted volume.
For custom things (e.g. a web server): currently it's the default Ambari stack with all the out-of-the-box components/services, though Ambari supports custom stacks.
05-18-2016
05:29 PM
You can give it a try now on the hosted version, or make this change in your cbd Profile:
export DOCKER_TAG_CLOUDBREAK=1.2.6-rc.3
Then restart cbd with: cbd kill && cbd regenerate && cbd start
05-18-2016
05:19 PM
@Liam MacInnes
Are you using RDS on AWS? If so, can you try it now? The drivers are updated on the hosted version. If you are using CBD, then you should change this in your Profile:
export DOCKER_TAG_CLOUDBREAK=1.2.6-rc.3
Then restart cbd with: cbd kill && cbd regenerate && cbd start
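The Profile change described here can be made idempotent so that repeated runs don't append duplicate lines. A sketch, with the caveat that the Profile file is created in the current directory purely for illustration (in a real deployment you would edit the Profile in your cbd deployment directory):

```shell
# Create a stand-in Profile for illustration; in a real deployment
# this file already exists in the cbd deployment directory.
touch Profile

# Append the image tag override only if it is not already set.
grep -q '^export DOCKER_TAG_CLOUDBREAK=' Profile || \
  echo 'export DOCKER_TAG_CLOUDBREAK=1.2.6-rc.3' >> Profile

cat Profile
# afterwards: cbd kill && cbd regenerate && cbd start
```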
05-18-2016
05:12 PM
Can you retry now on the hosted version - it should work now!
Also, if you are using CBD, then you should do the following:
In your Profile: export DOCKER_TAG_CLOUDBREAK=1.2.6-rc.3
Then restart CBD with: cbd kill && cbd regenerate && cbd start
05-12-2016
04:55 PM
2 Kudos
Currently it's not supported, though with the BYOS provider you might be able to do that (using the Swarm orchestrator).
04-13-2016
05:41 PM
Each cloud implementation (supported by default) uses the SPI, thus they can be used as references. Other people are doing, or have done, their own cloud integrations; usually it takes two weeks tops on the Cloudbreak side.
09-29-2015
07:54 PM
1 Kudo
You don't have to disable anything. The containers are launched with net=host, which means they inherit the host's network. Azure does not support IPv6, thus there are no routable IPv6 addresses for the containers. Besides Azure not supporting IPv6 (so this can't be an issue), by default Docker is configured with IPv4 only.
This definitely has nothing to do with IPv6 ...