Support Questions
Find answers, ask questions, and share your expertise

Issue while using custom image with cloudbreak 2.7.1


New Contributor

I am trying to use a custom image for building an HDP cluster with Cloudbreak. I created this custom image using my own Packer pipeline, not the https://github.com/hortonworks/cloudbreak-images repository. I created the JSON file as described here: https://docs.hortonworks.com/HDPDocuments/Cloudbreak/Cloudbreak-2.7.1/content/images/index.html

{
  "images": {
    "base-images": [
      {
        "date": "2018-07-13",
        "description": "Cloudbreak custom base image",
        "images": {
          "gcp": {
            "default": "testbucket/clb-image.tar.gz"
          }
        },
        "os": "centos7",
        "os_type": "redhat7",
        "uuid": "05504d24-4b14-4d73-b133-a0983aa3f65c"
      }
    ]
  },
  "versions": {
    "cloudbreak": [
      {
        "images": [
          "05504d24-4b14-4d73-b133-a0983aa3f65c"
        ],
        "versions": [
          "2.7.1"
        ]
      }
    ]
  }
}
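Since a UUID mismatch between the "images" and "versions" sections is an easy way to end up with an image Cloudbreak cannot resolve, a quick local sanity check of the catalog can rule that out. This is my own sketch, not part of Cloudbreak's tooling; `unresolved_uuids` is a made-up helper name:

```python
import json

# The catalog from the post, inlined for a self-contained check.
CATALOG = """
{
  "images": {
    "base-images": [
      {
        "date": "2018-07-13",
        "description": "Cloudbreak custom base image",
        "images": {"gcp": {"default": "testbucket/clb-image.tar.gz"}},
        "os": "centos7",
        "os_type": "redhat7",
        "uuid": "05504d24-4b14-4d73-b133-a0983aa3f65c"
      }
    ]
  },
  "versions": {
    "cloudbreak": [
      {"images": ["05504d24-4b14-4d73-b133-a0983aa3f65c"], "versions": ["2.7.1"]}
    ]
  }
}
"""

def unresolved_uuids(catalog):
    """Return UUIDs referenced under "versions" with no matching image entry."""
    defined = {img["uuid"]
               for images in catalog["images"].values()
               for img in images}
    referenced = {uuid
                  for entry in catalog["versions"]["cloudbreak"]
                  for uuid in entry["images"]}
    return referenced - defined

missing = unresolved_uuids(json.loads(CATALOG))
```

An empty result means every version mapping points at a defined image; anything returned is a dangling UUID.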

I made this file available to Cloudbreak via HTTP. The entry for my custom catalog is below, and I am able to view the image details (UUID etc.) in Cloudbreak.

cust-image http://xx.xx.xx.xx/custom-image.json

I uploaded the custom image (tar.gz) to a GCP bucket and made it readable by setting an ACL entry on the bucket.

While creating the cluster I select the cust-image catalog and the image mentioned in it, e.g. "testbucket/clb-image.tar.gz". However, stack creation fails at step 4, prepareImage. The Cloudbreak log shows the details below, which do not really indicate what the issue is.

/cbreak_cloudbreak_1 | 2018-08-01 08:12:24,005 [reactorDispatcher-35] prepareImage:90 ERROR c.s.c.c.g.GcpProvisionSetup - [owner:d2546a45-1fec-46d5-bd70-0fb4bc044404] [type:STACK] [id:4] [name:hdptest] [flow:1f81aab1-52ac-45fc-9969-0030e33d4192] [tracking:] Error occurred on 4 stack during the setup: 413 Request Entity Too Large
/cbreak_cloudbreak_1 | {
/cbreak_cloudbreak_1 |   "code" : 413,
/cbreak_cloudbreak_1 |   "errors" : [ {
/cbreak_cloudbreak_1 |     "domain" : "global",
/cbreak_cloudbreak_1 |     "message" : "Copy spanning locations and/or storage classes could not complete within 30 seconds. Please use the Rewrite method (https://cloud.google.com/storage/docs/json_api/v1/objects/rewrite) instead.",
/cbreak_cloudbreak_1 |     "reason" : "uploadTooLarge"
/cbreak_cloudbreak_1 |   } ],
/cbreak_cloudbreak_1 |   "message" : "Copy spanning locations and/or storage classes could not complete within 30 seconds. Please use the Rewrite method (https://cloud.google.com/storage/docs/json_api/v1/objects/rewrite) instead."
/cbreak_cloudbreak_1 | }
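The error message points at GCS's rewrite API: unlike a single copy call, rewrite is resumable, doing a bounded chunk of work per call and returning a token until the copy completes. A sketch of that loop with the network call stubbed out (in real code the step function would wrap the `objects.rewrite` call from the GCS JSON API; the names and chunk sizes here are illustrative):

```python
def drive_rewrite(step, token=None):
    """Call `step(token)` until it reports completion; return total bytes copied."""
    while True:
        done, rewritten, token = step(token)
        if done:
            return rewritten

def make_fake_step(chunks=3, chunk_bytes=1 << 30):
    """Fake stand-in for the GCS rewrite call: pretends the copy
    needs `chunks` round trips of `chunk_bytes` each."""
    state = {"calls": 0}
    def step(token):
        state["calls"] += 1
        done = state["calls"] >= chunks
        return done, state["calls"] * chunk_bytes, None if done else state["calls"]
    return step

total = drive_rewrite(make_fake_step())
```

The point is that each round trip stays under the server-side time limit, which is exactly what the 30-second copy in the log could not do across locations/storage classes.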

Has anyone faced a similar issue?

The custom image documentation page does not clearly specify the prerequisites for using this feature.

10 REPLIES

Re: Issue while using custom image with cloudbreak 2.7.1

Rising Star

Hi @Sachin Shinde

First of all, why did you decide to go ahead without our image-burning process?
A custom image-burning process may skip some of the required prerequisites, and on the other hand our solution is extendable with custom logic.

What is the size of the created image?

We haven't experienced anything similar with our images, which are between 15 and ~30 GB.

Br,

Tamas

Re: Issue while using custom image with cloudbreak 2.7.1

New Contributor

Hi @Tamas Bihari, thanks for the reply. We have our own existing pipeline for custom cloud image creation, so we want to see if we can use the same one for Cloudbreak. The tar.gz image created is only about 1 GB in size.

Are there any specific prerequisites for the GCP project/bucket where the image is uploaded?

Regards,

Sachin Shinde

Re: Issue while using custom image with cloudbreak 2.7.1

Rising Star

Our images in tar.gz format are also around 1 GB (for example: https://storage.cloud.google.com/sequenceiqimage/cb-hdp--1808011112.tar.gz?authuser=1&folder&organiz...

I haven't seen any special config, but our tar.gz is public:

[attached screenshot: screen-shot-2018-08-01-at-62437-pm.png, showing the bucket object's public access setting]

The copy method is called on the GCP SDK, but it timed out for you, so it was probably able to reach the source object. As far as I can tell the rewrite method shouldn't time out, but before I start looking for the issue in the code base, could you please try starting a cluster in a different region?

Our process burns the images, and at the first use of an image Cloudbreak copies it from the tar.gz. So the same method call is used for every new image, and we haven't experienced this issue.

Storage.Objects.Copy copy(@NotNull String sourceBucket,
                          @NotNull String sourceObject,
                          @NotNull String destinationBucket,
                          @NotNull String destinationObject,
                          com.google.api.services.storage.model.StorageObject content)

Re: Issue while using custom image with cloudbreak 2.7.1

New Contributor

Thanks @Tamas Bihari for the details. I will try to spin up a cluster in a different region and will update.

Meanwhile, I imported the custom image manually into the project in which the cluster needs to be created, and Cloudbreak picked up that custom image for the instances, as you mentioned: "at the first use of the image, Cloudbreak copies the image".

I am also testing the cloudbreak-images repo/pipeline (https://github.com/hortonworks/cloudbreak-images/tree/master/saltstack/base/salt) to build a custom image, using our own custom image as the base in packer.json. This way we can have our custom image along with the required prerequisites for Cloudbreak as well. Is there any specific reason why a RHEL7 build is not included for GCP in packer.json? Any known issues?

Re: Issue while using custom image with cloudbreak 2.7.1

Rising Star

Hi @Sachin Shinde

Most of the time we have worked with CentOS, and we have RedHat-based images for many providers due to customer requirements.
We don't know of any issues.

Re: Issue while using custom image with cloudbreak 2.7.1

New Contributor

Hi @Tamas Bihari,

I am able to build a RedHat image for the GCP cloud provider by making a few changes in the cloudbreak-images repository code. One challenge I am facing is how to pass custom metadata/user-data to the instances that Cloudbreak spins up during cluster creation. This additional custom user-data is needed to set a few things during the first instance boot.

Is there a way to pass this custom data so that Cloudbreak applies it during cluster instance creation?

Regards,

Sachin Shinde

Re: Issue while using custom image with cloudbreak 2.7.1

Rising Star

Hi @Sachin Shinde,

Cloudbreak has a recipe functionality that is designed to run custom bash/python scripts at different lifecycle phases of the cluster:
https://docs.hortonworks.com/HDPDocuments/Cloudbreak/Cloudbreak-2.7.1/content/recipes/index.html
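As a rough illustration of what such a recipe looks like (the file path and the logic are made up for the example, and recipes can equally be bash scripts), a Python recipe that records the node's hostname for later setup steps might be:

```python
#!/usr/bin/env python
# Illustrative Cloudbreak recipe sketch: recipes are plain scripts that
# Cloudbreak executes on each node, so first-boot style setup can live
# here instead of in instance user-data. The marker path is an example.
import socket

def record_hostname(out_path="/tmp/recipe-ran"):
    """Write the node's hostname to a marker file and return it."""
    hostname = socket.gethostname()
    with open(out_path, "w") as fh:
        fh.write(hostname + "\n")
    return hostname

if __name__ == "__main__":
    record_hostname()
```

The script is uploaded once and attached to a host group, and Cloudbreak runs it at the chosen lifecycle phase (e.g. before or after cluster install) per the recipes documentation linked above.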

Br,

Tamas

Re: Issue while using custom image with cloudbreak 2.7.1

New Contributor

Hi @Tamas Bihari, yes, I had a look at it, and it is a good option for running custom scripts after an instance boots.

My requirement is to carry out an activity during the instance boot phase that uses custom metadata passed at instance creation, for example setting the hostname of an instance based on an FQDN supplied as custom metadata. That is why I am looking for a way to pass this custom metadata via Cloudbreak.

Regards,

Sachin Shinde

Re: Issue while using custom image with cloudbreak 2.7.1

Rising Star

Hi @Sachin Shinde,

Cloudbreak can configure a custom hostname on the provisioned clusters, but this hasn't been documented yet because it is only available through our CLI tool. It can be configured in the cluster JSON template via the "customDomain" key:

{
  "general": {
    "credentialName": "mycred",
    "name": "myCluster-1"
  },
  "customDomain": {
    "customDomain": "hortonworks.com",
    "customHostname": "prod"
  },
  "placement": {
    "availabilityZone": "eu-west-1a",
    "region": "eu-west-1"
  },
........
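For intuition only, the two custom fields combine into per-node FQDNs roughly like this. The exact node-numbering scheme Cloudbreak applies is an assumption on my part, so verify against the hostnames it actually generates:

```python
def node_fqdn(custom_hostname, custom_domain, index):
    """Assemble a node FQDN from the customDomain block.
    The hostname+index pattern is an assumed convention, not confirmed."""
    return "{}{}.{}".format(custom_hostname, index, custom_domain)

# With the template values above ("prod" / "hortonworks.com"):
fqdns = [node_fqdn("prod", "hortonworks.com", i) for i in range(3)]
```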

Other possible solution on AWS:
https://docs.hortonworks.com/HDPDocuments/Cloudbreak/Cloudbreak-2.7.1/content/hostnames/index.html

On the other hand, it is not a good idea to manipulate the hostnames manually during instance creation, because Cloudbreak-provisioned instances have a custom DHCP hook script that configures our underlying DNS (Unbound) with the configs that come from the DHCP server.

Br,

Tamas
