HDP 3.0 Cloudbreak Deployment possible?

Contributor

Hi,

I just got a Cloudbreak 2.7.1 deployer server running. The press release for HDP 3.0 mentioned that it was possible to deploy 3.0 using Cloudbreak, but HDP 3.0 isn't an available option under 2.7.1 (HDP 2.6 only). Do I need to upgrade Cloudbreak, add an MPack, or do something else? How should I deploy HDP 3.0 via Cloudbreak?

Paul

1 ACCEPTED SOLUTION

Expert Contributor

[Accepted solution text is visible to registered community members only.]

14 REPLIES

Expert Contributor

[Accepted solution; text visible to registered community members only.]

Contributor

Thanks dsun, that worked for me. Note that I used Dominika's blueprint below, but I don't know if it makes any difference.

Expert Contributor

@Paul Norris Glad it helped. If you look at the JSON contents, both Dominika and I are referring to the same blueprint.
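
For anyone who wants to double-check that two blueprint files really are the same, comparing the parsed JSON ignores whitespace and key order. The file names below are placeholders; note that component lists must still appear in the same order to compare equal:

import json

# Compare two blueprint files structurally rather than byte-for-byte.
# Parsing ignores whitespace and key order; list order still matters.
with open("hdp30-data-science-spark2-v4.json") as f1, \
     open("dsun-blueprint.json") as f2:
    a, b = json.load(f1), json.load(f2)

print("same blueprint" if a == b else "blueprints differ")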

Dominika

@Paul Norris It is possible to use Cloudbreak 2.7.1 to deploy HDP 3.0. However, Cloudbreak 2.7.1 does not include any default blueprints for HDP 3.0, so to deploy HDP 3.0 you must:

1) Create an HDP 3.0 blueprint. Here is an example:

{
  "Blueprints": {
    "blueprint_name": "hdp30-data-science-spark2-v4",
    "stack_name": "HDP",
    "stack_version": "3.0"
  },
  "settings": [
    {
      "recovery_settings": []
    },
    {
      "service_settings": [
        {
          "name": "HIVE",
          "credential_store_enabled": "false"
        }
      ]
    },
    {
      "component_settings": []
    }
  ],
  "configurations": [
    {
      "core-site": {
        "fs.trash.interval": "4320"
      }
    },
    {
      "hdfs-site": {
        "dfs.namenode.safemode.threshold-pct": "0.99"
      }
    },
    {
      "hive-site": {
        "hive.exec.compress.output": "true",
        "hive.merge.mapfiles": "true",
        "hive.server2.tez.initialize.default.sessions": "true",
        "hive.server2.transport.mode": "http"
      }
    },
    {
      "mapred-site": {
        "mapreduce.job.reduce.slowstart.completedmaps": "0.7",
        "mapreduce.map.output.compress": "true",
        "mapreduce.output.fileoutputformat.compress": "true"
      }
    },
    {
      "yarn-site": {
        "yarn.acl.enable": "true"
      }
    }
  ],
  "host_groups": [
    {
      "name": "master",
      "configurations": [],
      "components": [
        {
          "name": "APP_TIMELINE_SERVER"
        },
        {
          "name": "HDFS_CLIENT"
        },
        {
          "name": "HISTORYSERVER"
        },
        {
          "name": "HIVE_CLIENT"
        },
        {
          "name": "HIVE_METASTORE"
        },
        {
          "name": "HIVE_SERVER"
        },
        {
          "name": "JOURNALNODE"
        },
        {
          "name": "MAPREDUCE2_CLIENT"
        },
        {
          "name": "METRICS_COLLECTOR"
        },
        {
          "name": "METRICS_MONITOR"
        },
        {
          "name": "NAMENODE"
        },
        {
          "name": "RESOURCEMANAGER"
        },
        {
          "name": "SECONDARY_NAMENODE"
        },
        {
          "name": "LIVY2_SERVER"
        },
        {
          "name": "SPARK2_CLIENT"
        },
        {
          "name": "SPARK2_JOBHISTORYSERVER"
        },
        {
          "name": "TEZ_CLIENT"
        },
        {
          "name": "YARN_CLIENT"
        },
        {
          "name": "ZEPPELIN_MASTER"
        },
        {
          "name": "ZOOKEEPER_CLIENT"
        },
        {
          "name": "ZOOKEEPER_SERVER"
        }
      ],
      "cardinality": "1"
    },
    {
      "name": "worker",
      "configurations": [],
      "components": [
        {
          "name": "HIVE_CLIENT"
        },
        {
          "name": "TEZ_CLIENT"
        },
        {
          "name": "SPARK2_CLIENT"
        },
        {
          "name": "DATANODE"
        },
        {
          "name": "METRICS_MONITOR"
        },
        {
          "name": "NODEMANAGER"
        }
      ],
      "cardinality": "1+"
    },
    {
      "name": "compute",
      "configurations": [],
      "components": [
        {
          "name": "HIVE_CLIENT"
        },
        {
          "name": "TEZ_CLIENT"
        },
        {
          "name": "SPARK2_CLIENT"
        },
        {
          "name": "METRICS_MONITOR"
        },
        {
          "name": "NODEMANAGER"
        }
      ],
      "cardinality": "1+"
    }
  ]
}

2) Upload the blueprint to Cloudbreak (you can paste it under the Blueprints menu item). If you want to sanity-check the JSON first, see the sketch after these steps.

3) When creating a cluster:

- Under General Configuration, select Platform Version > HDP-3.0; your blueprint should then appear under Cluster Type.

- Under Image Settings, specify the Ambari 2.7 and HDP 3.0 public repos (you can find them in the Ambari 2.7 docs: https://docs.hortonworks.com/HDPDocuments/Ambari-2.7.0.0/bk_ambari-installation/content/ch_obtaining....
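
As a rough pre-upload check for step 2, here is a minimal sketch in Python. "blueprint.json" is a placeholder path, and the checks mirror the example blueprint above rather than any formal Cloudbreak schema:

import json

# Minimal pre-upload sanity check, based on the example blueprint above
# (not on a formal schema). "blueprint.json" is a placeholder path.
with open("blueprint.json") as f:
    bp = json.load(f)

meta = bp["Blueprints"]
assert meta["stack_name"] == "HDP", "stack_name should be HDP"
assert meta["stack_version"] == "3.0", \
    "stack_version should be 3.0 to match Platform Version > HDP-3.0"

for group in bp["host_groups"]:
    name = group.get("name")
    assert name, "every host group needs a name"
    assert group.get("cardinality"), f"host group {name} needs a cardinality"
    assert group.get("components"), f"host group {name} has no components"

print(f"{meta['blueprint_name']}: {len(bp['host_groups'])} host groups look sane")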

Contributor

Thanks, Dominika. I used your blueprint and it didn't seem to need me to add the repos; it had the correct information there automatically.

Dominika

Great! I wasn't sure if Cloudbreak would do it correctly.

Pushpak Nandi

Hello Dominika, the blueprint in the example above is for Data Science: Apache Spark 2, Apache Zeppelin.

Do you have a sample blueprint for HDP 3.0 - EDW-ETL: Apache Hive, Apache Spark 2 which I can run on Cloudbreak 2.7.x?

Dominika

@Pushpak Nandi I do not have an EDW-ETL blueprint for HDP 3.1. The last I heard, the plan was to ship only EDW-Analytics with HDP 3.1.
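
For anyone in the same position, one purely illustrative stopgap (not an official EDW-ETL blueprint) is to start from the Data Science blueprint above and strip the notebook components, leaving a Hive + Spark 2 layout. The input/output file names and the new blueprint_name below are made up:

import json

# Hypothetical adaptation of the Data Science blueprint above: drop the
# Zeppelin/Livy components to leave a Hive + Spark 2 (EDW-ETL-style) layout.
# This is NOT the official EDW-ETL blueprint, just an illustrative edit.
DROP = {"ZEPPELIN_MASTER", "LIVY2_SERVER"}

with open("hdp30-data-science-spark2-v4.json") as f:
    bp = json.load(f)

bp["Blueprints"]["blueprint_name"] = "hdp30-hive-spark2-custom"
for group in bp["host_groups"]:
    group["components"] = [c for c in group["components"] if c["name"] not in DROP]

with open("hdp30-hive-spark2-custom.json", "w") as f:
    json.dump(bp, f, indent=2)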
