<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>question Re: HDP 3.0 Cloudbreak Deployment possible? in Support Questions</title>
    <link>https://community.cloudera.com/t5/Support-Questions/HDP-3-0-Cloudbreak-Deployment-possible/m-p/199134#M161177</link>
    <description>&lt;P&gt;Hi &lt;A rel="user" href="https://community.cloudera.com/users/10146/dbialek.html" nodeid="10146"&gt;@Dominika Bialek&lt;/A&gt;, a few queries:&lt;/P&gt;&lt;P&gt;1. Can you please supply a custom blueprint for Data Science: &lt;STRONG&gt;Apache Spark 2, Apache Zeppelin&lt;/STRONG&gt;/EDW-Analytics with HDP 3.1?&lt;/P&gt;&lt;P&gt;2. Also, if I use the custom blueprint (Data Science: &lt;STRONG&gt;Apache Spark 2, Apache Zeppelin&lt;/STRONG&gt; with HDP 3.0) supplied by you in this thread and try to create a 3-node cluster, cluster creation fails every time with a timeout (this does not happen with the default HDP 2.6 blueprint).&lt;/P&gt;&lt;BR /&gt;&lt;UL&gt;&lt;LI&gt;Operation timed out. Failed to find all '3' Ambari hosts. Stack: '34'
2/6/2019, 4:24:20 PM
&lt;/LI&gt;&lt;LI&gt;Building Ambari cluster; Ambari ip:172.31.90.36
2/6/2019, 4:04:06 PM
&lt;/LI&gt;&lt;LI&gt;Starting Ambari cluster services
2/6/2019, 4:02:12 PM
&lt;/LI&gt;&lt;LI&gt;Setting up infrastructure metadata
2/6/2019, 4:02:11 PM
&lt;/LI&gt;&lt;LI&gt;Bootstrapping infrastructure cluster
2/6/2019, 4:01:45 PM
&lt;/LI&gt;&lt;LI&gt;Infrastructure successfully provisioned
2/6/2019, 4:01:45 PM
&lt;/LI&gt;&lt;/UL&gt;&lt;P&gt;&lt;STRONG&gt;Snapshot of the error from the log:&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;cloudbreak_1  | 2019-02-06 13:00:05,631 [reactorDispatcher-42] pollWithTimeout:56 INFO  c.s.c.s.PollingService - [owner:d96fcce1-a991-4cf7-aa0b-6d186dc764bc] [type:CLUSTER] [id:34] [name:dip-test-cluster-hdp3x] [flow:84633730-dade-402b-b06b-8adf59f989a3] [tracking:53e509cd-6c8e-4c78-8d9f-2e79f6ec951e] Poller timeout.&lt;BR /&gt;cloudbreak_1  | 2019-02-06 13:00:05,632 [reactorDispatcher-42] buildCluster:182 ERROR c.s.c.s.c.a.AmbariClusterSetupService - [owner:d96fcce1-a991-4cf7-aa0b-6d186dc764bc] [type:CLUSTER] [id:34] [name:dip-test-cluster-hdp3x] [flow:84633730-dade-402b-b06b-8adf59f989a3] [tracking:53e509cd-6c8e-4c78-8d9f-2e79f6ec951e] Error while building the Ambari cluster. Message Operation timed out. Failed to find all '3' Ambari hosts. Stack: '34', throwable: {}&lt;BR /&gt;cloudbreak_1  | com.sequenceiq.cloudbreak.service.cluster.ambari.AmbariHostsUnavailableException: Operation timed out. Failed to find all '3' Ambari hosts. Stack: '34'&lt;/P&gt;&lt;P&gt;Can you please advise?&lt;/P&gt;</description>
    <pubDate>Wed, 06 Feb 2019 22:24:10 GMT</pubDate>
    <dc:creator>pushpak_nandi</dc:creator>
    <dc:date>2019-02-06T22:24:10Z</dc:date>
    <item>
      <title>HDP 3.0 Cloudbreak Deployment possible?</title>
      <link>https://community.cloudera.com/t5/Support-Questions/HDP-3-0-Cloudbreak-Deployment-possible/m-p/199121#M161164</link>
      <description>&lt;P&gt;Hi,&lt;/P&gt;&lt;P&gt;I just got a Cloudbreak 2.7.1 deployer server running. The press release for HDP 3.0 mentioned that it was possible to deploy 3.0 using Cloudbreak, but HDP 3.0 isn't an available option under 2.7.1 (HDP 2.6 only). Do I need to upgrade Cloudbreak, add an MPack, or how should I deploy HDP 3.0 via Cloudbreak?&lt;/P&gt;&lt;P&gt;Paul&lt;/P&gt;</description>
      <pubDate>Wed, 25 Jul 2018 02:04:32 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/HDP-3-0-Cloudbreak-Deployment-possible/m-p/199121#M161164</guid>
      <dc:creator>paul_norris</dc:creator>
      <dc:date>2018-07-25T02:04:32Z</dc:date>
    </item>
    <item>
      <title>Re: HDP 3.0 Cloudbreak Deployment possible?</title>
      <link>https://community.cloudera.com/t5/Support-Questions/HDP-3-0-Cloudbreak-Deployment-possible/m-p/199122#M161165</link>
      <description>&lt;P&gt;Paul,&lt;/P&gt;&lt;P&gt;I ran into the same issue with CB 2.7.1. There may well be a better way to resolve it, but here are the steps I used to first create an HDP 3.0 blueprint and then create an HDP 3.0 cluster with CB 2.7.1:&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;Step 1: Click 'Blueprints' on the left navigation pane, then click 'CREATE BLUEPRINT' and enter a name, for instance 'HDP 3.0 - Data Science: Apache Spark 2, Apache Zeppelin'&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="83406-step1.png" style="width: 1762px;"&gt;&lt;img src="https://community.cloudera.com/t5/image/serverpage/image-id/17678i3028999E82905B8E/image-size/medium?v=v2&amp;amp;px=400" role="button" title="83406-step1.png" alt="83406-step1.png" /&gt;&lt;/span&gt;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;Step 2: Add the following JSON into the 'Text' field, and click 'CREATE'&lt;/STRONG&gt;&lt;/P&gt;&lt;PRE&gt;{
  "Blueprints": {
    "blueprint_name": "hdp30-data-science-spark2-v4",
    "stack_name": "HDP",
    "stack_version": "3.0"
  },
  "settings": [
    {
      "recovery_settings": []
    },
    {
      "service_settings": [
        {
          "name": "HIVE",
          "credential_store_enabled": "false"
        }
      ]
    },
    {
      "component_settings": []
    }
  ],
  "configurations": [
    {
      "core-site": {
        "fs.trash.interval": "4320"
      }
    },
    {
      "hdfs-site": {
        "dfs.namenode.safemode.threshold-pct": "0.99"
      }
    },
    {
      "hive-site": {
        "hive.exec.compress.output": "true",
        "hive.merge.mapfiles": "true",
        "hive.server2.tez.initialize.default.sessions": "true",
        "hive.server2.transport.mode": "http"
      }
    },
    {
      "mapred-site": {
        "mapreduce.job.reduce.slowstart.completedmaps": "0.7",
        "mapreduce.map.output.compress": "true",
        "mapreduce.output.fileoutputformat.compress": "true"
      }
    },
    {
      "yarn-site": {
        "yarn.acl.enable": "true"
      }
    }
  ],
  "host_groups": [
    {
      "name": "master",
      "configurations": [],
      "components": [
        {
          "name": "APP_TIMELINE_SERVER"
        },
        {
          "name": "HDFS_CLIENT"
        },
        {
          "name": "HISTORYSERVER"
        },
        {
          "name": "HIVE_CLIENT"
        },
        {
          "name": "HIVE_METASTORE"
        },
        {
          "name": "HIVE_SERVER"
        },
        {
          "name": "JOURNALNODE"
        },
        {
          "name": "MAPREDUCE2_CLIENT"
        },
        {
          "name": "METRICS_COLLECTOR"
        },
        {
          "name": "METRICS_MONITOR"
        },
        {
          "name": "NAMENODE"
        },
        {
          "name": "RESOURCEMANAGER"
        },
        {
          "name": "SECONDARY_NAMENODE"
        },
        {
          "name": "LIVY2_SERVER"
        },
        {
          "name": "SPARK2_CLIENT"
        },
        {
          "name": "SPARK2_JOBHISTORYSERVER"
        },
        {
          "name": "TEZ_CLIENT"
        },
        {
          "name": "YARN_CLIENT"
        },
        {
          "name": "ZEPPELIN_MASTER"
        },
        {
          "name": "ZOOKEEPER_CLIENT"
        },
        {
          "name": "ZOOKEEPER_SERVER"
        }
      ],
      "cardinality": "1"
    },
    {
      "name": "worker",
      "configurations": [],
      "components": [
        {
          "name": "HIVE_CLIENT"
        },
        {
          "name": "TEZ_CLIENT"
        },
        {
          "name": "SPARK2_CLIENT"
        },
        {
          "name": "DATANODE"
        },
        {
          "name": "METRICS_MONITOR"
        },
        {
          "name": "NODEMANAGER"
        }
      ],
      "cardinality": "1+"
    },
    {
      "name": "compute",
      "configurations": [],
      "components": [
        {
          "name": "HIVE_CLIENT"
        },
        {
          "name": "TEZ_CLIENT"
        },
        {
          "name": "SPARK2_CLIENT"
        },
        {
          "name": "METRICS_MONITOR"
        },
        {
          "name": "NODEMANAGER"
        }
      ],
      "cardinality": "1+"
    }
  ]
}&lt;/PRE&gt;&lt;P&gt;&lt;STRONG&gt;Step 3: You should now be able to see the newly added HDP 3.0 blueprint and create a cluster from it&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="83407-step3.png" style="width: 1101px;"&gt;&lt;img src="https://community.cloudera.com/t5/image/serverpage/image-id/17679i74CD846D8D4DFCC1/image-size/medium?v=v2&amp;amp;px=400" role="button" title="83407-step3.png" alt="83407-step3.png" /&gt;&lt;/span&gt;&lt;/P&gt;&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="83410-hdp3.png" style="width: 1916px;"&gt;&lt;img src="https://community.cloudera.com/t5/image/serverpage/image-id/17680i838303CF6BB32A92/image-size/medium?v=v2&amp;amp;px=400" role="button" title="83410-hdp3.png" alt="83410-hdp3.png" /&gt;&lt;/span&gt;&lt;/P&gt;&lt;P&gt;Hope it helps!&lt;/P&gt;&lt;P&gt;If this resolved the issue, please "accept" the answer. Thanks.&lt;/P&gt;</description>
      <pubDate>Sun, 18 Aug 2019 05:58:58 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/HDP-3-0-Cloudbreak-Deployment-possible/m-p/199122#M161165</guid>
      <dc:creator>dsun</dc:creator>
      <dc:date>2019-08-18T05:58:58Z</dc:date>
    </item>
    <item>
      <title>Re: HDP 3.0 Cloudbreak Deployment possible?</title>
      <link>https://community.cloudera.com/t5/Support-Questions/HDP-3-0-Cloudbreak-Deployment-possible/m-p/199123#M161166</link>
      <description>&lt;P&gt;&lt;A rel="user" href="https://community.cloudera.com/users/89156/paulnorris.html" nodeid="89156"&gt;@Paul Norris&lt;/A&gt; It is possible to use Cloudbreak 2.7.1 to deploy HDP 3.0; However, Cloudbreak 2.7.1 does not include any default blueprints for HDP 3.0 and so if you want to use HDP 3.0 you must first:&lt;/P&gt;&lt;P&gt;1) create an HDP 3.0 blueprint. Here is an example:&lt;/P&gt;&lt;PRE&gt;{
  "Blueprints": {
    "blueprint_name": "hdp30-data-science-spark2-v4",
    "stack_name": "HDP",
    "stack_version": "3.0"
  },
  "settings": [
    {
      "recovery_settings": []
    },
    {
      "service_settings": [
        {
          "name": "HIVE",
          "credential_store_enabled": "false"
        }
      ]
    },
    {
      "component_settings": []
    }
  ],
  "configurations": [
    {
      "core-site": {
        "fs.trash.interval": "4320"
      }
    },
    {
      "hdfs-site": {
        "dfs.namenode.safemode.threshold-pct": "0.99"
      }
    },
    {
      "hive-site": {
        "hive.exec.compress.output": "true",
        "hive.merge.mapfiles": "true",
        "hive.server2.tez.initialize.default.sessions": "true",
        "hive.server2.transport.mode": "http"
      }
    },
    {
      "mapred-site": {
        "mapreduce.job.reduce.slowstart.completedmaps": "0.7",
        "mapreduce.map.output.compress": "true",
        "mapreduce.output.fileoutputformat.compress": "true"
      }
    },
    {
      "yarn-site": {
        "yarn.acl.enable": "true"
      }
    }
  ],
  "host_groups": [
    {
      "name": "master",
      "configurations": [],
      "components": [
        {
          "name": "APP_TIMELINE_SERVER"
        },
        {
          "name": "HDFS_CLIENT"
        },
        {
          "name": "HISTORYSERVER"
        },
        {
          "name": "HIVE_CLIENT"
        },
        {
          "name": "HIVE_METASTORE"
        },
        {
          "name": "HIVE_SERVER"
        },
        {
          "name": "JOURNALNODE"
        },
        {
          "name": "MAPREDUCE2_CLIENT"
        },
        {
          "name": "METRICS_COLLECTOR"
        },
        {
          "name": "METRICS_MONITOR"
        },
        {
          "name": "NAMENODE"
        },
        {
          "name": "RESOURCEMANAGER"
        },
        {
          "name": "SECONDARY_NAMENODE"
        },
        {
          "name": "LIVY2_SERVER"
        },
        {
          "name": "SPARK2_CLIENT"
        },
        {
          "name": "SPARK2_JOBHISTORYSERVER"
        },
        {
          "name": "TEZ_CLIENT"
        },
        {
          "name": "YARN_CLIENT"
        },
        {
          "name": "ZEPPELIN_MASTER"
        },
        {
          "name": "ZOOKEEPER_CLIENT"
        },
        {
          "name": "ZOOKEEPER_SERVER"
        }
      ],
      "cardinality": "1"
    },
    {
      "name": "worker",
      "configurations": [],
      "components": [
        {
          "name": "HIVE_CLIENT"
        },
        {
          "name": "TEZ_CLIENT"
        },
        {
          "name": "SPARK2_CLIENT"
        },
        {
          "name": "DATANODE"
        },
        {
          "name": "METRICS_MONITOR"
        },
        {
          "name": "NODEMANAGER"
        }
      ],
      "cardinality": "1+"
    },
    {
      "name": "compute",
      "configurations": [],
      "components": [
        {
          "name": "HIVE_CLIENT"
        },
        {
          "name": "TEZ_CLIENT"
        },
        {
          "name": "SPARK2_CLIENT"
        },
        {
          "name": "METRICS_MONITOR"
        },
        {
          "name": "NODEMANAGER"
        }
      ],
      "cardinality": "1+"
    }
  ]
}&lt;/PRE&gt;&lt;P&gt;2) Upload the blueprint to Cloudbreak (you can paste it under the &lt;STRONG&gt;Blueprints&lt;/STRONG&gt; menu item).&lt;/P&gt;&lt;P&gt;3) When creating a cluster:&lt;/P&gt;&lt;P&gt;- Under &lt;STRONG&gt;General Configuration&lt;/STRONG&gt;, select &lt;STRONG&gt;Platform Version&lt;/STRONG&gt; &amp;gt; HDP-3.0, and your blueprint should then appear under &lt;STRONG&gt;Cluster Type&lt;/STRONG&gt;.&lt;/P&gt;&lt;P&gt;- Under &lt;STRONG&gt;Image Settings&lt;/STRONG&gt;, specify the Ambari 2.7 and HDP 3.0 public repos (you can find them in the Ambari 2.7 docs: &lt;A href="https://docs.hortonworks.com/HDPDocuments/Ambari-2.7.0.0/bk_ambari-installation/content/ch_obtaining-public-repos.html" target="_blank"&gt;https://docs.hortonworks.com/HDPDocuments/Ambari-2.7.0.0/bk_ambari-installation/content/ch_obtaining-public-repos.html&lt;/A&gt;).&lt;/P&gt;</description>
      <pubDate>Thu, 26 Jul 2018 01:31:30 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/HDP-3-0-Cloudbreak-Deployment-possible/m-p/199123#M161166</guid>
      <dc:creator>Dominika</dc:creator>
      <dc:date>2018-07-26T01:31:30Z</dc:date>
    </item>
    <item>
      <title>Re: HDP 3.0 Cloudbreak Deployment possible?</title>
      <link>https://community.cloudera.com/t5/Support-Questions/HDP-3-0-Cloudbreak-Deployment-possible/m-p/199124#M161167</link>
      <description>&lt;P&gt;Thanks, Dominika. I used your blueprint and it didn't require me to add the repos; it had the correct information there automatically.&lt;/P&gt;</description>
      <pubDate>Thu, 26 Jul 2018 22:38:39 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/HDP-3-0-Cloudbreak-Deployment-possible/m-p/199124#M161167</guid>
      <dc:creator>paul_norris</dc:creator>
      <dc:date>2018-07-26T22:38:39Z</dc:date>
    </item>
    <item>
      <title>Re: HDP 3.0 Cloudbreak Deployment possible?</title>
      <link>https://community.cloudera.com/t5/Support-Questions/HDP-3-0-Cloudbreak-Deployment-possible/m-p/199125#M161168</link>
      <description>&lt;P&gt;Thanks dsun, that worked for me. Note that I used Dominika's blueprint below, but I don't know if it makes any difference.&lt;/P&gt;</description>
      <pubDate>Thu, 26 Jul 2018 22:39:27 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/HDP-3-0-Cloudbreak-Deployment-possible/m-p/199125#M161168</guid>
      <dc:creator>paul_norris</dc:creator>
      <dc:date>2018-07-26T22:39:27Z</dc:date>
    </item>
    <item>
      <title>Re: HDP 3.0 Cloudbreak Deployment possible?</title>
      <link>https://community.cloudera.com/t5/Support-Questions/HDP-3-0-Cloudbreak-Deployment-possible/m-p/199126#M161169</link>
      <description>&lt;A rel="user" href="https://community.cloudera.com/users/89156/paulnorris.html" nodeid="89156"&gt;@Paul Norris&lt;/A&gt;Glad it helped.  If you look at the JSON contents, both Dominika &amp;amp; I are referring to the same blueprint.</description>
      <pubDate>Fri, 27 Jul 2018 00:59:07 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/HDP-3-0-Cloudbreak-Deployment-possible/m-p/199126#M161169</guid>
      <dc:creator>dsun</dc:creator>
      <dc:date>2018-07-27T00:59:07Z</dc:date>
    </item>
    <item>
      <title>Re: HDP 3.0 Cloudbreak Deployment possible?</title>
      <link>https://community.cloudera.com/t5/Support-Questions/HDP-3-0-Cloudbreak-Deployment-possible/m-p/199127#M161170</link>
      <description>&lt;P&gt;Great! I wasn't sure if Cloudbreak would do it correctly. &lt;/P&gt;</description>
      <pubDate>Fri, 27 Jul 2018 04:29:29 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/HDP-3-0-Cloudbreak-Deployment-possible/m-p/199127#M161170</guid>
      <dc:creator>Dominika</dc:creator>
      <dc:date>2018-07-27T04:29:29Z</dc:date>
    </item>
    <item>
      <title>Re: HDP 3.0 Cloudbreak Deployment possible?</title>
      <link>https://community.cloudera.com/t5/Support-Questions/HDP-3-0-Cloudbreak-Deployment-possible/m-p/199128#M161171</link>
      <description>&lt;P&gt;Hello Dominika, the blueprint in the example given was for Data Science: &lt;STRONG&gt;Apache Spark 2, Apache Zeppelin&lt;/STRONG&gt;.&lt;/P&gt;&lt;P&gt;Do you have a sample blueprint for HDP 3.0 - &lt;STRONG&gt;EDW-ETL: Apache Hive, Apache Spark 2&lt;/STRONG&gt; which I can run on Cloudbreak 2.7.x?&lt;/P&gt;</description>
      <pubDate>Thu, 17 Jan 2019 22:15:18 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/HDP-3-0-Cloudbreak-Deployment-possible/m-p/199128#M161171</guid>
      <dc:creator>pushpak_nandi</dc:creator>
      <dc:date>2019-01-17T22:15:18Z</dc:date>
    </item>
    <item>
      <title>Re: HDP 3.0 Cloudbreak Deployment possible?</title>
      <link>https://community.cloudera.com/t5/Support-Questions/HDP-3-0-Cloudbreak-Deployment-possible/m-p/199129#M161172</link>
      <description>&lt;A rel="user" href="https://community.cloudera.com/users/102443/pushpaknandi.html" nodeid="102443"&gt;@Pushpak Nandi&lt;/A&gt; I do not have any EDW-ETL blueprint for HDP 3.1. Last time I heard the plan was to only ship EDW-Analytics with HDP 3.1.</description>
      <pubDate>Fri, 18 Jan 2019 02:25:08 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/HDP-3-0-Cloudbreak-Deployment-possible/m-p/199129#M161172</guid>
      <dc:creator>Dominika</dc:creator>
      <dc:date>2019-01-18T02:25:08Z</dc:date>
    </item>
    <item>
      <title>Re: HDP 3.0 Cloudbreak Deployment possible?</title>
      <link>https://community.cloudera.com/t5/Support-Questions/HDP-3-0-Cloudbreak-Deployment-possible/m-p/199130#M161173</link>
      <description>&lt;A rel="user" href="https://community.cloudera.com/users/10146/dbialek.html" nodeid="10146"&gt;@Dominika Bialek:Thanks for your quick response.&lt;BR /&gt;&lt;/A&gt;</description>
      <pubDate>Fri, 18 Jan 2019 13:18:50 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/HDP-3-0-Cloudbreak-Deployment-possible/m-p/199130#M161173</guid>
      <dc:creator>pushpak_nandi</dc:creator>
      <dc:date>2019-01-18T13:18:50Z</dc:date>
    </item>
    <item>
      <title>Re: HDP 3.0 Cloudbreak Deployment possible?</title>
      <link>https://community.cloudera.com/t5/Support-Questions/HDP-3-0-Cloudbreak-Deployment-possible/m-p/199131#M161174</link>
      <description>&lt;P&gt;&lt;A rel="user" href="https://community.cloudera.com/users/10146/dbialek.html" nodeid="10146"&gt;@Dominika Bialek&lt;/A&gt; : Hello Dominika, another interesting thing I found out - if I change the stack version to 3.1 in your HDP3.0 blueprint to try to create HDP3.1 cluster, it fails with the following error (but runs well with 3.0) - &lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;Couldn't determine Ambari repo for the stack: &amp;lt;blueprint-name&amp;gt;&lt;BR /&gt;&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;Change made:&lt;/P&gt;&lt;OL&gt;&lt;/OL&gt;&lt;PRE&gt;&lt;/PRE&gt;&lt;LI&gt;"Blueprints":{&lt;/LI&gt;&lt;LI&gt;"blueprint_name":"hdp31-data-science-spark2-v4",&lt;/LI&gt;&lt;LI&gt;"stack_name":"HDP",&lt;/LI&gt;&lt;LI&gt;"stack_version":"3.1"&lt;/LI&gt;&lt;LI&gt;},&lt;/LI&gt;&lt;P&gt;So, does it mean the latest Cloudbreak version (2.7.x) can support HDP3.0 but not HDP3.1?&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;Another follow up question:&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;When you said "the plan was to only ship EDW-Analytics with HDP 3.x", does it mean the customization of blueprint will not be possible in the current Cloudbreak version to include other components that come with EDW-ETL? &lt;/P&gt;&lt;P&gt;Please advise.&lt;/P&gt;</description>
      <pubDate>Sat, 02 Feb 2019 00:26:10 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/HDP-3-0-Cloudbreak-Deployment-possible/m-p/199131#M161174</guid>
      <dc:creator>pushpak_nandi</dc:creator>
      <dc:date>2019-02-02T00:26:10Z</dc:date>
    </item>
    <item>
      <title>Re: HDP 3.0 Cloudbreak Deployment possible?</title>
      <link>https://community.cloudera.com/t5/Support-Questions/HDP-3-0-Cloudbreak-Deployment-possible/m-p/199132#M161175</link>
      <description>&lt;P&gt;&lt;A rel="user" href="https://community.cloudera.com/users/102443/pushpaknandi.html" nodeid="102443"&gt;@Pushpak Nandi&lt;/A&gt; &lt;/P&gt;&lt;P&gt;Cloudbreak 2.7.2 or earlier does not fully support HDP 3.x. That's why no default  HD 3.x blueprints were included. This doesn't mean that it is impossible to create some HDP 3.x cluster; it just means that there was no sufficient testing completed and/or that no changes were made in Cloudbreak/Ambari for Cloudbreak to support it. A future Cloudbreak release will support some HDP 3.x release(s). &lt;/P&gt;&lt;P&gt;Regarding the second question, what I meant to say is that there is always a limited number of blueprints provided by default; You can always create your own. If we do not ship one for EDW-ETL then you can prepare one by yourself snd upload it.&lt;/P&gt;&lt;P&gt;Hope this helps!&lt;/P&gt;</description>
      <pubDate>Sat, 02 Feb 2019 03:03:51 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/HDP-3-0-Cloudbreak-Deployment-possible/m-p/199132#M161175</guid>
      <dc:creator>Dominika</dc:creator>
      <dc:date>2019-02-02T03:03:51Z</dc:date>
    </item>
    <item>
      <title>Re: HDP 3.0 Cloudbreak Deployment possible?</title>
      <link>https://community.cloudera.com/t5/Support-Questions/HDP-3-0-Cloudbreak-Deployment-possible/m-p/199133#M161176</link>
      <description>&lt;P&gt;Thanks &lt;A rel="user" href="https://community.cloudera.com/users/10146/dbialek.html" nodeid="10146"&gt;@Dominika Bialek&lt;/A&gt; &lt;/P&gt;</description>
      <pubDate>Sun, 03 Feb 2019 04:23:04 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/HDP-3-0-Cloudbreak-Deployment-possible/m-p/199133#M161176</guid>
      <dc:creator>pushpak_nandi</dc:creator>
      <dc:date>2019-02-03T04:23:04Z</dc:date>
    </item>
    <item>
      <title>Re: HDP 3.0 Cloudbreak Deployment possible?</title>
      <link>https://community.cloudera.com/t5/Support-Questions/HDP-3-0-Cloudbreak-Deployment-possible/m-p/199134#M161177</link>
      <description>&lt;P&gt;Hi &lt;A rel="user" href="https://community.cloudera.com/users/10146/dbialek.html" nodeid="10146"&gt;@Dominika Bialek&lt;/A&gt;, a few queries:&lt;/P&gt;&lt;P&gt;1. Can you please supply a custom blueprint for Data Science: &lt;STRONG&gt;Apache Spark 2, Apache Zeppelin&lt;/STRONG&gt;/EDW-Analytics with HDP 3.1?&lt;/P&gt;&lt;P&gt;2. Also, if I use the custom blueprint (Data Science: &lt;STRONG&gt;Apache Spark 2, Apache Zeppelin&lt;/STRONG&gt; with HDP 3.0) supplied by you in this thread and try to create a 3-node cluster, cluster creation fails every time with a timeout (this does not happen with the default HDP 2.6 blueprint).&lt;/P&gt;&lt;BR /&gt;&lt;UL&gt;&lt;LI&gt;Operation timed out. Failed to find all '3' Ambari hosts. Stack: '34'
2/6/2019, 4:24:20 PM
&lt;/LI&gt;&lt;LI&gt;Building Ambari cluster; Ambari ip:172.31.90.36
2/6/2019, 4:04:06 PM
&lt;/LI&gt;&lt;LI&gt;Starting Ambari cluster services
2/6/2019, 4:02:12 PM
&lt;/LI&gt;&lt;LI&gt;Setting up infrastructure metadata
2/6/2019, 4:02:11 PM
&lt;/LI&gt;&lt;LI&gt;Bootstrapping infrastructure cluster
2/6/2019, 4:01:45 PM
&lt;/LI&gt;&lt;LI&gt;Infrastructure successfully provisioned
2/6/2019, 4:01:45 PM
&lt;/LI&gt;&lt;/UL&gt;&lt;P&gt;&lt;STRONG&gt;Snapshot of the error from the log:&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;cloudbreak_1  | 2019-02-06 13:00:05,631 [reactorDispatcher-42] pollWithTimeout:56 INFO  c.s.c.s.PollingService - [owner:d96fcce1-a991-4cf7-aa0b-6d186dc764bc] [type:CLUSTER] [id:34] [name:dip-test-cluster-hdp3x] [flow:84633730-dade-402b-b06b-8adf59f989a3] [tracking:53e509cd-6c8e-4c78-8d9f-2e79f6ec951e] Poller timeout.&lt;BR /&gt;cloudbreak_1  | 2019-02-06 13:00:05,632 [reactorDispatcher-42] buildCluster:182 ERROR c.s.c.s.c.a.AmbariClusterSetupService - [owner:d96fcce1-a991-4cf7-aa0b-6d186dc764bc] [type:CLUSTER] [id:34] [name:dip-test-cluster-hdp3x] [flow:84633730-dade-402b-b06b-8adf59f989a3] [tracking:53e509cd-6c8e-4c78-8d9f-2e79f6ec951e] Error while building the Ambari cluster. Message Operation timed out. Failed to find all '3' Ambari hosts. Stack: '34', throwable: {}&lt;BR /&gt;cloudbreak_1  | com.sequenceiq.cloudbreak.service.cluster.ambari.AmbariHostsUnavailableException: Operation timed out. Failed to find all '3' Ambari hosts. Stack: '34'&lt;/P&gt;&lt;P&gt;Can you please advise?&lt;/P&gt;</description>
      <pubDate>Wed, 06 Feb 2019 22:24:10 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/HDP-3-0-Cloudbreak-Deployment-possible/m-p/199134#M161177</guid>
      <dc:creator>pushpak_nandi</dc:creator>
      <dc:date>2019-02-06T22:24:10Z</dc:date>
    </item>
    <item>
      <title>Re: HDP 3.0 Cloudbreak Deployment possible?</title>
      <link>https://community.cloudera.com/t5/Support-Questions/HDP-3-0-Cloudbreak-Deployment-possible/m-p/199135#M161178</link>
      <description>&lt;P&gt;&lt;A rel="user" href="https://community.cloudera.com/users/102443/pushpaknandi.html" nodeid="102443"&gt;@Pushpak Nand&lt;/A&gt;&lt;/P&gt;&lt;P&gt; Perhaps you want to try Cloudbreak 2.9 if launching HDP 3.1 is important to you:&lt;/P&gt;&lt;P&gt;&lt;A href="https://community.hortonworks.com/articles/239903/introducing-cloudbreak-290-ga.html" target="_blank"&gt;https://community.hortonworks.com/articles/239903/introducing-cloudbreak-290-ga.html&lt;/A&gt; &lt;/P&gt;&lt;P&gt;&lt;A href="https://docs.hortonworks.com/HDPDocuments/Cloudbreak/Cloudbreak-2.9.0/index.html"&gt;https://docs.hortonworks.com/HDPDocuments/Cloudbreak/Cloudbreak-2.9.0/index.html&lt;/A&gt;&lt;/P&gt;&lt;P&gt;You can update to it if you are currently on an earlier release. It does come with default HDP 3.1 blueprints.&lt;/P&gt;</description>
      <pubDate>Thu, 07 Feb 2019 07:24:17 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/HDP-3-0-Cloudbreak-Deployment-possible/m-p/199135#M161178</guid>
      <dc:creator>Dominika</dc:creator>
      <dc:date>2019-02-07T07:24:17Z</dc:date>
    </item>
  </channel>
</rss>

