Member since 01-07-2019
      
217 Posts
135 Kudos Received
18 Solutions

My Accepted Solutions

| Title | Views | Posted |
|---|---|---|
|  | 3071 | 12-09-2021 09:57 PM |
|  | 2456 | 10-15-2018 06:19 PM |
|  | 10422 | 10-10-2018 07:03 PM |
|  | 5472 | 07-24-2018 06:14 PM |
|  | 2005 | 07-06-2018 06:19 PM |
			
    
	
		
		
02-08-2019 07:10 PM

I do not know the answer to the first question; perhaps someone else can answer. Regarding WASB or ADLS, you can use Cloudbreak to configure access: https://docs.hortonworks.com/HDPDocuments/Cloudbreak/Cloudbreak-2.9.0/create-cluster-azure/content/cb_cloud-storage-azure-azure.html. I am not sure about defining it in a blueprint.
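For anyone who wants to experiment with the blueprint route anyway, here is a minimal sketch of how WASB credentials are typically expressed in an Ambari-style blueprint via a core-site entry. The storage account name (`mystorageaccount`) and the placement in the top-level `configurations` array are assumptions for illustration, not a Cloudbreak-documented approach:

```
{
  "configurations": [
    {
      "core-site": {
        "fs.azure.account.key.mystorageaccount.blob.core.windows.net": "<storage-account-access-key>"
      }
    }
  ]
}
```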
						
					
			
    
	
		
		
02-08-2019 06:34 PM

@heta desai You can connect ADLS or WASB to your cluster to copy or access data stored there, but this storage should not be used as the default file system. I believe that some people use WASB for this purpose, but it is not officially supported by Hortonworks.

The difference between worker and compute is that no data is stored on compute nodes. If you look at one of the default workload cluster blueprints, the difference between the two host groups is the "DATANODE" component, which is included in worker nodes but not in compute nodes:

    {
      "name": "worker",
      "configurations": [],
      "components": [
        { "name": "HIVE_CLIENT" },
        { "name": "TEZ_CLIENT" },
        { "name": "SPARK_CLIENT" },
        { "name": "DATANODE" },
        { "name": "METRICS_MONITOR" },
        { "name": "NODEMANAGER" }
      ],
      "cardinality": "1+"
    },
    {
      "name": "compute",
      "configurations": [],
      "components": [
        { "name": "HIVE_CLIENT" },
        { "name": "TEZ_CLIENT" },
        { "name": "SPARK_CLIENT" },
        { "name": "METRICS_MONITOR" },
        { "name": "NODEMANAGER" }
      ],
      "cardinality": "1+"
    }

Hope this helps!
						
					
    
	
		
		
02-07-2019 05:32 PM

Updated for Cloudbreak 2.9.0.
						
					
    
	
		
		
02-07-2019 05:18 PM

Updated for Cloudbreak 2.9. A new HDP 3.1 data lake blueprint is available.
						
					
    
	
		
		
02-07-2019 04:54 PM (5 Kudos)

Cloudbreak 2.9.0 is now available! It is a general availability (GA) release, so, with the exception of some features that are marked as TP, it is suitable for production.

**Try it now**

- Upgrade to 2.9.0
- Quickly deploy by using quickstart on AWS, Azure, or Google Cloud
- Install manually on AWS, Azure, Google Cloud, or OpenStack

**New features**

Cloudbreak 2.9.0 introduces the following new features. While some of these features were introduced in Cloudbreak 2.8.0 TP, others are brand new:

| Feature | Description | Documentation |
|---|---|---|
| Specifying resource group name on Azure | When creating a cluster on Azure, you can specify the name for the new resource group where the cluster will be deployed. | Resource group name |
| Multiple existing security groups on AWS | When creating a cluster on AWS, you can select multiple existing security groups. This option is available only when an existing VPC is selected. | Create a cluster on AWS |
| EBS volume encryption on AWS | You can optionally configure encryption for EBS volumes attached to cluster instances running on EC2. Default or customer-managed encryption keys can be used. | EBS encryption on AWS |
| Shared VPCs on GCP | When creating a cluster on Google Cloud, you can place it in an existing shared VPC. | Shared networks on GCP |
| GCP volume encryption | By default, Google Compute Engine encrypts data at rest stored on disks. You can optionally configure encryption for the encryption keys used for disk encryption. Customer-supplied (CSEK) or customer-managed (CMEK) encryption keys can be used. | Disk encryption on GCP |
| Workspaces | Cloudbreak introduces a new authorization model, which allows resource sharing via workspaces. In addition to a default personal workspace, users can create additional shared workspaces. | Workspaces |
| Operations audit logging | Cloudbreak records an audit trail of the actions performed by Cloudbreak users as well as those performed by the Cloudbreak application. | Operations audit logging |
| Updating long-running clusters | Cloudbreak supports updating the base image's operating system and any third-party packages that have been installed, as well as upgrading Ambari, HDP, and HDF. | Updating OS and tools on long-running clusters and Updating Ambari and HDP/HDF on long-running clusters |
| HDP 3.1 | Cloudbreak introduces two default HDP 3.1 blueprints and allows you to create your own custom HDP 3.1 blueprints. | Default cluster configurations |
| HDF 3.3 | Cloudbreak introduces two default HDF 3.3 blueprints and allows you to create your own custom HDF 3.3 blueprints. To get started, refer to the How to create a NiFi cluster HCC post. | Default cluster configurations |
| Recipe parameters | Supported parameters can be specified in recipes as variables by using mustache-style templating with "{{{ }}}" syntax. | Writing recipes and Recipe parameters |
| Shebang in Python recipes | Cloudbreak supports using a shebang in Python scripts run as recipes. | Writing recipes |

**Technical preview features**

The following features are technical preview (not suitable for production):

| Feature | Description | Documentation |
|---|---|---|
| AWS GovCloud (TP) | You can install Cloudbreak and create Cloudbreak-managed clusters on AWS GovCloud. | Deploying on AWS GovCloud |
| Azure ADLS Gen2 (TP) | When creating a cluster on Azure, you can optionally configure access to ADLS Gen2. This feature is technical preview. | Configuring access to ADLS Gen2 |
| New and changed data lake blueprints (TP) | Cloudbreak includes three data lake blueprints: two for HDP 2.6 (HA and Atlas) and one for HDP 3.1. Note that Hive Metastore has been removed from the HDP 3.x data lake blueprints, but setting up an external database allows all clusters attached to a data lake to connect to the same Hive Metastore. To get started with data lakes, refer to the How to create a data lake with Cloudbreak 2.9 HCC post. | Working with data lakes |

**Default blueprints**

Cloudbreak 2.9.0 includes default HDP 2.6, HDP 3.1, and HDF 3.3 workload cluster blueprints. In addition, HDP 3.1 and HDP 2.6 data lake blueprints are available as technical preview. Note that Hive Metastore has been removed from the HDP 3.x data lake blueprints, but setting up an external database allows all clusters attached to a data lake to connect to the same Hive Metastore.

**Documentation links**

- How to create a data lake with Cloudbreak 2.9 (HCC post)
- How to create a NiFi cluster (HCC post)
- Cloudbreak 2.9.0 documentation (Official docs)
- Release notes (Official docs)
						
					
    
	
		
		
02-06-2019 11:24 PM (1 Kudo)

@Pushpak Nand Perhaps you want to try Cloudbreak 2.9 if launching HDP 3.1 is important to you:

- https://community.hortonworks.com/articles/239903/introducing-cloudbreak-290-ga.html
- https://docs.hortonworks.com/HDPDocuments/Cloudbreak/Cloudbreak-2.9.0/index.html

You can upgrade to it if you are currently on an earlier release. It does come with default HDP 3.1 blueprints.
						
					
			
    
	
		
		
02-01-2019 07:03 PM

@Pushpak Nandi Cloudbreak 2.7.2 or earlier does not fully support HDP 3.x; that's why no default HDP 3.x blueprints were included. This doesn't mean that it is impossible to create an HDP 3.x cluster; it just means that sufficient testing was not completed and/or that no changes were made in Cloudbreak/Ambari for Cloudbreak to support it. A future Cloudbreak release will support some HDP 3.x release(s).

Regarding the second question, what I meant to say is that there is always a limited number of blueprints provided by default; you can always create your own. If we do not ship one for EDW-ETL, then you can prepare one yourself and upload it.

Hope this helps!
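In case it is useful, here is a rough, hypothetical skeleton of what a custom blueprint you could upload might look like. The blueprint name, host group layout, and component lists below are placeholders for illustration only, not a shipped EDW-ETL definition:

```
{
  "Blueprints": {
    "blueprint_name": "my-custom-edw-etl",
    "stack_name": "HDP",
    "stack_version": "3.1"
  },
  "configurations": [],
  "host_groups": [
    {
      "name": "master",
      "components": [
        { "name": "NAMENODE" },
        { "name": "RESOURCEMANAGER" },
        { "name": "HIVE_SERVER" }
      ],
      "cardinality": "1"
    },
    {
      "name": "worker",
      "components": [
        { "name": "DATANODE" },
        { "name": "NODEMANAGER" }
      ],
      "cardinality": "1+"
    }
  ]
}
```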
						
					
			
    
	
		
		
01-17-2019 06:25 PM

@Pushpak Nandi I do not have any EDW-ETL blueprint for HDP 3.1. Last time I heard, the plan was to ship only EDW-Analytics with HDP 3.1.
						
					
    
	
		
		
11-13-2018 10:06 PM

							 
For Cloudbreak, the variables that @khorvath mentioned are Java JVM options that should be configured through the CB_JAVA_OPTS variable in your Profile file. You can set these as in the following example:

    export CB_JAVA_OPTS="-Dhttp.proxyHost=ec2-52-51-184-121.eu-west-1.compute.amazonaws.com -Dhttp.proxyPort=3128"

If you have a cert for SSL, then you should place it into the etc folder of your deployment and replace `path_to_cert` with the relative path of the cert from your deployment's etc folder.
						
					
			
    
	
		
		
10-15-2018 06:19 PM

@Neeraj Gupta Please make sure that the Cloudbreak policy attached to your role or user (depending on which credential type you're using) has all of the permissions listed here: https://docs.hortonworks.com/HDPDocuments/Cloudbreak/Cloudbreak-2.8.0/create-credential-aws/content/cb_create-credentialrole.html

If you created your role for an earlier version of Cloudbreak, you may need to update it, because additional permissions are required in 2.8.0.
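For reference, an AWS IAM policy document of that kind generally has the following shape. The two actions listed here are only illustrative examples, not the complete set Cloudbreak 2.8.0 requires; the linked documentation has the full policy:

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ec2:DescribeInstances",
        "cloudformation:DescribeStacks"
      ],
      "Resource": "*"
    }
  ]
}
```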
						
					