Member since 10-14-2015
	
	
	
	
	
	
	
	
	
	
	
	
	
	
			
      
65 Posts
57 Kudos Received
20 Solutions
        My Accepted Solutions
| Title | Views | Posted | 
|---|---|---|
| | 10020 | 04-20-2018 10:07 AM |
| | 3501 | 09-20-2017 12:31 PM |
| | 3104 | 05-04-2017 01:11 PM |
| | 1773 | 02-14-2017 07:36 AM |
| | 6995 | 02-03-2017 05:52 PM |
			
    
	
		
		
01-16-2017 08:57 AM
	
	
	
	
	
	
	
	
	
	
	
	
	
		
	
				
		
			
					
				
		
	
		
					
If you would like to install Cloudbreak on an existing VM, you can do it as described here: http://sequenceiq.com/cloudbreak-docs/latest/onprem/

If I understand you correctly, you would like to have a 4-node cluster installed by Cloudbreak, with Cloudbreak itself running on the edge node of that cluster. If that is the case, it is a chicken-and-egg problem: you cannot have the cluster created by Cloudbreak and only install Cloudbreak afterwards, once the cluster is ready.

However, if you just want an extra Cloudbreak instance installed on an existing edge node, you can certainly do that, although I would not consider it best practice.

Attila
						
					
				
			
			
			
			
			
			
			
			
			
		
			
    
	
		
		
01-09-2017 07:51 AM
	
	
	
	
	
	
	
	
	
	
	
	
	
	
		
	
				
		
			
					
	
		2 Kudos
		
	
				
		
	
		
					
Even if you installed your cluster with Cloudbreak, you can upgrade HDP through Ambari. There are two methods for upgrading HDP with Ambari: Rolling Upgrade and Express Upgrade.

A Rolling Upgrade orchestrates the HDP upgrade in an order meant to preserve cluster operation and minimize service impact during the upgrade. It has more stringent prerequisites (particularly regarding cluster high-availability configuration) and can take longer to complete than an Express Upgrade. An Express Upgrade orchestrates the HDP upgrade in an order that incurs cluster downtime, but with less stringent prerequisites.

Most probably you have a non-HA cluster, so follow the steps described under Express Upgrade: https://docs.hortonworks.com/HDPDocuments/Ambari-2.4.2.0/bk_ambari-upgrade/content/upgrading_hdp_stack.html

Attila
						
					
				
			
			
			
			
			
			
			
			
			
		
			
    
	
		
		
01-06-2017 07:42 PM
	
	
	
	
	
	
	
	
	
	
	
	
	
	
		
	
				
		
			
					
	
		1 Kudo
		
	
				
		
	
		
					
An edge node is nothing more than a properly configured hostgroup containing, for example, client libraries and/or perimeter-security components such as Knox. What is in the hostgroup is transparent to Cloudbreak; I have copied an example here:

...
 {
      "name": "host_group_edge",
      "configurations": [],
      "components": [
        {
          "name": "ZOOKEEPER_CLIENT"
        },
        {
          "name": "PIG"
        },
        {
          "name": "OOZIE_CLIENT"
        },
        {
          "name": "HBASE_CLIENT"
        },
        {
          "name": "HCAT"
        },
        {
          "name": "KNOX_GATEWAY"
        },
        {
          "name": "METRICS_MONITOR"
        },
        {
          "name": "FALCON_CLIENT"
        },
        {
          "name": "TEZ_CLIENT"
        },
        {
          "name": "SLIDER"
        },
        {
          "name": "SQOOP"
        },
        {
          "name": "HDFS_CLIENT"
        },
        {
          "name": "HIVE_CLIENT"
        },
        {
          "name": "YARN_CLIENT"
        },
        {
          "name": "METRICS_COLLECTOR"
        },
        {
          "name": "MAPREDUCE2_CLIENT"
        }
      ],
      "cardinality": "1"
    }
...

If you would like to create your own blueprint, probably the easiest way is to start from a default blueprint delivered together with Cloudbreak (e.g. the hdp-small-default blueprint) and delete the hostgroups and components you don't need.
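If you prefer to script that pruning step instead of editing the JSON by hand, a minimal Python sketch is below. The component selection and the tiny inline blueprint are made up for illustration; in practice you would load a real default blueprint such as hdp-small-default from a file and write the result back out.

```python
import json

# Hypothetical set of components to drop from every host group --
# pick whatever your cluster does not need.
UNWANTED = {"FALCON_CLIENT", "SLIDER", "SQOOP"}

def prune_blueprint(blueprint, unwanted):
    """Remove unwanted components from each host group of an
    Ambari-style blueprint (host_groups -> components)."""
    for group in blueprint.get("host_groups", []):
        group["components"] = [
            c for c in group["components"] if c["name"] not in unwanted
        ]
    return blueprint

# Tiny inline stand-in for a real blueprint loaded from disk.
bp = {"host_groups": [{"name": "host_group_edge",
                       "components": [{"name": "SLIDER"},
                                      {"name": "HDFS_CLIENT"}]}]}
print(json.dumps(prune_blueprint(bp, UNWANTED)))
```

The pruned JSON can then be uploaded to Cloudbreak as a custom blueprint.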
						
					
				
			
			
			
			
			
			
			
			
			
		
			
    
	
		
		
01-04-2017 10:44 AM
	
	
	
	
	
	
	
	
	
	
	
	
	
	
		
	
				
		
			
					
	
		11 Kudos
		
	
				
		
	
		
					
Using SaltStack to run commands on HDCloud and Cloudbreak

HDCloud and Cloudbreak make it easy to provision, configure and elastically grow HDP clusters on cloud infrastructure. At provisioning time, Cloudbreak can execute recipes on the nodes participating in the cluster. Recipes are simple scripts written in bash, python or any other scripting language available on the nodes. It is common that users would like to execute ad-hoc commands (e.g. collecting logs, installing extra packages, executing scripts) not only at provisioning time, but also while the cluster is in use.

Infrastructure management of Cloudbreak

Under the hood, Cloudbreak uses SaltStack to manage the nodes of the cluster, install packages, change configuration files and execute recipes. After the provisioning phase, users can take advantage of this infrastructure-management tool to execute their own scripts and create their own Salt states.

Connectivity check

By default the Salt master is installed on one of the master nodes, more specifically on the same node where the Ambari server runs. To run a simple Salt connectivity check, just ssh to the Ambari server machine and execute:

sudo salt '*' test.ping

Remote command execution

Running commands on remote systems is the core function of Salt; it can execute arbitrary commands across your cluster completely in parallel. Execute a command on all nodes of your cluster:

sudo salt '*' cmd.run 'echo hello'

Targeting commands

If you would like to execute commands only on specific nodes, you can use Salt's targeting mechanism. E.g. execute the first command on the master node(s) and the second on the worker node(s):

sudo salt -G 'hostgroup:master' cmd.run 'yarn application -list'
sudo salt -G 'hostgroup:worker' cmd.run 'free -h'

Targeting is very flexible in Salt; you can read about it in the Salt documentation. Probably the most common targeting option is Salt grains. Beyond the standard grains supported by SaltStack, Cloudbreak defines additional grains such as hostgroup and roles. You can list all supported grains with:

sudo salt '*' grains.items

Creating your own Salt state

If you would like to do something more complex than executing a single command, you can create your own Salt state. E.g. you can create a state that installs multiple packages by saving the following file as /srv/salt/nettools/init.sls:

install_network_packages:
  pkg.installed:
    - pkgs:
      - rsync
      - lftp
      - curl

You can execute this new state on every node with:

salt '*' state.apply nettools

There is a much more elegant way to include pre-written Salt states; for that, take a look at Salt Formulas.
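Salt can also emit machine-readable results (the standard `--out=json` and `--static` CLI options combine all minion returns into one JSON document), which makes ad-hoc commands easy to post-process. A minimal Python sketch, assuming you have captured the output of `sudo salt '*' test.ping --out=json --static`; the minion ids are invented for illustration:

```python
import json

# Output as it might look from:
#   sudo salt '*' test.ping --out=json --static
sample = ('{"master1.example.com": true,'
          ' "worker1.example.com": true,'
          ' "worker2.example.com": false}')

def unresponsive(raw):
    """Return the sorted minion ids whose test.ping result was not True."""
    results = json.loads(raw)
    return sorted(m for m, ok in results.items() if ok is not True)

print(unresponsive(sample))
```

The same pattern works for any Salt module output, e.g. filtering `grains.items` results across the cluster.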
						
					
				
			
			
			
			
			
			
			
			
			
		
		
			
				
						
						
						
		
	
					
			
		
	
	
	
	
				
		
	
	
			
    
	
		
		
10-29-2016 07:55 PM
	
	
	
	
	
	
	
	
	
	
	
	
	
	
		
	
				
		
			
					
				
		
	
		
					
Sorry, but I do not have such a comparison.

Attila
						
					
				
			
			
			
			
			
			
			
			
			
		
			
    
	
		
		
10-25-2016 09:23 PM
	
	
	
	
	
	
	
	
	
	
	
	
	
	
		
	
				
		
			
					
	
		4 Kudos
		
	
				
		
	
		
					
Hi @Obaid Salikeen,

Pros:

- Multiple cloud provider support (you can deploy clusters to different providers using the same interface)
- You can use it even on a private cloud, e.g. OpenStack
- Cloudbreak and HDP are open source
- Cloudbreak installs Ambari, which you can use to monitor or customise your cluster after deployment (e.g. add new services)
- It comes with a fully configured SaltStack, which you can use to manage your VMs, e.g. to apply security patches
- More flexible, since you can create your own blueprint containing only the services you need
- Cloudbreak supports autoscaling based on metrics gathered from Ambari (some of these metrics are very general, e.g. disk space; others are Hadoop-specific, e.g. pending YARN containers)

Cons:

- You need one more instance where Cloudbreak is running (of course, one Cloudbreak can manage multiple clusters)
- Cloudbreak is a cluster management tool and you cannot submit jobs through it; something like steps in EMR is not supported

Disclaimer: I am an engineer working on Cloudbreak

Attila
						
					
				
			
			
			
			
			
			
			
			
			
		
			
    
	
		
		
10-25-2016 10:35 AM
	
	
	
	
	
	
	
	
	
	
	
	
	
	
		
	
				
		
			
					
				
		
	
		
					
Hi @Obaid Salikeen,

HDC does not support HDF (NiFi); it supports HDP only, with pre-defined blueprints. There is no short-term plan to support HDF.

Attila
						
					
				
			
			
			
			
			
			
			
			
			
		
			
    
	
		
		
10-24-2016 09:47 PM
	
	
	
	
	
	
	
	
	
	
	
	
	
	
		
	
				
		
			
					
				
		
	
		
					
@Vadim Vaks, @Miguel Lucero it has just been fixed in 1.6.1-rc.27. You can update to this version with:

cbd update rc-1.6
cbd regenerate
cbd restart

Could you try it out, please?
						
					
				
			
			
			
			
			
			
			
			
			
		
			
    
	
		
		
10-21-2016 11:01 PM
	
	
	
	
	
	
	
	
	
	
	
	
	
	
		
	
				
		
			
					
	
		1 Kudo
		
	
				
		
	
		
					
I think I have found the issue. debug: True is set in /etc/salt/master.d/custom.conf:

....
rest_cherrypy:
  ....
  port: 3080
  ....
  debug: True

Because of this flag the salt-api prints more useful info for debugging, but it has the side effect that it reloads itself whenever the underlying code changes: https://docs.saltstack.com/en/latest/ref/netapi/all/salt.netapi.rest_cherrypy.html

I saw the "Restarting because ..." messages in the Salt master log while Ambari was installing the packages for HDP (in my case email/mime/__init__.py was touched and triggered the restart), which was not handled very gracefully by CherryPy and caused the port collision:

2016-10-21 21:25:30,016 [cherrypy.error                           ][INFO    ][3454] [21/Oct/2016:21:25:30] ENGINE Restarting because /usr/lib64/python2.6/email/mime/__init__.py changed.
2016-10-21 21:25:30,016 [cherrypy.error                           ][INFO    ][3454] [21/Oct/2016:21:25:30] ENGINE Stopped thread 'Autoreloader'.
2016-10-21 21:25:30,017 [cherrypy.error                           ][INFO    ][3454] [21/Oct/2016:21:25:30] ENGINE Bus STOPPING
2016-10-21 21:25:30,027 [cherrypy.error                           ][INFO    ][3454] [21/Oct/2016:21:25:30] ENGINE HTTP Server cherrypy._cpwsgi_server.CPWSGIServer(('0.0.0.0', 3080)) shut down

I was thinking about a workaround, but I was not able to figure out one you could apply. I will try to make a fix for it.

Attila
						
					