Member since 08-08-2017

Posts: 1652 | Kudos Received: 30 | Solutions: 11

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
|  | 1910 | 06-15-2020 05:23 AM |
|  | 15411 | 01-30-2020 08:04 PM |
|  | 2044 | 07-07-2019 09:06 PM |
|  | 8090 | 01-27-2018 10:17 PM |
|  | 4554 | 12-31-2017 10:12 PM |
10-14-2020 08:44 AM
Actually, what I want to do is to use the API to install HBASE with all of the HBase components. In the OOZIE procedure you wrote, what is the API call to add HBASE instead of OOZIE?
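Presumably the same service-creation endpoint works with the service name swapped. A sketch, assuming the same localhost Ambari endpoint and `HDP` cluster name used in the OOZIE curl example elsewhere in this thread:

```python
import json

# Sketch only: assumed Ambari endpoint and cluster name, mirroring the OOZIE
# curl example quoted elsewhere in this thread; "HBASE" is the stack-defined
# service name for HBase.
url = "http://localhost:8080/api/v1/clusters/HDP/services"
body = json.dumps({"ServiceInfo": {"service_name": "HBASE"}})
print("curl --user admin:admin -H 'X-Requested-By: ambari' "
      f"-i -X POST -d '{body}' {url}")
```

This only registers the service; the components still have to be created and mapped onto hosts before anything is installed.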
						
					
10-14-2020 08:42 AM
I already have the files:

ls /etc/hbase/conf/
hadoop-metrics2-hbase.properties  hbase-env.cmd  hbase-env.sh  hbase-policy.xml  hbase-site.xml  log4j.properties  regionservers
						
					
10-14-2020 08:24 AM
I have another related issue, but it happens when I install the component from Ambari: from the host I do Add --> Install HBase Master, but I get the following:

Traceback (most recent call last):
  File "/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/hooks/before-INSTALL/scripts/hook.py", line 37, in <module>
    BeforeInstallHook().execute()
  File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", line 375, in execute
    method(env)
  File "/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/hooks/before-INSTALL/scripts/hook.py", line 28, in hook
    import params
  File "/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/hooks/before-INSTALL/scripts/params.py", line 110, in <module>
    hbase_user_dirs = format("/home/{hbase_user},/tmp/{hbase_user},/usr/bin/{hbase_user},/var/log/{hbase_user},{hbase_tmp_dir}")
  File "/usr/lib/ambari-agent/lib/resource_management/libraries/functions/format.py", line 95, in format
    return ConfigurationFormatter().format(format_string, args, **result)
  File "/usr/lib/ambari-agent/lib/resource_management/libraries/functions/format.py", line 59, in format
    result_protected = self.vformat(format_string, args, all_params)
  File "/usr/lib64/python2.7/string.py", line 549, in vformat
    result = self._vformat(format_string, args, kwargs, used_args, 2)
  File "/usr/lib64/python2.7/string.py", line 582, in _vformat
    result.append(self.format_field(obj, format_spec))
  File "/usr/lib64/python2.7/string.py", line 599, in format_field
    return format(value, format_spec)
  File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/config_dictionary.py", line 73, in __getattr__
    raise Fail("Configuration parameter '" + self.name + "' was not found in configurations dictionary!")
resource_management.core.exceptions.Fail: Configuration parameter 'hbase-env' was not found in configurations dictionary!    
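The failure suggests the `hbase-env` configuration type was never attached to the cluster's desired configurations, so the install hook cannot resolve `hbase_user`. A minimal sketch of the REST call that attaches a desired config of that type before retrying the install (the host, credentials, tag, and property value here are assumptions, not taken from the cluster above):

```python
import json

# Hypothetical coordinates -- substitute your own Ambari host and cluster name.
AMBARI = "http://localhost:8080/api/v1"
CLUSTER = "HDP"

def desired_config_body(config_type, tag, properties):
    """Build the body for PUT /clusters/<name> that attaches a desired
    configuration of the given type (here: the missing hbase-env)."""
    return json.dumps({
        "Clusters": {
            "desired_config": {
                "type": config_type,
                "tag": tag,
                "properties": properties,
            }
        }
    })

# Minimal hbase-env carrying only the service user that the hook dereferences.
body = desired_config_body("hbase-env", "version1", {"hbase_user": "hbase"})
print("curl --user admin:admin -H 'X-Requested-By: ambari' "
      f"-X PUT -d '{body}' {AMBARI}/clusters/{CLUSTER}")
```

In practice the full `hbase-env` property set from the stack definition should be supplied, not just one key; this only illustrates the shape of the call.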
						
					
10-13-2020 02:13 PM
We have an Ambari cluster with HDP version 2.6.5.

We want to add the OOZIE service on specific nodes in our cluster. For example, we have 10 worker machines in the HDP cluster, and we want to add the OOZIE service on each worker machine.

From the documentation we found the following API call, which does add the service to Ambari:

curl --user admin:admin -H "X-Requested-By: ambari" -i -X POST -d '{"ServiceInfo":{"service_name":"OOZIE"}}' http://localhost:8080/api/v1/clusters/HDP/services

But we have not succeeded in extending this API call to install the OOZIE service on each worker node. The final target is to add the service to Ambari so that each worker node has the OOZIE service. Any ideas how to continue from this stage?
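Creating the service with a single POST only registers OOZIE; to get it onto specific hosts, its components must also be created and then mapped onto each worker before the install is requested. A rough sketch of that sequence under assumed names (the `worker…example.com` hostnames are hypothetical, and `OOZIE_CLIENT` is used as the per-worker component; the real component names come from the stack definition):

```python
import json

AMBARI = "http://localhost:8080/api/v1"   # endpoint from the curl example above
CLUSTER = "HDP"
# Hypothetical worker hostnames -- substitute your ten real workers.
WORKERS = [f"worker{i:02d}.example.com" for i in range(1, 11)]

def calls_to_install(service, component, hosts):
    """Return (method, url, body) tuples for the usual Ambari add-service flow:
    create the service, create its component, map the component onto each host,
    then ask Ambari to move the whole service to the INSTALLED state."""
    base = f"{AMBARI}/clusters/{CLUSTER}"
    calls = [
        ("POST", f"{base}/services",
         json.dumps({"ServiceInfo": {"service_name": service}})),
        ("POST", f"{base}/services/{service}/components/{component}", ""),
    ]
    calls += [("POST", f"{base}/hosts/{h}/host_components/{component}", "")
              for h in hosts]
    calls.append(("PUT", f"{base}/services/{service}",
                  json.dumps({"ServiceInfo": {"state": "INSTALLED"}})))
    return calls

for method, url, body in calls_to_install("OOZIE", "OOZIE_CLIENT", WORKERS):
    print(method, url, body)
```

Each tuple corresponds to one curl invocation with the same `--user` and `X-Requested-By` headers as above; the final PUT is what triggers the actual package installation on every mapped host.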
						
					
10-12-2020 02:47 AM
A little question: why not just stop the HDFS service on each new data node and set it to maintenance mode?
						
					
10-09-2020 12:59 AM
You said there is no need to run it, but the post I mentioned says to run it. So which is right?
						
					
10-09-2020 12:56 AM
Hi all,

We have an HDP 2.6.4 cluster with 245 worker machines. Each worker has a DataNode and a Resource Manager. We want to add 10 new worker machines to the cluster, but we want to disable the DataNodes on the new machines so that no data transfers from the old DataNodes to the new DataNodes. I am thinking of putting the new DataNodes into maintenance mode, but I am not sure whether that action alone is enough to disable the DataNodes on the new workers.
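If maintenance mode turns out to be the right lever, the host-level toggle can be driven from the Ambari REST API. A sketch with hypothetical hostnames and an assumed endpoint (one caveat: maintenance mode mainly suppresses alerts and bulk operations; to actually keep blocks off the new nodes, the DataNode components themselves should stay stopped):

```python
import json

AMBARI = "http://localhost:8080/api/v1"   # assumed Ambari endpoint
CLUSTER = "HDP"
# Hypothetical names for the ten new workers.
NEW_WORKERS = [f"worker{i}.example.com" for i in range(246, 256)]

def maintenance_body(on=True):
    """Body for PUT /clusters/<cluster>/hosts/<host> that toggles the
    host-level maintenance state."""
    return json.dumps({
        "RequestInfo": {"context": "maintenance mode for new workers"},
        "Hosts": {"maintenance_state": "ON" if on else "OFF"},
    })

for host in NEW_WORKERS:
    print("PUT", f"{AMBARI}/clusters/{CLUSTER}/hosts/{host}", maintenance_body())
```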
						
					
Labels: Ambari Blueprints
09-13-2020 09:17 AM
Hi all,

We are now performing the hostname change procedure on a production cluster according to the document https://docs.cloudera.com/HDPDocuments/Ambari-2.7.0.0/administering-ambari/content/amb_changing_host_names.html

The last stage says: "in case NameNode HA is enabled, you need to run the following command on one of the NameNodes":

hdfs zkfc -formatZK -force

Since we have an active NameNode and a standby NameNode, we assume that NameNode HA is enabled on our cluster. But we want to understand the risks of running this command on one of the NameNodes. Is the command safe to run without risks?
						
					
Labels: Ambari Blueprints
    
	
		
		
09-13-2020 09:08 AM
Thank you for the post. But another question: according to the document https://docs.cloudera.com/HDPDocuments/Ambari-2.7.0.0/administering-ambari/content/amb_changing_host_names.html, the last stage says that in case NameNode HA is enabled, you need to run the following command on one of the NameNodes:

hdfs zkfc -formatZK -force

Since we have an active NameNode and a standby NameNode, we assume that NameNode HA is enabled (example from our cluster). But we want to understand the risks of running this command on one of the NameNodes. Is the command safe to run without risks?
						
					
09-08-2020 01:42 PM
We have HDP cluster version `2.6.5` and Ambari version `2.6.1`. The cluster includes 3 master machines and 211 data-node machines (worker machines); all machines are `rhel 7.2`.

For example, the masters are master1.sys77.com, master2.sys77.com, master3.sys77.com … and the data-node machines are worker01.sys77.com, worker02.sys77.com ----> worker211.sys77.com.

Now we want to change the domain name to `bigdata.com` instead of `sys77.com`. What is the procedure to replace the `domain name` (`sys77.com`) for a Hadoop cluster (HDP cluster with Ambari)?
						
					