Member since 09-15-2015

457 Posts | 507 Kudos Received | 90 Solutions

        My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 16929 | 11-01-2016 08:16 AM |
| | 12583 | 11-01-2016 07:45 AM |
| | 11729 | 10-25-2016 09:50 AM |
| | 2498 | 10-21-2016 03:50 AM |
| | 5273 | 10-14-2016 03:12 PM |

02-18-2016 07:00 AM
1 Kudo

Please see my comment above. In secure mode you need local user accounts on all NodeManager nodes.

02-18-2016 06:53 AM
2 Kudos

@Sagar Shimpi @ARUNKUMAR RAMASAMY I agree with @Vikas Gadade: if you want to execute jobs with your own user account, you have to make sure the user is available on every NodeManager node! Please see this => "YARN containers in a secure cluster use the operating system facilities to offer execution isolation for containers. Secure containers execute under the credentials of the job user. The operating system enforces access restriction for the container. The container must run as the user that submitted the application." More info => https://hadoop.apache.org/docs/stable/hadoop-yarn/hadoop-yarn-site/SecureContainer.html

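A minimal sketch of how you could verify this across the cluster, assuming passwordless SSH from an admin host; the file nodes.txt (one NodeManager hostname per line) is hypothetical:

```python
# Sketch: check that a given user account exists on every NodeManager node.
# Assumes passwordless SSH; "nodes.txt" is a hypothetical host list.
import subprocess
import sys

user = sys.argv[1]  # the account that submits the YARN jobs

with open("nodes.txt") as f:
    nodes = [line.strip() for line in f if line.strip()]

for node in nodes:
    # "id <user>" exits non-zero if the account does not exist on that host
    rc = subprocess.call(["ssh", node, "id", user])
    print("%s: %s" % (node, "OK" if rc == 0 else "user missing"))
```
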
02-17-2016 06:41 PM
2 Kudos

Make sure you have configured the right heap size, and validate the following configurations:

- hbase.rootdir = hdfs://ams......
- hbase.cluster.distributed = true
- Metrics service operation mode = distributed
- hbase.zookeeper.property.clientPort = 2181
- hbase.zookeeper.quorum = <zookeeper quorum, comma separated without port>
- zookeeper.znode.parent = /ams-hbase-unsecure or /ams-hbase-secure (depending on whether Kerberos is enabled)

Restart the Metrics Collector and make sure a new znode was created in ZooKeeper (see the sketch below). Make sure HBase and the Metrics Collector have been started successfully.

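To confirm the znode after the restart, you could shell out to zookeeper-client; a small sketch where the ZooKeeper host and znode path are examples to adapt to your cluster:

```python
# Sketch: verify the AMS znode exists by running zookeeper-client.
# Host/port and znode are examples; check_output raises CalledProcessError
# if the znode cannot be listed.
import subprocess

zk_server = "zk1.example.com:2181"  # placeholder ZooKeeper host:port
znode = "/ams-hbase-unsecure"       # or /ams-hbase-secure with Kerberos

out = subprocess.check_output(
    ["zookeeper-client", "-server", zk_server, "ls", znode])
print(out.decode())
```
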
02-17-2016 12:02 PM
1 Kudo

Thanks!!!

02-17-2016 06:59 AM
5 Kudos

It is definitely possible to do that; however, I would not recommend it, especially in a production environment. The JN processes are just lightweight daemons, so you can place them on the same nodes as other master services. Using one quorum for multiple clusters increases the risk of affecting the health/stability of all the attached clusters. For example, if Cluster A brings down your JN quorum (for whatever reason), the NameNodes of Cluster B can't synchronize their state and will eventually shut down because the quorum is not available =>

```
2016-02-16 22:55:55,550 FATAL namenode.FSEditLog (JournalSet.java:mapJournalsAndReportErrors(398)) - Error: flush failed for required journal (JournalAndStream(mgr=QJM to [XXXXX:8485, XXXXXX:8485, xXXXX:8485], stream=QuorumOutputStream starting at txid 51260))
java.io.IOException: Timed out waiting 20000ms for a quorum of nodes to respond.
```

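When you hit that timeout, a quick reachability probe against each JournalNode's RPC port (8485, as in the log above) can help narrow things down; a sketch with placeholder hostnames:

```python
# Sketch: probe TCP reachability of each JournalNode RPC port (8485).
# Hostnames are placeholders for your JournalNode hosts.
import socket

journal_nodes = ["jn1.example.com", "jn2.example.com", "jn3.example.com"]

for host in journal_nodes:
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(5)
    try:
        s.connect((host, 8485))
        print("%s:8485 reachable" % host)
    except socket.error as e:
        print("%s:8485 NOT reachable (%s)" % (host, e))
    finally:
        s.close()
```
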
02-15-2016 08:28 PM
1 Kudo

Yeah, the script is really a starting point for Ambari audits. It sounds like you need more of an export/import functionality; I have worked on something similar in the past. Or are you looking for a way to export the config deltas from two clusters and compare them? How would the export of configuration deltas work? Export all adjusted configurations, but automatically ignore configurations that contain a hostname, IP, or cluster name? Or do you just export all delta configurations, select the configuration values you want for the new cluster, and import the selected values?

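As a starting point for such an export, the desired config versions of a cluster can be listed via the Ambari REST API; a minimal sketch, assuming placeholder host, credentials, and cluster name:

```python
# Sketch: list the desired config types and tags of an Ambari-managed
# cluster. Host, credentials, and cluster name are placeholders.
import requests

ambari = "http://ambari.example.com:8080"
cluster = "mycluster"

resp = requests.get(
    "%s/api/v1/clusters/%s?fields=Clusters/desired_configs" % (ambari, cluster),
    auth=("admin", "admin"))
resp.raise_for_status()

for cfg_type, info in sorted(resp.json()["Clusters"]["desired_configs"].items()):
    print("%s -> tag %s" % (cfg_type, info["tag"]))
```
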
02-15-2016 07:43 PM
1 Kudo

@Steven Hirsch The Python script is using the following modules:

- requests
- json
- getpass
- logging
- sys
- getopt

On most systems you only have to install requests; the other modules are part of the Python standard library. Requests is not a Python script, it is a complete package that makes it easier to submit API requests, see http://docs.python-requests.org/en/master/ (you can install it with "pip install requests"). Let me know if you need any help with the script, I am happy to help and improve it 🙂

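For anyone new to these modules, this is roughly how they fit together in a script like this (endpoint and user are placeholders; getopt would handle CLI option parsing in the full script):

```python
# Sketch: typical shape of an authenticated Ambari API call using the
# modules listed above. Endpoint and user are placeholders.
import getpass
import json
import logging
import sys

import requests

logging.basicConfig(level=logging.INFO, stream=sys.stdout)

user = "admin"                                   # placeholder account
password = getpass.getpass("Ambari password: ")  # prompt without echo

resp = requests.get("http://ambari.example.com:8080/api/v1/clusters",
                    auth=(user, password))
resp.raise_for_status()
logging.info(json.dumps(resp.json(), indent=2))
```
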
02-14-2016 06:14 PM

Great, thanks for sharing! This might also help: https://github.com/mr-jstraub/HDFSQuota/blob/master/HDFSQuota.ipynb

02-13-2016 08:02 AM
1 Kudo

You don't have to remove and reinstall the Ambari Metrics service from Ambari, I am pretty sure this will not solve the problem! Please see my comment above: since hbase.cluster.distributed is true, could you please change "Metrics service operation mode" to "distributed".

If this is a new installation, you can try to remove all Metrics data:

1. Stop Ambari Metrics (collector + all monitors).
2. Make sure no Metrics process is running (you can kill all processes belonging to the user "ams"; a quick check for this is sketched below).
3. Remove the data from HDFS (hdfs dfs -rmr hdfs://hdp-m.samitsolutions.com:8020/apps/hbase/data).
4. Remove the data from ZooKeeper (login: zookeeper-client -server hdp-m.samitsolutions.com:2181; removal: rmr /<hbase znode>).
5. Start the Ambari Metrics Collector (not the monitors!).
6. See if the collector starts; if not, please upload the hbase-master and ambari-metrics-collector logs.

Is this a secured (Kerberized) or unsecured (no Kerberos) cluster? There are other steps we can try, but let's try the above first. Thanks

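For step 2, a quick way to check for leftover processes of the ams user (a sketch using pgrep):

```python
# Sketch: list any processes still running as the "ams" user (step 2 above).
import subprocess

try:
    out = subprocess.check_output(["pgrep", "-u", "ams", "-l"])
    print("Processes still running as ams:\n%s" % out.decode())
except subprocess.CalledProcessError:
    # pgrep exits with a non-zero status when nothing matches
    print("No processes running as ams")
```
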
02-12-2016 05:43 PM
4 Kudos

@wei yang Are you using Spark 1.3.1 or just the content of the tutorial? ORC support was added in Spark 1.4 (http://hortonworks.com/blog/bringing-orc-support-into-apache-spark/). Try using the following command: `myDataFrame.write.format("orc").save("some_name")`

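In PySpark (1.4+) the full round trip looks roughly like this sketch; the sample data and output path are made up for illustration:

```python
# Sketch: write and read ORC with PySpark 1.4+. ORC support is exposed
# through HiveContext; the sample data and path are illustrative.
from pyspark import SparkContext
from pyspark.sql import HiveContext

sc = SparkContext(appName="orc-example")
sqlContext = HiveContext(sc)

df = sqlContext.createDataFrame([(1, "a"), (2, "b")], ["id", "value"])
df.write.format("orc").save("/tmp/orc_example")  # placeholder output path

# Read it back to verify
sqlContext.read.format("orc").load("/tmp/orc_example").show()
```
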