Member since 09-15-2015
Posts: 294
Kudos Received: 764
Solutions: 81
        My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 2117 | 07-27-2017 04:12 PM |
| | 5415 | 06-29-2017 10:50 PM |
| | 2593 | 06-21-2017 06:29 PM |
| | 3148 | 06-20-2017 06:22 PM |
| | 2758 | 06-16-2017 06:46 PM |
			
    
	
		
		
04-02-2017 08:17 AM
Log in to the Ambari Web UI with the admin/admin console and run the service checks:
1) Hive: click the Hive service and choose Run Service Check. The output will show the exact error, so you can see what the issue is and troubleshoot it.
2) SmartSense: you can ignore this one.
3) Ambari Metrics: click the Ambari Metrics service and choose Run Service Check.
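The same checks can also be triggered through Ambari's REST API by POSTing a service-check request to `/api/v1/clusters/<cluster>/requests`. A minimal sketch of building that request body; the cluster name and the `<SERVICE>_SERVICE_CHECK` command pattern are assumptions here (most services follow that pattern, but a few use different command names):

```python
import json

def service_check_payload(service: str) -> str:
    """Build the JSON body for an Ambari service-check request.

    `service` is the Ambari service name, e.g. "HIVE".
    The "<SERVICE>_SERVICE_CHECK" command name is the common
    pattern, assumed here; verify it for your service.
    """
    body = {
        "RequestInfo": {
            "context": f"{service} Service Check",
            "command": f"{service}_SERVICE_CHECK",
        },
        "Requests/resource_filters": [{"service_name": service}],
    }
    return json.dumps(body)

# This body would be POSTed, with admin credentials and an
# X-Requested-By header, to:
#   http://<ambari-host>:8080/api/v1/clusters/<cluster>/requests
print(service_check_payload("HIVE"))
```

The response includes a request id you can poll to watch the check progress, just as the UI does.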
 
						
					
	
		
		
01-09-2018 06:06 PM
IBM offers free courses in Scala and other languages. There are tests at the end of each course, and once you pass you can earn badges and showcase them. https://cognitiveclass.ai/
						
					
	
		
		
04-05-2017 01:46 PM
I'd also post this question on the Ambari track to check why Ambari didn't detect the DataNodes going down. From your logs it is hard to say why the DataNode went down; I again recommend increasing the DataNode heap allocation via Ambari. Also check that your nodes are provisioned with a sufficient amount of RAM.
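In Ambari the heap setting lives under HDFS > Configs ("DataNode maximum Java heap size"). For clusters managed outside Ambari, the equivalent change goes in hadoop-env.sh; a sketch, where the 4 GB figure is only an illustrative value to size against the RAM actually available on the node:

```shell
# hadoop-env.sh: raise the DataNode JVM heap.
# 4096m is an example value, not a recommendation for every node.
export HADOOP_DATANODE_OPTS="-Xms4096m -Xmx4096m ${HADOOP_DATANODE_OPTS}"
```

The DataNode must be restarted for the new heap size to take effect.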
						
					
	
		
		
03-22-2017 05:53 PM
8 Kudos
I can open the link just fine; please see the attached screenshot. http://hortonworks.com/wp-content/uploads/2015/08/DataSheet_HDPCD_Java_2.2.pdf Make sure you don't have any connection issues.
						
					
	
		
		
03-24-2017 11:19 AM
Yes, I'm afraid that fast upload can overload the buffers in Hadoop 2.5, as it uses JVM heap to store blocks while it uploads them. The bigger the mismatch between the rate data is generated (i.e. how fast things can be read) and the upload bandwidth, the more heap you need. On a long-haul upload you usually have limited bandwidth, and the more distcp workers there are, the more that bandwidth is divided between them, so the bigger the mismatch.

In Hadoop 2.5 you can get away with tuning the fast uploader to use less heap. It's tricky enough to configure that in the HDP 2.5 docs we chose not to mention the fs.s3a.fast.upload option at all; it was just too confusing, and we couldn't come up with good defaults that would work reliably. That is why I rewrote it completely for HDP 2.6. The HDP 2.6 / Apache Hadoop 2.8 block output stream (already in HDCloud) can buffer on disk (the default) or via byte buffers, as well as on heap, and tries to do better queueing of writes.

For HDP 2.5, the tuning options are covered in the Hadoop 2.7 docs. Essentially, a lower value of fs.s3a.threads.core and fs.s3a.threads.max keeps the number of buffered blocks down, while setting fs.s3a.multipart.size to something like 10485760 (10 MB) and fs.s3a.multipart.threshold to the same value reduces the buffer size before the uploads begin.

As I warned, you can end up spending time tuning, because the heap consumed increases with the threads.max value and decreases with the multipart threshold and size values. And over a remote connection, the more workers you have in the distcp operation (controlled by the -m option), the less bandwidth each one gets, so again: more heap overflows. You will invariably find out on the big uploads that there are limits.

As a result, in HDP 2.5 I'd recommend avoiding fast upload except in one special case: you have a very high speed connection to an S3 server in the same infrastructure, and you use it for code generating data, rather than for big distcp operations, which can read data as fast as it can be streamed off multiple disks.
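These knobs live in core-site.xml (or the custom core-site section in Ambari). A sketch of the conservative tuning described above; the multipart values come from the post, while the thread counts are illustrative low values, not tested defaults:

```xml
<!-- Fewer S3A upload threads: fewer blocks buffered on the heap at once -->
<property>
  <name>fs.s3a.threads.core</name>
  <value>2</value>
</property>
<property>
  <name>fs.s3a.threads.max</name>
  <value>4</value>
</property>
<!-- Smaller multipart size/threshold: uploads start sooner, buffers stay small -->
<property>
  <name>fs.s3a.multipart.size</name>
  <value>10485760</value>
</property>
<property>
  <name>fs.s3a.multipart.threshold</name>
  <value>10485760</value>
</property>
```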
						
					
	
		
		
03-28-2017 02:38 AM
The EventTime timezone fix is available in Ranger 0.7.0: https://issues.apache.org/jira/browse/RANGER-1249
						
					
	
		
		
03-18-2017 12:51 AM
1 Kudo
@Param NC - There is no way to close a question. Once you have found a suitable answer to a question, you can accept that answer. However, there is an option to Unfollow the question (see screenshot), so you will not receive any further notifications from it. Hope this helps.
						
					
	
		
		
03-20-2017 10:24 AM
Thanks Namit, this worked for me in my dev environment. I will try it with the next change in prod as well.
						
					