Member since 03-28-2016

194 Posts
18 Kudos Received
0 Solutions

11-15-2023 09:55 AM

Hello everyone, I updated the krb5-* packages to their latest version and that fixed the problem. Regards.
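For anyone hitting this later, a minimal sketch of that update on a RHEL/CentOS host with yum (package names can differ on other distributions):

    # check which krb5 packages are installed, then update them all to the latest version
    yum list installed "krb5*"
    sudo yum update -y "krb5*"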
						
					

02-09-2023 07:46 AM

@jagan20, as this is an older post, you would have a better chance of receiving a resolution by starting a new thread. This will also be an opportunity to provide details specific to your environment that could aid others in assisting you with a more accurate answer to your question. You can link this thread as a reference in your new post.
						
					

09-24-2019 02:11 AM

Hi Suresh,

There is no command for this, but you can easily find the information on the HBase Web UI:

http://host:16010/master-status#baseStats

Best,
Helmi KHALIFA
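If you need the same numbers from a script rather than a browser, the HMaster also exposes them as JSON through its JMX servlet; a sketch, assuming the default info port 16010 and a placeholder hostname:

    # fetch the master's status beans (live/dead region servers, etc.) as JSON
    curl -s 'http://host:16010/jmx?qry=Hadoop:service=HBase,name=Master,sub=Server'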
						
					

11-21-2017 05:42 AM

@suresh krish As the referenced article also suggests, if you need to recover accidentally deleted files in a production cluster's HDFS, the recommendation is to immediately stop all the DataNodes in the cluster and engage support to go through the process. When recovering production data, it is very important to have a clear understanding of the recovery procedure, to know all the precautions and checks to take care of, and to be confident about how to proceed if any of the steps fail.
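As a sketch of that first containment step, assuming shell access to the worker nodes (on an Ambari-managed cluster you would stop the DataNodes from the Ambari UI instead):

    # stop the DataNode so no further block deletions are processed (Hadoop 2.x layout)
    $HADOOP_HOME/sbin/hadoop-daemon.sh stop datanode
    # on Hadoop 3.x the equivalent is: hdfs --daemon stop datanode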
						
					

10-15-2017 05:41 PM

@suresh krish What are you trying to use: a keystore on HDFS, or a local one? The documentation says the following:

The JavaKeyStoreProvider, which is represented by the provider URI jceks://file|hdfs/path-to-keystore, is used to retrieve credentials from a Java keystore. The underlying use of the Hadoop filesystem abstraction allows credentials to be stored on the local filesystem or within HDFS.

and

The LocalJavaKeyStoreProvider, which is represented by the provider URI localjceks://file/path-to-keystore, is used to access credentials from a Java keystore that must be stored on the local filesystem.

You are using localjceks, so your URI should be localjceks://file/path-to-your-jceks; the file keyword is important. Also, /user/hdfs in this case is a local path, so it must exist in your OS. If you want to use HDFS, then you need jceks and the URI jceks://hdfs/path-to-your-file.
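As an illustration of the two URI forms, a sketch with the hadoop credential CLI (the store paths and the alias are hypothetical):

    # list credentials in a keystore on the local filesystem (LocalJavaKeyStoreProvider)
    hadoop credential list -provider localjceks://file/user/hdfs/mystore.jceks

    # create a credential in a keystore stored inside HDFS (JavaKeyStoreProvider)
    hadoop credential create mykey -provider jceks://hdfs/user/hdfs/mystore.jceks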
						
					

08-22-2017 07:02 PM

@suresh krish When you look at the environment variables in the Spark UI, you can see that the particular job is using the serialization property below. If you can't see it in the cluster configuration, that means the user is setting it at the job's runtime:

    spark.serializer        org.apache.spark.serializer.KryoSerializer

Secondly, spark.kryoserializer.buffer.max is built in with a default value of 64m. If required, you can increase that value at runtime. We could also set the KryoSerialization values at the cluster level, but that's not good practice without knowing the proper use case. Hope this helps you.
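For example, a sketch of overriding both settings at submit time (the application class and jar names are placeholders):

    spark-submit \
      --conf spark.serializer=org.apache.spark.serializer.KryoSerializer \
      --conf spark.kryoserializer.buffer.max=128m \
      --class com.example.MyApp myapp.jar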
						
					

08-07-2017 07:23 PM

@suresh krish, the error is because there is no attribute named 'policyName'. Moreover, policies are exported and imported at the service repository level, not at the individual policy level. For your reference, I'm attaching two exported service repo JSON files, one for HDFS and one for Hive. Hope this helps!
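If you would rather pull the export over the REST API than from the UI, a sketch (I believe the endpoint is available from Ranger 0.7 onward; the host and service name are placeholders):

    # export every policy of the named service repository as JSON
    curl -u admin:admin -o hdfs-policies.json \
      "http://ranger-host:6080/service/plugins/policies/exportJson?serviceName=cl1_hadoop"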
						
					

07-02-2017 06:08 AM

For reference, checking the bootstrap status through the Ambari REST API shows the host still RUNNING:

    curl -i -uadmin:admin http://192.168.218.183:8080/api/v1/bootstrap/1
    HTTP/1.1 200 OK
    X-Frame-Options: DENY
    X-XSS-Protection: 1; mode=block
    Set-Cookie: AMBARISESSIONID=h08834nky0l13v3s3e939xlz;Path=/;HttpOnly
    Expires: Thu, 01 Jan 1970 00:00:00 GMT
    User: admin
    Content-Type: application/json
    Vary: Accept-Encoding, User-Agent
    Content-Length: 108
    Server: Jetty(8.1.19.v20160209)
    {"status":"RUNNING","hostsStatus":[{"hostName":"dn3.discovercts.com","status":"RUNNING","log":""}],"log":""}
						
					

05-26-2017 06:35 PM

@suresh krish The issue seems to be with the user profile; please compare the working profile vs. the non-working one.
						
					

05-11-2017 02:11 PM

Thanks for your response. I already have a Grafana crt file created; do I need to import it again? Can you tell me how to import it?
						
					