Member since 01-25-2017
- Posts: 396
- Kudos Received: 28
- Solutions: 11
My Accepted Solutions

| Title | Views | Posted |
|---|---|---|
| | 1385 | 10-19-2023 04:36 PM |
| | 5129 | 12-08-2018 06:56 PM |
| | 6753 | 10-05-2018 06:28 AM |
| | 23311 | 04-19-2018 02:27 AM |
| | 23333 | 04-18-2018 09:40 AM |
05-29-2019 05:31 AM

So you installed the Cloudera Manager agent manually, and then the parcels were distributed automatically? Did you face any issues?
05-29-2019 04:38 AM

How will I choose which parcels to distribute to each node? Or will it try to distribute both parcels to every node, with one of them failing?
05-29-2019 04:27 AM

@chriswalton007 Hi Chris, I'm unable to reach the link. Do you mind sharing the exact steps you used to overcome this? Thanks in advance.

@RajeshBodolla What about the CDH parcels? For other extra custom parcels like Anaconda and Airflow, will I face the same issue? And what about the NameNode and the MySQL database?
01-26-2019 08:37 PM

Hi @seleoni, can you go to the HDFS configuration and check "Java Heap Size of NameNode in Bytes"? If it is around 1 GB, try bumping it up, then restart the NameNode and check whether that solves your issue.
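To see how close the NameNode already is to that heap ceiling, one option is to read the JVM memory bean from the NameNode's JMX servlet. This is a minimal sketch, not part of the original post; the hostname is a placeholder, port 50070 is the CDH 5 default web UI port, and it assumes a non-TLS, unauthenticated endpoint:

```python
import json
from urllib.request import urlopen

def parse_heap(jmx_json: str):
    """Extract (used, max) heap bytes from a JMX Memory-bean response."""
    heap = json.loads(jmx_json)["beans"][0]["HeapMemoryUsage"]
    return heap["used"], heap["max"]

def namenode_heap(host: str, port: int = 50070):
    """Query the NameNode JMX servlet for its current JVM heap usage."""
    url = f"http://{host}:{port}/jmx?qry=java.lang:type=Memory"
    with urlopen(url) as resp:
        return parse_heap(resp.read().decode())
```

If `used` sits consistently near `max`, raising "Java Heap Size of NameNode in Bytes" and restarting the NameNode, as suggested above, is the next step.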
01-22-2019 08:47 PM

Issue resolved, so I'm adding how we solved it for reference:

We have a VIP over the Hadoop NameNodes (active and standby) with a keepalive check that directs all calls to the active node. The pool uses a monitor called Hadoop_Namenode_monitor_50070. The monitor sends the following GET request:

```
GET /jmx?qry=Hadoop:service=NameNode,name=FSNamesystem HTTP/1.0\r\n\n
```

and looks for the string `active` to determine which node is active. In CDH 5.16.1 the output of the JMX query above changed: it now includes a parameter called "NumActiveClients", which causes both the active and the standby node to return the string `active`. To solve this, we changed the receive parameter of the monitor in all environments to `\"active\"` instead of `active`.
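The difference between the old and the new receive strings can be modeled in a few lines. This is only a sketch: the JSON bodies below are abridged, hypothetical stand-ins for the two nodes' JMX output, and the broken check is modeled as a loose substring match, which is what tripped over "NumActiveClients":

```python
import json

def monitor_match_fixed(jmx_body: str) -> bool:
    """The corrected check: look for the quoted string "active",
    which only appears as a JSON string value on the active node."""
    return '"active"' in jmx_body

def monitor_match_broken(jmx_body: str) -> bool:
    """The old check, modeled as a loose substring match: it also hits
    field names such as "NumActiveClients" on the standby node."""
    return "active" in jmx_body.lower()

# Abridged, hypothetical JMX responses for the two nodes:
active_body = json.dumps({"beans": [{"tag.HAState": "active", "NumActiveClients": 12}]})
standby_body = json.dumps({"beans": [{"tag.HAState": "standby", "NumActiveClients": 0}]})
```

The broken match reports both bodies as active, while the fixed match only accepts the node whose HA state is the string value `"active"`.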
01-21-2019 11:13 AM

Sending a simple REST API call also sometimes returns errors:

```
curl -i "http://mha-vip1:50070/webhdfs/v1/fawze?op=LISTSTATUS"

HTTP/1.1 403 Forbidden
Cache-Control: no-cache
Expires: Mon, 21 Jan 2019 19:10:28 GMT
Date: Mon, 21 Jan 2019 19:10:28 GMT
Pragma: no-cache
Expires: Mon, 21 Jan 2019 19:10:28 GMT
Date: Mon, 21 Jan 2019 19:10:28 GMT
Pragma: no-cache
Content-Type: application/json
X-FRAME-OPTIONS: SAMEORIGIN
Transfer-Encoding: chunked

{"RemoteException":{"exception":"StandbyException","javaClassName":"org.apache.hadoop.ipc.StandbyException","message":"Operation category READ is not supported in state standby. Visit https://s.apache.org/sbnn-error"}}
```
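Rather than relying solely on the VIP, a WebHDFS client can recognize this payload and retry the other NameNode itself. A minimal sketch of the detection step only (the retry loop around it is omitted, and the payload shape follows the response shown above):

```python
import json

def is_standby_error(body: str) -> bool:
    """Return True when a WebHDFS response body carries a StandbyException."""
    try:
        doc = json.loads(body)
    except ValueError:  # not JSON at all (e.g. an HTML error page)
        return False
    if not isinstance(doc, dict):
        return False
    return doc.get("RemoteException", {}).get("exception") == "StandbyException"
```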
01-21-2019 08:22 AM

Hi Community,

I'm upgrading CDH from 5.13.0 to 5.16.1, and I'm using WebHDFS to copy files from Hadoop to Vertica.

Since I'm using NameNode high availability, I created an F5 VIP which I use to reach the active NameNode.

For some reason, after the upgrade to 5.16.1 I started intermittently getting an error that the read is not supported in state standby; when I check this by replacing the VIP with the individual nodes, it works against the active NameNode only, which is expected.

I tried the same command we are using in the F5 and it returns the active NameNode.

Commands I used:

Copying using the VIP:

```
SOURCE public.Hdfs(url='http://vip:50070/webhdfs/v1
```

Copying using the nodes:

```
SOURCE public.Hdfs(url='http://node1:50070/webhdfs/v1
SOURCE public.Hdfs(url='http://node2:50070/webhdfs/v1
```

Checking whether the node is active or standby:

```
curl -X GET http://node1:50070/jmx?qry=Hadoop:service=NameNode,name=FSNamesystem HTTP/1.0\r\n\n
```

Labels:
- HDFS
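The same FSNamesystem query can also be used from the client side to locate the active NameNode before issuing the WebHDFS call, instead of trusting the VIP. A sketch under the assumption that the bean exposes its HA state as `tag.HAState`; the hostnames `node1`/`node2` and port 50070 mirror the placeholders in the commands above:

```python
import json
from urllib.request import urlopen

NAMENODES = ("node1", "node2")  # placeholder hostnames for the two NameNodes

def parse_ha_state(jmx_json: str) -> str:
    """Read the HA state ("active"/"standby") from an FSNamesystem bean."""
    return json.loads(jmx_json)["beans"][0]["tag.HAState"]

def active_namenode(hosts=NAMENODES, port: int = 50070) -> str:
    """Probe each NameNode's JMX servlet; return the first active host."""
    for host in hosts:
        url = f"http://{host}:{port}/jmx?qry=Hadoop:service=NameNode,name=FSNamesystem"
        with urlopen(url) as resp:
            if parse_ha_state(resp.read().decode()) == "active":
                return host
    raise RuntimeError("no active NameNode found")
```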
01-11-2019 05:29 PM

@epowell I couldn't be happier; after almost 2 years, this is finally working 🙂
12-08-2018 06:56 PM

1 Kudo

http://community.cloudera.com/t5/Cloudera-Manager-Installation/Cloduera-5-15-to-Version-6/m-p/81134#M15238
12-05-2018 08:05 PM

Hey,

Once the job failed, did the disk space get freed again?

Can you check whether the disk usage occurs on the application master node?

I assume this is the container logs, and you can check this while the job is running.
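One way to check where the space is going while the job runs is to sum the size of a NodeManager's container-log directory. A minimal sketch; the path `/yarn/container-logs` in the comment is a hypothetical example of `yarn.nodemanager.log-dirs` and varies per cluster:

```python
import os

def dir_size_bytes(path: str) -> int:
    """Total size in bytes of all regular files under path."""
    total = 0
    for root, _dirs, files in os.walk(path):
        for name in files:
            full = os.path.join(root, name)
            if os.path.isfile(full):  # skip broken symlinks
                total += os.path.getsize(full)
    return total

# Example (hypothetical container-log dir):
# print(dir_size_bytes("/yarn/container-logs"))
```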