Member since 07-31-2013

1924 Posts | 462 Kudos Received | 311 Solutions

My Accepted Solutions

| Title | Views | Posted |
|---|---|---|
|  | 1965 | 07-09-2019 12:53 AM |
|  | 11822 | 06-23-2019 08:37 PM |
|  | 9106 | 06-18-2019 11:28 PM |
|  | 10066 | 05-23-2019 08:46 PM |
|  | 4506 | 05-20-2019 01:14 AM |
01-09-2020 01:29 AM
Can you paste the contents of all the files in the following directory on the Ranger host, please?

/var/run/cloudera-scm-agent/process/1546333400-ranger-RANGER_ADMIN-SetupRangerCommand/logs/*

The missing property (db_password) is written by a control script that should log some information to these files, and their contents will help us determine a cause.

I'm assuming that in your CM - Ranger - Configuration page the value of the field 'ranger.jpa.jdbc.password' is set to a valid value.

Also, do you perhaps have an @ (at) character in your password? If yes, could you try a different password without that character? You may be hitting a bug (OPSAPS-53645 is its internal ID, fixed in future releases) that did not support that password character in the original CDP 7.0 release.
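For reference, a quick way to gather those files could be a simple cat over that directory (assuming shell access to the Ranger host; the process directory name is the one quoted above from your cluster):

    # Dump every log file written by the SetupRangerCommand run
    cat /var/run/cloudera-scm-agent/process/1546333400-ranger-RANGER_ADMIN-SetupRangerCommand/logs/*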
						
					
10-05-2019 12:43 AM
The original issue described here is not applicable to your version. In your case it could simply be a misconfiguration that's causing Oozie not to load the right Hive configuration required to talk to the Hive service. Try enabling debug logging on the Oozie server if you are unable to find an error in its log. Also try to locate files or jars in your workflow that may be supplying an invalid Hive client XML.
						
					
07-09-2019 12:53 AM
2 Kudos
Yes, that is correct, and the motivations/steps-to-use are reflected here too: https://www.cloudera.com/documentation/enterprise/6/latest/topics/cm_s3guard.html

Note: On your point of 'load data from S3 into HDFS', it is better stated as simply 'read data from S3', where HDFS gets used as transient storage (where/when required). There does not need to be a 'download X GiB of data from S3 to HDFS first, only then begin jobs' step, as distributed jobs can read off of S3 via s3a:// URLs in the same way they do from HDFS via hdfs://.
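As a rough illustration (the bucket and paths below are placeholders, and this assumes the cluster already has S3 credentials configured for s3a://), a standard job can take the S3 location as its input directly, with no staging copy into HDFS:

    # Browse the S3 data with the ordinary Hadoop FS tooling
    hadoop fs -ls s3a://your-bucket/input/

    # Run a job that reads its input straight from S3 and writes its output to HDFS
    hadoop jar hadoop-mapreduce-examples.jar wordcount \
        s3a://your-bucket/input/ hdfs:///user/yourname/wordcount-output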
						
					
07-04-2019 07:17 PM
Try deleting /etc/default/cloudera-*, /etc/cloudera-*, and /var/lib/cloudera-* entirely, and erase all cloudera-* packages via yum (on all involved hosts). After this, attempt the installer again. This will allow the default embedded configs to be written and used for DB initialization, rather than preserving whatever has been left over.
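A rough sketch of that cleanup on each involved host (review what the globs actually match before removing anything, since exact package names vary by release):

    # Remove leftover Cloudera Manager config and state directories
    rm -rf /etc/default/cloudera-* /etc/cloudera-* /var/lib/cloudera-*

    # List, then erase, any remaining cloudera-* packages
    rpm -qa 'cloudera-*'
    yum remove 'cloudera-*'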
						
					
06-23-2019 08:37 PM
1 Kudo
This looks like a case of edit logs getting reordered. As @bgooley noted, it is similar to HDFS-12369, where an OP_CLOSE appears after OP_DELETE, causing the file to be absent when the edits are replayed.

The simplest fix, depending on whether this is the only file affected by the reordering in your edit logs, would be to run the NameNode manually in an edits-recovery mode and "skip" this edit when it catches the error. The rest of the edits should apply normally and let you start up your NameNode.

The recovery mode of the NameNode is detailed at https://blog.cloudera.com/blog/2012/05/namenode-recovery-tools-for-the-hadoop-distributed-file-system/

If you're using CM, you'll need to use the NameNode's most recently generated configuration directory under /var/run/cloudera-scm-agent/process/ on the NameNode host as the HADOOP_CONF_DIR, while logged in as the 'hdfs' user, before invoking the manual NameNode startup command.

Once you've followed the prompts and the NameNode appears to start up, quit out/kill it to restart it from Cloudera Manager normally.

If you have a Support subscription, I'd recommend filing a case for this, as the process could get more involved depending on how widespread this issue is.
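A rough outline of that manual run (the process directory name below is a placeholder for the most recent NameNode directory on your NameNode host):

    # As the 'hdfs' user on the NameNode host, point at the latest CM-generated config
    export HADOOP_CONF_DIR=/var/run/cloudera-scm-agent/process/<latest NAMENODE dir>

    # Start the NameNode in recovery mode and follow the prompts,
    # choosing to skip the offending edit when it is reported
    hdfs namenode -recover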
						
					
06-18-2019 11:28 PM
1 Kudo
It could be passed via either mode, hence the request for the CLI used.

The property to modify on the client configuration (via CM properties or via -D early CLI args) is called 'mapreduce.map.memory.mb', and the administrative limit is defined in the Resource Manager daemon configuration via 'yarn.scheduler.maximum-allocation-mb'.
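For example, using a Sqoop invocation as in the related reply below (the connection details and memory value here are placeholders only), the -D argument must appear immediately after the tool name, ahead of the tool-specific options:

    sqoop import \
        -Dmapreduce.map.memory.mb=2048 \
        --connect jdbc:mysql://db-host/yourdb \
        --table your_table \
        --target-dir /user/yourname/your_table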
						
					
06-18-2019 07:36 AM
Please share your full Sqoop CLI.

The error you are receiving suggests that the configuration passed to this specific Sqoop job carried a parameter asking for map memory higher than what the administrator has configured as the limit a map task may request. As a result, the container request is rejected. Lowering the requested memory size of the map tasks will let it pass through this check.
						
					
06-05-2019 06:24 PM
@Reavidence,

HTTPFS with Kerberos requires SPNEGO authentication to be used. Per https://www.cloudera.com/documentation/enterprise/latest/topics/cdh_sg_httpfs_security.html, for curl (after kinit) this can be done by passing the two parameters below:

"The '--negotiate' option enables SPNEGO in curl. The '-u :' option is required but the username is ignored (the principal that has been specified for kinit is used)."
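Putting that together, a request might look like the following (the principal, hostname, and path are placeholders; 14000 is the default HTTPFS port):

    # Obtain a ticket first, then let curl negotiate with it
    kinit your-principal@YOUR.REALM
    curl --negotiate -u : "http://httpfs-host.example.com:14000/webhdfs/v1/tmp?op=LISTSTATUS"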
						
					
06-03-2019 07:33 PM
Please follow the entire discussion above - the parameter is an advanced one and has no direct field. You'll need to use the safety valve to apply it, using the property name directly.

P.S. It is better etiquette to open a new topic than to bump ancient ones.
						
					
05-27-2019 06:35 AM
1 Kudo
Small note that's relevant to this (older) topic:

When copying Cells over from a fetched Scan/Get Result to another Put object with the altered key, do not add the Cell objects as-is via the Put::addCell(…) API. You'll need to instead copy the value portions exclusively.

A demo program for a single-key operation would look like this:

  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    Connection connection = ConnectionFactory.createConnection(conf);
    Table sourceTable = connection.getTable(TableName.valueOf("old_table"));
    Table destinationTable = connection.getTable(TableName.valueOf("new_table"));
    Result result = sourceTable.get(new Get("old-key".getBytes()));
    Put put = new Put("new-key".getBytes());
    for (Cell cell : result.rawCells()) {
      // Copy only the family/qualifier/value slices of each Cell; the get*Array()
      // accessors return the entire backing array rather than the relevant portion.
      put.addColumn(CellUtil.cloneFamily(cell), CellUtil.cloneQualifier(cell),
          cell.getTimestamp(), CellUtil.cloneValue(cell));
    }
    destinationTable.put(put);
    connection.close();
  }

The reason to avoid Put::addCell(…) is that the Cell objects from the Result will still carry the older key, and you'll receive a WrongRowIOException if you attempt to use them with a Put object initiated with the changed key.
						
					