Member since 03-23-2015

1288 Posts | 114 Kudos Received | 98 Solutions

        My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 4395 | 06-11-2020 02:45 PM |
| | 6011 | 05-01-2020 12:23 AM |
| | 3830 | 04-21-2020 03:38 PM |
| | 4069 | 04-14-2020 12:26 AM |
| | 3039 | 02-27-2020 05:51 PM |

06-22-2020 03:17 PM

Hi @ram76,

Any reason why the mv is slow? Are the source and destination on different mount points? We did see issues with a 2-3 minute delay, but unfortunately I am not aware of a workaround. You should focus on finding out why the "move" command took this long.

In my test, it took 2 minutes to copy:

time cp -r /opt/cloudera/parcels/CDH-5.13.3-1.cdh5.13.3.p3573.3750 /opt/cloudera/parcel-cache/5.13.3
real 1m59.830s
user 0m0.126s
sys 0m5.621s

But my "mv" was instant:

time mv /opt/cloudera/parcel-cache/5.13.3 /opt/cloudera/parcels/5.13.3
real 0m0.138s
user 0m0.000s
sys 0m0.002s

Can you please share the output of:

df -h

so we can see the disk breakdown?

Thanks
Eric
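
If it helps, a quick way to confirm whether the two directories sit on the same mount point (a minimal sketch using standard coreutils; adjust the paths if your layout differs):

# Compare the device IDs of the two parcel directories.
# Different device numbers mean different mount points, so "mv"
# between them degrades into copy + delete.
stat -c '%n -> device %d' /opt/cloudera/parcel-cache /opt/cloudera/parcels

# Same check via df: different "Filesystem"/"Mounted on" values
# confirm that the two paths live on separate filesystems.
df -h /opt/cloudera/parcel-cache /opt/cloudera/parcels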
						
					
06-21-2020 04:02 PM

@Daddy,

This is a known issue in CM when either unpacking the parcel or moving the parcel directory is slow.

When CM unpacks the parcel file, it performs the steps below:

1. untar the parcel file under /opt/cloudera/parcel-cache
2. then "move" the parcel files to /opt/cloudera/parcels

If the I/O is slow on the filesystem that holds /opt/cloudera/parcel-cache, the untar command will be slow. If /opt/cloudera/parcel-cache and /opt/cloudera/parcels are on different mount points, the "move" command becomes a "copy" and then a "delete", so the operation will be much slower than an actual "move", which only involves a rename.

Since CM uses different threads to perform these operations, the check can happen before the untar operation finishes, hence you hit the issue.

Can you please check whether the above is the issue in your case? You can perform the same steps I mentioned, and time them (see the sketch below):

1. manually untar the parcel under /opt/cloudera/parcel-cache
2. run the "mv" command to move the directory from /opt/cloudera/parcel-cache to /opt/cloudera/parcels

If either of the above is slow, then you have found your issue.

Cheers
Eric
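
To time the two steps by hand, something like the following should work (a sketch only; the parcel file path and version are placeholders, use the actual parcel downloaded by CM):

# 1. Untar the parcel into the parcel cache and time it.
cd /opt/cloudera/parcel-cache
time tar -xf /opt/cloudera/parcel-repo/CDH-x.x.x-el7.parcel

# 2. Move the extracted directory into the parcels directory and time it.
# If the two paths are on different mount points, this turns into
# copy + delete instead of a simple rename and will be much slower.
time mv /opt/cloudera/parcel-cache/CDH-x.x.x /opt/cloudera/parcels/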
						
					
06-12-2020 05:27 PM

Hi Heri,

Glad that it helped, and thanks for the info.

Cheers
Eric
						
					
06-11-2020 02:45 PM (1 Kudo)

Sorry, can you try the below instead?

select max(id) as id from NodeName a where $CONDITIONS

BTW, do you really just want to import a single MAX value into HDFS?
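
For reference, a free-form query import would typically be run along these lines (a sketch only; the JDBC URL, credentials and target directory are placeholders for your environment):

# Single-row result, so one mapper is enough; keep the query in single
# quotes so the shell does not expand $CONDITIONS.
sqoop import \
  --connect 'jdbc:mysql://your-db-host/your_db' \
  --username your_user -P \
  --query 'select max(id) as id from NodeName a where $CONDITIONS' \
  --target-dir /user/your_user/nodename_max_id \
  -m 1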
						
					
06-10-2020 03:13 PM

Hi Heri,

As mentioned in the error:

if using free form query import (consider adding clause AS if you're using column transformation)

your max(id) aggregate function does not have an "AS" clause. Please change your query to the below and try again:

select max(id) as max_id from NodeName a where $CONDITIONS

Cheers
Eric
						
					
05-01-2020 12:23 AM (2 Kudos)

This is an old thread, but I have found a workaround, so I would like to share it here.

Assuming I have a table with a few partitions:

SHOW PARTITIONS partitioned_table;
+------------+--+
| partition  |
+------------+--+
| p=1        |
| p=2        |
| p=3        |
| p=4        |
| p=5        |
+------------+--+

1. Create a macro:

CREATE TEMPORARY MACRO partition_value() '1';

2. Create a view using the macro:

CREATE VIEW view_test AS SELECT * FROM partitioned_table WHERE p = partition_value();

3. Query the view:

SELECT * FROM view_test;

4. If you want to update the value returned by the macro, you need to DROP and CREATE it again:

DROP TEMPORARY MACRO partition_value;
CREATE TEMPORARY MACRO partition_value() '4';

5. If you exit the session, you also need to create the macro again at the next login, as the macro is destroyed when the session ends.
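
Since the macro only lives for the current session, one way to make steps 4 and 5 less tedious is to wrap the macro creation and the query into a single Beeline call (a sketch only; the JDBC URL is a placeholder for your HiveServer2 connection string):

# Re-create the temporary macro and query the view in one session.
# TEMPORARY MACROs are dropped when the session ends, so the macro must
# be created in the same Beeline session that queries the view.
PARTITION=4
beeline -u 'jdbc:hive2://your-hs2-host:10000/default' \
  -e "CREATE TEMPORARY MACRO partition_value() '${PARTITION}';" \
  -e "SELECT * FROM view_test;"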
						
					
04-26-2020 12:40 AM

Please also share the output of the below command on the Ranger host:

ls -al /opt/cloudera/parcels/

just to make sure the CDH version is linked properly.

Cheers
Eric
						
					
04-26-2020 12:30 AM

@Dombai_Gabor,

Can you please log onto Ranger's host, go to the directory /opt/cloudera/parcels/CDH/lib and run the command:

grep -rni 'RangerRESTUtils' *

Also run the same command under the directory /opt/cloudera/parcels/CDH/jars, just to check whether the file ranger-plugins-common-2.0.0.7.x.x.x-xxx.jar exists under your CDP installation.

Please share the full output.

Cheers
Eric
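
The two checks can also be run in one go (a minimal sketch; the ranger-plugins-common-*.jar glob is an assumption based on the jar name above):

# Run on the Ranger host: search both parcel directories for RangerRESTUtils
# and confirm that a ranger-plugins-common jar is present.
grep -rni 'RangerRESTUtils' /opt/cloudera/parcels/CDH/lib/ /opt/cloudera/parcels/CDH/jars/
ls -l /opt/cloudera/parcels/CDH/jars/ranger-plugins-common-*.jar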
						
					
04-21-2020 03:38 PM (1 Kudo)

Hi @SwasBigData,

As Ferenc pointed out, in CDP, HiveServer2 and the Hive Metastore are separated into different services. The Hive service now only contains HMS, and HiveServer2 is included in the Hive on Tez service.

So to install Hive on Tez, you have to install the Hive service first, which will set up HMS.

Can you please share more details on what error you got when trying to set up Hive on Tez?

Thanks
Eric
						
					
04-14-2020 12:26 AM

This is an old thread, but I will add my recent findings.

It is a limitation/restriction on the Teradata side that data larger than 64KB requires a special API to be streamed into Teradata. Currently Sqoop does not make use of this API, so it does not support inserting data larger than 64KB into Teradata.

An improvement JIRA has been requested, but it is not resolved at this stage. For the time being, you have to either reduce the data size or use another DB.

I have tested with MySQL's CLOB and had no issues.

Cheers
Eric
						
					