Member since: 04-03-2017

164 Posts · 8 Kudos Received · 4 Solutions
        My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
|  | 2905 | 03-09-2021 10:47 PM |
|  | 3913 | 12-10-2018 10:59 AM |
|  | 6879 | 12-02-2018 08:55 PM |
|  | 11445 | 11-28-2018 10:38 AM |
Posted 03-09-2021 10:47 PM · 1 Kudo
Hi,

I was able to replicate this in my cluster (tested on CDH 6).

Shell output:

```
[root@host-10-17-102-176 hive]# locale
LANG=en_US.UTF-8
LC_CTYPE=UTF-8
LC_NUMERIC="en_US.UTF-8"
LC_TIME="en_US.UTF-8"
LC_COLLATE="en_US.UTF-8"
LC_MONETARY="en_US.UTF-8"
LC_MESSAGES="en_US.UTF-8"
LC_PAPER="en_US.UTF-8"
LC_NAME="en_US.UTF-8"
LC_ADDRESS="en_US.UTF-8"
LC_TELEPHONE="en_US.UTF-8"
LC_MEASUREMENT="en_US.UTF-8"
LC_IDENTIFICATION="en_US.UTF-8"
LC_ALL=
```

Oozie launcher output:

```
Oozie Launcher, capturing output data:
=======================
LANG=
LC_CTYPE="POSIX"
LC_NUMERIC="POSIX"
LC_TIME="POSIX"
LC_COLLATE="POSIX"
LC_MONETARY="POSIX"
LC_MESSAGES="POSIX"
LC_PAPER="POSIX"
LC_NAME="POSIX"
LC_ADDRESS="POSIX"
LC_TELEPHONE="POSIX"
LC_MEASUREMENT="POSIX"
LC_IDENTIFICATION="POSIX"
LC_ALL=
```

To fix this, make the following configuration change: in Cloudera Manager, navigate to YARN > Configuration > Containers Environment Variable (yarn.nodemanager.admin-env) and append "LC_ALL=en_US.UTF-8,LANG=en_US.UTF-8" to that config. Restart the affected services to make the change permanent.

Then re-run the Oozie job and check the output. After making the change, my cluster shows:

```
Oozie Launcher, capturing output data:
=======================
LANG=en_US.UTF-8
LC_CTYPE="en_US.UTF-8"
LC_NUMERIC="en_US.UTF-8"
LC_TIME="en_US.UTF-8"
LC_COLLATE="en_US.UTF-8"
LC_MONETARY="en_US.UTF-8"
LC_MESSAGES="en_US.UTF-8"
LC_PAPER="en_US.UTF-8"
LC_NAME="en_US.UTF-8"
LC_ADDRESS="en_US.UTF-8"
LC_TELEPHONE="en_US.UTF-8"
LC_MEASUREMENT="en_US.UTF-8"
LC_IDENTIFICATION="en_US.UTF-8"
LC_ALL=en_US.UTF-8
```

Nitish
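If you want to confirm what the containers see after the change, a minimal sketch of a verification script for an Oozie shell action (the script and its name are my own illustration, not part of the original fix):

```bash
#!/bin/bash
# check_locale.sh (hypothetical): run this as the script of an Oozie shell
# action with <capture-output/> enabled. `locale` prints KEY=VALUE pairs,
# which is the format the launcher's output capture expects, so the values
# appear directly under "Oozie Launcher, capturing output data".
locale
```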
						
					
Posted 03-09-2021 10:19 PM · 1 Kudo
Hi,

Which CDH version are you currently running when you see this issue? Can you share the workflow.xml and the script you are running? Please also share the Oozie launcher logs.

Regards,
Nitish
						
					
Posted 04-11-2020 03:19 AM
Hi,

NOTE: Parquet is hard-coded to write its temporary data to /tmp, even when the target directory is different.

Check /tmp for the intermediate data; you will see it there.

Regards
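For a quick way to spot that intermediate data, something like the following should work (a sketch; which entries you see under /tmp depends on your jobs and users):

```bash
# List HDFS /tmp and show the largest entries; Parquet's intermediate
# output should be visible here while the job is running.
hadoop fs -ls /tmp
hadoop fs -du -h /tmp | sort -h | tail
```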
						
					
Posted 11-10-2019 10:56 PM
Hi,

Can you please share the Sqoop command you are running?

Regards,
Nitish
						
					
Posted 11-07-2019 08:59 PM
Hi,

You can use the "--temporary-rootdir" option to make sure the temporary data goes into the directory you specify.

Example:

```
sqoop import --target-dir /<hdfs path>/<import_data_dir> --temporary-rootdir /<hdfs path>/<tmp_dir> ...
```

Regards,
Nitish
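For reference, a fuller sketch of what that can look like end to end (the connection string, credentials, and paths below are placeholders I made up, not values from this thread):

```bash
# Import into --target-dir while staging intermediate files under
# --temporary-rootdir instead of the default location.
sqoop import \
  --connect jdbc:mysql://db-host:3306/mydb \
  --username myuser \
  --password-file /user/myuser/.db-password \
  --table mytable \
  --target-dir /data/imports/mytable \
  --temporary-rootdir /data/tmp/sqoop \
  -m 1
```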
						
					
Posted 09-30-2019 07:00 PM
Hi,

You can try something like the example below.

Data in source:

```
[root@host-10-17-103-77 ~]# sqoop eval --connect $MYCONN --username $MYUSER --password $MYPSWD --query "create table test_partition(id int,name varchar(30),par varchar(20))"
[root@host-10-17-103-77 ~]# sqoop eval --connect $MYCONN --username $MYUSER --password $MYPSWD --query "insert into test_partition values(1,'nitish','par1')"
[root@host-10-17-103-77 ~]# sqoop eval --connect $MYCONN --username $MYUSER --password $MYPSWD --query "insert into test_partition values(2,'mohit','par2')"
[root@host-10-17-103-77 ~]# sqoop eval --connect $MYCONN --username $MYUSER --password $MYPSWD --query "insert into test_partition values(3,'mohit123','par3')"
```

Table in Hive:

```
0: jdbc:hive2://host-10-17-103-79.coe.clouder> create external table test_sqoop_par(id int,name string) partitioned by (par string) row format delimited fields terminated by '\t' location '/user/systest/test_sqoop_par';
```

NOTE: only a string partition column is supported.

Sqoop command:

```
[root@host-10-17-103-77 ~]# sqoop import --connect $MYCONN --username $MYUSER --password $MYPSWD --hcatalog-database default --hcatalog-table test_sqoop_par --table TEST_PARTITION -m1 --hive-partition-value par
```

Data in Hive:

```
0: jdbc:hive2://host-10-17-103-79.coe.clouder> show partitions test_sqoop_par;
+------------+
| partition  |
+------------+
| par=par1   |
| par=par2   |
| par=par3   |
+------------+

[root@host-10-17-103-77 ~]# hadoop fs -ls /user/systest/test_sqoop_par
Found 3 items
drwxr-xr-x   - systest supergroup          0 2019-09-30 18:49 /user/systest/test_sqoop_par/par=par1
drwxr-xr-x   - systest supergroup          0 2019-09-30 18:49 /user/systest/test_sqoop_par/par=par2
drwxr-xr-x   - systest supergroup          0 2019-09-30 18:49 /user/systest/test_sqoop_par/par=par3
```

Hope the above helps.

Regards,
Nitish
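As a quick sanity check after the import, you could also query one partition directly (a sketch; the HiveServer2 URL below is a placeholder):

```bash
# Confirm the imported rows landed in the expected partition.
beeline -u "jdbc:hive2://<hs2-host>:10000/default" \
  -e "SELECT * FROM test_sqoop_par WHERE par = 'par1';"
```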
						
					
Posted 08-26-2019 07:49 AM
Hi,

I'm not sure at this point what the issue is on that host, as I'm not able to debug host-related issues remotely. Have you configured a Sqoop gateway on both hosts?

Regards,
Nitish
						
					
Posted 08-26-2019 12:15 AM
This means the job fails to be created from the master node, but gets created when you run it from the slave node. Am I right?

If yes, I would ask you to check the differences between the hosts. Also check whether the Sqoop gateway has been deployed, and if yes, on which node.

There is some setup issue in your cluster that is causing this.

Regards,
Nitish
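One rough way to compare the two hosts (a sketch; /etc/sqoop/conf is the usual CDH client-configuration path, but verify it on your cluster):

```bash
# Run on each host and diff the results: if the Sqoop gateway (client
# configuration) is deployed on only one of them, these will differ.
which sqoop
sqoop version
ls -l /etc/sqoop/conf
```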
						
					
Posted 08-26-2019 12:08 AM
Hi,

I would ask you to configure a Sqoop gateway on those hosts and install the metastore.

KB: https://my.cloudera.com/knowledge/Creating-and-Executing-Sqoop-Jobs-Remotely-|-Sqoop1-Shared-Metastore-Database?id=70942

Regards,
Nitish
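Once the shared metastore is up, remote job execution can look roughly like this (a sketch; the metastore host, port, and connection details are placeholders, not values from the KB article):

```bash
# Create a Sqoop1 job in the shared metastore, then execute it from any
# host that has a Sqoop gateway deployed.
sqoop job --meta-connect jdbc:hsqldb:hsql://metastore-host:16000/sqoop \
  --create my_import_job \
  -- import --connect jdbc:mysql://db-host:3306/mydb \
  --username myuser --table mytable -m 1

sqoop job --meta-connect jdbc:hsqldb:hsql://metastore-host:16000/sqoop \
  --exec my_import_job
```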
						
					