Member since 01-03-2017
181 Posts
44 Kudos Received
24 Solutions
My Accepted Solutions

| Title | Views | Posted |
|---|---|---|
|  | 2268 | 12-02-2018 11:49 PM |
|  | 3124 | 04-13-2018 06:41 AM |
|  | 2679 | 04-06-2018 01:52 AM |
|  | 2967 | 01-07-2018 09:04 PM |
|  | 6509 | 12-20-2017 10:58 PM |
07-09-2021 02:58 AM
Hi @singhvNt, as this is an older post, you would have a better chance of receiving a resolution by starting a new thread. This will also give you the opportunity to provide details specific to your environment, which could help others give you a more accurate answer to your question. You can link this thread as a reference in your new post.
						
					
			
    
	
		
		
12-29-2020 07:33 AM
The export command below worked for me.

CREATE TABLE departments_export (departmentid INT(11), department_name VARCHAR(45), created_date TIMESTAMP);

sqoop export --connect jdbc:mysql://<host>:3306/DB --username cloudera --password *** \
  --table departments_export \
  --export-dir '/user/cloudera/departments_new/*' \
  -m 1 \
  --input-fields-terminated-by ','

Sample input: 103,Finance,2020-10-10 10:10:00
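For completeness, a hedged sketch of how the comma-delimited sample record could be staged on HDFS before running the export. Only the HDFS path and the sample record come from the post; the local file name is an assumption.

```bash
# hedged sketch: stage the sample record on HDFS before running sqoop export
# (the local file name departments_new.csv is an assumption)
echo "103,Finance,2020-10-10 10:10:00" > departments_new.csv
hdfs dfs -mkdir -p /user/cloudera/departments_new
hdfs dfs -put departments_new.csv /user/cloudera/departments_new/
```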
						
					
			
    
	
		
		
12-01-2020 12:55 PM
The following mapping rule is wrong:

RULE:[2:$1@$0](rm@MY_REALM)s/.*/rm/

The user for the ResourceManager is not "rm" but "yarn", so "yarn" should be the replacement value. This is the same as for hadoop.security.auth_to_local in the Hadoop/HDFS configuration.
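For reference, a minimal sketch of the corrected rule, keeping the MY_REALM placeholder from the post:

```
RULE:[2:$1@$0](rm@MY_REALM)s/.*/yarn/
```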
						
					
			
    
	
		
		
10-02-2020 12:46 AM
Won't it result in shuffle spill without proper memory configuration in the Spark context?
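For context, a hedged sketch of the kind of memory-related settings the question refers to. The configuration values, the main class, and the jar name are illustrative assumptions, not taken from this thread.

```bash
# illustrative only: settings that influence shuffle spill (all values are assumptions)
spark-submit \
  --class com.example.MyApp \
  --conf spark.executor.memory=8g \
  --conf spark.memory.fraction=0.6 \
  --conf spark.sql.shuffle.partitions=400 \
  my_app.jar
```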
						
					
			
    
	
		
		
03-28-2020 08:15 AM
I want a single file in the output that contains all the records from the array.
						
					
			
    
	
		
		
11-11-2019 07:11 PM
I believe it's a typo. We should use " (double quotes) rather than ' (single quotes) around the Authorization header so the environment variable $token gets expanded:

curl -k -X GET 'https://<nifi-hostname>:9091/nifi-api/flow/status' -H "Authorization: Bearer $token" --compressed
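A quick illustration of the quoting difference; the token value below is a made-up placeholder:

```bash
# double quotes expand the variable, single quotes keep it literal
token=abc123                          # placeholder value for illustration
echo "Authorization: Bearer $token"   # prints: Authorization: Bearer abc123
echo 'Authorization: Bearer $token'   # prints: Authorization: Bearer $token
```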
						
					
			
    
	
		
		
08-02-2018 04:59 AM
Thank you for your reply @bkosaraju, but it seems I have no luck with the suggested query. I don't see any difference after submitting both queries. I just found this question: "Hive Transactional Tables are not readable by Spark". As per the JIRA tickets, my situation seems to be caused by the exact same problem, which still exists in the latest Spark version. Is there any workaround for using a Hive 3.0 table (with 'transactional = true', which is mandatory for Hive 3.0 as far as I know) with Spark? If not, maybe I should roll back to HDP 2.6...
						
					
			
    
	
		
		
03-06-2018 01:27 AM
Hi @yogesh turkane,

As far as I know, we can achieve this in two ways.

1. After loading the data, or at scheduled intervals, run "ALTER TABLE <table_name> CONCATENATE" on the table through the SQL API; this will merge all the small ORC files associated with that table. Please note that this is specific to ORC.
2. Use a DataFrame to load the data, repartition it, and write it back with overwrite in Spark. The code snippet would be:

val tDf = hiveContext.table("table_name")
tDf.repartition(<num_Files>).write.mode("overwrite").saveAsTable("targetDB.targetTable")

The second option will work with any type of file. Hope this helps!
						
					
			
    
	
		
		
03-07-2018 12:40 PM
For me, proxy settings did not work, no matter whether they were set in IntelliJ, SBT.conf, or environment variables. A couple of steps that solved this issue (for me at least):

- use SBT 0.13.16 (not newer than that)
- set "Use Auto Import"

Then no "FAILED DOWNLOADS" messages appear.
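As a minimal sketch of the first point, the sbt version can be pinned in the standard project/build.properties file; the project layout is an assumption, the version number comes from the post:

```
# project/build.properties
sbt.version=0.13.16
```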
						
					