Member since 11-16-2017

4 Posts | 0 Kudos Received | 0 Solutions

11-19-2017 08:32 PM
Thank you, but what I am looking for is this:
1. Read the data from the underlying Hive table.
2. Store the result of that read in a dataframe.
3. Apply some basic checks on a few columns, e.g. append "KM" at the end of column X, and also write the sum of one column into a trailer record of the file.
4. Then create a file.
Basically, I need a function that reads the rows from the Hive table one by one, applies the checks, and then saves them to a file.
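For context, a rough sketch of how those steps might look in Spark 1.6 Scala, written spark-shell style where `sc` already exists; the table, column, and path names below are placeholders rather than details taken from this thread:

```scala
// Rough sketch only: read a Hive table, transform a column, and write the rows
// plus a trailer record (containing a column sum) to a single text file.
import org.apache.spark.sql.hive.HiveContext
import org.apache.spark.sql.functions._

val hiveContext = new HiveContext(sc)

// 1-2: read the Hive table into a DataFrame
val df = hiveContext.sql("SELECT x, amount FROM db.my_table")

// 3: basic check/transformation, e.g. append "KM" to column x
val transformed = df.withColumn("x", concat(col("x"), lit(" KM")))

// sum of one column, to go into the trailer record
val total = transformed.agg(sum("amount")).first().get(0)

// 4: turn rows into delimited lines, append the trailer, write a single file
val lines = transformed.rdd.map(_.mkString("|"))
val trailer = sc.parallelize(Seq(s"TRAILER|$total"))
(lines ++ trailer).coalesce(1).saveAsTextFile("/tmp/output_with_trailer")
```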
						
					
11-19-2017 08:24 PM
Thank you for the reply. But instead of a constant value, my requirement is to populate it with a unique number: 1, 2, 3, ..., n.
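One possible way to generate consecutive numbers 1..n in Spark 1.6 is the row_number window function; the sketch below is only an illustration, with placeholder table and column names, and it assumes a HiveContext is available:

```scala
// Minimal sketch: assign a consecutive sequence 1, 2, ..., n with row_number.
// Caveat: a window with no PARTITION BY pulls every row into one partition,
// so this only suits modest data sizes.
import org.apache.spark.sql.hive.HiveContext
import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions.row_number

val hiveContext = new HiveContext(sc)
val df = hiveContext.table("db.my_table")

// assign 1..n ordered by an existing column
val withSeq = df.withColumn("id",
  row_number().over(Window.orderBy("some_existing_column")))
withSeq.show(5)
```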
						
					
11-16-2017 08:54 PM
Hi, I have data in my Hive table and I want to read that data in Scala Spark, do some transformations on a few columns, and save the processed data into one file. How can this be done, please?
						
					
Labels: Apache Hive, Apache Spark
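For illustration, a minimal sketch of the usual pattern for this question in Spark 1.6 / Scala: read through a HiveContext, transform columns with withColumn, and coalesce to one partition before writing. All names and transformations below are made-up placeholders, not details from this thread:

```scala
// Minimal sketch: read a Hive table, transform a couple of columns,
// and write one output file (spark-shell style, "sc" already defined).
import org.apache.spark.sql.hive.HiveContext
import org.apache.spark.sql.functions._

val hiveContext = new HiveContext(sc)

// read the Hive table into a DataFrame
val df = hiveContext.table("db.my_table")

// example transformations on a few columns
val transformed = df
  .withColumn("name", upper(col("name")))
  .withColumn("amount", col("amount") * 1.1)

// coalesce to one partition so the output directory contains a single part file
transformed.rdd.map(_.mkString(",")).coalesce(1).saveAsTextFile("/tmp/processed_output")
```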
 
			
    
	
		
		
11-16-2017 08:38 PM
					
Hi, I have created a dataframe in Spark 1.6 by reading data from a MySQL database. In that dataframe there is an ID column which is null when loaded from the RDBMS. Now I would like to insert this dataframe into a Hive table, but the ID column must be populated with a sequence number (0, 1, ..., n). How can I achieve this in a Scala program? I have Hive 1.x, hence I can't take advantage of Hive 2.x.
						
					
Labels: Apache Hive, Apache Spark
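One way this kind of requirement is often handled, sketched below under the assumption of Spark 1.6 with a HiveContext: drop the null ID column and generate 0, 1, ..., n with RDD.zipWithIndex, which needs nothing from Hive 2.x. The JDBC URL, credentials, and table names are placeholders:

```scala
// Rough sketch: load from MySQL over JDBC, replace the null ID column with a
// generated sequence, then append into a Hive table (spark-shell style).
import org.apache.spark.sql.Row
import org.apache.spark.sql.hive.HiveContext
import org.apache.spark.sql.types.{LongType, StructField, StructType}

val hiveContext = new HiveContext(sc)

// read the source table from MySQL; the ID column arrives as null
val src = hiveContext.read.format("jdbc").options(Map(
  "url"      -> "jdbc:mysql://dbhost:3306/mydb",
  "dbtable"  -> "source_table",
  "user"     -> "user",
  "password" -> "password")).load()

// drop the null ID column and attach a generated 0-based sequence instead
val noId = src.drop("id")
val indexed = noId.rdd.zipWithIndex.map { case (row, idx) => Row.fromSeq(idx +: row.toSeq) }
val schema = StructType(StructField("id", LongType, nullable = false) +: noId.schema.fields)
val withId = hiveContext.createDataFrame(indexed, schema)

// append into the target Hive table
withId.write.mode("append").saveAsTable("db.target_table")
```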