Member since 09-25-2015

- 356 Posts
- 382 Kudos Received
- 62 Solutions

        My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
|  | 3247 | 11-03-2017 09:16 PM |
|  | 2492 | 10-17-2017 09:48 PM |
|  | 5303 | 09-18-2017 08:33 PM |
|  | 6060 | 08-04-2017 04:14 PM |
|  | 4221 | 05-19-2017 06:53 AM |

10-06-2015 05:30 PM

One more thing you can try is to give the absolute path to hive-site.xml in the job-xml tag.
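A minimal sketch of what that could look like in an Oozie hive action; the HDFS path and script name are illustrative placeholders, not the actual locations from this thread:

```xml
<!-- Hypothetical workflow.xml fragment: path and script name are made up. -->
<action name="hive-node">
    <hive xmlns="uri:oozie:hive-action:0.2">
        <job-tracker>${jobTracker}</job-tracker>
        <name-node>${nameNode}</name-node>
        <!-- Absolute HDFS path to the Hive configuration -->
        <job-xml>${nameNode}/user/oozie/apps/hive/hive-site.xml</job-xml>
        <script>script.q</script>
    </hive>
    <ok to="end"/>
    <error to="fail"/>
</action>
```
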
10-06-2015 12:56 AM

Any name other than hive-site.xml should have worked. I will update here if I think of anything else. One question (maybe a stupid one): have you re-uploaded the file to HDFS after making the changes?
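For reference, re-pushing the edited file would look something like this; the target directory is a hypothetical stand-in for the workflow's actual application path:

```bash
# Hypothetical paths: overwrite the copy the workflow actually reads.
hdfs dfs -put -f hive-site.xml /user/oozie/apps/hive/
hdfs dfs -ls /user/oozie/apps/hive/   # confirm the new timestamp
```
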
10-05-2015 11:38 PM (1 Kudo)

I think it should have worked. In any case, a workaround you could try is to rename hive-site.xml to oozie-hive-site.xml under the script directory and use that path in the workflow XML.
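A sketch of that workaround, assuming the renamed file sits next to the script in the workflow application directory (relative job-xml paths should resolve against that directory):

```xml
<!-- Sketch: renamed copy lives beside the script in the app directory. -->
<job-xml>oozie-hive-site.xml</job-xml>
<script>script.q</script>
```
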
10-05-2015 10:50 PM (1 Kudo)

I doubt you can easily build the RPMs on your own, as HDP uses a custom packaging scheme (/usr/hdp/<version>/<component>). I don't think Maven is used to build these. @ashish@hortonworks.com, @gkesavan@hortonworks.com, can you confirm?
10-05-2015 04:49 PM (2 Kudos)

Apache Drill supports JSON as a self-describing data format; see the Drill documentation for usage. In Hive, HCatalog provides a JSON SerDe for reading and writing data in tables.
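A sketch of the Hive side using the HCatalog JSON SerDe; the table and column names are made up, and the jar location assumes a standard HDP layout:

```sql
-- Illustrative only: table and columns are hypothetical.
ADD JAR /usr/hdp/current/hive-webhcat/share/hcatalog/hive-hcatalog-core.jar;

CREATE TABLE events_json (
  id   BIGINT,
  name STRING
)
ROW FORMAT SERDE 'org.apache.hive.hcatalog.data.JsonSerDe'
STORED AS TEXTFILE;
```
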
10-04-2015 11:39 PM (2 Kudos)

In a secure setup you start the DataNodes as the root user. Did you do that?
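A sketch of the start sequence on an HDP node, assuming hadoop-env.sh already sets HADOOP_SECURE_DN_USER (typically hdfs) and the privileged ports are configured:

```bash
# As root: jsvc binds the privileged ports (<1024), then drops
# privileges to HADOOP_SECURE_DN_USER for the DataNode process itself.
/usr/hdp/current/hadoop-client/sbin/hadoop-daemon.sh start datanode
```
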
10-04-2015 11:20 PM (1 Kudo)

One possibility: this can happen if you were previously on Hive 0.12, upgraded the binaries to Hive 0.13, and started the metastore while datanucleus.autoCreateSchema was set to true. That may have already upgraded some of the tables, including DBS, so schemaTool now fails when you run it.
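For context, the setting in question lives in hive-site.xml; keeping it off makes schemaTool the only thing that alters the metastore schema (a sketch of the prevention, not a fix for an already half-upgraded schema):

```xml
<!-- hive-site.xml: stop DataNucleus from auto-upgrading metastore tables
     on startup, so schemaTool alone manages schema changes. -->
<property>
  <name>datanucleus.autoCreateSchema</name>
  <value>false</value>
</property>
```
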
10-04-2015 10:22 PM (3 Kudos)

Flume provides a Host interceptor that inserts a header, keyed host (or a configured key), whose value is the hostname or IP address of the agent's machine. If you want to set a specific key/value pair in the header, use the Static interceptor instead. You should then be able to reference %{host} in a downstream sink. If the hostname you want to add is that of the host where the agent is running and the sink is an HDFS sink, you can use %{host} directly without an interceptor; see the Flume documentation.
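A sketch of the interceptor wiring; the agent, source, and sink names are made up, while the properties themselves come from the Flume Host interceptor and HDFS sink:

```properties
# Hypothetical agent "a1": attach the Host interceptor to source r1.
a1.sources.r1.interceptors = i1
a1.sources.r1.interceptors.i1.type = host
a1.sources.r1.interceptors.i1.useIP = false

# The header it sets is then available to the sink as %{host}.
a1.sinks.k1.type = hdfs
a1.sinks.k1.hdfs.path = /flume/events/%{host}/%y-%m-%d
```
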
10-04-2015 08:58 PM (1 Kudo)

For SLES 11 packages, try http://ftp5.gwdg.de/pub/opensuse/discontinued/update/
10-02-2015 09:20 PM (2 Kudos)

One thing to remember here is that Beeline is just a JDBC client that sends queries to HiveServer2. Beeline typically sits on a different machine than HiveServer2, so local filesystem paths will not work; they may not be valid on the HiveServer2 machine. To include additional jars in the classpath, first push the jars to HDFS and then use "add jar hdfs://<hdfs_location_of_your_jar>". That should do it.
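Put together, the sequence looks roughly like this; the jar name and HDFS directory are placeholders:

```sql
-- From a shell first: hdfs dfs -put my-udf.jar /user/hive/jars/
-- Then, inside Beeline:
ADD JAR hdfs:///user/hive/jars/my-udf.jar;
```
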