Member since 09-25-2015

14 Posts | 11 Kudos Received | 2 Solutions

        My Accepted Solutions

| Title | Views | Posted |
|---|---|---|
|  | 1894 | 09-21-2016 08:03 PM |
|  | 4398 | 07-28-2016 05:08 PM |

11-15-2016 08:56 PM

I am experiencing the same issue with the PutSQL processor. Regardless of how many properly formatted SQL INSERT statements I pass to the processor, it fails, saying that the entire batch failed because it was expecting N statements, where N is the number of SQL statements I passed in (e.g., 30, 50, 100). It never seems to get past this point.
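For context, here is an illustrative sketch (not PutSQL's actual source) of the kind of JDBC statement batching PutSQL relies on: the driver is expected to return one update count per batched statement, and a single failing statement can surface as a failure of the whole batch. The in-memory H2 URL and table below are hypothetical, for demonstration only.

```java
import java.sql.BatchUpdateException;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class BatchInsertSketch {
    public static void main(String[] args) throws Exception {
        // Hypothetical in-memory database; any JDBC source works the same way.
        try (Connection conn = DriverManager.getConnection("jdbc:h2:mem:demo");
             Statement stmt = conn.createStatement()) {
            stmt.execute("CREATE TABLE users (id INT PRIMARY KEY, name VARCHAR(64))");
            // Each entry plays the role of one incoming FlowFile's INSERT.
            stmt.addBatch("INSERT INTO users (id, name) VALUES (1, 'alice')");
            stmt.addBatch("INSERT INTO users (id, name) VALUES (2, 'bob')");
            try {
                int[] counts = stmt.executeBatch(); // one update count per statement
                System.out.println("executed " + counts.length + " statements");
            } catch (BatchUpdateException e) {
                // One bad statement can make the entire batch appear to fail.
                System.err.println("entire batch failed: " + e.getMessage());
            }
        }
    }
}
```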

11-15-2016 06:35 PM | 5 Kudos

It sounds like you might be able to use the ScanAttribute processor to meet your needs.
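For anyone landing here, a minimal, hypothetical sketch of the behavior ScanAttribute provides: it compares a FlowFile attribute's value against a dictionary of terms loaded from a file and routes the FlowFile to matched or unmatched. The file path and attribute name below are made up for illustration.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.HashSet;
import java.util.Set;

public class ScanAttributeSketch {
    public static void main(String[] args) throws IOException {
        // Load the term dictionary, one term per line (hypothetical path).
        Set<String> dictionary =
            new HashSet<>(Files.readAllLines(Paths.get("/opt/nifi/terms.txt")));
        String attributeValue = "ERROR"; // e.g. the FlowFile's "log.level" attribute
        String relationship = dictionary.contains(attributeValue) ? "matched" : "unmatched";
        System.out.println("route FlowFile to: " + relationship);
    }
}
```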

09-21-2016 08:03 PM | 1 Kudo

It really depends on what you are trying to accomplish by splitting the data ingest across so many process groups. While this does give you the ability to start and stop data ingest for each source independently, you now have to maintain each of these separate process groups, which presents a different problem. Ideally, you could create a few "templates" to leverage for all 50 of the data sources, and then only need to make changes to the templates rather than maintaining 50 independent flows.

07-28-2016 05:08 PM | 2 Kudos

You will want to leverage the Facebook processor referenced in the following how-to article: https://community.hortonworks.com/articles/47854/accessing-facebook-page-data-from-apache-nifi.html

01-13-2016 07:47 PM

It looks like you have the NN1 address hard-coded somewhere in your hive-conf.xml file. You will need to change that to be NameNode H/A-aware.
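As a hedged illustration of what "NameNode H/A-aware" means for a client: instead of a single NameNode host, the configuration names a logical nameservice and lists the NameNodes behind it. The nameservice and host names below are placeholders; the same property names would normally live in the cluster's XML config files rather than be set in code.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HaAwareClient {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.set("fs.defaultFS", "hdfs://mycluster"); // logical nameservice, not a host
        conf.set("dfs.nameservices", "mycluster");
        conf.set("dfs.ha.namenodes.mycluster", "nn1,nn2");
        conf.set("dfs.namenode.rpc-address.mycluster.nn1", "nn1.example.com:8020");
        conf.set("dfs.namenode.rpc-address.mycluster.nn2", "nn2.example.com:8020");
        // The client fails over between nn1 and nn2 via this proxy provider.
        conf.set("dfs.client.failover.proxy.provider.mycluster",
                 "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider");
        try (FileSystem fs = FileSystem.get(conf)) {
            System.out.println("root exists: " + fs.exists(new Path("/")));
        }
    }
}
```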

01-08-2016 08:15 PM

							 Would this prevent the UI from working at all, or was this just a non-fatal warning filling up the log file? 

01-07-2016 09:47 PM | 1 Kudo

Jan 07, 2016 9:54:29 AM com.sun.jersey.server.wadl.generators.WadlGeneratorJAXBGrammarGenerator attachTypes
INFO: Couldn't find JAX-B element for class javax.ws.rs.core.Response
Jan 07, 2016 9:54:29 AM com.sun.jersey.server.wadl.generators.WadlGeneratorJAXBGrammarGenerator$8 resolve
SEVERE: null
java.lang.IllegalAccessException: Class com.sun.jersey.server.wadl.generators.WadlGeneratorJAXBGrammarGenerator$8 can not access a member of class javax.ws.rs.core.Response with modifiers "protected"
  at sun.reflect.Reflection.ensureMemberAccess(Reflection.java:109)
  at java.lang.Class.newInstance(Class.java:368)
  at com.sun.jersey.server.wadl.generators.WadlGeneratorJAXBGrammarGenerator$8.resolve(WadlGeneratorJAXBGrammarGenerator.java:467)
  at com.sun.jersey.server.wadl.WadlGenerator$ExternalGrammarDefinition.resolve(WadlGenerator.java:181)
  at com.sun.jersey.server.wadl.ApplicationDescription.resolve(ApplicationDescription.java:81)
  at com.sun.jersey.server.wadl.generators.WadlGeneratorJAXBGrammarGenerator.attachTypes(WadlGeneratorJAXBGrammarGenerator.java:518)
  at com.sun.jersey.server.wadl.WadlBuilder.generate(WadlBuilder.java:179)
  at com.sun.jersey.server.impl.wadl.WadlApplicationContextImpl.getApplication(WadlApplicationContextImpl.java:125)
  at com.sun.jersey.server.impl.wadl.WadlMethodFactory$WadlOptionsMethodDispatcher.dispatch(WadlMethodFactory.java:98)
  at com.sun.jersey.server.impl.uri.rules.HttpMethodRule.accept(HttpMethodRule.java:288)
  at com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
  at com.sun.jersey.server.impl.uri.rules.ResourceClassRule.accept(ResourceClassRule.java:108)
  at com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
  at com.sun.jersey.server.impl.uri.rules.RootResourceClassesRule.accept(RootResourceClassesRule.java:84)
  at com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1469)
  at com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1400)
  at com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1349)
  at com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1339)
  at com.sun.jersey.spi.container.servlet.WebComponent.service(WebComponent.java:416)
  at com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:537)
  at com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:699)
  at javax.servlet.http.HttpServlet.service(HttpServlet.java:802)
  at com.google.inject.servlet.ServletDefinition.doServiceImpl(ServletDefinition.java:287)
  at com.google.inject.servlet.ServletDefinition.doService(ServletDefinition.java:277)
  at com.google.inject.servlet.ServletDefinition.service(ServletDefinition.java:182)
  at com.google.inject.servlet.ManagedServletPipeline.service(ManagedServletPipeline.java:91)
  at com.google.inject.servlet.FilterChainInvocation.doFilter(FilterChainInvocation.java:85)
  at org.apache.atlas.web.filters.AuditFilter.doFilter(AuditFilter.java:67)
  at com.google.inject.servlet.FilterChainInvocation.doFilter(FilterChainInvocation.java:82)
  at com.google.inject.servlet.ManagedFilterPipeline.dispatch(ManagedFilterPipeline.java:119)
  at com.google.inject.servlet.GuiceFilter$1.call(GuiceFilter.java:133)
  at com.google.inject.servlet.GuiceFilter$1.call(GuiceFilter.java:130)
  at com.google.inject.servlet.GuiceFilter$Context.call(GuiceFilter.java:203)
  at com.google.inject.servlet.GuiceFilter.doFilter(GuiceFilter.java:130)
  at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
  at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:399)
  at org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
  at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182)
  at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:766)
  at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:450)
  at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
  at org.mortbay.jetty.Server.handle(Server.java:326) 

Labels: Apache Atlas


01-05-2016 04:58 PM

Obviously this is an issue in the event that the specified NameNode goes down for an extended period of time. I am looking for a solution for H/A-to-H/A inter-cluster replication.
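One commonly suggested direction, sketched here under assumed names rather than as a verified Falcon recipe, is to make the client configuration aware of both clusters' nameservices, so inter-cluster copies address a logical nameservice on each side instead of a fixed NameNode host. All nameservice and host names below are placeholders.

```java
import org.apache.hadoop.conf.Configuration;

public class DualNameserviceConfig {
    // Builds a config that knows BOTH clusters' nameservices, so a copy job can
    // address hdfs://sourceHA/... and hdfs://targetHA/... and survive a
    // NameNode failover on either side.
    public static Configuration build() {
        Configuration conf = new Configuration();
        conf.set("dfs.nameservices", "sourceHA,targetHA");
        // Source cluster's two NameNodes
        conf.set("dfs.ha.namenodes.sourceHA", "nn1,nn2");
        conf.set("dfs.namenode.rpc-address.sourceHA.nn1", "src-nn1.example.com:8020");
        conf.set("dfs.namenode.rpc-address.sourceHA.nn2", "src-nn2.example.com:8020");
        conf.set("dfs.client.failover.proxy.provider.sourceHA",
                 "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider");
        // D/R cluster's two NameNodes
        conf.set("dfs.ha.namenodes.targetHA", "nn1,nn2");
        conf.set("dfs.namenode.rpc-address.targetHA.nn1", "dr-nn1.example.com:8020");
        conf.set("dfs.namenode.rpc-address.targetHA.nn2", "dr-nn2.example.com:8020");
        conf.set("dfs.client.failover.proxy.provider.targetHA",
                 "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider");
        return conf;
    }
}
```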

Labels: Apache Falcon


12-11-2015 11:32 PM | 1 Kudo

A customer recently asked about the security capabilities of Hive with regard to masking and encrypting specific fields within a Hive table. After some research, I found that there are four such UDFs already available in the Hive 1.3 release. Unfortunately, the customer was using HDP 2.3.0, which ships with Hive 0.14, and thus they did NOT have access to these UDFs: https://cwiki.apache.org/confluence/display/Hive/L...

Being the impatient type, and not wanting to reinvent the wheel, I created a simple GitHub project to back-port these UDFs into, which can be found here: https://github.com/davidkj69/Backported-UDFs

I harvested the code from the Apache Hive 1.3 trunk, added it to the repo, changed the library versions to Hive 0.14, and ran a Maven build to generate Backported-UDFs-0.0.1.jar, which I loaded onto HDFS in the /user/davidk directory. I was then able to test all of the functions as follows:

hive> add jar Backported-UDFs-0.0.1.jar;
hive> CREATE FUNCTION sha1 AS 'org.apache.hadoop.hive.ql.udf.UDFSha1' USING JAR 'hdfs:///user/davidk/Backported-UDFs-0.0.1.jar';
hive> select sha1('ABC');
hive> CREATE FUNCTION aes_encrypt AS 'org.apache.hadoop.hive.ql.udf.generic.GenericUDFAesEncrypt' USING JAR 'hdfs:///user/davidk/Backported-UDFs-0.0.1.jar';
hive> select base64(aes_encrypt('ABC', '1234567890123456'));
hive> CREATE FUNCTION aes_decrypt AS 'org.apache.hadoop.hive.ql.udf.generic.GenericUDFAesDecrypt' USING JAR 'hdfs:///user/davidk/Backported-UDFs-0.0.1.jar';
hive> select aes_decrypt(unbase64('y6Ss+zCYObpCbgfWfyNWTw=='), '1234567890123456');
hive> CREATE FUNCTION sha2 AS 'org.apache.hadoop.hive.ql.udf.generic.GenericUDFSha2' USING JAR 'hdfs:///user/davidk/Backported-UDFs-0.0.1.jar';
hive> select sha2('ABC', 256);

If you or your customers don't want to wait to leverage these UDFs, go to my repo, build the jar, and start using them right away!

Happy Hadooping
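If it helps anyone scripting this rather than typing at the CLI, here is a hedged sketch of doing the same registration over Hive JDBC. The HiveServer2 host, port, and credentials are assumptions; the UDF class name and HDFS jar path come from the post above.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class RegisterBackportedUdfs {
    public static void main(String[] args) throws Exception {
        // Assumed HiveServer2 endpoint and user; substitute your own.
        try (Connection conn = DriverManager.getConnection(
                 "jdbc:hive2://localhost:10000/default", "davidk", "");
             Statement stmt = conn.createStatement()) {
            // Register the back-ported sha1 UDF from the jar on HDFS.
            stmt.execute("CREATE FUNCTION sha1 AS 'org.apache.hadoop.hive.ql.udf.UDFSha1' "
                       + "USING JAR 'hdfs:///user/davidk/Backported-UDFs-0.0.1.jar'");
            // Invoke it, mirroring the hive> select sha1('ABC') test above.
            try (ResultSet rs = stmt.executeQuery("SELECT sha1('ABC')")) {
                while (rs.next()) {
                    System.out.println(rs.getString(1));
                }
            }
        }
    }
}
```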

12-08-2015 04:30 PM

Just to clarify, the cluster in question is different from the one where Falcon is running, i.e. it is a D/R cluster we want to copy data to.