Member since 09-24-2015

Posts: 178
Kudos Received: 113
Solutions: 28

        My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
|  | 4652 | 05-25-2016 02:39 AM |
|  | 4591 | 05-03-2016 01:27 PM |
|  | 1197 | 04-26-2016 07:59 PM |
|  | 16799 | 03-24-2016 04:10 PM |
|  | 3156 | 02-02-2016 11:50 PM |
			
    
	
		
		
11-26-2024 09:11 AM

Just for clarification, you have to use this now: (?s)(^.*$)
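For context, (?s) is the inline DOTALL flag in Java regular expressions: it makes . match line terminators too, so (?s)(^.*$) captures the entire multi-line flowfile content instead of a single line. A minimal ReplaceText configuration along these lines might look like the sketch below (property names as they appear in recent NiFi releases; ${my.replacement} is a made-up attribute, shown only for illustration):

    Search Value:          (?s)(^.*$)
    Replacement Value:     ${my.replacement}
    Replacement Strategy:  Regex Replace
    Evaluation Mode:       Entire text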
						
					
12-15-2022 05:24 AM

This is working fine. Can we provide the Search Value and Replacement Value as a variable or flowfile attribute? I want to use the same ReplaceText processor to convert different input files with different numbers of columns. Basically, I want to parameterise the Search Value and Replacement Value in the ReplaceText processor. @mpayne @ltsimps1 @kpulagam @jpercivall @other
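Both properties support NiFi Expression Language in recent releases (worth verifying against your version), so one possible approach is to set the pattern and replacement as flowfile attributes upstream, for example in an UpdateAttribute processor, and reference them in ReplaceText. The attribute names below are made up for illustration:

    UpdateAttribute (one per file layout):
      search.pattern    = <regex matching this layout>
      replacement.value = <replacement for this layout>

    ReplaceText:
      Search Value      = ${search.pattern}
      Replacement Value = ${replacement.value}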
						
					
04-13-2020 12:25 PM

Hi, can I instead add the following line to the spark-defaults.conf file?

    spark.ui.port   4041

Will that have the same effect? Thanks
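For reference, the same property can also be set per application rather than cluster-wide, which may be preferable if only one job needs a fixed UI port. A hedged sketch (the class and jar names are placeholders):

    spark-submit --conf spark.ui.port=4041 --class com.example.MyApp my-app.jar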
						
					
07-18-2018 07:19 AM

Hi Josh, in the Phoenix data type description (link), it's mentioned that Phoenix unsigned data types map to the HBase Bytes.toBytes method. Is there a way to utilize these unsigned data types to map existing HBase data to Phoenix tables and be able to read the data correctly from Phoenix? I mapped numbers inserted through the HBase shell to the UNSIGNED_INT data type in Phoenix, but I was still getting the same error that bsaini was getting in the question above. Could you please clarify whether we can use UNSIGNED_INT in this scenario? Thanks
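As a hedged sketch of the mapping itself (the table, row key, and column names below are made up): an existing HBase table can be exposed to Phoenix with a view, but a column declared as UNSIGNED_INT only decodes correctly if the underlying cell was written as the 4-byte Bytes.toBytes(int) encoding from the Java API. Values typed into plain HBase shell puts are stored as ASCII strings, which is a common reason such a mapping appears broken.

    CREATE VIEW "my_table" (
        "pk" VARCHAR PRIMARY KEY,
        "cf"."counter" UNSIGNED_INT
    );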
						
					
02-15-2017 01:34 PM
1 Kudo

Hi @bsaini, you can keep it as an int or float representing a Unix timestamp in seconds (float if you want sub-second precision down to nanoseconds), or a string. From what I see here: "Timestamps are interpreted to be timezoneless and stored as an offset from the UNIX epoch. Convenience UDFs for conversion to and from timezones are provided (to_utc_timestamp, from_utc_timestamp)."
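A minimal Hive sketch of those conversions (the table and column names are made up):

    -- epoch seconds to a timestamp string and back
    SELECT from_unixtime(event_epoch)            AS event_time,
           unix_timestamp('2017-03-12 02:00:00') AS back_to_epoch
    FROM events;

    -- shift a timezoneless timestamp into and out of UTC
    SELECT to_utc_timestamp(event_ts, 'America/New_York')   AS ts_utc,
           from_utc_timestamp(event_ts, 'America/New_York') AS ts_local
    FROM events;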
						
					
07-27-2017 02:15 PM

According to this post, HDP uses UTC as the default, but a simple Hive statement like the one below, and multiple JIRA issues, prove that isn't true.

    select concat('Example: ', cast(cast('2017-03-12 02:00:00' as timestamp) as string));
    Example: 2017-03-12 03:00:00

Can someone provide guidance on how to set the JVM's timezone?
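One commonly suggested approach, which I have not verified for every HDP service, is to pass -Duser.timezone to the JVMs that run Hive, for example via hive-env.sh (the exact variable, e.g. HADOOP_OPTS vs. HADOOP_CLIENT_OPTS, depends on the component and version):

    export HADOOP_OPTS="$HADOOP_OPTS -Duser.timezone=UTC"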
						
					
01-12-2017 05:18 PM

@Matt, a couple of follow-up questions on a process group with multiple input ports:
1) Within the process group, how do you distinguish between flowfiles that are coming from the various input ports?
2) In the data provenance screen, is there a way to tell which flowfiles came from which input port?
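One common pattern, sketched here with a made-up attribute name, is to place an UpdateAttribute processor directly after each input port so every flowfile is tagged with its entry point; that attribute is then visible in the provenance attribute view as well:

    Input Port "portA" -> UpdateAttribute (source.port = portA) -> rest of flow
    Input Port "portB" -> UpdateAttribute (source.port = portB) -> rest of flow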
						
					
05-04-2016 02:53 AM

Thank you @bsaini, it worked great.
						
					
05-30-2016 12:46 PM

Hi, if I understand correctly, we can start multiple NFS gateway servers on multiple hosts (datanode, namenode, HDFS client). Say we have (servernfs01, servernfs02, servernfs03) and (client01, client02):

    client01# mount -t nfs servernfs01:/ /test01
    client02# mount -t nfs servernfs02:/ /test02

My question is how to avoid a service interruption. What happens if servernfs01 fails? How does client01 keep access to HDFS in that case?
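As far as I know, the HDFS NFS gateway has no built-in failover, so keeping clients connected is usually handled outside the gateway, for example with a virtual IP or round-robin DNS name in front of several gateways, or by remounting manually. A hedged sketch of the manual case:

    # on client01, if servernfs01 goes down, fail over to a surviving gateway
    umount -l /test01
    mount -t nfs servernfs02:/ /test01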
						
					
01-22-2016 08:38 PM

@bsaini @zblanco @Rafael Coss @Balu I was able to run through the tutorial on a machine I built myself with HDP 2.3.4; even though I did something wrong with the paths, it works. Granted, I was using the latest HDP 2.3 tutorial https://github.com/ZacBlanco/hwx-tutorials/blob/master/2-3/tutorials/define-and-process-data-pipelines-with-falcon/define-and-process-data-pipelines-with-apache-falcon.md where there are no CLI commands for Falcon.
						
					