Member since: 09-29-2015
			
      
- Posts: 63
- Kudos Received: 19
- Solutions: 8
My Accepted Solutions

| Title | Views | Posted |
|---|---|---|
| | 2108 | 09-30-2017 06:13 AM |
| | 1765 | 06-09-2017 02:31 AM |
| | 5580 | 03-15-2017 04:04 PM |
| | 6617 | 03-15-2017 08:37 AM |
| | 1946 | 12-11-2016 01:15 PM |
			
    
	
		
		
05-05-2017 04:51 PM | 1 Kudo
We have a use case where we call a RESTful service with "InvokeHTTP". All of the connection details (URL, username, password) are retrieved from the DB, and we need to pass basic authentication (username and password) via flow file attributes populated from the DB. Is it possible to pass authentication credentials using attributes?
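In case a concrete sketch helps frame the question: InvokeHTTP evaluates its URL with Expression Language, and any dynamic (user-defined) property added to the processor is sent as an HTTP request header, with Expression Language evaluated against the flow file's attributes. So one possibility is to assemble the Authorization header from the attributes yourself. The attribute names (db.url, db.username, db.password) are assumptions for illustration:

```
InvokeHTTP
  Remote URL    = ${db.url}
  # dynamic property, sent as a request header:
  Authorization = Basic ${db.username:append(':'):append(${db.password}):base64Encode()}
```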
						
					
- Labels:
  - Apache NiFi
			
    
	
		
		
03-15-2017 04:04 PM
Deleting the journal folder and restarting NiFi solved the issue. I am not sure what caused it in the first place.
						
					
03-15-2017 08:51 AM
Is there a way to clean up flow file content and keep only the attributes after event processing is completed? We need to remove the flow file content for security reasons. Any suggestions on whether we can delete the content, or any other way of handling this use case?
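One option worth checking against your NiFi version: the standard ModifyBytes processor can truncate a flow file's content to zero bytes while leaving its attributes intact, which matches "keep only the attributes". A minimal sketch; the placement after the last content-consuming processor is an assumption about the flow:

```
...last processor that still needs the content...
  -> ModifyBytes
       Remove All Content = true   # content becomes 0 bytes; attributes are preserved
  -> ...attribute-only processing...
```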
						
					
- Labels:
  - Apache NiFi
 
			
    
	
		
		
03-15-2017 08:47 AM
Hello,

Apache NiFi doesn't show data provenance; the UI hangs with the status "Searching provenance events". The logs show the following error. Any suggestions on how to resolve this?

```
2017-03-15 08:40:06,603 ERROR [Provenance Repository Rollover Thread-2] o.a.n.p.PersistentProvenanceRepository Failed to merge journals. Will try again. journalsToMerge: [./provenance_repository/journals/2642.journal.0, ./provenance_repository/journals/2642.journal.1, ./provenance_repository/journals/2642.journal.2, ./provenance_repository/journals/2642.journal.3, ./provenance_repository/journals/2642.journal.4, ./provenance_repository/journals/2642.journal.5, ./provenance_repository/journals/2642.journal.6, ./provenance_repository/journals/2642.journal.7, ./provenance_repository/journals/2642.journal.8, ./provenance_repository/journals/2642.journal.9, ./provenance_repository/journals/2642.journal.10, ./provenance_repository/journals/2642.journal.11, ./provenance_repository/journals/2642.journal.12, ./provenance_repository/journals/2642.journal.13, ./provenance_repository/journals/2642.journal.14, ./provenance_repository/journals/2642.journal.15], storageDir: ./provenance_repository, cause: java.lang.RuntimeException: java.io.FileNotFoundException: _1g.fdt
2017-03-15 08:40:06,607 ERROR [Provenance Repository Rollover Thread-2] o.a.n.p.PersistentProvenanceRepository
java.lang.RuntimeException: java.io.FileNotFoundException: _1g.fdt
    at org.apache.lucene.index.TieredMergePolicy$SegmentByteSizeDescending.compare(TieredMergePolicy.java:258) ~[na:na]
    at org.apache.lucene.index.TieredMergePolicy$SegmentByteSizeDescending.compare(TieredMergePolicy.java:238) ~[na:na]
    at java.util.TimSort.countRunAndMakeAscending(TimSort.java:355) ~[na:1.8.0_101]
    at java.util.TimSort.sort(TimSort.java:234) ~[na:1.8.0_101]
    at java.util.Arrays.sort(Arrays.java:1512) ~[na:1.8.0_101]
    at java.util.ArrayList.sort(ArrayList.java:1454) ~[na:1.8.0_101]
    at java.util.Collections.sort(Collections.java:175) ~[na:1.8.0_101]
    at org.apache.lucene.index.TieredMergePolicy.findMerges(TieredMergePolicy.java:292) ~[na:na]
    at org.apache.lucene.index.IndexWriter.updatePendingMerges(IndexWriter.java:2020) ~[na:na]
    at org.apache.lucene.index.IndexWriter.maybeMerge(IndexWriter.java:1984) ~[na:na]
    at org.apache.lucene.index.IndexWriter.prepareCommitInternal(IndexWriter.java:3029) ~[na:na]
    at org.apache.lucene.index.IndexWriter.commitInternal(IndexWriter.java:3134) ~[na:na]
    at org.apache.lucene.index.IndexWriter.commit(IndexWriter.java:3101) ~[na:na]
    at org.apache.nifi.provenance.lucene.SimpleIndexManager.returnIndexWriter(SimpleIndexManager.java:162) ~[na:na]
    at org.apache.nifi.provenance.PersistentProvenanceRepository.mergeJournals(PersistentProvenanceRepository.java:1864) ~[na:na]
    at org.apache.nifi.provenance.PersistentProvenanceRepository$8.run(PersistentProvenanceRepository.java:1332) ~[na:na]
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [na:1.8.0_101]
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) [na:1.8.0_101]
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180) [na:1.8.0_101]
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294) [na:1.8.0_101]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [na:1.8.0_101]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_101]
    at java.lang.Thread.run(Thread.java:745) [na:1.8.0_101]
Caused by: java.io.FileNotFoundException: _1g.fdt
    at org.apache.lucene.store.FSDirectory.fileLength(FSDirectory.java:255) ~[na:na]
    at org.apache.lucene.index.SegmentCommitInfo.sizeInBytes(SegmentCommitInfo.java:219) ~[na:na]
    at org.apache.lucene.index.MergePolicy.size(MergePolicy.java:478) ~[na:na]
    at org.apache.lucene.index.TieredMergePolicy$SegmentByteSizeDescending.compare(TieredMergePolicy.java:248) ~[na:na]
    ... 22 common frames omitted
```

Thanks,
Nagesh
						
					
- Labels:
  - Apache NiFi
 
			
    
	
		
		
03-15-2017 08:37 AM | 1 Kudo
@Prakhar Agrawal There doesn't seem to be out-of-the-box support for xls files, but you can use something custom to parse them. One example: https://community.hortonworks.com/questions/36875/where-to-convert-xls-file-to-csv-file-inside-nifi.html
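To make "something custom" concrete, here is a minimal sketch of an xls-to-CSV conversion using Apache POI; the library choice, file paths, and the naive cell formatting are all assumptions for illustration (the linked thread shows other approaches):

```scala
import java.io.{FileInputStream, PrintWriter}
import org.apache.poi.hssf.usermodel.HSSFWorkbook // .xls (HSSF) format; needs the poi dependency
import scala.collection.JavaConverters._

object XlsToCsv {
  def main(args: Array[String]): Unit = {
    val workbook = new HSSFWorkbook(new FileInputStream("input.xls")) // assumed path
    val sheet = workbook.getSheetAt(0) // first sheet only, for the sketch
    val out = new PrintWriter("output.csv") // assumed path
    for (row <- sheet.iterator().asScala) {
      // Naive conversion via Cell.toString; real data may need type handling and quoting.
      out.println(row.iterator().asScala.map(_.toString).mkString(","))
    }
    out.close()
    workbook.close()
  }
}
```

The resulting CSV can then go through NiFi's usual text processors.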
						
					
03-06-2017 06:14 AM
Hello,

We are trying to read data from Oracle tables, and "DATE" column types are being converted into "Timestamp" data types.

For example, the table in Oracle:

```
desc hr.employees;
Name            Null?    Type
--------------- -------- ------------
EMPLOYEE_ID     NOT NULL NUMBER(6)
FIRST_NAME               VARCHAR2(20)
LAST_NAME       NOT NULL VARCHAR2(25)
EMAIL           NOT NULL VARCHAR2(25)
PHONE_NUMBER             VARCHAR2(20)
HIRE_DATE       NOT NULL DATE
JOB_ID          NOT NULL VARCHAR2(10)
SALARY                   NUMBER(8,2)
COMMISSION_PCT           NUMBER(2,2)
MANAGER_ID               NUMBER(6)
DEPARTMENT_ID            NUMBER(4)
SSN                      VARCHAR2(55)
```

and the schema read into the DataFrame in Scala:

```
 |-- EMPLOYEE_ID: decimal(6,0) (nullable = false)
 |-- FIRST_NAME: string (nullable = true)
 |-- LAST_NAME: string (nullable = false)
 |-- EMAIL: string (nullable = false)
 |-- PHONE_NUMBER: string (nullable = true)
 |-- HIRE_DATE: timestamp (nullable = false)   <-- incorrect data type read here
 |-- JOB_ID: string (nullable = false)
 |-- SALARY: decimal(8,2) (nullable = true)
 |-- COMMISSION_PCT: decimal(2,2) (nullable = true)
 |-- MANAGER_ID: decimal(6,0) (nullable = true)
 |-- DEPARTMENT_ID: decimal(4,0) (nullable = true)
 |-- SSN: string (nullable = true)
```

HIRE_DATE is read incorrectly as timestamp; is there a way to correct this? The data is read from Oracle on the fly, so the application has no upfront knowledge of the data types and cannot convert them after the read.

Thanks in advance,
Nagesh
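For what it's worth, one hedged workaround (not from the thread): since the application connects to Oracle anyway, it can ask the JDBC driver's metadata which columns are Oracle DATEs and cast those back to DateType after the load. The connection variables, schema/table names, and the pre-loaded `df` are assumptions for illustration; note also that an Oracle DATE carries a time-of-day component, which is why it commonly surfaces as a timestamp, so the cast below deliberately drops that time part:

```scala
import java.sql.{DriverManager, Types}
import org.apache.spark.sql.functions.col
import org.apache.spark.sql.types.DateType

// Ask the JDBC driver which columns are Oracle DATEs (no upfront schema knowledge required).
val conn = DriverManager.getConnection(jdbcUrl, user, password) // assumed to be defined
val cols = conn.getMetaData.getColumns(null, "HR", "EMPLOYEES", null)
var dateCols = List.empty[String]
while (cols.next()) {
  // Some Oracle driver versions report DATE as TIMESTAMP in DATA_TYPE, so check TYPE_NAME too.
  if (cols.getInt("DATA_TYPE") == Types.DATE || cols.getString("TYPE_NAME") == "DATE")
    dateCols ::= cols.getString("COLUMN_NAME")
}
conn.close()

// df is the DataFrame already loaded via spark.read.jdbc(...) (assumed).
val corrected = dateCols.foldLeft(df)((d, c) => d.withColumn(c, col(c).cast(DateType)))
corrected.printSchema() // HIRE_DATE should now show as date
```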
						
					
- Labels:
  - Apache Spark
 
			
    
	
		
		
03-01-2017 03:41 AM
@Artem Ervits thanks for the quick response. We are already using these settings, but we would like to restrict users from creating new notebooks, which is not covered in the documentation.
						
					
03-01-2017 03:22 AM | 1 Kudo
Hello, we have a requirement wherein we want Zeppelin to be used only for reporting. Is there a configuration setting by which I can prevent users from creating new notebooks?
						
					
- Labels:
  - Apache Zeppelin
 
			
    
	
		
		
12-22-2016 02:40 AM
@Roger Young Assuming you are reading a file from the file system, you could use the "SplitText" processor after fetching the file to split it into single lines, and then process each line further as needed.
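A minimal sketch of that flow; the processors are standard, and splitting to exactly one line per flow file via Line Split Count is my reading of the intent:

```
GetFile (or ListFile -> FetchFile)
  -> SplitText
       Line Split Count = 1   # each outgoing flow file holds a single line
  -> ...per-line processing...
```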
						
					
12-21-2016 11:25 AM | 1 Kudo
@Roger Young Since your CSV seems to have a fixed set of columns, you can use a combination of ExtractText and ReplaceText. You can use the example template here, which converts CSV to JSON; the first step is what you need:
https://cwiki.apache.org/confluence/download/attachments/57904847/CsvToJSON.xml?version=1&modificationDate=1442927496000&api=v2
https://cwiki.apache.org/confluence/display/NIFI/Example+Dataflow+Templates
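To illustrate the pattern (the regex, attribute names, and JSON fields below are placeholders, not the template's actual values): ExtractText copies regex capture groups into attributes, and ReplaceText then rewrites the content from those attributes.

```
ExtractText
  # user-defined property; capture groups land in csv.1, csv.2, csv.3:
  csv = ^([^,]*),([^,]*),([^,]*)$

ReplaceText
  Replacement Strategy = Always Replace
  Replacement Value    = {"field1":"${csv.1}","field2":"${csv.2}","field3":"${csv.3}"}
```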
						
					