Member since 06-02-2020

40 Posts
4 Kudos Received
8 Solutions

My Accepted Solutions

| Title | Views | Posted |
|---|---|---|
|  | 5110 | 09-30-2020 09:27 AM |
|  | 3297 | 09-29-2020 11:53 AM |
|  | 5215 | 09-21-2020 11:34 AM |
|  | 5254 | 09-19-2020 09:31 AM |
|  | 3235 | 06-28-2020 08:34 AM |

09-19-2020 09:31 AM

Hi @justenji! Please find the Groovy code below and use it in an ExecuteGroovyScript processor.

    import java.nio.charset.StandardCharsets
    import groovy.json.JsonSlurper
    import groovy.json.JsonOutput

    flowFile = session.get()
    if (!flowFile) return

    try {
        def jsonSlurper = new JsonSlurper()
        def jsonOutput = new JsonOutput()

        // Parse the incoming flow file content as JSON
        def input = flowFile.read().withStream { data -> jsonSlurper.parse(data) }

        def tables = input.table

        for (int i = 0; i < tables.size(); i++) {
            def pattern = 'yyyyMMdd'
            def datum = tables[i].datum

            // If a time-of-day field is present, parse date and time together
            if (tables[i].containsKey('uhrzvon')) {
                pattern = pattern + 'HH:mm'
                datum = datum + tables[i].uhrzvon
            }

            // Interpret the value as GMT+02:00 and rewrite it as a GMT timestamp
            tables[i].datum = new Date().parse(pattern, datum, TimeZone.getTimeZone('GMT+0200')).format('yyyy-MM-dd HH:mm:ss.SSSZ', TimeZone.getTimeZone('GMT'))
        }

        input.table = tables

        // Write the updated JSON back to the flow file
        flowFile = session.write(flowFile, { outputStream ->
            outputStream.write(jsonOutput.toJson(input).toString().getBytes(StandardCharsets.UTF_8))
        } as OutputStreamCallback)

        session.transfer(flowFile, REL_SUCCESS)
    } catch (e) {
        log.error('Error occurred, {}', e)
        session.transfer(flowFile, REL_FAILURE)
    }
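For a quick sense of what the conversion produces, here is a minimal standalone sketch of the date handling above; the datum/uhrzvon values are made up for illustration:

    // Sketch of the timestamp conversion (sample values are hypothetical)
    def pattern = 'yyyyMMdd' + 'HH:mm'      // pattern used when 'uhrzvon' is present
    def datum = '20200919' + '10:30'        // datum = '20200919', uhrzvon = '10:30'

    def parsed = new Date().parse(pattern, datum, TimeZone.getTimeZone('GMT+0200'))
    def converted = parsed.format('yyyy-MM-dd HH:mm:ss.SSSZ', TimeZone.getTimeZone('GMT'))

    println converted   // 2020-09-19 08:30:00.000+0000 (the GMT+02:00 local time re-expressed in GMT)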

09-19-2020 07:04 AM

Hi @ammarhassan,

Please find the below Jolt spec:

    [
      {
        "operation": "shift",
        "spec": {
          "files": {
            "*": {
              "*-*": {
                "$0": "files.&(1,1)File"
              }
            }
          },
          "*": "&"
        }
      }
    ]
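The asker's input isn't reproduced in this excerpt, but to illustrate what the spec does, assume an input shaped roughly like this (the key names and file names below are made up):

    {
      "status": "ok",
      "files": {
        "invoice": { "invoice-2020.pdf": { "size": 1234 } },
        "report": { "report-Q3.xlsx": { "size": 5678 } }
      }
    }

The "$0" match writes each key that matches "*-*" to files.<parentKey>File (the "&(1,1)" reference resolves to the parent key), and "*": "&" passes the remaining top-level fields through, so the output would be roughly:

    {
      "status": "ok",
      "files": {
        "invoiceFile": "invoice-2020.pdf",
        "reportFile": "report-Q3.xlsx"
      }
    }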

09-19-2020 06:40 AM

@Kilynn There are no [] or {}, but there is no comma (,) between them either. Can you tell whether there will be a comma between each DATA_MESSAGE record or not? And, if yes, can you tell whether you are merging the records using any processor?

09-19-2020 06:32 AM

Hi @SashankRamaraju,

Parameter Context Groups (PMG) can only be added manually. But you can have a Parameter Context Group that is specific to the environment. For example, a PMG named 'Env variables' holds DEV values in the DEV environment and PROD values in the PROD environment. So you only ever select the PMG 'Env variables', in every environment, but its config (values) is specific to that environment.

If it is mandatory to have 3 PMGs, selecting the PMG remains manual; but if that is not a hard requirement, you can use something like the below.

Generate FlowFile config: (screenshot)

UpdateAttribute config: (screenshot)

Here, I am trying to evaluate #{DEV_env}, and DEV comes from the attribute env.

07-04-2020 04:49 AM

@Branana Have a look at my Jolt spec!

    [
      {
        "operation": "shift",
        "spec": {
          "Hdr": "header",
          "Data": {
            "*": {
              "Clltrl": {
                "*": {
                  "*": "body.&[]"
                }
              },
              "FndngSrce": {
                "*": {
                  "*": "body.&[]"
                }
              },
              "UsrDfnd": {
                "*": "body.&"
              },
              "*": "body.&"
            }
          }
        }
      }
    ]
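The original input isn't shown in this thread, but as a rough sketch of the shape this spec expects (all names and values below are made up): Hdr moves to header, the Clltrl and FndngSrce entries inside each Data element are collected into arrays under body, and UsrDfnd fields plus any remaining Data fields are copied straight to body.

Hypothetical input:

    {
      "Hdr": { "MsgId": "H1" },
      "Data": [
        {
          "Acct": "A-100",
          "Clltrl": [ { "Amt": 10 }, { "Amt": 20 } ],
          "FndngSrce": [ { "Src": "X" } ],
          "UsrDfnd": { "Tag": "t1" }
        }
      ]
    }

Result:

    {
      "header": { "MsgId": "H1" },
      "body": {
        "Amt": [ 10, 20 ],
        "Src": [ "X" ],
        "Tag": "t1",
        "Acct": "A-100"
      }
    }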

07-04-2020 04:28 AM

Refer to the post here.

06-28-2020 08:34 AM
1 Kudo

Hi @Biswa! I tried various ways of getting a unique id for each array entry. Most of them failed, so I tried appending the array index to the unique id, and even that raised several issues. See if the below spec is okay:

    [
      {
        "operation": "shift",
        "spec": {
          "aliases": {
            "*": {
              "@0": "alias[&1].name",
              "$0": "alias[&1].id.${UUID()}-&1"
            }
          }
        }
      },
      {
        "operation": "shift",
        "spec": {
          "alias": {
            "*": {
              "id": {
                "*": {
                  "$0": "alias[&3].id"
                }
              },
              "*": "alias[&1].&"
            }
          }
        }
      }
    ]

06-28-2020 06:26 AM

Hi @ravi_sh_DS,

I am getting the expected output after using JoltTransformRecord. The spec I used is the same one you mentioned in your question. I don't think the processor converts the XML into JSON first. Had that been the case, the Jolt spec would be acting on

    {
      "note": {
        "to": "...",
        "from": "...",
        "heading": "...",
        "body": "..."
      }
    }

Even if the Jolt spec is applied to the above JSON, your output will be an empty JSON object. Can you check it once again?

06-27-2020 11:48 AM

@tsvk4u If you replace records[1] with records[0], you won't get null as the first value inside the records array. But if you have done that deliberately and you want to remove the null values, look at the following spec:

    [
      {
        "operation": "shift",
        "spec": {
          "records": {
            "*": {
              "*": "records[]"
            }
          },
          "*": "&"
        }
      }
    ]

Note: this spec is to be applied after you have applied your spec.
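For illustration only (the asker's actual records aren't shown in this excerpt), if the earlier spec leaves output shaped like the following, the cleanup shift drops the null slot and re-collects the inner entries into records[]:

    {
      "records": [ null, [ { "id": 1 }, { "id": 2 } ] ],
      "count": 2
    }

Result:

    {
      "records": [ { "id": 1 }, { "id": 2 } ],
      "count": 2
    }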

06-14-2020 08:40 AM

Two solutions.

Before that: I believe you constructed that (csv) value using Groovy or some other script. If so, try understanding the following:

    inputString = '{"firstName":"John","lastName":"Legend"}'

new JsonSlurper().parseText(inputString) will return MapRecord[{firstName=John, lastName=Legend}]. So, before writing the flowFile, use new JsonOutput().toJson(new JsonSlurper().parseText(inputString)). This will give {"firstName":"John","lastName":"Legend"} in JSON format (not a String).

If that wasn't the case, try replacing 'MapRecord[{' and '}]' with an empty string, split the text with "," as the delimiter, and for each piece, split again with "=" as the delimiter.
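A minimal Groovy sketch of both approaches, assuming the value really is either a JSON string or a MapRecord-style string like the one above (the variable names are made up):

    import groovy.json.JsonSlurper
    import groovy.json.JsonOutput

    // Approach 1: the value is a JSON string - parse it, then write it back out as JSON
    def inputString = '{"firstName":"John","lastName":"Legend"}'
    def parsed = new JsonSlurper().parseText(inputString)   // a Map: [firstName:John, lastName:Legend]
    def json = JsonOutput.toJson(parsed)                    // '{"firstName":"John","lastName":"Legend"}'

    // Approach 2: the value is a MapRecord-style string - strip the wrapper, then split on ',' and '='
    def raw = 'MapRecord[{firstName=John, lastName=Legend}]'
    def stripped = raw.replace('MapRecord[{', '').replace('}]', '')
    def fields = stripped.tokenize(',').collectEntries { entry ->
        def (key, value) = entry.trim().tokenize('=')
        [(key): value]
    }
    assert fields == [firstName: 'John', lastName: 'Legend']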