Member since: 06-02-2020
Posts: 40
Kudos Received: 4
Solutions: 8
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 1480 | 09-30-2020 09:27 AM
 | 811 | 09-29-2020 11:53 AM
 | 1516 | 09-21-2020 11:34 AM
 | 2197 | 09-19-2020 09:31 AM
 | 989 | 06-28-2020 08:34 AM
10-02-2020
10:33 AM
@justenji If you consider the various use cases:

1) Take the input 123;Valid;567;45. The first replace method ( replaceAll('Valid',';') ) gives 123; ;567;45, and applying the next replace method ( replaceAll(' ',';') ) then gives 123;;567;45. This is not the desired output.

2) Take another input, Valid;123;567;Valid;Valid;45. The first replace method gives ;123;567; ; ;45 and the second replace method gives ;;123;567;;;45. Again, this is not the desired output.

To remove all the confusion, what I wanted to do was separate all the values other than "Valid" with spaces, so I replaced both Valid and ; with spaces. Since there might be spaces left at the beginning or the end ( ( 123 567 45) or (123 567 45 ) ), I used the trim method to remove the outer spaces. You are then left with the values other than "Valid", separated by one or more spaces. The second replace method now adds exactly one semicolon, irrespective of the number of spaces in between, so the output is 123;567;45. Hence the regex Valid|; replaces both "Valid" and the semicolons.
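If you want to sanity-check the chain outside NiFi, here is a minimal plain-Groovy sketch of the same three steps (the clean() helper is mine, purely for illustration):

def clean(String s) {
    // replace "Valid" and ';' with spaces, trim the outer spaces,
    // then collapse each remaining run of spaces into a single ';'
    return s.replaceAll('Valid|;', ' ').trim().replaceAll('\\s+', ';')
}

assert clean('123;Valid;567;45') == '123;567;45'
assert clean('Valid;123;567;Valid;Valid;45') == '123;567;45'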
10-01-2020
03:31 AM
Hi @Karthik_Sise, Are you using a DistributedMapCacheServer (DMCS) controller service along with the DistributedMapCacheClientService (DMCCS) controller service? If not, please add the DMCS too. Note that the port configured in the DMCS should be the same port that is being used by the DMCCS. It doesn't matter where you add it, as long as both DMCS and DMCCS use the same port.
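A minimal sketch of the matching configuration (the port number 4557 is only the default / my assumption; any free port works as long as both sides agree):

DistributedMapCacheServer -> Port: 4557
DistributedMapCacheClientService -> Server Hostname: <host running the DMCS>, Port: 4557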
09-30-2020
12:47 PM
Hi @calonsca! Please have a look at this spec as well!

[
  {
    "operation": "shift",
    "spec": {
      "@": "data",
      "ID": "&",
      "#${date}": "date",
      "#${dataset:toLower()}": "dataset"
    }
  }
]
09-30-2020
12:32 PM
Hi @justenji ! Please take a look at the code below and tell me if it works or if you need any further changes. As of now, I have converted the timestamps and added dnr_group.

import java.nio.charset.StandardCharsets
import org.apache.nifi.components.PropertyValue
import groovy.json.JsonSlurper
import groovy.json.JsonOutput

flowFile = session.get()
if (!flowFile) return
try {
    def jsonSlurper = new JsonSlurper()
    def jsonOutput = new JsonOutput()
    def input = flowFile.read().withStream { data -> jsonSlurper.parse(data) }
    def pattern1 = 'yyyyMMddHHmmss'
    def tz1 = 'GMT+0200'
    def pattern2 = 'yyyy-MM-dd HH:mm:ss'
    def tz2 = 'GMT'
    input.stand = convertDatePattern(input.stand, pattern1, TimeZone.getTimeZone(tz1), pattern2, TimeZone.getTimeZone(tz2))
    for (int i = 0; i < input.table.size(); i++) {
        input.table[i].elem_stand = convertDatePattern(input.table[i].elem_stand, pattern1, TimeZone.getTimeZone(tz1), pattern2, TimeZone.getTimeZone(tz2))
        def dnr = input.table[i].dnr.replaceAll('\\(|\\)', '')
        def group = input.table[i].group.replaceAll('\\(|\\)', '')
        if (dnr.toInteger() < 10) { dnr = '0' + dnr }
        if (group.toInteger() < 10) { group = '0' + group }
        input.table[i].dnr_group = "V-" + dnr + "-" + group
        input.table[i].remove('dnr')
        input.table[i].remove('group')
    }
    flowFile = session.write(flowFile, { outputStream ->
        outputStream.write(jsonOutput.toJson(input).toString().getBytes(StandardCharsets.UTF_8))
    } as OutputStreamCallback)
    session.transfer(flowFile, REL_SUCCESS)
} catch (e) {
    log.error('Error Occurred, {}', e)
    session.transfer(flowFile, REL_FAILURE)
}

def convertDatePattern(String input, String pattern1, TimeZone tz1, String pattern2, TimeZone tz2) {
    return new Date().parse(pattern1, input, tz1).format(pattern2, tz2).toString()
}
09-30-2020
11:25 AM
Hi @DataD, Please find the below spec:

[
  {
    "operation": "shift",
    "spec": {
      "rows": {
        "*": {
          "row": {
            "*": {
              "@": "[&3].@(3,header[&1])"
            }
          }
        }
      }
    }
  }
]

This will give the output as:

[
  {
    "header1" : "row1",
    "header2" : "row2",
    "header3" : "row3"
  },
  {
    "header1" : "row4",
    "header2" : "row5",
    "header3" : "row6"
  }
]

I didn't convert it to

{
  "header1" : "row1",
  "header2" : "row2",
  "header3" : "row3",
  "header1" : "row4",
  "header2" : "row5",
  "header3" : "row6"
}

because that is not valid JSON: header1, header2 and header3 would be repeated keys at the same level.
09-30-2020
11:03 AM
Hi @Biswa, Please look at the below spec:

[
  {
    "operation": "shift",
    "spec": {
      "*": {
        "urlTypeName": {
          "Spring URL": {
            "@(2,examUrl)": "ExamDashBoardURL[]"
          }
        }
      }
    }
  }
]

Output will be:

{
  "ExamDashBoardURL" : [ "https://exam.test.com/page/1473161074" ]
}

Tell me if this is ok.
09-30-2020
10:53 AM
Hi @Ayaz , @mburgess ! Please have a look at this spec as well!

[
  {
    "operation": "shift",
    "spec": {
      "*": {
        "BRANCH_CODE": "[&1].Fields.FLD0001",
        "CUST_NO": "[&1].Fields.FLD0002",
        "AC_DESC": "[&1].Fields.FLD0003",
        "CUST_AC_NO": "[&1].ExternalSystemIdentifier",
        "#1": "[&1].InstitutionId"
      }
    }
  }
]

Just FYI!
09-30-2020
10:34 AM
1 Kudo
Hi @Nidutt! Use the below Expression Language: ${literal(${allMatchingAttributes("error_field.*"):join(";")}):replaceAll('Valid|;',' '):trim():replaceAll('\s+',';')}
09-30-2020
09:27 AM
@Sru111 , If possible, please upgrade your NiFi version to 1.11.4 or above; you will find the load-balancing option there. Otherwise, stick to your plan of using the primary node only for the FetchSFTP processor. You can still do it in your current NiFi version using Remote Process Groups, but it will become really complex that way.
09-30-2020
08:06 AM
Hi @Sru111, Setting the FetchSFTP processor to run on the primary node is fine. But if there are multiple files that you need to fetch from SFTP using the same processor, the second file will only be fetched after the first one (and similarly for the rest), whereas across all 3 nodes you can fetch them simultaneously. So using all 3 nodes is preferred for the FetchSFTP processor. May I know which version of NiFi you are using? I believe the load-balance strategy was introduced in 1.11.0 (not sure), but it started working correctly in version 1.11.4. Regarding PutHDFS, I don't have a clue about it! Sorry!
09-30-2020
01:27 AM
Hi @praneet , Can you add Authorization inside the "Attributes to Send" property and tell me if you are still getting the error?
09-30-2020
01:19 AM
Hi @Manoj90 , Please terminate the 'original' relationship of GetHTMLElement. The reason you are getting so many flowFiles is as follows: on the first run, the GetHTMLElement processor returns 10 flowFiles to the 'success' relationship. Because the operation was successful, the input flowFile (10.79 KB) is routed to the 'original' relationship as well. You then have the same file that was the input to the GetHTMLElement processor; it gets processed again, sending 10 flowFiles to 'success' and the original (input) flowFile to 'original'. This loop continues indefinitely until you run into memory issues.
09-30-2020
12:44 AM
Hi @kquintero, Can you put the attribute into the flowFile content and check whether the spaces are removed in the content as well? From what I know, the attribute listing doesn't make extra spaces obvious, but the spaces are still present; when you hover over those values, you can still see the empty spaces.
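A quick way to prove the spaces are there (my suggestion; the attribute name myAttr and the property name are hypothetical) is to add a dynamic property in an UpdateAttribute processor that captures the value's length:

myAttr.length -> ${myAttr:length()}

If the reported length is larger than the visible text, the extra characters are spaces.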
09-29-2020
02:10 PM
Hi @vineet_harkut , To check whether the required parameters are valid in a MongoDB processor, here is what I would do. Suppose there is a query like {"key":"someValue","name":"someOtherValue"}, where "someValue" and "someOtherValue" come from attributes of the incoming flowFile named "key" and "name" respectively; the resulting query in the GetMongo processor is then {"key":"${key}","name":"${name}"}. Before the GetMongo processor, I would use a RouteOnAttribute processor to check that the input to the query is valid, and only make the query if the input is correct. If no document is found for the query, the GetMongo processor returns an empty document (empty string) as the flowFile content, but only when Send Empty Result is set to true in the processor. Later, when I use an EvaluateJsonPath processor, the flowFile is routed to "failure" when it is empty (an empty string is not valid JSON), or to "matched" if a match for the given JsonPath is found. Using $ as the JsonPath puts the same content back into the flowFile, since $ corresponds to the entire JSON object coming from the input flowFile. From there you can add the remaining logic.
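For the RouteOnAttribute step, a minimal sketch of such a routing property (the property name validInput and the non-empty check are my assumptions; adapt the validation to your actual rules):

validInput -> ${key:isEmpty():not():and(${name:isEmpty():not()})}

FlowFiles matching this route go on to GetMongo; the unmatched ones can be handled separately.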
09-29-2020
12:06 PM
Hi @Sru111 , Consider the MergeContent processor in the picture as your MergeContent processor. Configure the queue (here, 'success') that acts as the upstream queue of your MergeContent processor: select the load-balance strategy 'Single node', and all the files will arrive as input on only one of the nodes. (Optional) You can configure the downstream queue of the MergeContent processor with the load-balance strategy 'Round robin', so that the files are distributed among all the nodes in the cluster.
09-29-2020
11:53 AM
Hi @Sru111 , For the FetchSFTP processor, change the Bulletin level to NONE; then that error won't appear. Note that if there are any other errors, such as an authentication error or a communication timeout, no bulletin will appear for them either. The flowFiles will still be routed to the respective relationships, though.
09-27-2020
08:08 AM
Hi @wcbdata Can you explain the usage of '#' in the spec you used above: {
"operation": "shift",
"spec": {
"tmp": {
"*": {
"0": {
"*,*,*,*": {
"@(4,runid)": "particles.[#4].runid",
"@(4,ts)": "particles.[#4].ts",
"$(0,1)": "particles.[#4].Xloc",
"$(0,2)": "particles.[#4].Yloc",
"$(0,3)": "particles.[#4].Xdim",
"$(0,4)": "particles.[#4].Ydim"
}
}
}
},
"*": "&"
}
}
09-23-2020
09:11 AM
Hi @ishantiwari91, Can you elaborate on the condition under which you want to merge the flowFiles?
09-21-2020
11:34 AM
Hi @DarkStar Your Expression Language (EL) should be:

${field.value:substring(0,28):toDate("MMM dd,yyyy HH:mm:ss.SSSSSSSSS"):format("yyyy-MM-dd HH:mm:ss.SSSSSS")}

In your EL, the pattern you used has six "S" characters, but the input has up to nine digits of fractional-second precision. Since you gave six, it misreads the nine-digit fraction (388267000). I think that must be the reason.
09-21-2020
11:16 AM
Hi @GKrishan !

[
  {
    "operation": "shift",
    "spec": {
      "MSH_*.*": {
        "@": "msh_&(1,1).&(1,2)"
      },
      "*": "root.&"
    }
  },
  {
    "operation": "shift",
    "spec": {
      "msh_*": {
        "$(0,1)": "&1.seq",
        "@(1,root)": {
          "*": "&2.&"
        },
        "*": "&1.&"
      }
    }
  },
  {
    "operation": "shift",
    "spec": {
      "*": "msh"
    }
  }
]

This spec converts

{
  "name": "adwad",
  "controlid": "65363_738_VI",
  "MSH_1.AcceptAcknowledgementType": "AL",
  "MSH_1.SendingFacility.NamespaceID": "6",
  "MSH_1.SendingApplication.NamespaceID": "HOSP",
  "MSH_2.AcceptAcknowledgementType": "AK",
  "MSH_2.SendingFacility.NamespaceID": "7",
  "MSH_2.SendingApplication.NamespaceID": "HOSP"
}

to

{
  "msh" : [ {
    "seq" : "1",
    "name" : "adwad",
    "controlid" : "65363_738_VI",
    "AcceptAcknowledgementType" : "AL",
    "SendingFacility.NamespaceID" : "6",
    "SendingApplication.NamespaceID" : "HOSP"
  }, {
    "seq" : "2",
    "name" : "adwad",
    "controlid" : "65363_738_VI",
    "AcceptAcknowledgementType" : "AK",
    "SendingFacility.NamespaceID" : "7",
    "SendingApplication.NamespaceID" : "HOSP"
  } ]
}

meaning, whatever attributes are present in the input JSON other than the MSH_* ones will be added to the final output JSON. If only controlid is required, you can use the following spec instead:

[
  {
    "operation": "shift",
    "spec": {
      "MSH_*.*": {
        "@": "msh_&(1,1).&(1,2)"
      },
      "*": "&"
    }
  },
  {
    "operation": "shift",
    "spec": {
      "msh_*": {
        "$(0,1)": "&1.seq",
        "@(1,controlid)": "&1.controlid",
        "*": "&1.&"
      }
    }
  },
  {
    "operation": "shift",
    "spec": {
      "*": "msh"
    }
  }
]
09-21-2020
08:51 AM
Hi @praneet ! Look at the example I used.

Input:

{
  "Подписка": "awdl"
}

Spec:

[
  {
    "operation": "shift",
    "spec": {
      "Подписка": "Подпиа"
    }
  }
]

Output:

{"Подпиа":"awdl"}

I was able to perform the Jolt transform without any problems in NiFi 1.11.4. Tell me if you are still facing the issue.
09-21-2020
08:21 AM
1 Kudo
Hi @Kilynn ! Firstly, I used a regex to replace "} {" (with a variable amount of space) with "},{". Then I added square brackets at the beginning and the end ( "[" and "]" ), which produced valid JSON.

ReplaceText config: (screenshot)

Groovy code:

import org.apache.commons.io.IOUtils
import java.nio.charset.StandardCharsets

flowFile = session.get()
if (!flowFile) return
try {
    def input = ''
    session.read(flowFile, { inputStream ->
        input = IOUtils.toString(inputStream, StandardCharsets.UTF_8)
    } as InputStreamCallback)
    flowFile = session.putAttribute(flowFile, 'Content-Type', 'application/json')
    flowFile = session.putAttribute(flowFile, 'mime.type', 'application/json')
    flowFile = session.write(flowFile, { outputStream ->
        outputStream.write(('[' + input + ']').toString().getBytes(StandardCharsets.UTF_8))
    } as OutputStreamCallback)
    session.transfer(flowFile, REL_SUCCESS)
} catch (e) {
    log.error('Error Occurred, {}', e)
    session.transfer(flowFile, REL_FAILURE)
}
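The ReplaceText screenshot did not survive the export, so as a hedged reconstruction (these values are my assumption, not the original ones) the processor would be configured along these lines:

Replacement Strategy: Regex Replace
Evaluation Mode: Entire text
Search Value: \}\s*\{
Replacement Value: },{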
09-19-2020
12:48 PM
Hi @rosa_negra ! Try this spec:

[
  {
    "operation": "shift",
    "spec": {
      "header": {
        "*": "&"
      },
      "body": {
        "dataM": {
          "*": "&"
        },
        "data": {
          "*": {
            "values": {
              "*": {
                "@0": "data[&3].values[&1].value",
                "@(2,source)": "data[&3].values[&1].source"
              }
            }
          }
        }
      }
    }
  },
  {
    "operation": "shift",
    "spec": {
      "data": {
        "*": {
          "values": {
            "*": "data[]"
          }
        }
      },
      "*": "&"
    }
  },
  {
    "operation": "shift",
    "spec": {
      "data": {
        "*": {
          "*": {
            "*": {
              "@0": "[&3].@(5,&2M[&1].name)"
            }
          },
          "@(2,namespace)": "[&1].namespace",
          "@(2,version)": "[&1].version"
        }
      }
    }
  }
]

Firstly, I added source into every values array and then created an array of values; afterwards, I created the array in the desired format. The last spec can also be written as:

{
  "operation": "shift",
  "spec": {
    "data": {
      "*": {
        "value": {
          "*": {
            "@0": "[&3].@(5,valueM[&1].name)"
          }
        },
        "source": {
          "*": {
            "@0": "[&3].@(5,sourceM[&1].name)"
          }
        },
        "@(2,namespace)": "[&1].namespace",
        "@(2,version)": "[&1].version"
      }
    }
  }
}
09-19-2020
09:31 AM
Hi @justenji ! Please find the Groovy code below; use it in an ExecuteGroovyScript processor.

import java.nio.charset.StandardCharsets
import groovy.json.JsonSlurper
import groovy.json.JsonOutput

flowFile = session.get()
if (!flowFile) return
try {
    def jsonSlurper = new JsonSlurper()
    def jsonOutput = new JsonOutput()
    def input = flowFile.read().withStream { data -> jsonSlurper.parse(data) }
    def tables = input.table
    for (int i = 0; i < tables.size(); i++) {
        def pattern = 'yyyyMMdd'
        def datum = tables[i].datum
        if (tables[i].containsKey('uhrzvon')) {
            pattern = pattern + 'HH:mm'
            datum = datum + tables[i].uhrzvon
        }
        tables[i].datum = new Date().parse(pattern, datum, TimeZone.getTimeZone('GMT+0200')).format('yyyy-MM-dd HH:mm:ss.SSSZ', TimeZone.getTimeZone('GMT'))
    }
    input.table = tables
    flowFile = session.write(flowFile, { outputStream ->
        outputStream.write(jsonOutput.toJson(input).toString().getBytes(StandardCharsets.UTF_8))
    } as OutputStreamCallback)
    session.transfer(flowFile, REL_SUCCESS)
} catch (e) {
    log.error('Error Occurred, {}', e)
    session.transfer(flowFile, REL_FAILURE)
}
09-19-2020
07:04 AM
Hi @ammarhassan , Please find the below Jolt spec:

[
  {
    "operation": "shift",
    "spec": {
      "files": {
        "*": {
          "*-*": {
            "$0": "files.&(1,1)File"
          }
        }
      },
      "*": "&"
    }
  }
]
09-19-2020
06:40 AM
@Kilynn There are no [] or {}, and there is no comma (,) between them either. Can you tell me whether there will be a comma between each DATA_MESSAGE record or not? And if yes, are you merging the records using any processor?
09-19-2020
06:32 AM
Hi @SashankRamaraju Parameter Context Groups (PCG) have to be added manually. But you can have a Parameter Context Group that is specific to the environment. For example, a PCG named 'Env variables' (say) holds DEV values in the DEV environment and PROD values in the PROD environment; you then only ever select the PCG 'Env variables', but its config (values) is specific to each environment. If it is mandatory to have 3 PCGs, selecting the PCG is still manual; if that is acceptable, you can use something like the below.

GenerateFlowFile config: (screenshot)

UpdateAttribute config: (screenshot)

Here, I am trying to evaluate #{DEV_env}, and DEV is coming from the attribute env.
07-04-2020
04:49 AM
@Branana Have a look at my Jolt spec!

[
  {
    "operation": "shift",
    "spec": {
      "Hdr": "header",
      "Data": {
        "*": {
          "Clltrl": {
            "*": {
              "*": "body.&[]"
            }
          },
          "FndngSrce": {
            "*": {
              "*": "body.&[]"
            }
          },
          "UsrDfnd": {
            "*": "body.&"
          },
          "*": "body.&"
        }
      }
    }
  }
]
07-04-2020
04:28 AM
Refer to the post here
06-28-2020
10:21 AM
Hi @tsvk4u, Please check the following Jolt spec and tell me if it's okay:

[
  {
    "operation": "shift",
    "spec": {
      "*": {
        "value_schema_id": "&",
        "system_code": "records[&1].value.&",
        "event_type": "records[&1].value.&",
        "metric_name": "records[&1].value.event_detail.&",
        "metric_id": "records[&1].value.event_detail.&",
        "create_ts": "records[&1].value.event_detail.&",
        "global_person_profile_id": "records[&1].value.event_detail.&"
      }
    }
  },
  {
    "operation": "cardinality",
    "spec": {
      "value_schema_id": "ONE"
    }
  }
]