Member since: 09-28-2016
Posts: 59
Kudos Received: 15
Solutions: 0
10-23-2017
12:05 PM
Hi team, to pull data from servers we run ssh commands using the ExecuteStreamCommand processor. Some ssh commands/flow files appear to get blocked in this processor because the server does not respond (which is acceptable and common for these servers). In that case the processor should terminate the flow file, but it does not, and after some time ExecuteStreamCommand hangs. The queue shows a number of flow files, but we cannot clear it from the NiFi UI; it looks like ExecuteStreamCommand is still trying, or has locked them.
What would be the solution to overcome this situation even when the ssh commands cannot be run or completed? (See the wrapper-script sketch after the log below.)
Error we got in nifi-app.log:
2017-03-27 04:03:56,125 ERROR [Timer-Driven Process Thread-82] o.a.n.p.standard.ExecuteStreamCommand ExecuteStreamCommand[id=5bda45d9-6601-14f0-3a44-294b3f5d994e] Transferring flow file StandardFlowFileRecord[uuid=633b08a2-f84d-4dc7-9a8c-175c5bc13b2e,claim=StandardContentClaim [resourceClaim=StandardResourceClaim[id=1490600595096-9, container=default, section=9], offset=0, length=0],offset=0,name=jmbfis01.northamerica.delphiauto.net,size=0] to output stream. Executable command ssh ended in an error:
2017-03-27 04:03:56,125 WARN [Timer-Driven Process Thread-84] o.a.n.p.standard.ExecuteStreamCommand ExecuteStreamCommand[id=5bda45d9-6601-14f0-3a44-294b3f5d994e] Processor Administratively Yielded for 1 sec due to processing failure
2017-03-27 04:03:56,125 WARN [Timer-Driven Process Thread-84] o.a.n.c.t.ContinuallyRunProcessorTask Administratively Yielding ExecuteStreamCommand[id=5bda45d9-6601-14f0-3a44-294b3f5d994e] due to uncaught Exception: java.lang.IllegalStateException: Partition is closed
2017-03-27 04:03:56,126 WARN [Timer-Driven Process Thread-84] o.a.n.c.t.ContinuallyRunProcessorTask
java.lang.IllegalStateException: Partition is closed
at org.wali.MinimalLockingWriteAheadLog$Partition.update(MinimalLockingWriteAheadLog.java:945) ~[nifi-write-ahead-log-1.1.0.2.1.2.0-10.jar:1.1.0.2.1.2.0-10]
at org.wali.MinimalLockingWriteAheadLog.update(MinimalLockingWriteAheadLog.java:238) ~[nifi-write-ahead-log-1.1.0.2.1.2.0-10.jar:1.1.0.2.1.2.0-10]
at org.apache.nifi.controller.repository.WriteAheadFlowFileRepository.updateRepository(WriteAheadFlowFileRepository.java:210) ~[nifi-framework-core-1.1.0.2.1.2.0-10.jar:1.1.0.2.1.2.0-10]
at org.apache.nifi.controller.repository.WriteAheadFlowFileRepository.updateRepository(WriteAheadFlowFileRepository.java:178) ~[nifi-framework-core-1.1.0.2.1.2.0-10.jar:1.1.0.2.1.2.0-10]
at org.apache.nifi.controller.repository.StandardProcessSession.commit(StandardProcessSession.java:363) ~[nifi-framework-core-1.1.0.2.1.2.0-10.jar:1.1.0.2.1.2.0-10]
at org.apache.nifi.controller.repository.StandardProcessSession.commit(StandardProcessSession.java:305) ~[nifi-framework-core-1.1.0.2.1.2.0-10.jar:1.1.0.2.1.2.0-10]
at org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:28) ~[nifi-api-1.1.0.2.1.2.0-10.jar:1.1.0.2.1.2.0-10]
at org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1099) ~[nifi-framework-core-1.1.0.2.1.2.0-10.jar:1.1.0.2.1.2.0-10]
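One possible mitigation, offered only as a sketch: have ExecuteStreamCommand call a small wrapper script instead of invoking ssh directly, so that an unresponsive server produces a bounded, non-zero exit rather than a thread stuck waiting. The script name, host handling, and timeout value below are assumptions, not part of the original flow.
#!/usr/bin/env python
# ssh_with_timeout.py -- hypothetical wrapper for ExecuteStreamCommand.
# Runs the requested ssh command but gives up after a fixed timeout, so an
# unresponsive server yields a clean non-zero exit instead of a hung call.
import subprocess
import sys

TIMEOUT_SECONDS = 30  # assumed value; tune to whatever "no response" means for you

def main():
    host = sys.argv[1]                   # e.g. passed in from a flow-file attribute
    remote_cmd = " ".join(sys.argv[2:])  # the command to run on the remote server
    try:
        result = subprocess.run(
            ["ssh", "-o", "BatchMode=yes", "-o", "ConnectTimeout=10", host, remote_cmd],
            stdout=subprocess.PIPE, stderr=subprocess.PIPE,
            universal_newlines=True, timeout=TIMEOUT_SECONDS)
        sys.stdout.write(result.stdout)
        sys.exit(result.returncode)
    except subprocess.TimeoutExpired:
        # the child is killed for us; report the timeout and fail the command
        sys.stderr.write("ssh to %s timed out after %ss\n" % (host, TIMEOUT_SECONDS))
        sys.exit(1)

if __name__ == "__main__":
    main()
With a wrapper like this, an unresponsive server surfaces as a failed command on the flow file that downstream processors can route on, instead of an indefinitely blocked thread.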
Labels:
- Apache NiFi
09-21-2017
01:37 PM
A few Spark tasks in my job are failing with a memory overhead exception. They are retried 4 times (executors are removed and re-added), and if the issue still persists after the 4th attempt the Spark job is terminated. I would like to ignore that particular task (or the partition data causing it) and continue execution for the remaining partitions/tasks. I know we can resolve this by giving Spark's memoryOverhead property a sufficiently large value, but I don't want to use more memory. Please let me know whether this is possible, and if so, what the approach would be. Thanks in advance.
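A minimal PySpark sketch of the "skip the bad partition" idea, under two stated assumptions: the per-record work is a hypothetical process_record function, and the failure actually surfaces as a Python exception inside the task (a container killed by YARN for exceeding memoryOverhead dies outside Python and cannot be caught this way).
from pyspark import SparkContext

def process_record(record):
    # placeholder for the real, memory-hungry per-record work
    return record.upper()

def safe_partition(iterator):
    try:
        # materialize this partition's results; if anything raises,
        # drop the rest of the partition instead of failing the task
        for out in map(process_record, iterator):
            yield out
    except Exception as exc:
        # logged on the executor; the partition's remaining data is simply skipped
        print("skipping partition due to: %s" % exc)

sc = SparkContext(appName="skip-bad-partitions-sketch")
rdd = sc.parallelize(["a", "b", "c", "d"], 2)
print(rdd.mapPartitions(safe_partition).collect())
Note this trades completeness for job survival: records from a skipped partition are silently dropped.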
Labels:
- Apache Spark
09-06-2017
05:54 PM
Using textFileStream in Spark Streaming we pick up new files from a directory, but when an existing file is loaded into the directory again with modified content, the behaviour is inconsistent: sometimes the streaming program reads it, sometimes it does not. What is the reason, and what is the solution? Thanks in advance.
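For context, textFileStream only picks up files that newly appear in the monitored directory within a batch interval; rewriting an existing file in place is not reliably detected, which matches the inconsistency described above. Below is a minimal sketch of the usual workaround, assuming files can be staged elsewhere and then moved into the watched directory under a new name (all paths here are hypothetical).
import os
import shutil
import time
from pyspark import SparkContext
from pyspark.streaming import StreamingContext

WATCH_DIR = "/data/incoming"   # the directory passed to textFileStream
STAGE_DIR = "/data/staging"    # write or modify files here first

def publish(staged_file):
    # move the finished file into the watched directory under a unique name,
    # so the stream always sees it as a brand-new file
    # e.g. publish(os.path.join(STAGE_DIR, "report.csv")) after rewriting the file
    new_name = "%d_%s" % (int(time.time() * 1000), os.path.basename(staged_file))
    shutil.move(staged_file, os.path.join(WATCH_DIR, new_name))

sc = SparkContext(appName="textFileStream-sketch")
ssc = StreamingContext(sc, batchDuration=10)
lines = ssc.textFileStream(WATCH_DIR)
lines.pprint()
ssc.start()
ssc.awaitTermination()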
Labels:
- Apache Spark
07-26-2017
11:26 AM
I am a newbie to PySpark and could not figure out how to handle exceptions inside transformations. For example, I call a function on each line inside a map transformation, and I would like to handle a few exceptions in that function and log them. Example from my code: .map(lambda eachone : ultility.function1(eachone , somesrgs)); inside function1 I would like to handle exceptions. Please provide an example for better understanding. Thank you.
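A minimal sketch of one common pattern, assuming it is acceptable to return a sentinel value for bad records and filter them out afterwards; function1 and its extra argument here are hypothetical stand-ins for the real utility code, and the log lines end up in the executor logs rather than on the driver console.
import logging
from pyspark import SparkContext

logging.basicConfig(level=logging.WARNING)
log = logging.getLogger("map-errors")

def function1(line, somearg):
    # stand-in for the real utility function
    return int(line) + somearg

def safe_function1(line, somearg):
    try:
        return function1(line, somearg)
    except (ValueError, TypeError) as exc:
        log.warning("bad record %r skipped: %s", line, exc)
        return None                     # sentinel meaning "failed"

sc = SparkContext(appName="map-exception-handling-sketch")
rdd = sc.parallelize(["1", "2", "oops", "4"])
good = rdd.map(lambda eachone: safe_function1(eachone, 10)) \
          .filter(lambda x: x is not None)
print(good.collect())                   # [11, 12, 14]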
Labels:
- Apache Spark
04-11-2017
02:01 PM
We are getting the "NameNode High Availability Health" alert frequently from Ambari notifications.
We have observed the Standby or Active node frequently going to the Unknown state and coming back immediately.
Active['xxxxxxx:50070'],
Standby[], Unknown['xxxxxx:50070'].
Log records related to this are captured in ambari-alerts.log.
What could be the reason for Active/Standby changing to Unknown?
What would be the solution for this? Thanks in advance.
Looks like the issue is similar to: https://community.hortonworks.com/articles/74422/alerts-for-namenode-ha.html
Labels:
- Apache Ambari
- Apache Hadoop
04-04-2017
10:22 AM
We observed that a folder with an HBase table name is created under /tmp; it contains _temporary folders and part-m files. What might be creating/storing data here?
Is there any data loss or table corruption if we delete those temporary table directories?
Labels:
- Apache HBase
- Apache Phoenix
04-04-2017
05:18 AM
Thanks @rmaruthiyodan. Please provide some more clarity on your reply. You gave 'alter table' instead of 'alter index', which is fine. You mentioned SCHEMA.INDEXNAME; as I understand it, SCHEMA relates to the table. Does it apply to the index as well?
I don't have any SCHEMA for the table; what will be the default SCHEMA name? Thanks.
04-03-2017
02:44 PM
Thanks @mqureshi. Can we add snappy compression with an ALTER command in Phoenix? If yes, could you please provide the syntax for it? Thank you.
04-03-2017
01:46 PM
1 Kudo
We are using the Phoenix layer on top of HBase; the HBase tables were created via Phoenix and snappy compression was applied.
Is it possible to apply snappy (or any) compression on Phoenix secondary index tables as well? If yes, could you please share the syntax to use? Thanks.
Labels:
- Apache HBase
- Apache Phoenix
03-09-2017
08:12 AM
2017-03-08 17:39:56,598 INFO [B.fifo.QRpcServer.handler=40,queue=0,port=16020] shortcircuit.ShortCircuitCache: ShortCircuitCache(0x5d8b409d): could not load 1122567351_BP-1119343884-10.192.24.155-1480056022466 due to InvalidToken exception. org.apache.hadoop.security.token.SecretManager$InvalidToken: access control error while attempting to set up short-circuit access to /apps/hbase/data/data/default/HBASE_TABLE_NAME/28610685b5622e4352e32afa842b45b0/INFO/4962a35101d5419989bdcc417d8d85f3
at org.apache.hadoop.hdfs.BlockReaderFactory.requestFileDescriptors(BlockReaderFactory.java:589)
at org.apache.hadoop.hdfs.BlockReaderFactory.createShortCircuitReplicaInfo(BlockReaderFactory.java:488)
at org.apache.hadoop.hdfs.shortcircuit.ShortCircuitCache.create(ShortCircuitCache.java:784)
at org.apache.hadoop.hdfs.shortcircuit.ShortCircuitCache.fetchOrCreate(ShortCircuitCache.java:718)
at org.apache.hadoop.hdfs.BlockReaderFactory.getBlockReaderLocal(BlockReaderFactory.java:422)
at org.apache.hadoop.hdfs.BlockReaderFactory.build(BlockReaderFactory.java:333)
at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:662)
at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:898)
at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:955)
at java.io.DataInputStream.read(DataInputStream.java:149)
at org.apache.hadoop.hbase.io.hfile.HFileBlock.readWithExtra(HFileBlock.java:679)
at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1412)
at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockDataInternal(HFileBlock.java:1625)
at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockData(HFileBlock.java:1504)
at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:439)
at org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.loadDataBlockWithScanInfo(HFileBlockIndex.java:269)
at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekTo(HFileReaderV2.java:636)
at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekTo(HFileReaderV2.java:586)
Labels:
- Apache Hadoop
- Apache HBase
03-08-2017
12:42 PM
@Jay SenSharma I am able to access the above link. Result of the above URL: {"FileStatus":{"accessTime":0,"blockSize":0,"childrenNum":8,"fileId":16386,"group":"hdfs","length":0,"modificationTime":1480691521527,"owner":"hdfs","pathSuffix":"","permission":"777","replication":0,"storagePolicy":0,"type":"DIRECTORY"}}
-----------------------------------
I have set the correct FQDN value for hadoop.proxyuser.root.hosts.
Still having the issue.
03-08-2017
11:45 AM
I have used the FQDN only in Ambari; I mentioned it incorrectly here, sorry for that.
03-08-2017
11:39 AM
We have added the above two properties as you mentioned; we are still facing the same issue.
03-08-2017
11:25 AM
PFA for the whole issue we are facing: hive-view-error-log.txt. We are using Ambari 2.4.1 and HDP 2.5. FYI, the Ambari server was started as the root user. We have configured the following properties in Ambari: hadoop.proxyuser.root.groups=* and hadoop.proxyuser.root.hosts=FQDN(ambari-serverhost).
Getting the below error in the Hive view. Error:
Caused by: org.apache.hadoop.security.authorize.AuthorizationException: Unauthorized connection for super-user: root from IP xx.xx.xx.xx at sun.reflect.GeneratedConstructorAccessor3693.newInstance(Unknown Source)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:95)
at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.toIOException(WebHdfsFileSystem.java:509)
at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.validateResponse(WebHdfsFileSystem.java:487)
at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.access$200(WebHdfsFileSystem.java:113)
at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$FsPathOutputStreamRunner$1.close(WebHdfsFileSystem.java:936)
at org.apache.ambari.view.utils.hdfs.HdfsUtil.putStringToFile(HdfsUtil.java:48)
... 101 more Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.authorize.AuthorizationException): Unauthorized connection for super-user: root from IP xx.xx.xx.xx at org.apache.hadoop.hdfs.web.JsonUtil.toRemoteException(JsonUtil.java:118)
at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.validateResponse(WebHdfsFileSystem.java:477)
... 104 more
Labels:
- Apache Ambari
- Apache Hive
03-03-2017
08:03 AM
1 Kudo
Hi, as we know, we can connect to Phoenix/HBase in 2 ways:
1. The Phoenix Query Server with the thin client, where we can pass only one Phoenix Query Server host/IP in the connect string.
2. ZooKeeper JDBC (the thick client), where we can pass all ZooKeeper IPs as part of the connection URL.
My questions: 1. What is the difference between them? 2. Which one gives better performance? 3. How can we get load balancing if we go with the Phoenix Query Server JDBC (thin client) connection? 4. Which one is the best method for read performance?
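For illustration only, here is a thin-client connection sketch using the Python phoenixdb package, which talks to the Phoenix Query Server over HTTP; the thick-client ZooKeeper JDBC URL is shown in a comment for comparison. The host names and query are assumptions.
# Thin client: goes through one Phoenix Query Server (or a load balancer in front
# of several). For comparison, the thick client from Java uses a ZooKeeper URL of
# the form jdbc:phoenix:zk1,zk2,zk3:2181:/hbase and talks to HBase directly.
import phoenixdb

conn = phoenixdb.connect("http://pqs-host.example.com:8765/", autocommit=True)
cursor = conn.cursor()
cursor.execute("SELECT COUNT(*) FROM SYSTEM.CATALOG")
print(cursor.fetchone())
conn.close()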
Labels:
- Apache Phoenix
01-26-2017
05:17 PM
Thanks for your guidelines @Matt. I am sure each flow file has the grade and branch attributes. As you said, we are getting NULL (no value set) values, not empty strings, for branch and grade. In my case the data falls into 3 conditions: 1. Sometimes we get a NULL value for grade but not for branch. 2. Sometimes NULL for branch and some value for grade. 3. Sometimes both fields are NULL. Example:
ename, eno, salary, branch, grade
srini,1,10000,branch-A,
sai,2,2000,,grade-A
sai,3,,
As I understood, the conditions work with an 'AND' clause; will the above solution work for my case? Thanks in advance.
01-26-2017
03:58 AM
1 Kudo
Is there any way to assign a default value when an attribute of a flow file is NULL or empty in a NiFi flow? For example, we receive real-time streaming data (ename, empNO, salary, grade, branch), and now and then we get null values for the grade and branch fields. We would like to assign 'nograde' and 'branch-none' to the grade and branch attributes in a NiFi processor. Please let us know the way. Thanks in advance.
Labels:
- Apache NiFi
12-15-2016
08:31 PM
1 Kudo
Hi,
Is it possible to create an email alert when there are no flow files, i.e. no data is going through a connection or a processor?
Please provide your suggestions. Thanks in advance.
Labels:
- Apache Ambari
- Apache NiFi
12-14-2016
04:07 AM
1 Kudo
Is there any way to archive HBase data, similar to "Hadoop Archive"?
Labels:
- Apache HBase
11-05-2016
05:22 AM
The input and script below work on http://jolt-demo.appspot.com/#andrewkcarter2, but they do not work in NiFi's JoltTransformJSON.
This example concatenates the first name and last name. I understand it fails because the existing NiFi JoltTransformJSON processor does not support the "modify-overwrite-beta" operation.
Is it possible to get this functionality through a custom build? If yes, please describe the process to implement it,
or let us know how to utilize it otherwise. Thanks. @Yolanda M. Davis, please throw some light on this. Thanks. Input:
{
"data": [{
"IDF": "123",
"FirstName": "George",
"LastName": "Kosher",
"PaymentInfo": [{
"Type": "ABC",
"Text": "Soft",
"Amount": 3
}, {
"Type": "ABC",
"Text": "Text",
"Amount": 5
}],
"PaymentCard": [{
"CardNumber": "12345",
"CardType": "Credit"
}, {
"CardNumber": "56789",
"CardType": "Credit"
}]
}, {
"IDF": "456",
"FirstName": "Mill",
"LastName": "King",
"PaymentInfo": [{
"Type": "ABC",
"InstructionText": "Hard",
"Amount": 6
}, {
"Type": "ABC",
"InstructionText": "Text",
"Amount": 8
}],
"PaymentCard": [{
"CardNumber": "12345",
"CardType": "Credit"
}, {
"CardNumber": "56789",
"CardType": "Credit"
}]
}]
}
Script :
[
{
"operation": "shift",
"spec": {
"data": {
"*": { // data arrayf
"*": "data[&1].&", // pass thru stuff
"PaymentInfo": {
"*": {
"Amount": "data[&3].Amount[]",
"Text": "data[&3].PaymentText[]",
"InstructionText": "data[&3].PaymentText[]"
}
},
"PaymentCard": {
"0": {
"CardType": "data[&3].CardType"
}
}
}
}
}
},
{
"operation": "modify-overwrite-beta",
"spec": {
"data": {
"*": { // data array
"Name": "=concat(@(1,FirstName),' ',@(1,LastName))",
"Amount": "=sum" // should work on Jolt 0.0.24
}
}
}
},
{
"operation": "remove",
"spec": {
"data": {
"*": { // data array
"FirstName": "",
"LastName": ""
}
}
}
}
]
Output :
{
"data" : [ {
"IDF" : "123",
"Amount" : [ 3, 5 ],
"PaymentText" : [ "Soft", "Text" ],
"CardType" : "Credit",
"Name" : "George Kosher"
}, {
"IDF" : "456",
"Amount" : [ 6, 8 ],
"PaymentText" : [ "Hard", "Text" ],
"CardType" : "Credit",
"Name" : "Mill King"
} ]
}
Labels:
- Apache NiFi
11-04-2016
10:36 PM
First of all, thank you @Yolanda M. Davis for the quick response. Correct me if I am wrong: the above solution may only eliminate duplicate JSON records based on one field, but we have scenarios that require eliminating duplicates based on multiple fields, in the example below domain, location, time, function and unit. Please provide the Jolt script to process this. Or, simply put: eliminate duplicate JSON records from an array of JSON. Input:
[ { "domain": "www.google.com", "location": "newyork", "time": "CDT UTC-0500", "function": "PACK", "unit": "PACK_ESR" },
  { "domain": "www.yahoo.com", "location": "newyork", "time": "CDT UTC-0500", "function": "PACK", "unit": "PACK_ESR" },
  { "domain": "www.google.com", "location": "newyork", "time": "CDT UTC-0500", "function": "AOI_S1", "unit": "AOI_L31" },
  { "domain": "www.google.com", "location": "newyork", "time": "CDT UTC-0500", "function": "ALIGN", "unit": "ALIGN2" },
  { "domain": "www.yahoo.com", "location": "newyork", "time": "CDT UTC-0500", "function": "PACK", "unit": "PACK_ESR" },
  { "domain": "www.google.com", "location": "texas", "time": "CDT UTC-0500", "function": "PACK", "unit": "PACK_ESR" },
  { "domain": "www.hortonworks.com", "location": "newyork", "time": "CDT UTC-0500", "function": "ALIGN", "unit": "ALIGN2" } ]
Desired output:
[ { "domain": "www.google.com", "location": "newyork", "time": "CDT UTC-0500", "function": "PACK", "unit": "PACK_ESR" },
  { "domain": "www.yahoo.com", "location": "newyork", "time": "CDT UTC-0500", "function": "PACK", "unit": "PACK_ESR" },
  { "domain": "www.google.com", "location": "texas", "time": "CDT UTC-0500", "function": "PACK", "unit": "PACK_ESR" },
  { "domain": "www.hortonworks.com", "location": "newyork", "time": "CDT UTC-0500", "function": "ALIGN", "unit": "ALIGN2" } ]
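To pin down the intended logic, here is a plain-Python sketch that de-duplicates by a chosen set of fields (the five named above are used here); it could, for instance, be adapted to a scripted step instead of Jolt. The input file name is hypothetical.
import json

KEYS = ("domain", "location", "time", "function", "unit")

def dedupe(records, keys=KEYS):
    seen = set()
    unique = []
    for rec in records:
        fingerprint = tuple(rec.get(k) for k in keys)  # duplicate == same values for all keys
        if fingerprint not in seen:
            seen.add(fingerprint)
            unique.append(rec)
    return unique

with open("input.json") as f:   # hypothetical file holding the JSON array above
    records = json.load(f)
print(json.dumps(dedupe(records), indent=2))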
11-04-2016
08:31 PM
I am trying to remove duplicate JSON records from a JSON array using a Jolt transformation. Here is an example I tried:
Input :
[
{
"id": 1,
"name": "jeorge",
"age": 25
},
{
"id": 2,
"name": "manhan",
"age": 25
},
{
"id": 1,
"name": "george",
"age": 225
}
]
Jolt script:
[
{
"operation": "shift",
"spec": {
"*": {
"id": "[&1].id"
}
}
}
]
Output:
[ {
"id" : 1
}, {
"id" : 2
}, {
"id" : 1
} ]
I am getting only the selected records; along with that, I would like to remove the duplicates.
Desired Output :
[ {
"id" : 1
}, {
"id" : 2
} ]
Please provide the necessary script. Thanks in advance.
Labels:
- Apache Hadoop
- Apache NiFi
10-31-2016
04:12 PM
So it means it is the length of the row, not the key length? For better understanding, please take a look at the example below: row8 is the row-key value of a row, and age=88, name=srini88, no=8 are the column values of that row.
So, what is the value that is compared against MAX_ROW_LENGTH: the length of row8, or the length of the whole row (the combination of row8, 88, srini88, 8)? And is it possible to modify MAX_ROW_LENGTH in hbase-site or for a specific table? If yes, please let us know how. Example:
row8 column=0:age, timestamp=1475378868472, value=88
row8 column=0:name, timestamp=1475378868438, value=srini8
row8 column=0:no, timestamp=1475378868384, value=8
10-31-2016
03:33 PM
1 Kudo
Hi, in one of my applications I am getting an IllegalArgumentException: 38121 is > 32767, and I found that MAX_ROW_LENGTH has a constant value of 32767. My question is: does MAX_ROW_LENGTH apply to the whole row, or only to the key length? As per this link https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/HConstants.html#MAX_ROW_LENGTH , I understood that it applies to the row length, not the key length. Please correct me if I am wrong.
Labels:
- Apache HBase
- Apache Ranger
10-31-2016
03:29 PM
Thank you. I am referring to this link: https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/HConstants.html#MAX_ROW_LENGTH . What I understood is that MAX_ROW_LENGTH is about the whole row, not only the key. Here is my confusion: how can I conclude that this property applies only to the key?
10-31-2016
02:41 PM
Does MAX_ROW_LENGTH apply only to the key, or to the whole row consisting of all columns?
10-28-2016
07:28 PM
Hi, we have followed the same method. It works successfully most of the time, but sometimes we get the error below.
2016-10-28 18:43:03,603 ERROR [Timer-Driven Process Thread-70] o.apache.nifi.processors.standard.PutSQL PutSQL[id=df59f4c8-f60c-4eb3-7fda-882f7ece2d2a] PutSQL[id=df59f4c8-f60c-4eb3-7fda-882f7ece2d2a] failed to process session due to java.lang.IllegalArgumentException: Row length 37812 is > 32767: java.lang.IllegalArgumentException: Row length 37812 is > 32767 2016-10-28 18:43:03,611 ERROR [Timer-Driven Process Thread-70] o.apache.nifi.processors.standard.PutSQL java.lang.IllegalArgumentException: Row length 37812 is > 32767 at org.apache.hadoop.hbase.client.Mutation.checkRow(Mutation.java:545) ~[na:na] at org.apache.hadoop.hbase.client.Put.<init>(Put.java:110) ~[na:na] at org.apache.hadoop.hbase.client.Put.<init>(Put.java:68) ~[na:na] at org.apache.hadoop.hbase.client.Put.<init>(Put.java:58) ~[na:na] at org.apache.phoenix.index.IndexMaintainer.buildUpdateMutation(IndexMaintainer.java:779) ~[na:na] at org.apache.phoenix.util.IndexUtil.generateIndexData(IndexUtil.java:263) ~[na:na] at org.apache.phoenix.execute.MutationState$1.next(MutationState.java:221) ~[na:na] at org.apache.phoenix.execute.MutationState$1.next(MutationState.java:204) ~[na:na] at org.apache.phoenix.execute.MutationState.commit(MutationState.java:370) ~[na:na] at org.apache.phoenix.jdbc.PhoenixConnection$3.call(PhoenixConnection.java:459) ~[na:na] at org.apache.phoenix.jdbc.PhoenixConnection$3.call(PhoenixConnection.java:456) ~[na:na] at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53) ~[na:na] at org.apache.phoenix.jdbc.PhoenixConnection.commit(PhoenixConnection.java:456) ~[na:na] at org.apache.commons.dbcp.DelegatingConnection.commit(DelegatingConnection.java:334) ~[na:na] at org.apache.commons.dbcp.PoolingDataSource$PoolGuardConnectionWrapper.commit(PoolingDataSource.java:211) ~[na:na] at org.apache.nifi.processors.standard.PutSQL.onTrigger(PutSQL.java:371) ~[na:na] at org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27) ~[nifi-api-1.0.0.2.0.0.0-579.jar:1.0.0.2.0.0.0-579] at org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1064) ~[nifi-framework-core-1.0.0.2.0.0.0-579.jar:1.0.0.2.0.0.0-579] at org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:136) [nifi-framework-core-1.0.0.2.0.0.0-579.jar:1.0.0.2.0.0.0-579] at org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:47) [nifi-framework-core-1.0.0.2.0.0.0-579.jar:1.0.0.2.0.0.0-579] at org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:132) [nifi-framework-core-1.0.0.2.0.0.0-579.jar:1.0.0.2.0.0.0-579] at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [na:1.8.0_91] at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) [na:1.8.0_91] at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180) [na:1.8.0_91] at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294) [na:1.8.0_91] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [na:1.8.0_91] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_91] at java.lang.Thread.run(Thread.java:745) [na:1.8.0_91]
FYI, we are using NiFi 1.0 and none of our rows is longer than 500 bytes.
The first time we got the error we just cleared the queue and restarted, and it worked fine afterwards; then we got the error again. Restarting is not a good solution, and we lose data when we do that.
PFA for more information.
10-25-2016
03:08 AM
1 Kudo
NiFi: how do I configure logback.xml in NiFi to capture separate log files for each NiFi processor, in terms of debug and error modes? Thanks.
Labels:
- Apache NiFi
10-25-2016
03:04 AM
1 Kudo
How can I debug each NiFi processor in terms of how much RAM and network it uses, how many flow files it handles, etc.? Or is it possible to write separate log files for each NiFi processor? Please let us know the process. Thanks.
Labels:
- Apache NiFi
10-25-2016
02:59 AM
NiFi: without writing to disk, is it possible to send a flow file from one processor to another? For example, I have 3 processors in order: SplitJson, EvaluateJsonPath, UpdateAttribute. I would like to run only these 3 processors in memory (from the output of SplitJson to the output of UpdateAttribute). If it is possible to run only selected processors in memory, please let us know the process.
Labels:
- Apache Falcon
- Apache NiFi