Member since: 07-09-2016
Posts: 83
Kudos Received: 17
Solutions: 2
My Accepted Solutions
Title | Views | Posted
---|---|---
  | 1260 | 12-08-2016 06:46 AM
  | 2201 | 12-08-2016 06:46 AM
11-15-2016
05:53 AM
It gets stuck at the very first polling and never retrieves any messages.
11-15-2016
05:08 AM
Adding the lines below had no effect on nifi-app.log.

<appender name="APP_FILE">
  ...
  <logger name="org.apache.nifi.processors.kafka.pubsub.ConsumeKafka" level="DEBUG" />
  <logger name="org.apache.nifi.processors.kafka.pubsub.PublishKafka" level="DEBUG" />
  ...
</appender>
11-15-2016
05:02 AM
1 Kudo
Hi, NiFi has not been able to retrieve messages from Kafka; within a few seconds of starting, it throws the error below. The log does not provide enough detail to troubleshoot. Could you please shed some light on how to trace this issue?

Kafka version: 0.9.0.2.4.2.0-258
HDF version: 2.0.1

NiFi log:

22:44:40 EST DEBUG b24213d2-1008-1158-cec5-cd8961a6f7bc ConsumeKafka[id=b24213d2-1008-1158-cec5-cd8961a6f7bc] Rebalance Alert: Paritions '[]' revoked for lease 'org.apache.nifi.processors.kafka.pubsub.ConsumerPool$SimpleConsumerLease@6f4d8702' with consumer 'org.apache.kafka.clients.consumer.KafkaConsumer@31c1244d'
22:44:40 EST DEBUG b24213d2-1008-1158-cec5-cd8961a6f7bc ConsumeKafka[id=b24213d2-1008-1158-cec5-cd8961a6f7bc] Rebalance Alert: Paritions '[olem-0, olem-1]' assigned for lease 'org.apache.nifi.processors.kafka.pubsub.ConsumerPool$SimpleConsumerLease@6f4d8702' with consumer 'org.apache.kafka.clients.consumer.KafkaConsumer@31c1244d'

Kafka server log:

[2016-11-14 22:44:40,707] INFO [GroupCoordinator 1001]: Preparing to restabilize group olem_group with old generation 0 (kafka.coordinator.GroupCoordinator)
[2016-11-14 22:44:40,708] INFO [GroupCoordinator 1001]: Stabilized group olem_group generation 1 (kafka.coordinator.GroupCoordinator)
[2016-11-14 22:44:40,709] INFO [GroupCoordinator 1001]: Assignment received from leader for group olem_group for generation 1 (kafka.coordinator.GroupCoordinator)
[2016-11-14 22:45:10,711] INFO [GroupCoordinator 1001]: Preparing to restabilize group olem_group with old generation 1 (kafka.coordinator.GroupCoordinator)
[2016-11-14 22:45:10,711] INFO [GroupCoordinator 1001]: Group olem_group generation 1 is dead and removed (kafka.coordinator.GroupCoordinator)
[2016-11-14 22:47:17,332] INFO [Group Metadata Manager on Broker 1001]: Removed 0 expired offsets in 0 milliseconds. (kafka.coordinator.GroupMetadataManager)
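As a side note, when a ConsumeKafka processor sits at the first poll without ever receiving data, it can help to rule out the broker side with a standalone consumer on the same topic and group. Below is a minimal sketch using the kafka-python client; the topic (olem) and group (olem_group) come from the logs above, while the broker address is a placeholder for your environment.

```python
# Standalone sanity-check consumer (assumes the kafka-python package is
# installed; the broker address is a placeholder -- adjust to your cluster).
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "olem",                               # topic from the NiFi log above
    bootstrap_servers="broker-host:6667",  # placeholder broker address
    group_id="olem_group",                 # group from the Kafka server log
    auto_offset_reset="earliest",
    consumer_timeout_ms=30000,             # stop iterating after 30 s of silence
)

for record in consumer:
    print(record.partition, record.offset, record.value[:80])
```

If this standalone consumer also hangs without returning records, the problem is more likely on the broker or networking side than in the NiFi processor itself.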
Labels:
- Apache Kafka
- Apache NiFi
11-12-2016
10:45 PM
1 Kudo
I have an external Hive table based on Avro files, and one of its column descriptions is shown below. When I try to SELECT header.timestamp from the table, it fails with the error below. Is there a way to query columns whose names are reserved keywords?

col_name: header
data_type: struct<versionnum:binary, timestamp:bigint, uuid:binary>

Error: Error while compiling statement: FAILED: ParseException line 1:80 Failed to recognize predicate 'timestamp'. Failed rule: 'identifier' in expression specification
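For reference, Hive generally allows reserved words to be escaped with backquotes; whether that works for a struct member named timestamp may depend on the Hive version and parser settings. Below is a small sketch through PyHive, where the host, port, username, and table name events_avro are all placeholders.

```python
# Querying a struct field whose name collides with a reserved word by
# backquoting the identifier. Connection details and table name are placeholders.
from pyhive import hive

conn = hive.Connection(host="hive-server", port=10000, username="hive")
cursor = conn.cursor()

# Backquotes around `timestamp` keep the parser from treating it as a keyword.
cursor.execute("SELECT header.`timestamp` FROM events_avro LIMIT 10")
for row in cursor.fetchall():
    print(row)
```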
Labels:
- Apache Hive
11-10-2016
04:57 PM
Yes, but what if we need to track how many times the consumer has gone into retries, and notify whenever that happens? Since I do not see a failure relationship for the ConsumeKafka processor, do we have any other workaround/options to achieve this? The goal is to prevent duplicates by identifying the cause and notifying whenever it occurs.
11-10-2016
05:18 AM
1 Kudo
For example, with the PutHDFS processor, if the failure relationship is connected back to itself: 1) How do I control the number of retries for a single FlowFile? 2) I see that "FlowFile Expiration" can be used to expire the message (meaning discard the FlowFile content if it cannot be reprocessed within a specified period of time). Is there a way to retain the FlowFile content after a specified number of retries, that is, persist it on a different channel such as the local file system (PutFile), and perhaps send an email notification?
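One workaround people use is to track the retry count in a FlowFile attribute inside the failure loop and route on it. The sketch below uses an ExecuteScript processor with the python (Jython) engine; the attribute name retry.count is an arbitrary choice for this example, and the same counting could also be done with UpdateAttribute and expression language.

```python
# ExecuteScript (python/Jython engine) body: increment a retry counter
# attribute each time a FlowFile passes through the failure loop.
# "retry.count" is an arbitrary attribute name chosen for this sketch.
flowFile = session.get()
if flowFile is not None:
    current = flowFile.getAttribute("retry.count")
    retries = int(current) + 1 if current is not None else 1
    flowFile = session.putAttribute(flowFile, "retry.count", str(retries))
    session.transfer(flowFile, REL_SUCCESS)
```

A RouteOnAttribute processor with a property such as ${retry.count:ge(3)} can then divert exhausted FlowFiles toward PutFile and PutEmail instead of looping them back to PutHDFS.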
Labels:
- Apache NiFi
11-10-2016
04:04 AM
I only see a success relationship available for the ConsumeKafka processor. In the case of a failure, that is, the processor fails to commit the batch due to a rebalance, do we have any workaround/option to redirect to a PutEmail processor?
Labels:
- Apache Kafka
- Apache NiFi
11-10-2016
03:58 AM
In the NiFi ConsumeKafka processor you have the scheduling interval. In Kafka, let's consider the following properties: session.timeout.ms = 300000 (5 min) and heartbeat.interval.ms = 60000 (1 min). If the processor's scheduling interval is set to, say, 600 sec (10 min), would the processor still continue to run to maintain the heartbeat? Would the session timeout be specific to each run, i.e., per scheduled interval?
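For context, with the Kafka 0.9/0.10.0 consumer the heartbeat is only sent as part of poll(), so a consumer that is not polled within session.timeout.ms is considered dead and its partitions are rebalanced; later clients moved heartbeating to a background thread (KIP-62) and added max.poll.interval.ms. The sketch below simply shows how the two properties from the question map onto a plain consumer; the broker, topic, and group names are placeholders.

```python
# Sketch of the equivalent consumer settings with kafka-python
# (broker/topic/group are placeholders; values mirror the question above).
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "my-topic",
    bootstrap_servers="broker-host:6667",
    group_id="my-group",
    session_timeout_ms=300000,    # 5 min: how long the broker waits for a heartbeat
    heartbeat_interval_ms=60000,  # 1 min: how often a heartbeat should be sent
)

# With the 0.9-style protocol, heartbeats piggyback on poll(); if poll() is not
# called within session.timeout.ms, the group coordinator evicts the consumer
# and rebalances its partitions.
records = consumer.poll(timeout_ms=1000)
```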
Labels:
- Apache Kafka
- Apache NiFi
11-01-2016
10:14 PM
Since MergeContent just concatenates the binary content of the files, the resulting Avro file can no longer be parsed, because there is more than one header with the schema defined. Is there an option/workaround to strip the schema header from each message's binary content before the messages are merged? We have the schema (assume the same schema for all messages) in a static file that can be referenced at any time, but we want the final merged file to contain only the content, not the header details with the schema.
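One possible workaround outside of MergeContent is to re-serialize: read the records out of each Avro file and write them back out through a single writer, so that only one schema header is produced. Below is a rough sketch using the fastavro package; the file paths are placeholders, and a single shared schema is assumed, as described above.

```python
# Merge several Avro data files into one by rewriting the records through a
# single writer, so only one schema header appears in the output.
# Assumes all inputs share the same schema; paths are placeholders.
from fastavro import reader, writer

input_paths = ["part-0.avro", "part-1.avro", "part-2.avro"]

def all_records(paths):
    for path in paths:
        with open(path, "rb") as fo:
            for record in reader(fo):
                yield record

with open(input_paths[0], "rb") as fo:
    schema = reader(fo).writer_schema   # reuse the schema from the first file

with open("merged.avro", "wb") as out:
    writer(out, schema, all_records(input_paths))
```

Depending on the NiFi version, MergeContent's Merge Format property may also offer an Avro option that handles the container headers for you, which would avoid the re-serialization step.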