Member since: 12-19-2022
Posts: 6
Kudos Received: 0
Solutions: 0
03-13-2023
10:43 AM
Hi @lben, it sounds like what you need is the Wait/Notify processors. For more information on how to use them, see: https://pierrevillard.com/2018/06/27/nifi-workflow-monitoring-wait-notify-pattern-with-split-and-merge/ and https://www.youtube.com/watch?v=ALvzZ6D4GtA. Let me know how that works for you, and if it helps, please accept the solution. Thanks
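To give a rough idea of how those two processors are usually wired up in the split-and-merge case that blog post covers (the property values below are only illustrative, not taken from your flow):

Notify (after each split fragment has been processed)
  Release Signal Identifier : ${fragment.identifier}
  Signal Counter Delta      : 1
  Distributed Cache Service : a DistributedMapCacheClientService

Wait (holding the original FlowFile until all fragments are done)
  Release Signal Identifier : ${fragment.identifier}
  Target Signal Count       : ${fragment.count}
  Distributed Cache Service : the same DistributedMapCacheClientService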
02-23-2023
04:47 AM
Hello, I have deployed NiFi with 2 nodes in Kubernetes, with requests of 2 CPU / 4Gi RAM and limits of 8 CPU / 16Gi RAM. Heap init is 2G and heap max is 4G. My pipeline works when the data requested from the PostgreSQL database is under 10 million rows (one table). When I try to pull more data (e.g. 11 million rows) with ExecuteSQLRecord, the pipeline fails with this log:

2023-02-23 08:47:48,466 ERROR [Index Provenance Events-2] o.a.n.p.index.lucene.EventIndexTask Failed to index Provenance Events
org.apache.lucene.store.AlreadyClosedException: this IndexWriter is closed

and the next log is:

Caused by: java.nio.file.FileAlreadyExistsException: /opt/nifi/nifi-current/provenance_repository/lucene-8-index-1676991780200/_48.cfs

What could be the reasons? It looks like RAM/CPU might not be enough, but I am not sure; 11 million rows does not seem like a lot.
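For context, the provenance repository and the Lucene index that appear in the error are configured through entries like these in nifi.properties (the values here are only examples, not my exact configuration):

nifi.provenance.repository.implementation=org.apache.nifi.provenance.WriteAheadProvenanceRepository
nifi.provenance.repository.directory.default=./provenance_repository
nifi.provenance.repository.max.storage.time=24 hours
nifi.provenance.repository.max.storage.size=10 GB
nifi.provenance.repository.index.shard.size=500 MB
nifi.provenance.repository.index.threads=2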
Labels:
- Apache NiFi
02-13-2023
09:23 AM
Not ruling out something environmental here, but what is being observed is that validation works while processor execution does not, even though both paths should be using the same basic code. The three DEBUG loggers suggested in my previous post may shed more light on how the logging output differs between validating the processor and actually running (starting) it, so that is probably the best place to start.
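In case it is useful, enabling any of those loggers is a one-line addition per logger in conf/logback.xml (the class name below is just a placeholder; substitute the three loggers from my earlier reply), and the DEBUG output then shows up in nifi-app.log:

<logger name="org.apache.nifi.some.package.SomeProcessor" level="DEBUG"/>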
12-19-2022
04:27 AM
I am using an external ZooKeeper, and I have applied the YAML below in order to run NiFi in cluster mode in Kubernetes:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nifi
  labels:
    name: nifi
    app: nifi
  annotations:
    app.kubernetes.io/name: nifi
    app.kubernetes.io/part-of: nifi
spec:
  revisionHistoryLimit: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nifi
  template:
    metadata:
      labels:
        app: nifi
    spec:
      automountServiceAccountToken: false
      enableServiceLinks: false
      restartPolicy: Always
      securityContext:
        runAsGroup: 1000
        runAsUser: 1000
        runAsNonRoot: true
        seccompProfile:
          type: RuntimeDefault
      containers:
        - name: nifi
          image: XXX
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 8080
              name: nifi
            - containerPort: 8082
              name: cluster
          env:
            - name: "NIFI_SENSITIVE_PROPS_KEY"
              value: "XXX"
            - name: ALLOW_ANONYMOUS_LOGIN
              value: "no"
            - name: SINGLE_USER_CREDENTIALS_USERNAME
              value: XXX
            - name: SINGLE_USER_CREDENTIALS_PASSWORD
              value: XXX
            - name: NIFI_WEB_HTTP_HOST
              value: "0.0.0.0"
            - name: NIFI_WEB_HTTP_PORT
              value: "8080"
            # - name: NIFI_WEB_PROXY_HOST
            #   value: 0.0.0.0:8080
            # - name: HOSTNAME
            #   value: "nifi1"
            - name: NIFI_ANALYTICS_PREDICT_ENABLED
              value: "true"
            - name: NIFI_ELECTION_MAX_CANDIDATES
              value: "2"
            - name: NIFI_ELECTION_MAX_WAIT
              value: "1 min"
            - name: NIFI_CLUSTER_IS_NODE
              value: "true"
            - name: NIFI_JVM_HEAP_INIT
              value: "3g"
            - name: NIFI_JVM_HEAP_MAX
              value: "4g"
            - name: NIFI_CLUSTER_NODE_CONNECTION_TIMEOUT
              value: "2 min"
            - name: NIFI_CLUSTER_PROTOCOL_CONNECTION_HANDSHAKE_TIMEOUT
              value: "2 min"
            - name: NIFI_CLUSTER_NODE_PROTOCOL_MAX_THREADS
              value: "15"
            - name: NIFI_CLUSTER_NODE_PROTOCOL_PORT
              value: "8082"
            - name: NIFI_CLUSTER_NODE_READ_TIMEOUT
              value: "15"
            - name: NIFI_ZK_CONNECT_STRING
              value: "zookeeper:2181"
            - name: NIFI_CLUSTER_NODE_ADDRESS
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
            - name: HOSTNAME
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
          livenessProbe:
            exec:
              command:
                - pgrep
                - java
            initialDelaySeconds: 60
            periodSeconds: 30
            timeoutSeconds: 10
            failureThreshold: 3
            successThreshold: 1
          readinessProbe:
            exec:
              command:
                - pgrep
                - java
            initialDelaySeconds: 180
            periodSeconds: 30
            timeoutSeconds: 10
            failureThreshold: 3
            successThreshold: 1
          resources:
            requests:
              cpu: 400m
              memory: 1Gi
            limits:
              cpu: 500m
              memory: 2Gi
          securityContext:
            allowPrivilegeEscalation: false
            privileged: false
            capabilities:
              drop:
                - ALL
With this I have one node in my NiFi cluster, and its node address is 0.0.0.0:8080. I would like to have 3 nodes, and I think having 0.0.0.0:8080 as the node address is not good. I do not know how to deploy the 3 nodes in Kubernetes with different node addresses. How should I do it, and how should I modify my YAML?
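I was wondering whether something like a StatefulSet with a headless Service is the way to go, so that each pod gets a stable name (nifi-0, nifi-1, nifi-2) to use as the node address instead of the pod IP. A rough sketch of what I mean (the name nifi-headless and the values below are just my guess, not tested):

apiVersion: v1
kind: Service
metadata:
  name: nifi-headless
spec:
  clusterIP: None            # headless: gives each pod a stable DNS name
  selector:
    app: nifi
  ports:
    - port: 8082
      name: cluster
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: nifi
spec:
  serviceName: nifi-headless
  replicas: 3
  selector:
    matchLabels:
      app: nifi
  template:
    metadata:
      labels:
        app: nifi
    spec:
      containers:
        - name: nifi
          image: XXX
          env:
            # pod name is stable: nifi-0, nifi-1, nifi-2
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            # each node advertises its stable DNS name instead of 0.0.0.0 / pod IP
            - name: NIFI_CLUSTER_NODE_ADDRESS
              value: "$(POD_NAME).nifi-headless"
            # ... plus the same cluster / ZooKeeper env vars as in the Deployment above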
Labels:
- Apache NiFi
- Apache Zookeeper