Member since: 08-28-2017
Posts: 87
Kudos Received: 7
Solutions: 6

My Accepted Solutions
Title | Views | Posted
---|---|---
 | 693 | 07-08-2021 03:56 AM
 | 2305 | 07-20-2020 06:54 PM
 | 1122 | 06-03-2020 06:53 PM
 | 649 | 05-28-2020 01:38 AM
 | 1291 | 05-26-2020 01:26 AM
07-08-2021
03:56 AM
Hi Lucas, Please check that the user policies are defined properly and that the logged-in user has write access. You can refer to the article below for more details on user and access policies: https://nifi.apache.org/docs/nifi-docs/html/administration-guide.html#config-users-access-policies
05-26-2021
02:05 AM
Hi, If your use case is to retrieve data from multiple Kafka topics using NiFi, the articles below can help: https://community.cloudera.com/t5/Support-Questions/Retrieving-from-multiple-Kafka-topics-through-Nifi-causes/td-p/225277 https://stackoverflow.com/questions/51701582/multiple-kafka-topics-in-publishkafka-processor-in-apache-nifi
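As a quick illustration, the ConsumeKafka processor's Topic Name(s) property accepts a comma-separated list of topics (broker and topic names below are made-up examples; adjust to your environment):

```
ConsumeKafka processor (illustrative settings):
  Kafka Brokers     : broker1:9092,broker2:9092
  Topic Name(s)     : orders,payments,shipments   # comma-separated list
  Topic Name Format : names                       # or 'pattern' to match topics by regex
  Group ID          : nifi-consumer-group
```

With Topic Name Format set to pattern, a single regex (e.g. orders.*) can subscribe to many topics at once.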
10-09-2020
12:39 AM
1 Kudo
We can export and import a flow from one NiFi Registry to another using the nifi-toolkit cli.sh. Let's say we have two clusters, CFM-A and CFM-B, each running its own NiFi Registry:

CFM-A NiFi Registry URL: http://nifi-regA:18080
CFM-B NiFi Registry URL: http://nifi-regB:18080

Export the flow from the CFM-A NiFi Registry to a local directory:

/opt/cloudera/parcels/CFM-1.1.0.0/TOOLKIT/bin/cli.sh
Type 'help' to see a list of available commands, use tab to auto-complete.

1. List the buckets in the CFM-A NiFi Registry:

#> registry list-buckets -u http://nifi-regA:18080
#   Name         Id                                     Description
-   ----------   ------------------------------------   -----------
1   TestBucket   187b3d50-03ee-4e45-a717-eb113c6edbf2   (empty)

2. List the flows in the CFM-A NiFi Registry using the bucketIdentifier:

#> registry list-flows -u http://nifi-regA:18080 --bucketIdentifier 187b3d50-03ee-4e45-a717-eb113c6edbf2
#   Name           Id                                     Description
-   ------------   ------------------------------------   -----------
1   TailFileFlow   98ea1331-cd61-41bb-be06-0a0e85c9a275

3. Export the flow to the local file system using the flowIdentifier. This command stores the flow in JSON format under /tmp/test.json on the node where cli.sh is running:

#> registry export-flow-version -u http://nifi-regA:18080 --flowIdentifier 98ea1331-cd61-41bb-be06-0a0e85c9a275 --outputFile /tmp/test.json --outputType json

Now we can import the /tmp/test.json flow into the CFM-B NiFi Registry. On the CFM-B NiFi Registry, create an empty flow using create-flow, placed either in an existing bucket or in a newly created one.

List the buckets in the CFM-B NiFi Registry:

#> registry list-buckets -u http://nifi-regB:18080
#   Name       Id                                     Description
-   --------   ------------------------------------   -----------
1   TestFlow   cb152ab7-d569-4dcd-b332-8ca9025c8161   (empty)

1. Create a flow named test2 under the bucket TestFlow, using the bucketIdentifier:

#> registry create-flow -u http://nifi-regB:18080 --bucketIdentifier cb152ab7-d569-4dcd-b332-8ca9025c8161 --flowName test2

2. List the flows in the CFM-B NiFi Registry using the bucketIdentifier:

#> registry list-flows -u http://nifi-regB:18080 --bucketIdentifier cb152ab7-d569-4dcd-b332-8ca9025c8161
#   Name    Id                                     Description
-   -----   ------------------------------------   -----------
1   test2   ef6550d5-6306-416a-9372-81286a635e7b   (empty)

3. Import the flow from the local file /tmp/test.json using the flowIdentifier, which can be seen in the list-flows output above:

#> registry import-flow-version -u http://nifi-regB:18080 --flowIdentifier ef6550d5-6306-416a-9372-81286a635e7b --input /tmp/test.json

4. Check and verify the flow name and version details on the CFM-B NiFi Registry URL in a web browser.
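The interactive steps above can also be scripted, since cli.sh accepts a command as arguments. A minimal sketch, assuming the URLs, identifiers, and toolkit path from this post (adjust all of them to your clusters, and create the destination flow first):

```shell
#!/bin/bash
# Sketch: non-interactive export/import between two NiFi Registries.
CLI=/opt/cloudera/parcels/CFM-1.1.0.0/TOOLKIT/bin/cli.sh
SRC=http://nifi-regA:18080
DST=http://nifi-regB:18080

# Export the latest version of the source flow to a local JSON file
"$CLI" registry export-flow-version -u "$SRC" \
  --flowIdentifier 98ea1331-cd61-41bb-be06-0a0e85c9a275 \
  --outputFile /tmp/test.json --outputType json

# Import it as a new version of the (pre-created) destination flow
"$CLI" registry import-flow-version -u "$DST" \
  --flowIdentifier ef6550d5-6306-416a-9372-81286a635e7b \
  --input /tmp/test.json
```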
09-24-2020
06:38 PM
If you define JAVA_HOME in your environment, the NiFi startup script will pick it up from there. For example: export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64. Also, please check that the NiFi bootstrap.conf file is configured properly.
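As a minimal sketch (the JDK path below is an example from this post; point it at wherever your JDK is actually installed):

```shell
# Export JAVA_HOME so NiFi's bin scripts can locate the JDK
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64   # example path; adjust to your system
# Verify the variable is set before starting NiFi
echo "JAVA_HOME is set to: $JAVA_HOME"
```

Run this in the same shell session from which you start NiFi, or set it persistently (e.g. in the nifi user's profile).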
09-09-2020
12:18 AM
We can change the hostname from the Ambari end as described in the article below: https://docs.cloudera.com/HDPDocuments/Ambari-2.5.2.0/bk_ambari-administration/content/ch_changing_host_names.html
07-20-2020
06:54 PM
2 Kudos
You need to assign the "modify the data" and "view the data" access policies, and you need to grant the same policies to all your NiFi nodes as well. Please refer to the article below for more details: https://nifi.apache.org/docs/nifi-docs/html/administration-guide.html#component-level-access-policies
07-15-2020
12:28 AM
You can try the below curl command:

curl -u admin:<password> -X GET http://<ambari_server_Host>:8080/api/v1/stacks/<Stack_Name>/versions/<Stack_Version>/services/<SERVICE_NAME>

For example:

curl -u admin:<password> -X GET http://<ambari_server_Host>:8080/api/v1/stacks/HDP/versions/3.1/services/ZOOKEEPER

Or:

curl -u admin:<password> -X GET http://<ambari_server_Host>:8080/api/v1/stacks/HDF/versions/3.4/services/ZOOKEEPER
07-15-2020
12:21 AM
Please check whether ambari-metrics-monitor is up and running on these nodes, and whether there are any errors in the Metrics Collector logs.
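A quick triage sketch (assuming the control script and log location of a typical Ambari Metrics install; both may differ on your system):

```shell
# On each affected node: is the monitor daemon running?
ambari-metrics-monitor status

# On the collector host: look for recent errors in the collector log
tail -100 /var/log/ambari-metrics-collector/ambari-metrics-collector.log | grep -i error
```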
07-07-2020
12:11 AM
It seems you are facing a permission issue on the HDFS side:

Permission denied: user=svcqhdfuser, access=EXECUTE, inode="/databank/test/from_nifi":hdfs:hdfs:

Could you please check whether user svcqhdfuser has permission to access /databank/test/from_nifi in HDFS?

-- Log in as the svcqhdfuser user:
# su - svcqhdfuser
-- Run the below command to confirm:
# hdfs dfs -ls /databank/test/from_nifi
-- Or try to read or write something under /databank/test/from_nifi from the command line to confirm whether user svcqhdfuser has the required permissions; if not, assign the permissions and try again.
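If the check shows the permissions are missing, one way to grant them is as the HDFS superuser. A sketch (the path comes from the error above; the user/group choices are examples, and the ACL option requires dfs.namenode.acls.enabled=true):

```shell
# Option 1: grant svcqhdfuser access via an HDFS ACL (needs ACLs enabled on the NameNode)
su - hdfs -c "hdfs dfs -setfacl -m user:svcqhdfuser:rwx /databank/test/from_nifi"

# Option 2: change ownership of the directory instead
su - hdfs -c "hdfs dfs -chown svcqhdfuser:hdfs /databank/test/from_nifi"

# Confirm the effective permissions
su - hdfs -c "hdfs dfs -getfacl /databank/test/from_nifi"
```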
06-18-2020
08:21 PM
From the shared error stack, I can see the below error:

Starting regular datanode initialization
log4j:ERROR Failed to flush writer, java.io.IOException: No space left on device

Please check whether there is a disk space issue on these DataNodes.
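A quick way to confirm on each DataNode (the directory in the du example is just an illustration; point it at the mount that df shows as full, e.g. your DataNode data or log directories):

```shell
# Show free space on all mounted filesystems
df -h

# Find the biggest consumers under a suspect directory
du -sh /var/log/* 2>/dev/null | sort -rh | head -5
```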
06-18-2020
07:40 PM
It seems something is wrong with the disks. Please check with your Linux admin team to confirm whether the DataNode disks are healthy.
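For a first pass before involving the admin team, something like this can help (the device name is an example; smartctl requires the smartmontools package and typically root):

```shell
# Kernel-level I/O errors often show up in the kernel log
dmesg | grep -iE 'i/o error|bad sector' | tail -20

# SMART health summary for a specific disk
smartctl -H /dev/sdb   # example device; run once per data disk
```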
06-15-2020
06:24 PM
1 Kudo
@MaurizioMR It seems that even though your NiFi is a standalone instance, you have set it to run in cluster mode via the below property:

nifi.cluster.is.node=true

With this setting, NiFi runs in cluster mode: it takes part in leader election and sends heartbeats to the other nodes. Hence, set this property to false to run the NiFi node in standalone mode:

nifi.cluster.is.node=false
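For reference, the cluster-related section of nifi.properties on a standalone instance would typically look like this (the web port is an example; leave the cluster address/port empty):

```
# nifi.properties -- standalone (non-clustered) settings
nifi.cluster.is.node=false
nifi.cluster.node.address=
nifi.cluster.node.protocol.port=
nifi.web.http.port=8080
```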
06-07-2020
10:49 PM
We can configure the TailFile processor for a single file or for multiple files, and configure the Rolling Filename Pattern property, as explained in the TailFile processor documentation: https://nifi.apache.org/docs/nifi-docs/components/org.apache.nifi/nifi-standard-nar/1.6.0/org.apache.nifi.processors.standard.TailFile/ https://nifi.apache.org/docs/nifi-docs/components/org.apache.nifi/nifi-standard-nar/1.6.0/org.apache.nifi.processors.standard.TailFile/additionalDetails.html
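As an illustration of the multiple-files case (the directory and filename patterns are made-up examples; see the linked docs for the exact semantics of each property):

```
TailFile processor (illustrative settings):
  Tailing mode             : Multiple files
  Base directory           : /var/log/myapp
  File(s) to Tail          : app.*\.log      # regex when tailing multiple files
  Rolling Filename Pattern : app.*.log.*     # matches rolled-over copies of the tailed files
```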
06-03-2020
06:53 PM
I can see that the Validation Query field is empty, so you should define a validation query (e.g. SELECT 1). Without it, NiFi does not know whether the database went down or the connection timed out: the DBCP connection pool simply hands a connection to a processor, and if the DB goes down or the connection times out, that processor would eventually time out on the connection. If you define a Validation Query in your DBCP controller service, then the next time a processor requests a connection from the pool, the connection is validated first: if it is good, it is handed off to the requesting processor, and if it is bad, the pool tries to establish a new connection.
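For illustration (the connection URL is a made-up example; the validation query must be valid SQL for your specific database, e.g. SELECT 1 FROM DUAL on Oracle):

```
DBCPConnectionPool controller service (illustrative settings):
  Database Connection URL : jdbc:postgresql://db-host:5432/mydb   # example
  Validation Query        : SELECT 1    # cheap query run before handing out a connection
```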
05-28-2020
01:38 AM
2 Kudos
If the Ranger database is corrupted, then the only option is to drop and recreate it (if we don't have a backup). Refer to the article below for the steps to create the Ranger DB: https://docs.cloudera.com/HDPDocuments/HDP3/HDP-3.1.5/installing-ranger/content/configuring_a_database_instance_for_ranger.html
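As a rough sketch of the drop-and-recreate step for a MySQL-backed Ranger (database, user, and password below are examples; the linked doc has the authoritative steps, and Ranger setup must be rerun afterwards to recreate the schema):

```shell
mysql -u root -p <<'SQL'
DROP DATABASE IF EXISTS ranger;
CREATE DATABASE ranger;
CREATE USER IF NOT EXISTS 'rangeradmin'@'%' IDENTIFIED BY 'StrongPassword';
GRANT ALL PRIVILEGES ON ranger.* TO 'rangeradmin'@'%';
FLUSH PRIVILEGES;
SQL
```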
05-28-2020
01:27 AM
Once the rolling/express upgrade stages complete, Ambari gives you the option to Finalize the upgrade, Finalize Later, or Downgrade. Finalizing later gives you a chance to perform more validation on the cluster. Downgrade moves the cluster back to the previous version (essentially reversing the upgrade stages). However, once you have finalized the upgrade, you cannot downgrade to the previous version.
05-27-2020
01:08 AM
You can start and stop CDH services from the command line. Please refer to the documents below for details: https://docs.cloudera.com/documentation/enterprise/5-16-x/topics/cdh_ig_cdh_services_start.html https://docs.cloudera.com/documentation/enterprise/5-16-x/topics/cdh_ig_services_stop.html
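For a package-based (non-Cloudera-Manager) install, the role daemons are controlled with init scripts; a sketch (role names are examples, see the linked docs for the full list and the required start/stop ordering):

```shell
# Check, restart, or start individual role daemons on a node
sudo service hadoop-hdfs-namenode status
sudo service hadoop-hdfs-datanode restart
sudo service zookeeper-server start
```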
05-26-2020
01:37 AM
It seems that the user entry is missing from the users.xml file (default location: /var/lib/nifi/conf). Once users.xml and authorizations.xml are created, they are not regenerated or modified even if you later change the related configuration files. Hence, try renaming users.xml and authorizations.xml and restart the NiFi service so that NiFi can recreate these two files with the correct user details and policies.
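A sketch of that procedure (paths are examples; back the files up rather than deleting them, and restart from the Ambari UI instead if your cluster is Ambari-managed):

```shell
# Move the generated files aside so NiFi rebuilds them from its authorizer config
cd /var/lib/nifi/conf
mv users.xml users.xml.bak
mv authorizations.xml authorizations.xml.bak

# Restart NiFi (non-Ambari install; install path is an example)
/opt/nifi/bin/nifi.sh restart
```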
05-26-2020
01:26 AM
1 Kudo
In Ambari you can create a host configuration group and define the configs for those hosts, instead of having all hosts use the default configuration. You can refer to the document below for more details and steps: https://docs.cloudera.com/HDPDocuments/Ambari-2.7.5.0/managing-and-monitoring-ambari/content/amb_managing_host_configuration_groups.html
05-25-2020
09:10 PM
The UnpackContent processor does not support .rar files: http://nifi.apache.org/docs/nifi-docs/components/org.apache.nifi/nifi-standard-nar/1.11.4/org.apache.nifi.processors.standard.UnpackContent/index.html Hence, yes, you can use an ExecuteStreamCommand processor to invoke the WinRAR command-line tool.
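A rough sketch of the ExecuteStreamCommand setup (the unrar path and arguments are assumptions to adapt). Note that a command-line unrar generally needs the archive on disk, so you would typically land the file first (e.g. with PutFile) and pass its path rather than streaming the archive via stdin:

```
ExecuteStreamCommand processor (illustrative settings):
  Command Path       : /usr/bin/unrar                          # example path to the tool
  Command Arguments  : p;-inul;${absolute.path}/${filename}    # 'p' prints extracted data to stdout
  Argument Delimiter : ;
  Ignore STDIN       : true                                    # archive is read from disk, not stdin
```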
05-25-2020
08:50 PM
You can edit login-identity-providers.xml (present under /etc/nifi/conf) and update the below property:

From: <property name="Identity Strategy">USE_DN</property>
To: <property name="Identity Strategy">USE_USERNAME</property>

If it is an Ambari-managed NiFi cluster, go to Ambari --> NiFi --> Configs --> Advanced nifi-login-identity-providers-env and update the above property, then restart the NiFi service and log in with the username instead of the complete CN.
05-25-2020
08:43 PM
It seems you have configured your RPG to connect to a NiFi instance, and during the TLS handshake no matching SAN (Subject Alternative Name) is found in the NiFi certificates; hence you need to add the host FQDN to the server certificate as a SAN.
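To see which SANs the NiFi node actually presents, something like this can help (hostname and port below are examples; use your node's FQDN and its HTTPS port):

```shell
# Fetch the server certificate and print its Subject Alternative Name extension
echo | openssl s_client -connect nifi-node1.example.com:8443 2>/dev/null \
  | openssl x509 -noout -text \
  | grep -A1 'Subject Alternative Name'
```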
05-22-2020
09:42 PM
To migrate Grafana to another host, we need to install it on the new node and then remove the Grafana service from the old node. We can do the below steps:

1. Take a backup of the Ambari database.
2. Add the component to the new node using the following API:

curl --user username:password -H 'X-Requested-By: ambari' -i -X POST http://AMBARI_SERVER_HOST:8080/api/v1/clusters/CLUSTER_NAME/hosts/NEW_HOST_NAME/host_components/COMPONENTNAME

For example, to add Grafana:

curl --user admin:admin -H 'X-Requested-By: ambari' -i -X POST http://<ambari-server-host>:8080/api/v1/clusters/MyLabCluster/hosts/SecondLabNode02.cloudera.com/host_components/METRICS_GRAFANA

3. Go to the Ambari Hosts tab and click on the node where the component was added using the above API.
4. The current status of the component will be Install Pending.
5. Click on Install Pending and select Re-install to complete the installation.
6. Once the above completes, start the service using the Ambari service UI.
7. Stop the Grafana service on the old host using the Ambari service UI.
8. Remove the old component using the following Ambari API call:

curl -u username:password -H 'X-Requested-By: ambari' -X DELETE http://AMBARI_SERVER_HOST:8080/api/v1/clusters/CLUSTER_NAME/hosts/OLD_HOSTNAME/host_components/COMPONENTNAME

For example, to remove Grafana:

curl --user admin:admin -H 'X-Requested-By: ambari' -X DELETE http://<ambari-server-host>:8080/api/v1/clusters/MyLabCluster/hosts/FirstLabNode01.cloudera.com/host_components/METRICS_GRAFANA
05-22-2020
07:59 PM
You can also stop and start the [HDP] NameNode services from the command line using the below commands (here hdfs is the HDFS user):

If you are running NameNode HA (High Availability), start the JournalNodes by executing this command on the JournalNode host machines:

su -l hdfs -c "/usr/hdp/current/hadoop-hdfs-journalnode/../hadoop/sbin/hadoop-daemon.sh start journalnode"

Execute this command on the NameNode host machine(s):

su -l hdfs -c "/usr/hdp/current/hadoop-hdfs-namenode/../hadoop/sbin/hadoop-daemon.sh start namenode"

If you are running NameNode HA, start the ZooKeeper Failover Controller (ZKFC) by executing the following command on all NameNode machines. The starting sequence of the ZKFCs determines which NameNode becomes active:

su -l hdfs -c "/usr/hdp/current/hadoop-hdfs-namenode/../hadoop/sbin/hadoop-daemon.sh start zkfc"

If you are not running NameNode HA, execute the following command on the Secondary NameNode host machine. (If you are running NameNode HA, the Standby NameNode takes on the role of the Secondary NameNode.)

su -l hdfs -c "/usr/hdp/current/hadoop-hdfs-namenode/../hadoop/sbin/hadoop-daemon.sh start secondarynamenode"
05-22-2020
07:46 PM
I cannot see any value defined for the SSL Context Service property of the SiteToSiteBulletinReportingTask, so try again after configuring an SSL context service. You can refer to the article below for help: https://pierrevillard.com/2017/05/13/monitoring-nifi-site2site-reporting-tasks/
05-18-2020
12:31 AM
You can use Flume or NiFi to move data from Kafka to HDFS: a. Using Flume: Kafka Source -> Flume -> HDFS b. Using NiFi: configure a ConsumeKafka processor --> PutHDFS processor. And to integrate Kafka with Spark Streaming, you need to build a Spark Streaming job; refer to the doc below for more details: https://docs.cloudera.com/HDPDocuments/HDP2/HDP-2.6.5/bk_spark-component-guide/content/using-spark-streaming.html