Member since: 06-06-2016
Posts: 23
Kudos Received: 13
Solutions: 3
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 1060 | 09-09-2017 02:34 AM
 | 3369 | 10-03-2016 06:51 AM
 | 2301 | 08-11-2016 03:00 AM
11-13-2017
03:20 AM
@Kiem Nguyen Can you set the policy on the process group itself?

1. Log in as admin.
2. Select the process group and click the "Operate" controls on the left.
3. Click the key icon.
4. Remove user_A from both the "View the component" and "Modify the component" policies.

Also make sure that the sub-components are not overriding any parent policies. What I mean is: if the sub-components have their own policies, they do not inherit the policies of the parent component.
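The same policies can also be inspected over NiFi's REST API. This is a minimal sketch, assuming a secured NiFi 1.x instance with username/password login enabled; the host, port, password, and process-group ID are all placeholders:

```
# Obtain an access token (placeholders throughout)
TOKEN=$(curl -sk -X POST "https://<nifi-host>:<port>/nifi-api/access/token" \
  -d 'username=admin&password=<password>')

# Fetch the "view the component" policy attached to the process group;
# swap read for write to see the "modify the component" policy
curl -sk -H "Authorization: Bearer $TOKEN" \
  "https://<nifi-host>:<port>/nifi-api/policies/read/process-groups/<process-group-id>"
```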
09-19-2017
08:49 AM
1 Kudo
@Roshan Dissanayake Can you please show the configuration of the PublishKafka record reader and writer controller services? This looks like an issue while setting the attributes of the flow file as it is being sent to retrieve the schema from the registry.
09-18-2017
06:39 AM
3 Kudos
@Simon Jespersen This looks like an authorization issue. For the given topic, can you add ACLs for the anonymous user, since the protocol is PLAINTEXT?

```
bin/kafka-acls.sh --authorizer kafka.security.auth.SimpleAclAuthorizer \
  --authorizer-properties zookeeper.connect=<zookeeper:host> \
  --add --allow-principal User:ANONYMOUS \
  --operation Read --operation Write --operation Describe --topic <topic>
```
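To confirm the change took effect, the ACLs on the topic can be listed afterwards; a minimal sketch, with the ZooKeeper connect string and topic name as placeholders:

```
# List the ACLs currently applied to the topic
bin/kafka-acls.sh --authorizer-properties zookeeper.connect=<zookeeper:host> \
  --list --topic <topic>
```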
09-09-2017
02:34 AM
Have you tried using the Kafka processors already present in NiFi?

https://nifi.apache.org/docs/nifi-docs/components/org.apache.nifi/nifi-kafka-0-9-nar/1.3.0/org.apache.nifi.processors.kafka.pubsub.PublishKafka/index.html
https://nifi.apache.org/docs/nifi-docs/components/org.apache.nifi/nifi-kafka-0-10-nar/1.3.0/org.apache.nifi.processors.kafka.pubsub.PublishKafka_0_10/index.html
https://nifi.apache.org/docs/nifi-docs/components/org.apache.nifi/nifi-kafka-0-10-nar/1.3.0/org.apache.nifi.processors.kafka.pubsub.PublishKafkaRecord_0_10/index.html
07-05-2017
06:32 AM
1 Kudo
@siva karna Why not use the Wait and Notify processors? Route the original relationship of the SplitJson processor to the Wait processor. In the Wait processor, set Target Signal Count to ${fragment.count}. Set the Release Signal Identifier in both the Notify and Wait processors to ${fragment.identifier}. Now start the flow. SplitJson will split the flow file and route the original flow file to the Wait processor, where it will wait until the target signal count (in this case, fragment.count fragments) has been notified by the Notify processor. Please try to construct a flow like the sketch below. Hope this helps.
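A minimal sketch of the flow and the relevant properties, assuming NiFi 1.x property names and that Wait and Notify both point at the same DistributedMapCacheClientService:

```
SplitJson ──(splits)───► per-fragment processing ──► Notify
    │
    └──(original)──► Wait ──(success)──► downstream

Wait:
  Release Signal Identifier = ${fragment.identifier}
  Target Signal Count       = ${fragment.count}
Notify:
  Release Signal Identifier = ${fragment.identifier}
```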
05-23-2017
03:22 AM
Hey, can you please tell me which version of NiFi you are using? Also, have you made the required changes to Hive to enable streaming support?
03-11-2017
07:20 AM
4 Kudos
In this article we will be creating a flow to read files from HDFS and insert them into Hive using the PutHiveStreaming processor. Before going to NiFi we need to update some configuration in Hive. To enable Hive streaming we need to set the following properties (a quick check for them is sketched below):

- hive.txn.manager = org.apache.hadoop.hive.ql.lockmgr.DbTxnManager
- hive.compactor.initiator.on = true
- hive.compactor.worker.threads > 0
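A minimal sketch for verifying that these values are in effect, assuming beeline is available; the HiveServer2 host is a placeholder:

```
# SET <property>; with no value prints the current setting
beeline -u "jdbc:hive2://<hiveserver2-host>:10000/default" \
  -e "SET hive.txn.manager; SET hive.compactor.initiator.on; SET hive.compactor.worker.threads;"
```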
Coming to NiFi, we will be making use of the following processors:

1. ListHDFS + FetchHDFS – while configuring the List and Fetch HDFS processors, we need to make sure that both run on the primary node only, so that flow files are not duplicated across nodes.
2. ConvertJSONToAvro – the PutHiveStreaming processor supports input in the Avro format only, so any JSON input needs to be converted to Avro.
3. PutHiveStreaming

Let's construct the NiFi flow as below:

ListHDFS --> FetchHDFS --> ConvertJsonToAvro --> PutHiveStreaming

Configuring the PutHiveStreaming processor. Set its properties as follows:

- Hive Metastore URI – should be of the format thrift://<Hive Metastore host>:9083. Note that the Hive metastore host is not the same as the Hive server host.
- Hive Configuration Resources – paths to the Hadoop and Hive configuration files. We need to copy the Hadoop and Hive configuration files, i.e. hdfs-site.xml, core-site.xml, and hive-site.xml, to all the NiFi hosts.
- Database Name – the database to which you want to connect.
- Table Name – the table into which you want to insert the data. Note that the target table must meet these requirements (a sketch of a matching table definition is shown after this list):
  a. ORC is the only format currently supported, so your table must be created with "stored as orc".
  b. transactional = "true" must be set in the table's create statement.
  c. Bucketed but not sorted, so your table must have "clustered by (colName) into (n) buckets".
- Auto-create Partitions – if set to true, Hive partitions will be auto-created.
- Kerberos Principal – the Kerberos principal name.
- Kerberos Keytab – the path to the Kerberos keytab.

This completes the configuration part. Now we can start the processors to insert data into Hive from HDFS.
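For reference, a hedged sketch of a table definition that satisfies the requirements listed under Table Name above; the database, table, columns, and bucket count are all placeholders:

```
# Create a bucketed, transactional ORC table (all names are illustrative)
hive -e "
CREATE TABLE default.nifi_stream_test (id INT, name STRING)
CLUSTERED BY (id) INTO 4 BUCKETS
STORED AS ORC
TBLPROPERTIES ('transactional' = 'true');
"
```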
03-10-2017
03:07 AM
Hi Sunil, the default thrift port for HiveServer2 is 10000. Can you please check the port number in your HDP setup? The document you linked shows "10001" only as an example. https://cwiki.apache.org/confluence/display/Hive/Setting+Up+HiveServer2 says the following: "HIVE_SERVER2_THRIFT_PORT – Optional TCP port number to listen on, default 10000. Overrides the configuration file setting." So please try changing the port number from 10001 to 10000 in the JDBC URL. Thanks, Mahesh Nayak Kalyanpur
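A quick way to test the corrected URL, assuming beeline is available; the host is a placeholder:

```
# Test the corrected JDBC URL with a trivial query
beeline -u "jdbc:hive2://<hiveserver2-host>:10000/default" -e "SELECT 1;"
```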
10-20-2016
02:40 PM
@Saikrishna Tarapareddy A GSS initiate failed exception simply means that the provided credentials are incorrect. You must have Kerberized the cluster with a principal/keytab combination; please provide that as the Kerberos principal/keytab. The credentials that you have provided are those of the Hive service.
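A minimal sketch for verifying a principal/keytab pair from the NiFi host before putting it in the processor configuration; the keytab path, principal, and realm are placeholders:

```
# Obtain a ticket using the keytab, then confirm it
kinit -kt /path/to/your.keytab your_principal@YOUR.REALM
klist
```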
10-20-2016
04:06 AM
Hey, you are getting a GSS initiate failed error, which means NiFi is unable to connect to Hive using Kerberos. Please note that you have not provided the Kerberos credentials (the Kerberos Principal and Kerberos Keytab properties). Please try again after providing them.