Member since: 06-26-2017
Posts: 191
Kudos Received: 10
Solutions: 1

My Accepted Solutions

Title | Views | Posted |
---|---|---|
 | 724 | 09-22-2017 07:13 PM |
02-12-2019
06:12 PM
@Nixon Rodrigues I have tried adding Ranger policies for the Atlas service, however there is something I am doing wrong; I will probably ask that as a separate question. Thanks a lot, I appreciate your help. Dhieru
02-11-2019
07:07 PM
@Nixon Rodrigues After adding the user in atlas-simple-authz-policy.json and changing the property atlas.authorizer.impl = simple, it is working now. Thanks a lot for your help. However, please do explain how it worked, because after searching a lot in the community I am still confused. Thanks, Dhieru
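For anyone hitting the same thing, a sketch of what the userRoles change looks like in /usr/hdp/current/atlas-server/conf/atlas-simple-authz-policy.json, used together with setting atlas.authorizer.impl = simple in Ambari as described above. This is only an illustration: ROLE_ADMIN is an example role name and the rest of the file (the roles and groupRoles sections) stays as shipped.

```json
"userRoles": {
  "admin":      ["ROLE_ADMIN"],
  "holger_gov": ["ROLE_ADMIN"]
}
```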
02-11-2019
06:59 PM
@Nixon Rodrigues Thanks a lot for the help, yes it did. I am using the HDP 3.1 sandbox. I went ahead and added the user holger_gov in the userRoles section as shown below, however I am not able to see the metadata of Hive tables or their lineage in the Atlas UI; the Atlas UI looks like the screenshot below. In addition, when I searched this community the advice was to run hive-import.sh, and when I did that I received the following error, both with "holger_gov" as the user and with "admin" as the user. Is this related to Ranger? When I run hdfs dfs -ls /user I do not see a holger_gov user, but admin is there, so admin should work. Do I need to change the property atlas.authorizer.impl from ranger to simple in Ambari? At present it is ranger. I am confused now; apologies for the long-winded question. Thanks for your help!
02-11-2019
03:24 PM
@Nixon Rodrigues I was able to add the user holger_gov in the users-credentials.properties file. For users having the same issue: edit the users-credentials.properties file using vi or any editor and add a line of the form holger_gov=<role you want to give>::<sha256sum of the password>. To generate the sha256sum, run the following command in a *nix terminal: echo -n "yourpassword" | sha256sum. It will give you output like 4d20573d20756b4b2cd80e41def04b52907710000b038f0f901d4b568e254fc6 - ; copy the sum only, leaving out the trailing space and - (dash), and paste it into the users-credentials.properties file. Restart your Atlas server from Ambari and it will work. Thanks again to @Nixon Rodrigues. Another question: do I also need to create and add policy-store.txt? As of now I do not have any such file in the directory /usr/hdp/current/atlas-server/conf/. Thanks, Dhieru
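Putting the steps above together as a shell sketch (ADMIN is an example role and the password is a placeholder; the conf path is the HDP sandbox default mentioned above):

```bash
# 1. Generate the password hash (copy the hex digest only; drop the trailing " -")
echo -n "yourpassword" | sha256sum

# 2. Add the user to users-credentials.properties (ADMIN is an example role)
vi /usr/hdp/current/atlas-server/conf/users-credentials.properties
#    holger_gov=ADMIN::4d20573d20756b4b2cd80e41def04b52907710000b038f0f901d4b568e254fc6

# 3. Restart the Atlas server from Ambari so the change is picked up
```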
02-11-2019
02:33 PM
@Nixon Rodrigues Thanks for the response, I appreciate your help. No, there is no holger_gov in the users-credentials.properties file; the image is attached. How do I add the user holger_gov in here?
02-06-2019
09:41 PM
Hi All, I am working on the HDP sandbox and trying to log in to the Atlas UI with the user holger_gov and the same value as the password, however it gives me invalid credentials. I looked for a solution in this community but it did not fix my issue: I changed the authorizer to simple and restarted, but that did not work, so I changed it back to ranger and tried again. I can see that the atlas user exists in the OS, as shown in the screenshots. Some screenshots detailing my situation are attached below; any help is appreciated. Thanks, Dhieru
Labels:
- Apache Atlas
11-02-2018
10:45 PM
Hi All, I installed Winlogbeat on Windows and want to ship logs to the ListenBeats processor in NiFi. However, when I check the output options of Winlogbeat there is nothing that streams to a raw TCP or UDP port; the only options are Elasticsearch, Logstash, Kafka, Redis, File, Console, and Cloud. Is there any way to stream to a TCP port so that ListenBeats can listen on it? Thanks in advance, Dhieru
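If it helps anyone else, one approach (an assumption on my part, not something confirmed in this thread): NiFi's ListenBeats processor speaks the same Beats/Lumberjack protocol as Logstash, so you can point Winlogbeat's Logstash output at the NiFi node and the port configured on ListenBeats. The host and port below are placeholders.

```yaml
# winlogbeat.yml (sketch): send events to NiFi's ListenBeats instead of a real Logstash
output.logstash:
  # placeholder host/port -- use your NiFi node and the ListenBeats "Port" property value
  hosts: ["nifi-node.example.com:5044"]
```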
Labels:
- Apache NiFi
11-02-2018
10:40 PM
@Jake Simmonds I have installed Winlogbeat, however I am not able to output to TCP or UDP so that ListenBeats can listen to it. Can you share how you achieved this and which output you configured? Thanks in advance, I appreciate your help. Dhiren
07-30-2018
06:50 PM
@dbains Thanks a lot, I appreciate it. Face palm, I should have read the documentation before asking the question; apologies.
07-30-2018
02:28 PM
Hi All,
Thanks a lot for all the help.
Is there any way to reset Kafka offsets for a single topic from the CLI? The approach in this link resets all the topics:
https://stackoverflow.com/questions/45670937/kafka-0-11-how-to-reset-offsets Thanks, Dhieru
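For reference, a hedged example of limiting the reset to a single topic (Kafka 0.11+; the broker, group, and topic names below are placeholders):

```bash
# Preview the change first by replacing --execute with --dry-run
kafka-consumer-groups.sh --bootstrap-server broker1:9092 \
  --group my-consumer-group \
  --topic my-topic \
  --reset-offsets --to-earliest \
  --execute
```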
Labels:
- Apache Kafka
07-11-2018
07:37 PM
Make sure that you install only NiFi on these nodes; do not co-locate Kafka and ZooKeeper on the NiFi nodes. This will improve network I/O performance. Dhieru
06-22-2018
03:36 PM
@Bryan Bende Thanks a lot, appreciate it!
06-22-2018
02:53 PM
Hi All, I am trying to tune my PublishKafka processor and want to add custom properties such as batch.size and linger.ms, however I do not see any mention of them in the source code of the processor. If I add these properties, will the PublishKafka processor honor them and use them when publishing? How will that work? Thanks, Dhiren
Labels:
- Apache Kafka
- Apache NiFi
06-22-2018
02:48 PM
@anarasimham Thanks for the response. If I add some custom properties such as linger.ms, batch.size, and buffer.memory, will the PublishKafka processor honor these properties and use them? I checked the source code of the PublishKafka processor but did not find any mention of these properties. Thanks again, Dhieru
06-20-2018
09:44 PM
Hi All, Thanks a lot to this awesome community. My use case is as follows: we use the ListenTCP processor to listen for small firewall events (max size 200 bytes), and we batch them at the ListenTCP level to improve network I/O and performance. We then split them into individual flow files, do some processing and enrichment, and publish them to Kafka. However, our Kafka is not able to keep up with the rate of flow and we have millions of flowfiles queued up, even though I read that Kafka can publish millions of small messages per second.

Which properties do I need to configure on PublishKafka to increase its performance? I tried tuning max.request.size, however that is only the maximum size of each message, so it did not help. I tried adding buffer.memory to buffer small messages together before publishing; still no help. Should I also add batch.size (which controls how many bytes of data to collect before sending messages to the Kafka broker; set this as high as possible without exceeding available memory, the default value is 16384) and linger.ms (which sets the maximum time to buffer data in asynchronous mode; for example, a setting of 100 batches 100 ms of messages to send at once, improving throughput at the cost of some delivery latency)?

In my opinion I should buffer all the small messages (at least 100,000 of them) and then write to the Kafka topic; this would improve network I/O through fewer, larger writes. I am just not sure which properties will help me here. Thanks, Dhieru
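For readers with the same problem, here is a sketch of those producer settings written as dynamic (custom) properties on PublishKafka; the values are illustrative starting points I am assuming, not tuned recommendations.

```properties
# Added as dynamic properties on PublishKafka; names are standard Kafka producer configs.
# batch.size: bytes to accumulate per partition before a send (Kafka default is 16384)
batch.size=65536
# linger.ms: wait up to 100 ms for a batch to fill before sending
linger.ms=100
# buffer.memory: total memory (bytes) the producer may use to buffer unsent records
buffer.memory=67108864
# compression.type: optional, compresses batches to cut network I/O
compression.type=snappy
```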
Labels:
- Apache Kafka
- Apache NiFi
06-08-2018
09:26 PM
Hi All, We have NiFi 1.2 installed as part of HDF and managed through Ambari. To debug an issue I added INFO-level logging by changing the NiFi logback.xml through Ambari; do I need to restart the NiFi nodes? I read that any change to logback.xml is picked up automatically within 30 seconds. Is that only when you change the file directly on the NiFi node, and not when the change is made through Ambari? Thanks, Dhieru
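For context, this is the kind of logback.xml edit being described; the logger name is only an example, and the 30-second auto-reload comes from the scan attributes NiFi ships on the root element.

```xml
<!-- nifi logback.xml: the scan attributes are what make edits take effect without a restart -->
<configuration scan="true" scanPeriod="30 seconds">
    <!-- example logger raised to INFO; the actual logger name depends on what you are debugging -->
    <logger name="org.apache.nifi.processors.standard" level="INFO"/>
</configuration>
```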
Labels:
- Apache NiFi
06-07-2018
02:10 PM
@Matt Clarke Thanks, that makes sense. Any idea whether it is fixed in NiFi 1.6? I checked the link and it says "Resolved"; reading it again, it will be fixed in NiFi 1.7. I appreciate your help. Dhieru
06-06-2018
10:37 PM
@Wynner it got fixed; however, when the data itself contains a newline character it causes a problem. Should I change the message demarcator from a newline (shift+enter) to something else?
06-06-2018
10:10 PM
Hi All, Thanks a lot to this awesome community. I am observing weird behavior in my prod template. Someone stopped the PublishKafka processor and the queue backed up. I got a notification and tried to find out who stopped it via the global menu --> flow configuration history --> filter on the UUID of that processor, but it did not show who stopped it. What happened? I am clueless, and for auditing purposes this is important. The image is attached. Thanks a lot, Dhieru
Labels:
- Apache NiFi
06-03-2018
08:37 PM
@Wynner Thanks a lot for looking at the question. I have not set the message demarcator on either the PublishKafka or the ConsumeKafka processor. Also, when I describe my Kafka topic it shows "Topic:test PartitionCount:1 ReplicationFactor:1", and the number of concurrent tasks for ConsumeKafka and PublishKafka is 1. Thanks a lot, Dhieru
06-01-2018
06:47 PM
@Felix Albani apologies for not making it clear. I am publishing a multi-line message using the PublishKafka processor; the message is something like the one below. When I consume it using ConsumeKafka the message is broken apart. Any help? Thanks <Event xmlns='http://schemas.microsoft.com/win/2004/08/events/event'><System><Provider Name='Microsoft-Windows-Security-Auditing' Guid='{54567625-5078-4554-A5BA-3E3B0328C30D}'/><EventID>4672</EventID><Version>0</Version><Level>0</Level><Task>12548</Task><Opcode>0</Opcode><Keywords>0x8020000000000000</Keywords><TimeCreated SystemTime='2008-05-23T18:33:23.071073900Z'/><EventRecordID>510676945</EventRecordID><Correlation/><Execution ProcessID='652' ThreadID='4792'/><Channel>Security</Channel><Computer>prdserver.mycomapny.org</Computer><Security/></System><EventData><Data Name='SubjectUserSid'>S-1-5-21-442726818-4567565561-3997648070-3159</Data><Data Name='SubjectUserName'>app</Data><Data Name='SubjectDomainName'>ADmycomINT</Data><Data Name='SubjectLogonId'>0abfbd2cd</Data><Data Name='PrivilegeList'>SeSecurityPrivilege
SeBackupPrivilege
SeRestorePrivilege
SeTakeOwnershipPrivilege
SeDebugPrivilege
SeSystemEnvironmentPrivilege
SeLoadDriverPrivilege
SeImpersonatePrivilege</Data></EventData></Event>
05-31-2018
02:33 PM
Is there anyone else who has faced this issue?
05-23-2018
10:51 PM
Hi, Thanks a lot to this awesome community. I am listening on a TCP port and sending the data to a PublishKafka processor for publishing. The original message I send is attached below, and the NiFi template is also shown in the picture below. When I consume using ConsumeKafka, the multi-line message is split; see the image. I am using NiFi 1.2 and Kafka version 0.10. Please help. Thanks a lot
Labels:
- Apache Kafka
- Apache NiFi
04-16-2018
08:48 PM
@Jerry O'Donovan could port 8080 already be in use by some other process?
04-11-2018
07:45 PM
Hi, Thanks a lot to this awesome community. We are trying to upgrade our NiFi cluster in PROD. Our Ambari version is 2.5.1 on HDF and NiFi is 1.2; we are upgrading to NiFi 1.5 and HDF 3.1.1. I understand we will have to stop all the flows in NiFi and then upgrade it. 1. How long will the upgrade take (on average)? 2. When we upgrade to NiFi 1.5, do we need to individually upgrade all the processors? Thanks, Dhiren
03-20-2018
02:24 PM
@Abdelkrim Hadjidj Thanks a lot, appreciate your help!
03-19-2018
07:42 PM
Hi All, Thanks a lot to this awesome community. I have a use case for which I need to monitor the number of garbage collections in NiFi. The link below shows how to enable GC logging via a bootstrap.conf change. Is there any way I can send GC collection info through a REST API to the Ambari Metrics Collector? https://community.hortonworks.com/questions/105577/nifi-how-to-turn-on-gc-logging-for-nifi.html Thanks a lot, Dhieru
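A rough sketch of what pushing a custom GC metric to the Ambari Metrics Collector over REST could look like. The collector host, port 6188, metric name, and payload field names are assumptions based on the standard AMS timeline API, so verify them against your AMS version before relying on this.

```bash
# Push one datapoint to the Ambari Metrics Collector (all names below are placeholders)
NOW=$(date +%s%3N)          # epoch milliseconds (GNU date)
HOST=$(hostname -f)
cat > /tmp/gc_metric.json <<EOF
{
  "metrics": [{
    "metricname": "nifi.gc.count",
    "appid": "nifi",
    "hostname": "${HOST}",
    "starttime": ${NOW},
    "metrics": { "${NOW}": 42 }
  }]
}
EOF
curl -s -H "Content-Type: application/json" \
     -X POST "http://ams-collector.example.com:6188/ws/v1/timeline/metrics" \
     -d @/tmp/gc_metric.json
```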
Labels:
- Apache NiFi
02-16-2018
08:05 PM
@kishore sanchina Please paste your NiFi app log after running this command: cat nifi-app.log | grep -C 10 "ERROR PublishKafka"
02-14-2018
01:44 PM
@Pranav Singhania have you tried hdfs dfs -get <hdfs path> <localpath>?