Member since: 05-10-2016
Posts: 303
Kudos Received: 35
Solutions: 0
08-07-2017
01:59 PM
@mayki wogno
Have you tried clearing your browser cache? It may still be holding the token from the first URL, https://hostname/nifi.
04-12-2017
04:40 AM
Generally, upgrading from the older version to the newer version (HDP 2.6) should take care of this. I'm not sure, as I haven't tested this scenario; I will update my findings soon. But in the case of a successful upgrade, this error will be handled.
03-24-2017
02:25 PM
Try changing these values from their defaults to very small numbers:
autopurge.purgeInterval=1
autopurge.snapRetainCount=3
A restart of ZooKeeper (in your case, NiFi) will be needed for the changes to take effect.
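A minimal sketch of where those settings usually go, assuming NiFi's embedded ZooKeeper and a default install path (both are assumptions; adjust for your environment):

# Assumed location of the embedded ZooKeeper config; adjust for your install
ZK_CONF=/opt/nifi/conf/zookeeper.properties

# Keep only the 3 most recent snapshots and run the purge task every hour
cat >> "$ZK_CONF" <<'EOF'
autopurge.purgeInterval=1
autopurge.snapRetainCount=3
EOF

# Restart NiFi so the embedded ZooKeeper picks up the change
/opt/nifi/bin/nifi.sh restart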
03-17-2017
02:09 PM
@Matt, thanks. All nodes in my cluster have been granted "modify the data", and I've already successfully emptied the queue. Next time, I'll try setting an expiration time.
03-16-2017
04:01 PM
@Matt Clarke: thanks 🙂
03-15-2017
01:05 PM
@mayki wogno Same question as this thread:
https://community.hortonworks.com/questions/88962/nifi-processor-not-the-most-up-to-date.html
03-15-2017
12:31 PM
@mayki wogno Are you issuing commands against the REST API, or are you trying to make a change within the UI when this occurs? It sounds like multiple changes are being made against the same component at the same time. Each component has a revision number so that two people can't change the exact same component simultaneously. When a second change is applied using the same revision as the first (successful) request, you get these responses. Two ways this can occur:
1. Two authenticated users are changing the configuration of the same processor. User 1 hits apply and that change is applied. User 2 then hits apply and gets a conflict response from the first node that receives the request.
2. Multiple REST API calls are made against the same component without updating the revision number in the subsequent calls.
As far as the node going down: do you mean you lose the UI and have to refresh the browser, or does the cluster go down completely, forcing you to restart nodes to get them to rejoin the cluster? Thanks, Matt
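A minimal sketch of the revision handshake this describes, assuming token authentication against the NiFi REST API; the host, processor id, clientId, and the renamed property are placeholders:

# Placeholders: set these for your environment
HOST=https://hostname/nifi-api
PROC_ID=<processor-uuid>

# 1. Fetch the processor; the response JSON includes its current "revision"
curl -sk -H "Authorization: Bearer $TOKEN" "$HOST/processors/$PROC_ID" -o proc.json

# 2. Reuse that revision in the update request; sending a stale revision is
#    what produces the conflict response described above
REV=$(python -c 'import json; print(json.load(open("proc.json"))["revision"]["version"])')
curl -sk -X PUT -H "Authorization: Bearer $TOKEN" -H "Content-Type: application/json" \
  -d "{\"revision\":{\"version\":$REV,\"clientId\":\"example-client\"},\"component\":{\"id\":\"$PROC_ID\",\"name\":\"renamed-processor\"}}" \
  "$HOST/processors/$PROC_ID"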
03-13-2017
04:25 PM
1 Kudo
This is a known issue when there are multiple processors with different principals. The JIRA is here and the fix just got merged to master: https://issues.apache.org/jira/browse/NIFI-3520
05-27-2019
09:48 AM
@Sumit Das in my case, the problem was that Hive was not properly configured to support streaming. Basically, transactions must be enabled, but some other properties must be set as well. More info here: https://community.hortonworks.com/articles/49949/test-7.html The table must also meet some conditions (stored as ORC, transactional, bucketed).
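A rough sketch of the kind of setup that is usually needed; the property names are the standard Hive ACID settings, while the connection URL and table definition below are just placeholders:

# Hive ACID settings, normally set in hive-site.xml (e.g. via Ambari):
#   hive.support.concurrency=true
#   hive.txn.manager=org.apache.hadoop.hive.ql.lockmgr.DbTxnManager
#   hive.compactor.initiator.on=true
#   hive.compactor.worker.threads=1

# Placeholder table that meets the streaming conditions: ORC, bucketed, transactional
beeline -u "$HIVE_JDBC_URL" -e "
CREATE TABLE streaming_target (id INT, msg STRING)
PARTITIONED BY (dt STRING)
CLUSTERED BY (id) INTO 4 BUCKETS
STORED AS ORC
TBLPROPERTIES ('transactional'='true');"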
03-10-2017
10:36 PM
@mayki wogno Just try deleting the Oozie auth token to see if it helps: rm ~/.oozie-auth-token