Created 01-30-2016 10:53 AM
Dear Experts,
I installed HDF (nifi-1.1.1.0-12) as user nifi (group hadoop) under /opt/nifi-1.1.1.0-12.
Starting/Stopping the nifi service from bash works fine.
Afterwards I installed the Ambari Service for Nifi as outlined on https://github.com/abajwa-hw/ambari-nifi-service
Unfortunately, whenever I start Nifi under Ambari it goes down without showing an error in '/var/lib/ambari-agent/data/errors-xxxx.txt'.
The stdout of Ambari looks like the following:
---------------------------
stdout:
2016-01-30 10:31:01,040 - The hadoop conf dir /usr/hdp/current/hadoop-client/conf exists, will call conf-select on it for version 2.3.4.0-3485
2016-01-30 10:31:01,040 - Checking if need to create versioned conf dir /etc/hadoop/2.3.4.0-3485/0
2016-01-30 10:31:01,040 - call['conf-select create-conf-dir --package hadoop --stack-version 2.3.4.0-3485 --conf-version 0'] {'logoutput': False, 'sudo': True, 'quiet': False, 'stderr': -1}
2016-01-30 10:31:01,061 - call returned (1, '/etc/hadoop/2.3.4.0-3485/0 exist already', '')
2016-01-30 10:31:01,062 - checked_call['conf-select set-conf-dir --package hadoop --stack-version 2.3.4.0-3485 --conf-version 0'] {'logoutput': False, 'sudo': True, 'quiet': False}
2016-01-30 10:31:01,082 - checked_call returned (0, '/usr/hdp/2.3.4.0-3485/hadoop/conf -> /etc/hadoop/2.3.4.0-3485/0')
2016-01-30 10:31:01,082 - Ensuring that hadoop has the correct symlink structure
2016-01-30 10:31:01,083 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2016-01-30 10:31:01,192 - The hadoop conf dir /usr/hdp/current/hadoop-client/conf exists, will call conf-select on it for version 2.3.4.0-3485
2016-01-30 10:31:01,192 - Checking if need to create versioned conf dir /etc/hadoop/2.3.4.0-3485/0
2016-01-30 10:31:01,192 - call['conf-select create-conf-dir --package hadoop --stack-version 2.3.4.0-3485 --conf-version 0'] {'logoutput': False, 'sudo': True, 'quiet': False, 'stderr': -1}
2016-01-30 10:31:01,214 - call returned (1, '/etc/hadoop/2.3.4.0-3485/0 exist already', '')
2016-01-30 10:31:01,214 - checked_call['conf-select set-conf-dir --package hadoop --stack-version 2.3.4.0-3485 --conf-version 0'] {'logoutput': False, 'sudo': True, 'quiet': False}
2016-01-30 10:31:01,235 - checked_call returned (0, '/usr/hdp/2.3.4.0-3485/hadoop/conf -> /etc/hadoop/2.3.4.0-3485/0')
2016-01-30 10:31:01,235 - Ensuring that hadoop has the correct symlink structure
2016-01-30 10:31:01,235 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2016-01-30 10:31:01,237 - Group['hadoop'] {}
2016-01-30 10:31:01,237 - Group['nifi'] {}
2016-01-30 10:31:01,238 - Group['users'] {}
2016-01-30 10:31:01,238 - User['zookeeper'] {'gid': 'hadoop', 'groups': [u'hadoop']}
2016-01-30 10:31:01,238 - User['ams'] {'gid': 'hadoop', 'groups': [u'hadoop']}
2016-01-30 10:31:01,239 - User['ambari-qa'] {'gid': 'hadoop', 'groups': [u'users']}
2016-01-30 10:31:01,239 - User['kafka'] {'gid': 'hadoop', 'groups': [u'hadoop']}
2016-01-30 10:31:01,240 - User['hdfs'] {'gid': 'hadoop', 'groups': [u'hadoop']}
2016-01-30 10:31:01,240 - User['yarn'] {'gid': 'hadoop', 'groups': [u'hadoop']}
2016-01-30 10:31:01,241 - User['nifi'] {'gid': 'hadoop', 'groups': [u'hadoop']}
2016-01-30 10:31:01,242 - User['mapred'] {'gid': 'hadoop', 'groups': [u'hadoop']}
2016-01-30 10:31:01,242 - User['hbase'] {'gid': 'hadoop', 'groups': [u'hadoop']}
2016-01-30 10:31:01,243 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2016-01-30 10:31:01,244 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'}
2016-01-30 10:31:01,248 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] due to not_if
2016-01-30 10:31:01,248 - Directory['/tmp/hbase-hbase'] {'owner': 'hbase', 'recursive': True, 'mode': 0775, 'cd_access': 'a'}
2016-01-30 10:31:01,249 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2016-01-30 10:31:01,250 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase'] {'not_if': '(test $(id -u hbase) -gt 1000) || (false)'}
2016-01-30 10:31:01,254 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase'] due to not_if
2016-01-30 10:31:01,254 - Group['hdfs'] {'ignore_failures': False}
2016-01-30 10:31:01,254 - User['hdfs'] {'ignore_failures': False, 'groups': [u'hadoop', u'hdfs']}
2016-01-30 10:31:01,255 - Directory['/etc/hadoop'] {'mode': 0755}
2016-01-30 10:31:01,266 - File['/usr/hdp/current/hadoop-client/conf/hadoop-env.sh'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'}
2016-01-30 10:31:01,267 - Directory['/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 0777}
2016-01-30 10:31:01,277 - Execute[('setenforce', '0')] {'not_if': '(! which getenforce ) || (which getenforce && getenforce | grep -q Disabled)', 'sudo': True, 'only_if': 'test -f /selinux/enforce'}
2016-01-30 10:31:01,285 - Skipping Execute[('setenforce', '0')] due to only_if
2016-01-30 10:31:01,285 - Directory['/var/log/hadoop'] {'owner': 'root', 'mode': 0775, 'group': 'hadoop', 'recursive': True, 'cd_access': 'a'}
2016-01-30 10:31:01,287 - Directory['/var/run/hadoop'] {'owner': 'root', 'group': 'root', 'recursive': True, 'cd_access': 'a'}
2016-01-30 10:31:01,288 - Directory['/tmp/hadoop-hdfs'] {'owner': 'hdfs', 'recursive': True, 'cd_access': 'a'}
2016-01-30 10:31:01,291 - File['/usr/hdp/current/hadoop-client/conf/commons-logging.properties'] {'content': Template('commons-logging.properties.j2'), 'owner': 'hdfs'}
2016-01-30 10:31:01,292 - File['/usr/hdp/current/hadoop-client/conf/health_check'] {'content': Template('health_check.j2'), 'owner': 'hdfs'}
2016-01-30 10:31:01,293 - File['/usr/hdp/current/hadoop-client/conf/log4j.properties'] {'content': ..., 'owner': 'hdfs', 'group': 'hadoop', 'mode': 0644}
2016-01-30 10:31:01,300 - File['/usr/hdp/current/hadoop-client/conf/hadoop-metrics2.properties'] {'content': Template('hadoop-metrics2.properties.j2'), 'owner': 'hdfs'}
2016-01-30 10:31:01,301 - File['/usr/hdp/current/hadoop-client/conf/task-log4j.properties'] {'content': StaticFile('task-log4j.properties'), 'mode': 0755}
2016-01-30 10:31:01,305 - File['/etc/hadoop/conf/topology_mappings.data'] {'owner': 'hdfs', 'content': Template('topology_mappings.data.j2'), 'only_if': 'test -d /etc/hadoop/conf', 'group': 'hadoop'}
2016-01-30 10:31:01,308 - File['/etc/hadoop/conf/topology_script.py'] {'content': StaticFile('topology_script.py'), 'only_if': 'test -d /etc/hadoop/conf', 'mode': 0755}
2016-01-30 10:31:01,488 - File['/opt/nifi-1.1.1.0-12/conf/nifi.properties'] {'owner': 'nifi', 'content': InlineTemplate(...), 'group': 'nifi'}
2016-01-30 10:31:01,491 - File['/opt/nifi-1.1.1.0-12/conf/bootstrap.conf'] {'owner': 'nifi', 'content': InlineTemplate(...), 'group': 'nifi'}
2016-01-30 10:31:01,495 - File['/opt/nifi-1.1.1.0-12/conf/logback.xml'] {'owner': 'nifi', 'content': InlineTemplate(...), 'group': 'nifi'}
2016-01-30 10:31:01,496 - Execute['echo pid file /var/run/nifi/nifi.pid'] {}
2016-01-30 10:31:01,498 - Execute['echo JAVA_HOME=/usr/jdk64/jdk1.8.0_60'] {}
2016-01-30 10:31:01,501 - Execute['export JAVA_HOME=/usr/jdk64/jdk1.8.0_60;/opt/nifi-1.1.1.0-12/bin/nifi.sh start >> /var/log/nifi/nifi-setup.log'] {'user': 'nifi'}
2016-01-30 10:31:04,558 - Execute['cat /opt/nifi-1.1.1.0-12/bin/nifi.pid | grep pid | sed 's/pid=\(\.*\)/\1/' > /var/run/nifi/nifi.pid'] {}
2016-01-30 10:31:04,567 - Execute['chown nifi:nifi /var/run/nifi/nifi.pid'] {}
---------------------------
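The last two Execute steps above copy the pid out of NiFi's own pid file to where Ambari tracks it. In isolation, that pipeline behaves like this (a minimal sketch against a throwaway temp file; "pid=12345" is an illustrative line in the format the grep/sed expects, the real file being /opt/nifi-1.1.1.0-12/bin/nifi.pid):

```shell
# Simulate the pid handoff the Ambari service script performs.
tmpdir=$(mktemp -d)
printf 'pid=12345\n' > "$tmpdir/nifi.pid"   # stand-in for bin/nifi.pid
# Same pipeline as the Execute step: keep the pid line, strip the "pid=" prefix.
grep pid "$tmpdir/nifi.pid" | sed 's/pid=\(\.*\)/\1/' > "$tmpdir/nifi.pid.out"
cat "$tmpdir/nifi.pid.out"   # prints 12345
rm -rf "$tmpdir"
```

So if /var/run/nifi/nifi.pid ends up empty or missing, either bin/nifi.pid was never written (NiFi died at startup) or the copy step ran before NiFi finished writing it.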
Again, when I start nifi from bash it works as expected.
Any help on how to fix this or better trace the problem is highly appreciated!
br,
Rainer
Created 01-30-2016 06:56 PM
@Rainer Geissendoerfer Others have encountered similar issues with the Nifi service (usually on Java 8) as well, but I have not been able to consistently reproduce it yet. From what I have seen of this issue, if you run the steps to start Nifi manually via the CLI, it works (but for some reason not from Ambari). You can try the manual commands below to start Nifi and populate the pid file so that Ambari can track its status:
su - nifi
export JAVA_HOME=/usr/java/default
/opt/nifi-1.1.1.0-12/bin/nifi.sh start >> /var/log/nifi/nifi-setup.log
cat /opt/nifi-1.1.1.0-12/bin/nifi.pid | grep pid | sed 's/pid=\(\.*\)/\1/' > /var/run/nifi/nifi.pid
# run below as root
chown nifi:nifi /var/run/nifi/nifi.pid
If you are encountering the same issue, could you provide the details below so we can try to reproduce it:
Created 01-30-2016 03:42 PM
Let's do one thing at a time. Stop the version initiated with bash. Clean up the pid file and let Ambari start the service, and watch for new errors. @Rainer Geissendoerfer
Created 01-30-2016 03:51 PM
I think I am messing up things here 🙂 ... I initially put the user nifi in the group hadoop ... and did the nifi install ... it looks like Ambari requires user "nifi" to be in group "nifi" ... I will start to change a couple of access rights and provide further feedback when it's done ...
Created 01-30-2016 05:22 PM
OK ... here is where I am ... I changed the primary group of user nifi to nifi ... I stopped nifi through "nifi.sh stop"
I cleared /var/log/nifi and /var/run/nifi
when I start nifi now from ambari there is nothing in stderr and I get the following feedback in ambari:
stderr: /var/lib/ambari-agent/data/errors-1806.txt (None)
stdout: /var/lib/ambari-agent/data/output-1806.txt
2016-01-30 16:31:57,093 - The hadoop conf dir /usr/hdp/current/hadoop-client/conf exists, will call conf-select on it for version 2.3.4.0-3485
2016-01-30 16:31:57,094 - Checking if need to create versioned conf dir /etc/hadoop/2.3.4.0-3485/0
2016-01-30 16:31:57,094 - call['conf-select create-conf-dir --package hadoop --stack-version 2.3.4.0-3485 --conf-version 0'] {'logoutput': False, 'sudo': True, 'quiet': False, 'stderr': -1}
2016-01-30 16:31:57,115 - call returned (1, '/etc/hadoop/2.3.4.0-3485/0 exist already', '')
2016-01-30 16:31:57,115 - checked_call['conf-select set-conf-dir --package hadoop --stack-version 2.3.4.0-3485 --conf-version 0'] {'logoutput': False, 'sudo': True, 'quiet': False}
2016-01-30 16:31:57,135 - checked_call returned (0, '/usr/hdp/2.3.4.0-3485/hadoop/conf -> /etc/hadoop/2.3.4.0-3485/0')
2016-01-30 16:31:57,136 - Ensuring that hadoop has the correct symlink structure
2016-01-30 16:31:57,136 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2016-01-30 16:31:57,244 - The hadoop conf dir /usr/hdp/current/hadoop-client/conf exists, will call conf-select on it for version 2.3.4.0-3485
2016-01-30 16:31:57,244 - Checking if need to create versioned conf dir /etc/hadoop/2.3.4.0-3485/0
2016-01-30 16:31:57,245 - call['conf-select create-conf-dir --package hadoop --stack-version 2.3.4.0-3485 --conf-version 0'] {'logoutput': False, 'sudo': True, 'quiet': False, 'stderr': -1}
2016-01-30 16:31:57,265 - call returned (1, '/etc/hadoop/2.3.4.0-3485/0 exist already', '')
2016-01-30 16:31:57,266 - checked_call['conf-select set-conf-dir --package hadoop --stack-version 2.3.4.0-3485 --conf-version 0'] {'logoutput': False, 'sudo': True, 'quiet': False}
2016-01-30 16:31:57,286 - checked_call returned (0, '/usr/hdp/2.3.4.0-3485/hadoop/conf -> /etc/hadoop/2.3.4.0-3485/0')
2016-01-30 16:31:57,287 - Ensuring that hadoop has the correct symlink structure
2016-01-30 16:31:57,287 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2016-01-30 16:31:57,288 - Group['hadoop'] {}
2016-01-30 16:31:57,289 - Group['nifi'] {}
2016-01-30 16:31:57,289 - Group['users'] {}
2016-01-30 16:31:57,289 - User['zookeeper'] {'gid': 'hadoop', 'groups': [u'hadoop']}
2016-01-30 16:31:57,290 - User['ams'] {'gid': 'hadoop', 'groups': [u'hadoop']}
2016-01-30 16:31:57,290 - User['ambari-qa'] {'gid': 'hadoop', 'groups': [u'users']}
2016-01-30 16:31:57,291 - User['kafka'] {'gid': 'hadoop', 'groups': [u'hadoop']}
2016-01-30 16:31:57,291 - User['hdfs'] {'gid': 'hadoop', 'groups': [u'hadoop']}
2016-01-30 16:31:57,292 - User['yarn'] {'gid': 'hadoop', 'groups': [u'hadoop']}
2016-01-30 16:31:57,292 - User['nifi'] {'gid': 'hadoop', 'groups': [u'hadoop']}
2016-01-30 16:31:57,293 - User['mapred'] {'gid': 'hadoop', 'groups': [u'hadoop']}
2016-01-30 16:31:57,293 - User['hbase'] {'gid': 'hadoop', 'groups': [u'hadoop']}
2016-01-30 16:31:57,294 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2016-01-30 16:31:57,295 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'}
2016-01-30 16:31:57,299 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] due to not_if
2016-01-30 16:31:57,299 - Directory['/tmp/hbase-hbase'] {'owner': 'hbase', 'recursive': True, 'mode': 0775, 'cd_access': 'a'}
2016-01-30 16:31:57,300 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2016-01-30 16:31:57,300 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase'] {'not_if': '(test $(id -u hbase) -gt 1000) || (false)'}
2016-01-30 16:31:57,304 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase'] due to not_if
2016-01-30 16:31:57,304 - Group['hdfs'] {'ignore_failures': False}
2016-01-30 16:31:57,305 - User['hdfs'] {'ignore_failures': False, 'groups': [u'hadoop', u'hdfs']}
2016-01-30 16:31:57,305 - Directory['/etc/hadoop'] {'mode': 0755}
2016-01-30 16:31:57,317 - File['/usr/hdp/current/hadoop-client/conf/hadoop-env.sh'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'}
2016-01-30 16:31:57,317 - Directory['/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 0777}
2016-01-30 16:31:57,328 - Execute[('setenforce', '0')] {'not_if': '(! which getenforce ) || (which getenforce && getenforce | grep -q Disabled)', 'sudo': True, 'only_if': 'test -f /selinux/enforce'}
2016-01-30 16:31:57,335 - Skipping Execute[('setenforce', '0')] due to only_if
2016-01-30 16:31:57,335 - Directory['/var/log/hadoop'] {'owner': 'root', 'mode': 0775, 'group': 'hadoop', 'recursive': True, 'cd_access': 'a'}
2016-01-30 16:31:57,337 - Directory['/var/run/hadoop'] {'owner': 'root', 'group': 'root', 'recursive': True, 'cd_access': 'a'}
2016-01-30 16:31:57,337 - Directory['/tmp/hadoop-hdfs'] {'owner': 'hdfs', 'recursive': True, 'cd_access': 'a'}
2016-01-30 16:31:57,341 - File['/usr/hdp/current/hadoop-client/conf/commons-logging.properties'] {'content': Template('commons-logging.properties.j2'), 'owner': 'hdfs'}
2016-01-30 16:31:57,342 - File['/usr/hdp/current/hadoop-client/conf/health_check'] {'content': Template('health_check.j2'), 'owner': 'hdfs'}
2016-01-30 16:31:57,343 - File['/usr/hdp/current/hadoop-client/conf/log4j.properties'] {'content': ..., 'owner': 'hdfs', 'group': 'hadoop', 'mode': 0644}
2016-01-30 16:31:57,350 - File['/usr/hdp/current/hadoop-client/conf/hadoop-metrics2.properties'] {'content': Template('hadoop-metrics2.properties.j2'), 'owner': 'hdfs'}
2016-01-30 16:31:57,350 - File['/usr/hdp/current/hadoop-client/conf/task-log4j.properties'] {'content': StaticFile('task-log4j.properties'), 'mode': 0755}
2016-01-30 16:31:57,354 - File['/etc/hadoop/conf/topology_mappings.data'] {'owner': 'hdfs', 'content': Template('topology_mappings.data.j2'), 'only_if': 'test -d /etc/hadoop/conf', 'group': 'hadoop'}
2016-01-30 16:31:57,357 - File['/etc/hadoop/conf/topology_script.py'] {'content': StaticFile('topology_script.py'), 'only_if': 'test -d /etc/hadoop/conf', 'mode': 0755}
2016-01-30 16:31:57,536 - File['/opt/nifi-1.1.1.0-12/conf/nifi.properties'] {'owner': 'nifi', 'content': InlineTemplate(...), 'group': 'nifi'}
2016-01-30 16:31:57,539 - File['/opt/nifi-1.1.1.0-12/conf/bootstrap.conf'] {'owner': 'nifi', 'content': InlineTemplate(...), 'group': 'nifi'}
2016-01-30 16:31:57,543 - File['/opt/nifi-1.1.1.0-12/conf/logback.xml'] {'owner': 'nifi', 'content': InlineTemplate(...), 'group': 'nifi'}
2016-01-30 16:31:57,543 - Execute['echo pid file /var/run/nifi/nifi.pid'] {}
2016-01-30 16:31:57,546 - Execute['echo JAVA_HOME=/usr/jdk64/jdk1.8.0_60'] {}
2016-01-30 16:31:57,549 - Execute['export JAVA_HOME=/usr/jdk64/jdk1.8.0_60;/opt/nifi-1.1.1.0-12/bin/nifi.sh start >> /var/log/nifi/nifi-setup.log'] {'user': 'nifi'}
2016-01-30 16:32:00,606 - Execute['cat /opt/nifi-1.1.1.0-12/bin/nifi.pid | grep pid | sed 's/pid=\(\.*\)/\1/' > /var/run/nifi/nifi.pid'] {}
2016-01-30 16:32:00,625 - Execute['chown nifi:nifi /var/run/nifi/nifi.pid'] {}
Still, the service goes down after some green blinking ... when I go to bash, there is also no nifi service running ...
in /var/log/nifi there are 4 log files
nifi-app.log nifi-bootstrap.log nifi-setup.log nifi-user.log
---------------------------------------------------
[nifi@nifi1n1 nifi]$ cat /var/log/nifi/*
2016-01-30 17:10:40,677 INFO [main] org.apache.nifi.bootstrap.RunNiFi No Bootstrap Notification Services configured.
2016-01-30 17:10:40,679 INFO [main] org.apache.nifi.bootstrap.Command Apache NiFi is not running
Java home: /usr/jdk64/jdk1.8.0_60
NiFi home: /opt/nifi-1.1.1.0-12
Bootstrap Config File: /opt/nifi-1.1.1.0-12/conf/bootstrap.conf
[nifi@nifi1n1 nifi]$
----------------------------------------------------------
The /var/run/nifi directory is empty.
Access rights look appropriate for /var/log/nifi and /var/run/nifi:
[nifi@nifi1n1 log]$ ls -lisa /var/log | grep nifi
8621255 0 drwxr-xr-x. 2 nifi nifi 91 Jan 30 17:10 nifi
[nifi@nifi1n1 run]$ cd /var/run
[nifi@nifi1n1 run]$ ls -lisa | grep nifi
62714 0 drwxr-xr-x. 2 nifi hadoop 40 Jan 30 17:11 nifi
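Note that the two listings above disagree on group ownership: /var/log/nifi is nifi:nifi while /var/run/nifi is nifi:hadoop. A small helper makes such mismatches easy to spot (check_owner is a made-up function for illustration; on the real host, point it at /var/log/nifi and /var/run/nifi):

```shell
# Print "<owner> <group> <path>" for a directory, so ownership
# mismatches between service dirs stand out at a glance.
# Uses GNU stat syntax with a BSD stat fallback.
check_owner() {
  dir=$1
  stat -c '%U %G %n' "$dir" 2>/dev/null || stat -f '%Su %Sg %N' "$dir"
}

# Demo against a throwaway directory owned by the current user:
tmpdir=$(mktemp -d)
check_owner "$tmpdir"
rm -rf "$tmpdir"
```

On the problem host you would run `check_owner /var/log/nifi; check_owner /var/run/nifi` and expect both to report `nifi nifi`.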
If I execute
export JAVA_HOME=/usr/jdk64/jdk1.8.0_60;/opt/nifi-1.1.1.0-12/bin/nifi.sh start >> /var/log/nifi/nifi-setup.log
from bash, the service comes up.
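A common reason a service starts from an interactive shell but dies under an agent is that the agent's child process never sources the profile scripts that an interactive `su - nifi` login does, so variables set only there are silently missing. A small demonstration of the effect (DEMO_VAR is a made-up variable standing in for something like JAVA_HOME or PATH entries):

```shell
# A variable exported in the current shell is inherited by ordinary
# child shells, but disappears when the child runs with a scrubbed
# environment, as an init system or management agent may do.
export DEMO_VAR=from_login_shell
sh -c 'echo "inherited: ${DEMO_VAR:-unset}"'     # inherited: from_login_shell
env -i sh -c 'echo "scrubbed: ${DEMO_VAR:-unset}"'  # scrubbed: unset
```

Comparing `env` output from the working interactive session against what the nifi user sees when launched non-interactively could show exactly which variables differ.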
... and I am getting crazy 🙂
Any advice???
Created 01-30-2016 05:53 PM
You can try this:
Delete the service by running this on the Ambari server:
curl --user admin:admin -i -H "X-Requested-By: ambari" -X DELETE http://`hostname -f`:8080/api/v1/clusters/clustername/services/NIFI
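The same URL pattern appears again below for stopping the service; a tiny helper keeps the pieces straight (ambari_service_url is a hypothetical convenience function, and "clustername" remains a placeholder for your actual cluster name). Querying the URL with a plain GET first is a safe way to confirm the service exists before issuing the DELETE:

```shell
# Build the Ambari REST endpoint for a service on the default port 8080.
ambari_service_url() {
  host=$1; cluster=$2; service=$3
  echo "http://${host}:8080/api/v1/clusters/${cluster}/services/${service}"
}

# Example: print the endpoint the DELETE above targets.
ambari_service_url "$(hostname -f)" clustername NIFI
```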
Clean up everything under /var/log/nifi and /var/run/nifi,
and try to add it back using Ambari, after making sure the nifi directory exists under /var/log and /var/run with proper permissions.
or
Wait for the official release of the Ambari service. 🙂
Created 01-30-2016 06:04 PM
@Rainer Geissendoerfer You may have to run this before deleting, in case the delete complains.
On the Ambari server:
curl --user admin:admin -i -H "X-Requested-By: ambari" -X PUT http://`hostname -f`:8080/api/v1/clusters/HDPTEST/services/NIFI -d '{"RequestInfo": {"context" :"Stop NIFI via REST"}, "Body": {"ServiceInfo": {"state": "INSTALLED"}}}'
Created 01-30-2016 05:53 PM
What group does /var/run/nifi belong to? @Rainer Geissendoerfer I suggest doing everything from scratch with only the Ambari service and a new copy of the sandbox.
Created 01-31-2016 06:20 PM
I've experienced the same issue on three different installs: CentOS 6 and 6.5, Java 8, HDP 2.3.2 and 2.3.4, sandbox and cluster. On the clusters I was installing NiFi on a separate node.
Created 02-01-2016 07:15 AM
@Henry Sowell could you check that, on the node where Nifi is set up, the Java location Ambari starts Nifi with (usually /usr/java/default) exists and points to the appropriate version of Java?
Created 02-01-2016 12:53 AM
I am experiencing the same issue in my cluster. I chose to stick with manual setup and wait for the official release of the Ambari service.