Member since: 01-27-2016
Posts: 46
Kudos Received: 40
Solutions: 4

My Accepted Solutions
Title | Views | Posted
---|---|---
 | 1921 | 10-24-2016 05:39 PM
 | 1535 | 03-30-2016 06:02 PM
 | 2512 | 02-28-2016 04:37 PM
 | 4838 | 02-07-2016 07:57 AM
02-05-2016
05:31 PM
1 Kudo
After a cluster restart it worked ... will continue to trace why that happened ...
02-05-2016
04:01 PM
1 Kudo
Dear All, I am facing some strange behaviour.

My setup: a 3-node YARN cluster (8 GB memory per node, CentOS 7, AWS installation). When firing up a simple M/R job through Pig, the application starts with 2 containers but stays "pending" while waiting for the 3rd container from the NodeManager. When I look at the 3 nodes in Unix I see the following: my mappers are set up to consume 1.5 GB containers, my reducers 2 GB.

What I see immediately is that there is obviously no free space available for starting the 3rd container, but there is lots of space available in buff/cache, which I was hoping YARN would also use for firing up containers. Am I missing something?

Br, Rainer
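For context on the buff/cache question: YARN hands out containers against the NodeManager's configured capacity, not against the free memory the OS reports, so Linux buff/cache plays no role in its accounting. A minimal sketch of the settings involved; the per-node values are illustrative assumptions, not this cluster's actual configuration:

---------------------------
# yarn-site.xml (illustrative values)
yarn.nodemanager.resource.memory-mb = 6144    # total memory YARN may allocate on each node
yarn.scheduler.minimum-allocation-mb = 512    # requests are rounded up to multiples of this

# mapred-site.xml (matching the post)
mapreduce.map.memory.mb = 1536                # the 1.5 GB mapper containers
mapreduce.reduce.memory.mb = 2048             # the 2 GB reducer containers

# With 6144 MB per node, YARN's own bookkeeping decides whether another
# container fits: 2 x 1536 + 2048 = 5120 MB used leaves 1024 MB headroom,
# so no further 1536+ MB container starts on that node, regardless of
# how much the OS shows in buff/cache.
---------------------------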
Labels:
- Apache YARN
02-03-2016
12:43 PM
1 Kudo
Thanks Neeraj, that worked. If I now create a table in Hive, it has the owner "admin" and group "hdfs" on HDFS. As I understand it, my Ambari server is initialized by user "root" and group "root". What are the mechanics behind this user transfer, and how can I manipulate it, e.g. if I want a different owner/group than "admin/hdfs" on HDFS for my freshly created table? best regards, Rainer
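One way to end up with a different owner/group is simply to change it after creation; a minimal sketch, assuming the default HDP warehouse path and a hypothetical table name mytable:

---------------------------
# run as the hdfs superuser; the table path is hypothetical
hdfs dfs -chown -R someuser:somegroup /apps/hive/warehouse/mytable
hdfs dfs -ls /apps/hive/warehouse    # verify owner and group
---------------------------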
02-03-2016
10:25 AM
1 Kudo
Dear All, whenever I launch the Hive view I get the following error:

----------------
H060 Unable to open Hive session: org.apache.thrift.protocol.TProtocolException: Required field 'serverProtocolVersion' is unset! Struct:TOpenSessionResp(status:TStatus(statusCode:ERROR_STATUS, infoMessages:[*org.apache.hive.service.cli.HiveSQLException:Failed to open new session: java.lang.RuntimeException: java.lang.RuntimeException: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.authorize.AuthorizationException): Unauthorized connection for super-user: hive from IP 10.0.202.157:13:12, org.apache.hive.service.cli.session.SessionManager:openSession:SessionManager.java:266, org.apache.hive.service.cli.CLIService:openSessionWithImpersonation:CLIService.java:202, org.apache.hive.service.cli.thrift.ThriftCLIService:getSessionHandle:ThriftCLIService.java:402, org.apache.hive.service.cli.thrift.ThriftCLIService:OpenSession:ThriftCLIService.java:297, org.apache.hive.service.cli.thrift.TCLIService$Processor$OpenSession:getResult:TCLIService.java:1253, org.apache.hive.service.cli.thrift.TCLIService$Processor$OpenSession:getResult:TCLIService.java:1238, org.apache.thrift.ProcessFunction:process:ProcessFunction.java:39, org.apache.thrift.TBaseProcessor:process:TBaseProcessor.java:39, org.apache.hive.service.auth.TSetIpAddressProcessor:process:TSetIpAddressProcessor.java:56, org.apache.thrift.server.TThreadPoolServer$WorkerProcess:run:TThreadPoolServer.java:285, java.util.concurrent.ThreadPoolExecutor:runWorker:ThreadPoolExecutor.java:1142, java.util.concurrent.ThreadPoolExecutor$Worker:run:ThreadPoolExecutor.java:617, java.lang.Thread:run:Thread.java:745, *java.lang.RuntimeException:java.lang.RuntimeException: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.authorize.AuthorizationException): Unauthorized connection for super-user: hive from IP 10.0.202.157:21:8, org.apache.hive.service.cli.session.HiveSessionProxy:invoke:HiveSessionProxy.java:83, org.apache.hive.service.cli.session.HiveSessionProxy:access$000:HiveSessionProxy.java:36, org.apache.hive.service.cli.session.HiveSessionProxy$1:run:HiveSessionProxy.java:63, java.security.AccessController:doPrivileged:AccessController.java:-2, javax.security.auth.Subject:doAs:Subject.java:422, org.apache.hadoop.security.UserGroupInformation:doAs:UserGroupInformation.java:1657, org.apache.hive.service.cli.session.HiveSessionProxy:invoke:HiveSessionProxy.java:59, com.sun.proxy.$Proxy20:open::-1, org.apache.hive.service.cli.session.SessionManager:openSession:SessionManager.java:258, *java.lang.RuntimeException:org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.authorize.AuthorizationException): Unauthorized connection for super-user: hive from IP 10.0.202.157:26:5, org.apache.hadoop.hive.ql.session.SessionState:start:SessionState.java:494, org.apache.hive.service.cli.session.HiveSessionImpl:open:HiveSessionImpl.java:137, sun.reflect.GeneratedMethodAccessor11:invoke::-1, sun.reflect.DelegatingMethodAccessorImpl:invoke:DelegatingMethodAccessorImpl.java:43, java.lang.reflect.Method:invoke:Method.java:497, org.apache.hive.service.cli.session.HiveSessionProxy:invoke:HiveSessionProxy.java:78, *org.apache.hadoop.ipc.RemoteException:Unauthorized connection for super-user: hive from IP 10.0.202.157:45:19, org.apache.hadoop.ipc.Client:call:Client.java:1427, org.apache.hadoop.ipc.Client:call:Client.java:1358, org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker:invoke:ProtobufRpcEngine.java:229, com.sun.proxy.$Proxy15:getFileInfo::-1, 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB:getFileInfo:ClientNamenodeProtocolTranslatorPB.java:771, sun.reflect.GeneratedMethodAccessor7:invoke::-1, sun.reflect.DelegatingMethodAccessorImpl:invoke:DelegatingMethodAccessorImpl.java:43, java.lang.reflect.Method:invoke:Method.java:497, org.apache.hadoop.io.retry.RetryInvocationHandler:invokeMethod:RetryInvocationHandler.java:252, org.apache.hadoop.io.retry.RetryInvocationHandler:invoke:RetryInvocationHandler.java:104, com.sun.proxy.$Proxy16:getFileInfo::-1, org.apache.hadoop.hdfs.DFSClient:getFileInfo:DFSClient.java:2116, org.apache.hadoop.hdfs.DistributedFileSystem$22:doCall:DistributedFileSystem.java:1315, org.apache.hadoop.hdfs.DistributedFileSystem$22:doCall:DistributedFileSystem.java:1311, org.apache.hadoop.fs.FileSystemLinkResolver:resolve:FileSystemLinkResolver.java:81, org.apache.hadoop.hdfs.DistributedFileSystem:getFileStatus:DistributedFileSystem.java:1311, org.apache.hadoop.fs.FileSystem:exists:FileSystem.java:1424, org.apache.hadoop.hive.ql.session.SessionState:createRootHDFSDir:SessionState.java:568, org.apache.hadoop.hive.ql.session.SessionState:createSessionDirs:SessionState.java:526, org.apache.hadoop.hive.ql.session.SessionState:start:SessionState.java:480], errorCode:0, errorMessage:Failed to open new session: java.lang.RuntimeException: java.lang.RuntimeException: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.authorize.AuthorizationException): Unauthorized connection for super-user: hive from IP 10.0.202.157), serverProtocolVersion:null)
-----------------------

I have the following proxy users in custom core-site:

hadoop.proxyuser.hcat.groups = *
hadoop.proxyuser.hcat.hosts = *
hadoop.proxyuser.hdfs.groups = *
hadoop.proxyuser.hdfs.hosts = *
hadoop.proxyuser.hive.groups = *
hadoop.proxyuser.hive.hosts = *
hadoop.proxyuser.root.groups = *
hadoop.proxyuser.root.hosts = *

I also granted "admin" permission to use the Hive view in Ambari. What am I missing? Any support is appreciated! br, Rainer
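For reference, the same proxyuser settings as they would appear in core-site.xml (only the hive entries shown for brevity; the hcat, hdfs, and root entries follow the same pattern):

---------------------------
<property>
  <name>hadoop.proxyuser.hive.hosts</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.hive.groups</name>
  <value>*</value>
</property>
---------------------------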
Labels:
- Apache Ambari
- Apache Hive
01-30-2016
05:22 PM
1 Kudo
OK ... here is where I am: I changed the primary group of user nifi to nifi, stopped NiFi through "nifi.sh stop", and cleared /var/log/nifi and /var/run/nifi. When I start NiFi now from Ambari there is nothing in stderr, and I get the following feedback in Ambari:

stderr: /var/lib/ambari-agent/data/errors-1806.txt
None

stdout: /var/lib/ambari-agent/data/output-1806.txt
2016-01-30 16:31:57,093 - The hadoop conf dir /usr/hdp/current/hadoop-client/conf exists, will call conf-select on it for version 2.3.4.0-3485
2016-01-30 16:31:57,094 - Checking if need to create versioned conf dir /etc/hadoop/2.3.4.0-3485/0
2016-01-30 16:31:57,094 - call['conf-select create-conf-dir --package hadoop --stack-version 2.3.4.0-3485 --conf-version 0'] {'logoutput': False, 'sudo': True, 'quiet': False, 'stderr': -1}
2016-01-30 16:31:57,115 - call returned (1, '/etc/hadoop/2.3.4.0-3485/0 exist already', '')
2016-01-30 16:31:57,115 - checked_call['conf-select set-conf-dir --package hadoop --stack-version 2.3.4.0-3485 --conf-version 0'] {'logoutput': False, 'sudo': True, 'quiet': False}
2016-01-30 16:31:57,135 - checked_call returned (0, '/usr/hdp/2.3.4.0-3485/hadoop/conf -> /etc/hadoop/2.3.4.0-3485/0')
2016-01-30 16:31:57,136 - Ensuring that hadoop has the correct symlink structure
2016-01-30 16:31:57,136 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2016-01-30 16:31:57,244 - The hadoop conf dir /usr/hdp/current/hadoop-client/conf exists, will call conf-select on it for version 2.3.4.0-3485
2016-01-30 16:31:57,244 - Checking if need to create versioned conf dir /etc/hadoop/2.3.4.0-3485/0
2016-01-30 16:31:57,245 - call['conf-select create-conf-dir --package hadoop --stack-version 2.3.4.0-3485 --conf-version 0'] {'logoutput': False, 'sudo': True, 'quiet': False, 'stderr': -1}
2016-01-30 16:31:57,265 - call returned (1, '/etc/hadoop/2.3.4.0-3485/0 exist already', '')
2016-01-30 16:31:57,266 - checked_call['conf-select set-conf-dir --package hadoop --stack-version 2.3.4.0-3485 --conf-version 0'] {'logoutput': False, 'sudo': True, 'quiet': False}
2016-01-30 16:31:57,286 - checked_call returned (0, '/usr/hdp/2.3.4.0-3485/hadoop/conf -> /etc/hadoop/2.3.4.0-3485/0')
2016-01-30 16:31:57,287 - Ensuring that hadoop has the correct symlink structure
2016-01-30 16:31:57,287 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2016-01-30 16:31:57,288 - Group['hadoop'] {}
2016-01-30 16:31:57,289 - Group['nifi'] {}
2016-01-30 16:31:57,289 - Group['users'] {}
2016-01-30 16:31:57,289 - User['zookeeper'] {'gid': 'hadoop', 'groups': [u'hadoop']}
2016-01-30 16:31:57,290 - User['ams'] {'gid': 'hadoop', 'groups': [u'hadoop']}
2016-01-30 16:31:57,290 - User['ambari-qa'] {'gid': 'hadoop', 'groups': [u'users']}
2016-01-30 16:31:57,291 - User['kafka'] {'gid': 'hadoop', 'groups': [u'hadoop']}
2016-01-30 16:31:57,291 - User['hdfs'] {'gid': 'hadoop', 'groups': [u'hadoop']}
2016-01-30 16:31:57,292 - User['yarn'] {'gid': 'hadoop', 'groups': [u'hadoop']}
2016-01-30 16:31:57,292 - User['nifi'] {'gid': 'hadoop', 'groups': [u'hadoop']}
2016-01-30 16:31:57,293 - User['mapred'] {'gid': 'hadoop', 'groups': [u'hadoop']}
2016-01-30 16:31:57,293 - User['hbase'] {'gid': 'hadoop', 'groups': [u'hadoop']}
2016-01-30 16:31:57,294 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2016-01-30 16:31:57,295 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'}
2016-01-30 16:31:57,299 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] due to not_if
2016-01-30 16:31:57,299 - Directory['/tmp/hbase-hbase'] {'owner': 'hbase', 'recursive': True, 'mode': 0775, 'cd_access': 'a'}
2016-01-30 16:31:57,300 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2016-01-30 16:31:57,300 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase'] {'not_if': '(test $(id -u hbase) -gt 1000) || (false)'}
2016-01-30 16:31:57,304 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase'] due to not_if
2016-01-30 16:31:57,304 - Group['hdfs'] {'ignore_failures': False}
2016-01-30 16:31:57,305 - User['hdfs'] {'ignore_failures': False, 'groups': [u'hadoop', u'hdfs']}
2016-01-30 16:31:57,305 - Directory['/etc/hadoop'] {'mode': 0755}
2016-01-30 16:31:57,317 - File['/usr/hdp/current/hadoop-client/conf/hadoop-env.sh'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'}
2016-01-30 16:31:57,317 - Directory['/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 0777}
2016-01-30 16:31:57,328 - Execute[('setenforce', '0')] {'not_if': '(! which getenforce ) || (which getenforce && getenforce | grep -q Disabled)', 'sudo': True, 'only_if': 'test -f /selinux/enforce'}
2016-01-30 16:31:57,335 - Skipping Execute[('setenforce', '0')] due to only_if
2016-01-30 16:31:57,335 - Directory['/var/log/hadoop'] {'owner': 'root', 'mode': 0775, 'group': 'hadoop', 'recursive': True, 'cd_access': 'a'}
2016-01-30 16:31:57,337 - Directory['/var/run/hadoop'] {'owner': 'root', 'group': 'root', 'recursive': True, 'cd_access': 'a'}
2016-01-30 16:31:57,337 - Directory['/tmp/hadoop-hdfs'] {'owner': 'hdfs', 'recursive': True, 'cd_access': 'a'}
2016-01-30 16:31:57,341 - File['/usr/hdp/current/hadoop-client/conf/commons-logging.properties'] {'content': Template('commons-logging.properties.j2'), 'owner': 'hdfs'}
2016-01-30 16:31:57,342 - File['/usr/hdp/current/hadoop-client/conf/health_check'] {'content': Template('health_check.j2'), 'owner': 'hdfs'}
2016-01-30 16:31:57,343 - File['/usr/hdp/current/hadoop-client/conf/log4j.properties'] {'content': ..., 'owner': 'hdfs', 'group': 'hadoop', 'mode': 0644}
2016-01-30 16:31:57,350 - File['/usr/hdp/current/hadoop-client/conf/hadoop-metrics2.properties'] {'content': Template('hadoop-metrics2.properties.j2'), 'owner': 'hdfs'}
2016-01-30 16:31:57,350 - File['/usr/hdp/current/hadoop-client/conf/task-log4j.properties'] {'content': StaticFile('task-log4j.properties'), 'mode': 0755}
2016-01-30 16:31:57,354 - File['/etc/hadoop/conf/topology_mappings.data'] {'owner': 'hdfs', 'content': Template('topology_mappings.data.j2'), 'only_if': 'test -d /etc/hadoop/conf', 'group': 'hadoop'}
2016-01-30 16:31:57,357 - File['/etc/hadoop/conf/topology_script.py'] {'content': StaticFile('topology_script.py'), 'only_if': 'test -d /etc/hadoop/conf', 'mode': 0755}
2016-01-30 16:31:57,536 - File['/opt/nifi-1.1.1.0-12/conf/nifi.properties'] {'owner': 'nifi', 'content': InlineTemplate(...), 'group': 'nifi'}
2016-01-30 16:31:57,539 - File['/opt/nifi-1.1.1.0-12/conf/bootstrap.conf'] {'owner': 'nifi', 'content': InlineTemplate(...), 'group': 'nifi'}
2016-01-30 16:31:57,543 - File['/opt/nifi-1.1.1.0-12/conf/logback.xml'] {'owner': 'nifi', 'content': InlineTemplate(...), 'group': 'nifi'}
2016-01-30 16:31:57,543 - Execute['echo pid file /var/run/nifi/nifi.pid'] {}
2016-01-30 16:31:57,546 - Execute['echo JAVA_HOME=/usr/jdk64/jdk1.8.0_60'] {}
2016-01-30 16:31:57,549 - Execute['export JAVA_HOME=/usr/jdk64/jdk1.8.0_60;/opt/nifi-1.1.1.0-12/bin/nifi.sh start >> /var/log/nifi/nifi-setup.log'] {'user': 'nifi'}
2016-01-30 16:32:00,606 - Execute['cat /opt/nifi-1.1.1.0-12/bin/nifi.pid | grep pid | sed 's/pid=\(\.*\)/\1/' > /var/run/nifi/nifi.pid'] {}
2016-01-30 16:32:00,625 - Execute['chown nifi:nifi /var/run/nifi/nifi.pid'] {}

Still, the service goes down after some green blinking ... and when I go to bash there is also no nifi service running. In /var/log/nifi there are 4 log files: nifi-app.log, nifi-bootstrap.log, nifi-setup.log, nifi-user.log

---------------------------------------------------
[nifi@nifi1n1 nifi]$ cat /var/log/nifi/*
2016-01-30 17:10:40,677 INFO [main] org.apache.nifi.bootstrap.RunNiFi No Bootstrap Notification Services configured.
2016-01-30 17:10:40,679 INFO [main] org.apache.nifi.bootstrap.Command Apache NiFi is not running
Java home: /usr/jdk64/jdk1.8.0_60
NiFi home: /opt/nifi-1.1.1.0-12
Bootstrap Config File: /opt/nifi-1.1.1.0-12/conf/bootstrap.conf
[nifi@nifi1n1 nifi]$
----------------------------------------------------------

The /var/run/nifi directory is empty. Access rights are set appropriately for /var/log/nifi and /var/run/nifi:

[nifi@nifi1n1 log]$ ls -lisa /var/log | grep nifi
8621255 0 drwxr-xr-x. 2 nifi nifi 91 Jan 30 17:10 nifi
[nifi@nifi1n1 run]$ cd /var/run
[nifi@nifi1n1 run]$ ls -lisa | grep nifi
62714 0 drwxr-xr-x. 2 nifi hadoop 40 Jan 30 17:11 nifi

If I execute

export JAVA_HOME=/usr/jdk64/jdk1.8.0_60;/opt/nifi-1.1.1.0-12/bin/nifi.sh start >> /var/log/nifi/nifi-setup.log

from bash, the service goes up ... and I am getting crazy 🙂 Any advice???
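One way to narrow this down is to reproduce from a shell exactly what the Ambari service does, i.e. start NiFi as the nifi user and then compare which process and pid files actually exist; a sketch using the paths from the post:

---------------------------
# run the same start command Ambari runs, as the nifi user
sudo -u nifi bash -c 'export JAVA_HOME=/usr/jdk64/jdk1.8.0_60; /opt/nifi-1.1.1.0-12/bin/nifi.sh start'

# then compare what is running with what Ambari's pid handling expects
ps -ef | grep [n]ifi                     # is a NiFi JVM actually up?
cat /opt/nifi-1.1.1.0-12/bin/nifi.pid    # pid file written by nifi.sh
cat /var/run/nifi/nifi.pid               # pid file Ambari derives from it
---------------------------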
01-30-2016
03:51 PM
1 Kudo
I think I am messing things up here 🙂 ... I initially put the user nifi in the group hadoop and did the NiFi install. It looks like Ambari requires user "nifi" to be in group "nifi". I will start changing a couple of access rights and provide further feedback when it's done ...
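For reference, the group change described above amounts to something like the following; a sketch run as root, not the exact commands used:

---------------------------
groupadd nifi          # create the group if it does not exist yet
usermod -g nifi nifi   # make nifi the primary group of the nifi user
id nifi                # verify: uid=...(nifi) gid=...(nifi)
chown -R nifi:nifi /opt/nifi-1.1.1.0-12 /var/log/nifi /var/run/nifi  # realign ownership
---------------------------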
01-30-2016
03:40 PM
1 Kudo
I did that already in order to trick Ambari: after starting nifi through bash I inserted a valid PID in /var/run/nifi and restarted the service through Ambari. Obviously Ambari found that PID and went green ... but still no luck starting the service through Ambari 😞
01-30-2016
03:05 PM
1 Kudo
Hi Neeraj, thanks for your feedback ... I am now getting more detailed logs ... even though the problem didn't go away, please see above. Thanks, Rainer
01-30-2016
03:04 PM
1 Kudo
Based on Neeraj's comment below I enabled user nifi to write into /var/log/ ... now at least I am getting an error in the Ambari console when starting the service, which I attached in errors-1795.txt. In addition I posted the logs of /var/log/nifi. Again, when I start "nifi.sh start" from bash, NiFi starts successfully. Thanks in advance for any further advice. br, Rainer
01-30-2016
10:53 AM
2 Kudos
Dear Experts, I installed HDF (nifi-1.1.1.0-12) using a user nifi (group hadoop) under /opt/nifi-1.1.1.0-12. Starting/stopping the nifi service from bash works fine.

Afterwards I installed the Ambari service for NiFi as outlined on https://github.com/abajwa-hw/ambari-nifi-service. Unfortunately, whenever I start NiFi under Ambari it goes down without showing an error in '/var/lib/ambari-agent/data/errors-xxxx.txt'. The stdout of Ambari looks like the following:

---------------------------
stdout:
2016-01-30 10:31:01,040 - The hadoop conf dir /usr/hdp/current/hadoop-client/conf exists, will call conf-select on it for version 2.3.4.0-3485
2016-01-30 10:31:01,040 - Checking if need to create versioned conf dir /etc/hadoop/2.3.4.0-3485/0
2016-01-30 10:31:01,040 - call['conf-select create-conf-dir --package hadoop --stack-version 2.3.4.0-3485 --conf-version 0'] {'logoutput': False, 'sudo': True, 'quiet': False, 'stderr': -1}
2016-01-30 10:31:01,061 - call returned (1, '/etc/hadoop/2.3.4.0-3485/0 exist already', '')
2016-01-30 10:31:01,062 - checked_call['conf-select set-conf-dir --package hadoop --stack-version 2.3.4.0-3485 --conf-version 0'] {'logoutput': False, 'sudo': True, 'quiet': False}
2016-01-30 10:31:01,082 - checked_call returned (0, '/usr/hdp/2.3.4.0-3485/hadoop/conf -> /etc/hadoop/2.3.4.0-3485/0')
2016-01-30 10:31:01,082 - Ensuring that hadoop has the correct symlink structure
2016-01-30 10:31:01,083 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2016-01-30 10:31:01,192 - The hadoop conf dir /usr/hdp/current/hadoop-client/conf exists, will call conf-select on it for version 2.3.4.0-3485
2016-01-30 10:31:01,192 - Checking if need to create versioned conf dir /etc/hadoop/2.3.4.0-3485/0
2016-01-30 10:31:01,192 - call['conf-select create-conf-dir --package hadoop --stack-version 2.3.4.0-3485 --conf-version 0'] {'logoutput': False, 'sudo': True, 'quiet': False, 'stderr': -1}
2016-01-30 10:31:01,214 - call returned (1, '/etc/hadoop/2.3.4.0-3485/0 exist already', '')
2016-01-30 10:31:01,214 - checked_call['conf-select set-conf-dir --package hadoop --stack-version 2.3.4.0-3485 --conf-version 0'] {'logoutput': False, 'sudo': True, 'quiet': False}
2016-01-30 10:31:01,235 - checked_call returned (0, '/usr/hdp/2.3.4.0-3485/hadoop/conf -> /etc/hadoop/2.3.4.0-3485/0')
2016-01-30 10:31:01,235 - Ensuring that hadoop has the correct symlink structure
2016-01-30 10:31:01,235 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2016-01-30 10:31:01,237 - Group['hadoop'] {}
2016-01-30 10:31:01,237 - Group['nifi'] {}
2016-01-30 10:31:01,238 - Group['users'] {}
2016-01-30 10:31:01,238 - User['zookeeper'] {'gid': 'hadoop', 'groups': [u'hadoop']}
2016-01-30 10:31:01,238 - User['ams'] {'gid': 'hadoop', 'groups': [u'hadoop']}
2016-01-30 10:31:01,239 - User['ambari-qa'] {'gid': 'hadoop', 'groups': [u'users']}
2016-01-30 10:31:01,239 - User['kafka'] {'gid': 'hadoop', 'groups': [u'hadoop']}
2016-01-30 10:31:01,240 - User['hdfs'] {'gid': 'hadoop', 'groups': [u'hadoop']}
2016-01-30 10:31:01,240 - User['yarn'] {'gid': 'hadoop', 'groups': [u'hadoop']}
2016-01-30 10:31:01,241 - User['nifi'] {'gid': 'hadoop', 'groups': [u'hadoop']}
2016-01-30 10:31:01,242 - User['mapred'] {'gid': 'hadoop', 'groups': [u'hadoop']}
2016-01-30 10:31:01,242 - User['hbase'] {'gid': 'hadoop', 'groups': [u'hadoop']}
2016-01-30 10:31:01,243 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2016-01-30 10:31:01,244 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'}
2016-01-30 10:31:01,248 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] due to not_if
2016-01-30 10:31:01,248 - Directory['/tmp/hbase-hbase'] {'owner': 'hbase', 'recursive': True, 'mode': 0775, 'cd_access': 'a'}
2016-01-30 10:31:01,249 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2016-01-30 10:31:01,250 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase'] {'not_if': '(test $(id -u hbase) -gt 1000) || (false)'}
2016-01-30 10:31:01,254 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase'] due to not_if
2016-01-30 10:31:01,254 - Group['hdfs'] {'ignore_failures': False}
2016-01-30 10:31:01,254 - User['hdfs'] {'ignore_failures': False, 'groups': [u'hadoop', u'hdfs']}
2016-01-30 10:31:01,255 - Directory['/etc/hadoop'] {'mode': 0755}
2016-01-30 10:31:01,266 - File['/usr/hdp/current/hadoop-client/conf/hadoop-env.sh'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'}
2016-01-30 10:31:01,267 - Directory['/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 0777}
2016-01-30 10:31:01,277 - Execute[('setenforce', '0')] {'not_if': '(! which getenforce ) || (which getenforce && getenforce | grep -q Disabled)', 'sudo': True, 'only_if': 'test -f /selinux/enforce'}
2016-01-30 10:31:01,285 - Skipping Execute[('setenforce', '0')] due to only_if
2016-01-30 10:31:01,285 - Directory['/var/log/hadoop'] {'owner': 'root', 'mode': 0775, 'group': 'hadoop', 'recursive': True, 'cd_access': 'a'}
2016-01-30 10:31:01,287 - Directory['/var/run/hadoop'] {'owner': 'root', 'group': 'root', 'recursive': True, 'cd_access': 'a'}
2016-01-30 10:31:01,288 - Directory['/tmp/hadoop-hdfs'] {'owner': 'hdfs', 'recursive': True, 'cd_access': 'a'}
2016-01-30 10:31:01,291 - File['/usr/hdp/current/hadoop-client/conf/commons-logging.properties'] {'content': Template('commons-logging.properties.j2'), 'owner': 'hdfs'}
2016-01-30 10:31:01,292 - File['/usr/hdp/current/hadoop-client/conf/health_check'] {'content': Template('health_check.j2'), 'owner': 'hdfs'}
2016-01-30 10:31:01,293 - File['/usr/hdp/current/hadoop-client/conf/log4j.properties'] {'content': ..., 'owner': 'hdfs', 'group': 'hadoop', 'mode': 0644}
2016-01-30 10:31:01,300 - File['/usr/hdp/current/hadoop-client/conf/hadoop-metrics2.properties'] {'content': Template('hadoop-metrics2.properties.j2'), 'owner': 'hdfs'}
2016-01-30 10:31:01,301 - File['/usr/hdp/current/hadoop-client/conf/task-log4j.properties'] {'content': StaticFile('task-log4j.properties'), 'mode': 0755}
2016-01-30 10:31:01,305 - File['/etc/hadoop/conf/topology_mappings.data'] {'owner': 'hdfs', 'content': Template('topology_mappings.data.j2'), 'only_if': 'test -d /etc/hadoop/conf', 'group': 'hadoop'}
2016-01-30 10:31:01,308 - File['/etc/hadoop/conf/topology_script.py'] {'content': StaticFile('topology_script.py'), 'only_if': 'test -d /etc/hadoop/conf', 'mode': 0755}
2016-01-30 10:31:01,488 - File['/opt/nifi-1.1.1.0-12/conf/nifi.properties'] {'owner': 'nifi', 'content': InlineTemplate(...), 'group': 'nifi'}
2016-01-30 10:31:01,491 - File['/opt/nifi-1.1.1.0-12/conf/bootstrap.conf'] {'owner': 'nifi', 'content': InlineTemplate(...), 'group': 'nifi'}
2016-01-30 10:31:01,495 - File['/opt/nifi-1.1.1.0-12/conf/logback.xml'] {'owner': 'nifi', 'content': InlineTemplate(...), 'group': 'nifi'}
2016-01-30 10:31:01,496 - Execute['echo pid file /var/run/nifi/nifi.pid'] {}
2016-01-30 10:31:01,498 - Execute['echo JAVA_HOME=/usr/jdk64/jdk1.8.0_60'] {}
2016-01-30 10:31:01,501 - Execute['export JAVA_HOME=/usr/jdk64/jdk1.8.0_60;/opt/nifi-1.1.1.0-12/bin/nifi.sh start >> /var/log/nifi/nifi-setup.log'] {'user': 'nifi'}
2016-01-30 10:31:04,558 - Execute['cat /opt/nifi-1.1.1.0-12/bin/nifi.pid | grep pid | sed 's/pid=\(\.*\)/\1/' > /var/run/nifi/nifi.pid'] {}
2016-01-30 10:31:04,567 - Execute['chown nifi:nifi /var/run/nifi/nifi.pid'] {}
---------------------------

Again, when I start nifi from bash it works as expected. Any help on how to fix this or better trace the problem is highly appreciated! br, Rainer
Labels:
- Apache NiFi
- Cloudera DataFlow (CDF)