
HDF 2.1.1 won't restart after HBase is down

Master Guru

My NiFi won't restart after it was shut down. During the shutdown I lost access to my HBase and Hadoop clusters, and they are not coming back. I would like to start NiFi and remove the components that access those no-longer-available services, but it won't start up even after waiting a few hours.

2017-02-16 01:04:48,660 ERROR [StandardProcessScheduler Thread-3] o.a.n.c.s.StandardControllerServiceNode HBase_1_1_2_ClientService[id=1a5f679f-015a-1000-dd13-6c12daef3685] Failed to invoke @OnEnabled method due to java.io.IOException: java.lang.reflect.InvocationTargetException
2017-02-16 01:04:48,664 ERROR [StandardProcessScheduler Thread-3] o.a.n.c.s.StandardControllerServiceNode java.io.IOException: java.lang.reflect.InvocationTargetException
    at org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:240) ~[na:na]
    at org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:218) ~[na:na]
    at org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:119) ~[na:na]
    at org.apache.nifi.hbase.HBase_1_1_2_ClientService.createConnection(HBase_1_1_2_ClientService.java:241) ~[na:na]
    at org.apache.nifi.hbase.HBase_1_1_2_ClientService.onEnabled(HBase_1_1_2_ClientService.java:181) ~[na:na]
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:1.8.0_91]
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[na:1.8.0_91]
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:1.8.0_91]
    at java.lang.reflect.Method.invoke(Method.java:498) ~[na:1.8.0_91]
    at org.apache.nifi.util.ReflectionUtils.invokeMethodsWithAnnotations(ReflectionUtils.java:137) ~[na:na]
    at org.apache.nifi.util.ReflectionUtils.invokeMethodsWithAnnotations(ReflectionUtils.java:125) ~[na:na]
    at org.apache.nifi.util.ReflectionUtils.invokeMethodsWithAnnotations(ReflectionUtils.java:70) ~[na:na]
    at org.apache.nifi.util.ReflectionUtils.invokeMethodsWithAnnotation(ReflectionUtils.java:47) ~[na:na]
    at org.apache.nifi.controller.service.StandardControllerServiceNode$2.run(StandardControllerServiceNode.java:345) ~[na:na]
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [na:1.8.0_91]
    at java.util.concurrent.FutureTask.run(FutureTask.java:266) [na:1.8.0_91]
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180) [na:1.8.0_91]
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293) [na:1.8.0_91]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [na:1.8.0_91]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_91]
    at java.lang.Thread.run(Thread.java:745) [na:1.8.0_91]
Caused by: java.lang.reflect.InvocationTargetException: null
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) ~[na:1.8.0_91]
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) ~[na:1.8.0_91]
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) ~[na:1.8.0_91]
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423) ~[na:1.8.0_91]
    at org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:238) ~[na:na]
    ... 20 common frames omitted
Caused by: java.lang.IllegalArgumentException: A HostProvider may not be empty!
    at org.apache.phoenix.shaded.org.apache.zookeeper.client.StaticHostProvider.<init>(StaticHostProvider.java:63) ~[na:na]
    at org.apache.phoenix.shaded.org.apache.zookeeper.ZooKeeper.<init>(ZooKeeper.java:445) ~[na:na]
    at org.apache.phoenix.shaded.org.apache.zookeeper.ZooKeeper.<init>(ZooKeeper.java:380) ~[na:na]
    at org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.checkZk(RecoverableZooKeeper.java:141) ~[na:na]
    at org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.exists(RecoverableZooKeeper.java:221) ~[na:na]
    at org.apache.hadoop.hbase.zookeeper.ZKUtil.checkExists(ZKUtil.java:418) ~[na:na]
    at org.apache.hadoop.hbase.zookeeper.ZKClusterId.readClusterIdZNode(ZKClusterId.java:65) ~[na:na]
    at org.apache.hadoop.hbase.client.ZooKeeperRegistry.getClusterId(ZooKeeperRegistry.java:105) ~[na:na]
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.retrieveClusterId(ConnectionManager.java:886) ~[na:na]
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.<init>(ConnectionManager.java:642) ~[na:na]
    ... 25 common frames omitted
2017-02-16 01:04:48,664 ERROR [StandardProcessScheduler Thread-3] o.a.n.c.s.StandardControllerServiceNode Failed to invoke @OnEnabled method of HBase_1_1_2_ClientService[id=1a5f679f-015a-1000-dd13-6c12daef3685] due to java.io.IOException: java.lang.reflect.InvocationTargetException
2017-02-16 01:04:55,143 INFO [Finalizer] o.a.h.h.c.ConnectionManager$HConnectionImplementation Closing zookeeper sessionid=0xffffffffffffffff
^C

1 ACCEPTED SOLUTION

Master Mentor

@Timothy Spann

Is NiFi not coming up at all, or is it that the NiFi UI can't be accessed?

Is NiFi actually shutting back down after you start it?

I suggest you try setting the following property to false in the nifi.properties file and restarting your NiFi to see if you can then access your UI and remove the problematic components:

nifi.flowcontroller.autoResumeState=false
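For a standalone (non-Ambari) install, the change and restart might look roughly like this; the paths are an assumption based on a default tar-style layout under $NIFI_HOME, so adjust for your environment:

    # Set the property so the flow loads with all components stopped/disabled (assumed default layout)
    vi $NIFI_HOME/conf/nifi.properties
    #   nifi.flowcontroller.autoResumeState=false

    # Restart NiFi so the new setting takes effect
    $NIFI_HOME/bin/nifi.sh restart

Once the UI is reachable and the HBase/Hadoop components have been removed, you can set the property back to true so the flow resumes automatically after future restarts.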

If using Ambari to manage your NiFi, uncheck the box:

(Screenshot attachment: 12568-screen-shot-2017-02-16-at-82114-am.png, showing the Ambari NiFi configuration checkbox for the auto-resume setting.)

Thanks,

Matt


2 REPLIES


Master Guru

Thanks for the config tip, really helpful. It eventually came up by the morning. I think the issue is that my Hadoop cluster is down and I have at least 50 connections to it. With a timeout of a few minutes per connection and 3-6 connection attempts each, that adds up to a big chunk of time.
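As a rough sense of scale (the numbers here are assumptions, not measured values): 50 connections x a ~3 minute connection timeout x 3 attempts each would be about 50 * 3 * 3 = 450 minutes, roughly 7.5 hours if the attempts happen serially, which would fit with NiFi only coming up by the morning.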