Member since: 11-13-2018
22 Posts
0 Kudos Received
0 Solutions
01-14-2019
10:18 AM
Anyway, the problem has been solved. The permissions on the .Trash folder were wrong.
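For anyone who hits the same thing, a minimal check and fix could look like this (a sketch only: it assumes the zeppelin user's HDFS home is /user/zeppelin and that zeppelin:hdfs is the intended owner and group, so adjust to your cluster):
# check the current owner and permissions of the trash folder
hdfs dfs -ls -d /user/zeppelin/.Trash
# if they are wrong, fix them as the HDFS superuser
sudo -u hdfs hdfs dfs -chown -R zeppelin:hdfs /user/zeppelin/.Trash
sudo -u hdfs hdfs dfs -chmod -R 700 /user/zeppelin/.Trash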
01-14-2019
10:05 AM
Can I delete the .Trash folder of the zeppelin user?
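If deleting it is acceptable, a possible way to do that (a sketch, assuming the folder is /user/zeppelin/.Trash; -skipTrash removes it immediately instead of moving it into the trash again):
sudo -u hdfs hdfs dfs -rm -r -skipTrash /user/zeppelin/.Trash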
01-14-2019
09:49 AM
It has drwxr-xr-x
01-14-2019
09:26 AM
I already deleted the contents of the trash folder with hdfs dfs -expunge, but the same error still appears: Traceback (most recent call last):
File "/usr/lib/ambari-agent/lib/resource_management/libraries/providers/hdfs_resource.py", line 282, in _run_command
result_dict = json.loads(out)
File "/usr/lib/ambari-agent/lib/ambari_simplejson/__init__.py", line 307, in loads
return _default_decoder.decode(s)
File "/usr/lib/ambari-agent/lib/ambari_simplejson/decoder.py", line 335, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "/usr/lib/ambari-agent/lib/ambari_simplejson/decoder.py", line 353, in raw_decode
raise ValueError("No JSON object could be decoded")
ValueError: No JSON object could be decoded
The above exception was the cause of the following exception:
Traceback (most recent call last):
File "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/ZEPPELIN/package/scripts/master.py", line 681, in <module>
Master().execute()
File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", line 351, in execute
method(env)
File "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/ZEPPELIN/package/scripts/master.py", line 250, in start
self.create_zeppelin_dir(params)
File "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/ZEPPELIN/package/scripts/master.py", line 65, in create_zeppelin_dir
recursive_chmod=True
File "/usr/lib/ambari-agent/lib/resource_management/core/base.py", line 166, in __init__
self.env.run()
File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 160, in run
self.run_action(resource, action)
File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 124, in run_action
provider_action()
File "/usr/lib/ambari-agent/lib/resource_management/libraries/providers/hdfs_resource.py", line 672, in action_create_on_execute
self.action_delayed("create")
File "/usr/lib/ambari-agent/lib/resource_management/libraries/providers/hdfs_resource.py", line 669, in action_delayed
self.get_hdfs_resource_executor().action_delayed(action_name, self)
File "/usr/lib/ambari-agent/lib/resource_management/libraries/providers/hdfs_resource.py", line 368, in action_delayed
self.action_delayed_for_nameservice(None, action_name, main_resource)
File "/usr/lib/ambari-agent/lib/resource_management/libraries/providers/hdfs_resource.py", line 400, in action_delayed_for_nameservice
self._set_owner(self.target_status)
File "/usr/lib/ambari-agent/lib/resource_management/libraries/providers/hdfs_resource.py", line 563, in _set_owner
self.util.run_command(path, 'SETOWNER', method='PUT', owner=owner, group=group, assertable_result=False)
File "/usr/lib/ambari-agent/lib/resource_management/libraries/providers/hdfs_resource.py", line 209, in run_command
return self._run_command(*args, **kwargs)
File "/usr/lib/ambari-agent/lib/resource_management/libraries/providers/hdfs_resource.py", line 290, in _run_command
raise WebHDFSCallException(err_msg, result_dict)
resource_management.libraries.providers.hdfs_resource.WebHDFSCallException: Execution of 'curl -sS -L -w '%{http_code}' -X PUT -d '' -H 'Content-Length: 0' 'http://foodscience-labroratory.com:50070/webhdfs/v1/user/zeppelin/.Trash/190114023342/user/hadoop/test/foodmeal_extracted.csv?op=SETOWNER&owner=zeppelin&group=&user.name=hdfs'' returned status_code=400.
Maybe there is something like a caching folder?
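One thing worth noting: hdfs dfs -expunge only empties the trash of the user who runs it, so running it as another user leaves /user/zeppelin/.Trash untouched. Under that assumption, two possible ways to clear the checkpoint referenced in the error (path copied from the message above) would be:
# empty the trash as the zeppelin user itself
sudo -u zeppelin hdfs dfs -expunge
# or remove the stale trash checkpoint directly, bypassing the trash
sudo -u hdfs hdfs dfs -rm -r -skipTrash /user/zeppelin/.Trash/190114023342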
01-14-2019
07:49 AM
Hello, I cannot start the Zeppelin service from the Ambari launcher due to the following error:
Traceback (most recent call last):
File "/usr/lib/ambari-agent/lib/resource_management/libraries/providers/hdfs_resource.py", line 282, in _run_command
result_dict = json.loads(out)
File "/usr/lib/ambari-agent/lib/ambari_simplejson/__init__.py", line 307, in loads
return _default_decoder.decode(s)
File "/usr/lib/ambari-agent/lib/ambari_simplejson/decoder.py", line 335, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "/usr/lib/ambari-agent/lib/ambari_simplejson/decoder.py", line 353, in raw_decode
raise ValueError("No JSON object could be decoded")
ValueError: No JSON object could be decoded
The above exception was the cause of the following exception:
Traceback (most recent call last):
File "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/ZEPPELIN/package/scripts/master.py", line 681, in <module>
Master().execute()
File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", line 351, in execute
method(env)
File "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/ZEPPELIN/package/scripts/master.py", line 250, in start
self.create_zeppelin_dir(params)
File "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/ZEPPELIN/package/scripts/master.py", line 65, in create_zeppelin_dir
recursive_chmod=True
File "/usr/lib/ambari-agent/lib/resource_management/core/base.py", line 166, in __init__
self.env.run()
File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 160, in run
self.run_action(resource, action)
File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 124, in run_action
provider_action()
File "/usr/lib/ambari-agent/lib/resource_management/libraries/providers/hdfs_resource.py", line 672, in action_create_on_execute
self.action_delayed("create")
File "/usr/lib/ambari-agent/lib/resource_management/libraries/providers/hdfs_resource.py", line 669, in action_delayed
self.get_hdfs_resource_executor().action_delayed(action_name, self)
File "/usr/lib/ambari-agent/lib/resource_management/libraries/providers/hdfs_resource.py", line 368, in action_delayed
self.action_delayed_for_nameservice(None, action_name, main_resource)
File "/usr/lib/ambari-agent/lib/resource_management/libraries/providers/hdfs_resource.py", line 400, in action_delayed_for_nameservice
self._set_owner(self.target_status)
File "/usr/lib/ambari-agent/lib/resource_management/libraries/providers/hdfs_resource.py", line 563, in _set_owner
self.util.run_command(path, 'SETOWNER', method='PUT', owner=owner, group=group, assertable_result=False)
File "/usr/lib/ambari-agent/lib/resource_management/libraries/providers/hdfs_resource.py", line 209, in run_command
return self._run_command(*args, **kwargs)
File "/usr/lib/ambari-agent/lib/resource_management/libraries/providers/hdfs_resource.py", line 290, in _run_command
raise WebHDFSCallException(err_msg, result_dict)
resource_management.libraries.providers.hdfs_resource.WebHDFSCallException: Execution of 'curl -sS -L -w '%{http_code}' -X PUT -d '' -H 'Content-Length: 0' 'http://foodscience-labroratory.com:50070/webhdfs/v1/user/zeppelin/.Trash/Current/user/hadoop/test/foodmeal_extracted.csv?op=SETOWNER&owner=zeppelin&group=&user.name=hdfs'' returned status_code=400.
How can I fix this? Where is this Zeppelin trash folder located in HDFS? The 400 error suggests that the file in the trash has already been deleted, so why is the script still trying to access it? Is there a console command to empty the Zeppelin trash folder manually? Thanks in advance 😉
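On the location question: the trash is kept per HDFS user, so for the zeppelin user it should be /user/zeppelin/.Trash. A quick way to inspect it (assuming superuser access via the hdfs account):
sudo -u hdfs hdfs dfs -ls -R /user/zeppelin/.Trash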
01-14-2019
01:20 AM
Hello, I cannot start the Zeppelin service from the Ambari launcher due to the following error: Traceback (most recent call last):
File "/usr/lib/ambari-agent/lib/resource_management/libraries/providers/hdfs_resource.py", line 282, in _run_command
result_dict = json.loads(out)
File "/usr/lib/ambari-agent/lib/ambari_simplejson/__init__.py", line 307, in loads
return _default_decoder.decode(s)
File "/usr/lib/ambari-agent/lib/ambari_simplejson/decoder.py", line 335, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "/usr/lib/ambari-agent/lib/ambari_simplejson/decoder.py", line 353, in raw_decode
raise ValueError("No JSON object could be decoded")
ValueError: No JSON object could be decoded
The above exception was the cause of the following exception:
Traceback (most recent call last):
File "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/ZEPPELIN/package/scripts/master.py", line 681, in <module>
Master().execute()
File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", line 351, in execute
method(env)
File "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/ZEPPELIN/package/scripts/master.py", line 250, in start
self.create_zeppelin_dir(params)
File "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/ZEPPELIN/package/scripts/master.py", line 65, in create_zeppelin_dir
recursive_chmod=True
File "/usr/lib/ambari-agent/lib/resource_management/core/base.py", line 166, in __init__
self.env.run()
File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 160, in run
self.run_action(resource, action)
File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 124, in run_action
provider_action()
File "/usr/lib/ambari-agent/lib/resource_management/libraries/providers/hdfs_resource.py", line 672, in action_create_on_execute
self.action_delayed("create")
File "/usr/lib/ambari-agent/lib/resource_management/libraries/providers/hdfs_resource.py", line 669, in action_delayed
self.get_hdfs_resource_executor().action_delayed(action_name, self)
File "/usr/lib/ambari-agent/lib/resource_management/libraries/providers/hdfs_resource.py", line 368, in action_delayed
self.action_delayed_for_nameservice(None, action_name, main_resource)
File "/usr/lib/ambari-agent/lib/resource_management/libraries/providers/hdfs_resource.py", line 400, in action_delayed_for_nameservice
self._set_owner(self.target_status)
File "/usr/lib/ambari-agent/lib/resource_management/libraries/providers/hdfs_resource.py", line 563, in _set_owner
self.util.run_command(path, 'SETOWNER', method='PUT', owner=owner, group=group, assertable_result=False)
File "/usr/lib/ambari-agent/lib/resource_management/libraries/providers/hdfs_resource.py", line 209, in run_command
return self._run_command(*args, **kwargs)
File "/usr/lib/ambari-agent/lib/resource_management/libraries/providers/hdfs_resource.py", line 290, in _run_command
raise WebHDFSCallException(err_msg, result_dict)
resource_management.libraries.providers.hdfs_resource.WebHDFSCallException: Execution of 'curl -sS -L -w '%{http_code}' -X PUT -d '' -H 'Content-Length: 0' 'http://foodscience-labroratory.com:50070/webhdfs/v1/user/zeppelin/.Trash/Current/user/hadoop/test/foodmeal_extracted.csv?op=SETOWNER&owner=zeppelin&group=&user.name=hdfs'' returned status_code=400.
How can I fix this? Where is this Zeppelin trash folder located in HDFS? The 400 error suggests that the file in the trash has already been deleted, so why is the script still trying to access it? Is there a console command to empty the Zeppelin trash folder manually? Thanks in advance 😉
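To see why WebHDFS answers with 400 here, it can help to replay the call from the error by hand and read the response body, which usually contains a RemoteException with the actual reason (the command is copied from the error above, with -w '%{http_code}' dropped so only the JSON body is printed):
curl -sS -L -X PUT -d '' -H 'Content-Length: 0' 'http://foodscience-labroratory.com:50070/webhdfs/v1/user/zeppelin/.Trash/Current/user/hadoop/test/foodmeal_extracted.csv?op=SETOWNER&owner=zeppelin&group=&user.name=hdfs'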
12-20-2018
09:00 PM
Hello, I cannot start the YARN Registry DNS service. It returns the following error message: Traceback (most recent call last):
File "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/YARN/package/scripts/yarn_registry_dns.py", line 93, in <module>
RegistryDNS().execute()
File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", line 351, in execute
method(env)
File "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/YARN/package/scripts/yarn_registry_dns.py", line 53, in start
service('registrydns',action='start')
File "/usr/lib/ambari-agent/lib/ambari_commons/os_family_impl.py", line 89, in thunk
return fn(*args, **kwargs)
File "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/YARN/package/scripts/service.py", line 93, in service
Execute(daemon_cmd, user = usr, not_if = check_process)
File "/usr/lib/ambari-agent/lib/resource_management/core/base.py", line 166, in __init__
self.env.run()
File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 160, in run
self.run_action(resource, action)
File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 124, in run_action
provider_action()
File "/usr/lib/ambari-agent/lib/resource_management/core/providers/system.py", line 263, in action_run
returns=self.resource.returns)
File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 72, in inner
result = function(command, **kwargs)
File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 102, in checked_call
tries=tries, try_sleep=try_sleep, timeout_kill_strategy=timeout_kill_strategy, returns=returns)
File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 150, in _call_wrapper
result = _call(command, **kwargs_copy)
File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 314, in _call
raise ExecutionFailed(err_msg, code, out, err)
resource_management.core.exceptions.ExecutionFailed: Execution of 'ulimit -c unlimited; export HADOOP_LIBEXEC_DIR=/usr/hdp/3.0.0.0-1634/hadoop/libexec && /usr/hdp/3.0.0.0-1634/hadoop-yarn/bin/yarn --config /usr/hdp/3.0.0.0-1634/hadoop/conf --daemon start registrydns' returned 1. mesg: ttyname failed: Inappropriate ioctl for device
ERROR: Cannot set priority of registrydns process 8935
What's the problem with this service? All the other YARN services are running flawlessly.
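"Cannot set priority of registrydns process" is only a wrapper message; the real cause is usually in the Registry DNS log itself, and a common one is that the port it tries to bind (hadoop.registry.dns.bind-port, often 53) is already taken by another DNS service. A hedged first check, assuming the default HDP log directory (exact file names may differ on your host):
# see whether another process already owns the DNS port
sudo netstat -tulpn | grep ':53 '
# look for the underlying error in the Registry DNS logs
sudo less /var/log/hadoop-yarn/yarn/*registrydns*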
12-13-2018
10:26 PM
Maybe we can discuss this in more detail in a Skype session? Do you have contact details?
12-12-2018
01:42 PM
It's a multi-node cluster (4 data nodes). I didn't install HBase via Ambari. I installed the YARN service with Ambari, and the Timeline Service worked until a few days ago, when this error appeared. How can I check whether the YARN client is installed on the region server?
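If you are on the default Timeline Service v2 setup in HDP 3.0, the region server from the error is the embedded ats-hbase instance that YARN launches itself, so a couple of hedged checks on the host from the error (examples.foodscience-01.de) would be:
# is anything listening on the region server port from the error?
sudo netstat -tulpn | grep 17020
# are the embedded HBase (ats-hbase) Java processes running?
ps aux | grep -i hbase | grep -v grep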
12-12-2018
12:30 AM
Hello, the YARN Timeline Service Reader is no longer starting due to the following error:
2018-12-08 12:59:18,852 INFO [main] client.RpcRetryingCallerImpl: Call exception, tries=6, retries=6, started=4859 ms ago, cancelled=false, msg=Call to examples.foodscience-01.de/163.49.39.115:17020 failed on connection exception: org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException: Connection refused: examples.foodscience-01.de/163.49.39.115:17020, details=row 'prod.timelineservice.entity' on table 'hbase:meta' at region=hbase:meta,,1.1588230740, hostname=examples.foodscience-01.de,17020,1543619998977, seqNum=-1
2018-12-08 12:59:22,895 INFO [main] client.RpcRetryingCallerImpl: Call exception, tries=7, retries=7, started=8902 ms ago, cancelled=false, msg=Call to examples.foodscience-01.de/163.49.39.115:17020 failed on connection exception: org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException: Connection refused: examples.foodscience-01.de/163.49.39.115:17020, details=row 'prod.timelineservice.entity' on table 'hbase:meta' at region=hbase:meta,,1.1588230740, hostname=examples.foodscience-01.de,17020,1543619998977, seqNum=-1
2018-12-08 12:59:32,955 INFO [main] client.RpcRetryingCallerImpl: Call exception, tries=8, retries=8, started=18962 ms ago, cancelled=false, msg=Call to examples.foodscience-01.de/163.49.39.115:17020 failed on connection exception: org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException: Connection refused: examples.foodscience-01.de/163.49.39.115:17020, details=row 'prod.timelineservice.entity' on table 'hbase:meta' at region=hbase:meta,,1.1588230740, hostname=examples.foodscience-01.de,17020,1543619998977, seqNum=-1
2018-12-08 12:59:42,965 INFO [main] client.RpcRetryingCallerImpl: Call exception, tries=9, retries=9, started=28972 ms ago, cancelled=false, msg=Call to examples.foodscience-01.de/163.49.39.115:17020 failed on connection exception: org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException: Connection refused: examples.foodscience-01.de/163.49.39.115:17020, details=row 'prod.timelineservice.entity' on table 'hbase:meta' at region=hbase:meta,,1.1588230740, hostname=examples.foodscience-01.de,17020,1543619998977, seqNum=-1
2018-12-08 12:59:53,064 INFO [main] client.RpcRetryingCallerImpl: Call exception, tries=10, retries=10, started=39071 ms ago, cancelled=false, msg=Call to examples.foodscience-01.de/163.49.39.115:17020 failed on connection exception: org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException: Connection refused: examples.foodscience-01.de/163.49.39.115:17020, details=row 'prod.timelineservice.entity' on table 'hbase:meta' at region=hbase:meta,,1.1588230740, hostname=examples.foodscience-01.de,17020,1543619998977, seqNum=-1
2018-12-08 13:00:03,101 INFO [main] client.RpcRetryingCallerImpl: Call exception, tries=11, retries=11, started=49108 ms ago, cancelled=false, msg=Call to examples.foodscience-01.de/163.49.39.115:17020 failed on connection exception: org.apache.hbase.th
It seems that HBase has a problem (although I am not using that service in Ambari). Then I checked the following log file, hadoop-yarn-timelinereader-foodscience-01.log:
Caused by: java.net.ConnectException: Call to examples.foodscience-01.de/163.49.39.115:17020 failed on connection exception: org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException: Connection refused: examples.foodscience-01.de/163.49.39.115:17020
at org.apache.hadoop.hbase.ipc.IPCUtil.wrapException(IPCUtil.java:165)
at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:390)
at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:95)
at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:410)
at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:406)
at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:103)
at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:118)
at org.apache.hadoop.hbase.ipc.BufferCallBeforeInitHandler.userEventTriggered(BufferCallBeforeInitHandler.java:92)
at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeUserEventTriggered(AbstractChannelHandlerContext.java:329)
at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeUserEventTriggered(AbstractChannelHandlerContext.java:315)
at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireUserEventTriggered(AbstractChannelHandlerContext.java:307)
at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.userEventTriggered(DefaultChannelPipeline.java:1377)
at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeUserEventTriggered(AbstractChannelHandlerContext.java:329)
at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeUserEventTriggered(AbstractChannelHandlerContext.java:315)
at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireUserEventTriggered(DefaultChannelPipeline.java:929)
at org.apache.hadoop.hbase.ipc.NettyRpcConnection.failInit(NettyRpcConnection.java:179)
at org.apache.hadoop.hbase.ipc.NettyRpcConnection.access$500(NettyRpcConnection.java:71)
at org.apache.hadoop.hbase.ipc.NettyRpcConnection$3.operationComplete(NettyRpcConnection.java:269)
at org.apache.hadoop.hbase.ipc.NettyRpcConnection$3.operationComplete(NettyRpcConnection.java:263)
at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:507)
at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:500)
at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:479)
at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:420)
at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.tryFailure(DefaultPromise.java:122)
at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.fulfillConnectPromise(AbstractNioChannel.java:327)
at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:343)
at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:633)
at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:580)
at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:497)
at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:459)
at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
... 1 more
Caused by: org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException: Connection refused: examples.foodscience-01.de/163.49.39.115:17020
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
at org.apache.hbase.thirdparty.io.netty.channel.socket.nio.NioSocketChannel.doFinishConnect(NioSocketChannel.java:323)
at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:340)
... 7 more
Caused by: java.net.ConnectException: Connection refused
... 11 more
2018-12-06 13:03:33,051 INFO zookeeper.ReadOnlyZKClient (ReadOnlyZKClient.java:run(315)) - 0x4d465b11 no activities for 60000 ms, close active connection. Will reconnect next time when there are new requests.
2018-12-06 13:03:57,614 INFO storage.HBaseTimelineReaderImpl (HBaseTimelineReaderImpl.java:run(170)) - Running HBase liveness monitor
2018-12-06 13:04:24,100 ERROR reader.TimelineReaderServer (LogAdapter.java:error(75)) - RECEIVED SIGNAL 15: SIGTERM
2018-12-06 13:04:24,116 INFO handler.ContextHandler (ContextHandler.java:doStop(910)) - Stopped o.e.j.w.WebAppContext@12299890{/,null,UNAVAILABLE}{/timeline}
2018-12-06 13:04:24,125 INFO server.AbstractConnector (AbstractConnector.java:doStop(318)) - Stopped ServerConnector@328af33d{HTTP/1.1,[http/1.1]}{0.0.0.0:8198}
2018-12-06 13:04:24,128 INFO handler.ContextHandler (ContextHandler.java:doStop(910)) - Stopped o.e.j.s.ServletContextHandler@7d3e8655{/static,jar:file:/usr/hdp/3.0.0.0-1634/hadoop-yarn/hadoop-yarn-common-3.1.0.3.0.0.0-1634.jar!/webapps/static,UNAVAILABLE}
2018-12-06 13:04:24,128 INFO handler.ContextHandler (ContextHandler.java:doStop(910)) - Stopped o.e.j.s.ServletContextHandler@7dfd3c81{/logs,file:///var/log/hadoop-yarn/yarn/,UNAVAILABLE}
2018-12-06 13:04:24,142 INFO storage.HBaseTimelineReaderImpl (HBaseTimelineReaderImpl.java:serviceStop(108)) - closing the hbase Connection
2018-12-06 13:04:24,143 INFO zookeeper.ReadOnlyZKClient (ReadOnlyZKClient.java:close(342)) - Close zookeeper connection 0x4d465b11 to examples.foodscience-01.de:2181,examples.foodscience-02.de:2181,examples.foodscience-03.de:2181
2018-12-06 13:04:24,143 WARN storage.HBaseTimelineReaderImpl (HBaseTimelineReaderImpl.java:run(183)) - Got failure attempting to read from timeline storage, assuming HBase down
java.io.UncheckedIOException: java.io.InterruptedIOException
at org.apache.hadoop.hbase.client.ResultScanner$1.hasNext(ResultScanner.java:55)
at org.apache.hadoop.yarn.server.timelineservice.storage.reader.TimelineEntityReader.readEntities(TimelineEntityReader.java:283)
at org.apache.hadoop.yarn.server.timelineservice.storage.HBaseTimelineReaderImpl$HBaseMonitor.run(HBaseTimelineReaderImpl.java:174)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.InterruptedIOException
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:246)
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:58)
at org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.callWithoutRetries(RpcRetryingCallerImpl.java:192)
at org.apache.hadoop.hbase.client.ClientScanner.call(ClientScanner.java:269)
at org.apache.hadoop.hbase.client.ClientScanner.loadCache(ClientScanner.java:437)
at org.apache.hadoop.hbase.client.ClientScanner.nextWithSyncCache(ClientScanner.java:312)
at org.apache.hadoop.hbase.client.ClientScanner.next(ClientScanner.java:597)
at org.apache.hadoop.hbase.client.ConnectionImplementation.locateRegionInMeta(ConnectionImplementation.java:834)
at org.apache.hadoop.hbase.client.ConnectionImplementation.locateRegion(ConnectionImplementation.java:732)
at org.apache.hadoop.hbase.client.RpcRetryingCallerWithReadReplicas.getRegionLocations(RpcRetryingCallerWithReadReplicas.java:325)
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:153)
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:58)
at org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.callWithoutRetries(RpcRetryingCallerImpl.java:192)
at org.apache.hadoop.hbase.client.ClientScanner.call(ClientScanner.java:269)
at org.apache.hadoop.hbase.client.ClientScanner.loadCache(ClientScanner.java:437)
at org.apache.hadoop.hbase.client.ClientScanner.nextWithSyncCache(ClientScanner.java:312)
at org.apache.hadoop.hbase.client.ClientScanner.next(ClientScanner.java:597)
at org.apache.hadoop.hbase.client.ResultScanner$1.hasNext(ResultScanner.java:53)
... 9 more
2018-12-06 13:04:24,153 INFO zookeeper.ReadOnlyZKClient (ReadOnlyZKClient.java:close(342)) - Close zookeeper connection 0x5b7a5baa to examples.foodscience-01.de:2181,examples.foodscience-02.de:2181,examples.foodscience-03.de:2181
2018-12-06 13:04:24,155 INFO reader.TimelineReaderServer (LogAdapter.java:info(51)) - SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down TimelineReaderServer at examples.foodscience-01.de/163.49.39.115
I don't know why this error appears when starting the Timeline Service. How can this be fixed?
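Since the connection to examples.foodscience-01.de:17020 is refused, the embedded ats-hbase region server that Timeline Service v2 reads from is most likely not running. Assuming the default embedded ats-hbase deployment, one way to confirm this would be:
# is the region server port reachable at all?
nc -vz examples.foodscience-01.de 17020
# is the ats-hbase YARN service application still running?
yarn application -list -appStates RUNNING | grep -i ats-hbase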