Member since: 12-12-2017
Posts: 12
Kudos Received: 2
Solutions: 1

My Accepted Solutions
Title | Views | Posted |
---|---|---|
 | 17222 | 01-23-2018 01:25 PM |
01-28-2018 01:45 PM
Does anyone have any idea?
01-23-2018 02:42 PM
Not sure what you mean, but at least on my end I can view the whole error.
01-23-2018 02:10 PM
Hey Manmad, if I do what you suggested it will just upload the file to my local HDFS, not to the VM's HDFS. And in my version of HDFS you can't use the fs module; it doesn't exist.
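A minimal sketch of what targeting the VM's NameNode explicitly would look like (assuming the default NameNode RPC port 8020 and the sandbox.hortonworks.com hostname used below; -fs is the standard Hadoop generic option for choosing the default filesystem):

# Target the sandbox VM's HDFS explicitly rather than the local install
hdfs dfs -fs hdfs://sandbox.hortonworks.com:8020 -put /someRandomFile /tmp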
01-23-2018 01:25 PM
1 Kudo
After some time, with the issue showing up from time to time, I discovered it was an IP problem. Using VirtualBox fixed it because it uses a different default IP; the problem with the IP was caused by some restrictions set up on my computer by my company.
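For anyone checking whether they are hitting the same thing, a quick sketch of verifying resolution and reachability of the sandbox (the datanode port 50010 appears in the logs below; 8020 as the NameNode RPC port is an assumption based on the default):

# See what IP the sandbox hostname resolves to on your machine
ping -c 1 sandbox.hortonworks.com
# Check that the NameNode and DataNode ports are actually reachable
nc -zv sandbox.hortonworks.com 8020
nc -zv sandbox.hortonworks.com 50010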
01-23-2018 01:01 PM
I've been trying to fix this issue for a couple of days and have tried many things, but none seem to work. When I try to move any file from my local computer to the VM with

hdfs dfs -put /someRandomFile hdfs://sandbox.hortonworks.com/tmp

(in /etc/hosts, sandbox.hortonworks.com maps to 127.0.0.1), I get this error:

18/01/23 14:45:31 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
18/01/23 14:45:32 INFO hdfs.DataStreamer: Exception in createBlockOutputStream
java.io.IOException: Connection reset by peer
at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
at sun.nio.ch.IOUtil.read(IOUtil.java:197)
at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:380)
at org.apache.hadoop.net.SocketInputStream$Reader.performIO(SocketInputStream.java:57)
at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:118)
at java.io.FilterInputStream.read(FilterInputStream.java:83)
at java.io.FilterInputStream.read(FilterInputStream.java:83)
at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:398)
at org.apache.hadoop.hdfs.DataStreamer.createBlockOutputStream(DataStreamer.java:1698)
at org.apache.hadoop.hdfs.DataStreamer.nextBlockOutputStream(DataStreamer.java:1619)
at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:704)
18/01/23 14:45:32 WARN hdfs.DataStreamer: Abandoning BP-1691134265-172.17.0.2-1510324659694:blk_1073742750_1935
18/01/23 14:45:32 WARN hdfs.DataStreamer: Excluding datanode DatanodeInfoWithStorage[172.17.0.2:50010,DS-526760e8-383f-41d4-9009-32a6ade1405e,DISK]
18/01/23 14:45:32 WARN hdfs.DataStreamer: DataStreamer Exception
org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /tmp/.profile._COPYING_ could only be replicated to 0 nodes instead of minReplication (=1). There are 1 datanode(s) running and 1 node(s) are excluded in this operation.
at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1719)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getNewBlockTargets(FSNamesystem.java:3368)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:3292)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:850)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:504)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:640)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2351)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2347)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1866)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2347)
at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1483)
at org.apache.hadoop.ipc.Client.call(Client.java:1429)
at org.apache.hadoop.ipc.Client.call(Client.java:1339)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:227)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
at com.sun.proxy.$Proxy10.addBlock(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:440)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:409)
at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:163)
at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:155)
at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:346)
at com.sun.proxy.$Proxy11.addBlock(Unknown Source)
at org.apache.hadoop.hdfs.DataStreamer.locateFollowingBlock(DataStreamer.java:1809)
at org.apache.hadoop.hdfs.DataStreamer.nextBlockOutputStream(DataStreamer.java:1609)
at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:704)
put: File /tmp/.profile._COPYING_ could only be replicated to 0 nodes instead of minReplication (=1). There are 1 datanode(s) running and 1 node(s) are excluded in this operation.

I tried turning the firewall off, adding the dfs.client.use.datanode.hostname property, syncing the date and time, and more, but nothing seems to work. Does anyone know what may cause this issue?
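A note for readers: the excluded datanode above sits at 172.17.0.2, a container-internal address, so one plausible cause is the client dialing the datanode on an IP that is not routable from the host. A minimal sketch of retrying with the client resolving datanodes by hostname instead; dfs.client.use.datanode.hostname is the standard HDFS client property the poster mentions, passed here via the hdfs CLI's -D generic option:

# Retry the upload, asking the client to contact datanodes by hostname
# rather than by the internal IP (172.17.0.2) the NameNode reports
hdfs dfs -D dfs.client.use.datanode.hostname=true \
    -put /someRandomFile hdfs://sandbox.hortonworks.com/tmp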
Labels:
- Apache Hadoop
12-25-2017 08:32 AM
1 Kudo
@Edgar Orendain Working with Oracle VirtualBox fixed it for me. I'm pretty sure it's related to my environment automatically masking the default VMware IPs.
12-24-2017 08:33 AM
@Edgar Orendain No, I did not get past this issue.
12-24-2017 08:30 AM
Akshaya Nat, if I do what you suggested, a lot of the Hadoop components (for example HBase) don't work, so I'm not sure it's helpful.
12-21-2017 06:15 PM
I just installed a new sandbox and started the tutorial. I can't upload a new table (or perform any SELECT command), but if I use MR instead of Tez in Hive it works. After some digging around I saw that Tez doesn't succeed in creating an app (hive -hiveconf hive.root.logger=ALL,console), and I found log files in HDFS under /usr/admin/hive/jobs with the following content:
============================
Logs for Query 'use default'
============================
======================================================================================================================
Logs for Query 'select d.*, t.hours_logged, t.miles_logged
from drivers d join timesheet t
on d.driverId = t.driverId'
======================================================================================================================
INFO : Tez session hasn't been created yet. Opening session
ERROR : Failed to execute tez graph.
org.apache.tez.dag.api.SessionNotRunning: TezSession has already shutdown. Application application_1513849021521_0019 failed 2 times due to AM Container for appattempt_1513849021521_0019_000002 exited with exitCode: -1000
For more detailed output, check the application tracking page: http://sandbox-hdp.hortonworks.com:8088/cluster/app/application_1513849021521_0019 Then click on links to logs of each attempt.
Diagnostics: org.apache.hadoop.fs.ChecksumException: Checksum error: /hdp/apps/2.6.3.0-235/tez/tez.tar.gz at 41811968 exp: -826577146 got: -1058145232
Failing this attempt. Failing the application.
at org.apache.tez.client.TezClient.waitTillReady(TezClient.java:699)
at org.apache.hadoop.hive.ql.exec.tez.TezSessionState.open(TezSessionState.java:218)
at org.apache.hadoop.hive.ql.exec.tez.TezTask.updateSession(TezTask.java:286)
at org.apache.hadoop.hive.ql.exec.tez.TezTask.execute(TezTask.java:165)
at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:162)
at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:89)
at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1756)
at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1497)
at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1294)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1161)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1156)
at org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:197)
at org.apache.hive.service.cli.operation.SQLOperation.access$300(SQLOperation.java:76)
at org.apache.hive.service.cli.operation.SQLOperation$2$1.run(SQLOperation.java:255)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1866)
at org.apache.hive.service.cli.operation.SQLOperation$2.run(SQLOperation.java:266)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Thank you very much
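A note for readers: the Diagnostics line reports a ChecksumException reading /hdp/apps/2.6.3.0-235/tez/tez.tar.gz, i.e. the Tez archive stored in HDFS appears corrupt. A minimal sketch of checking and replacing it (the local source path is an assumption based on the standard HDP layout; confirm it on your sandbox):

# Read the archive end to end; HDFS verifies checksums on read,
# so this fails loudly if the stored file is corrupt
hdfs dfs -cat /hdp/apps/2.6.3.0-235/tez/tez.tar.gz > /dev/null

# If it fails, replace the archive from the local Tez install
# (local path below is an assumption; adjust to your layout)
hdfs dfs -rm /hdp/apps/2.6.3.0-235/tez/tez.tar.gz
hdfs dfs -put /usr/hdp/current/tez-client/lib/tez.tar.gz /hdp/apps/2.6.3.0-235/tez/

(The MR workaround mentioned above corresponds to set hive.execution.engine=mr; in the Hive shell.)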
12-21-2017 08:29 AM
How did you delete the network manager? Thanks.
12-12-2017 08:32 AM
Did anyone get to a solution? It's still happening.