Member since: 04-10-2020
Posts: 9
Kudos Received: 0
Solutions: 0
05-07-2020
08:52 AM
Can anyone help me figure out what is wrong in my configuration that prevents the datanode and namenode from connecting?
05-04-2020
05:16 PM
It is still unresolved.
04-24-2020
09:03 AM
I downloaded the BIN file from the URL in my initial message and followed the subsequent steps to complete the installation. After the installer finished, I opened the Cloudera Manager web UI and completed the cluster installation as root. The installation then failed at the Oozie step; checking the logs, I found that the HDFS namenode and datanode were not connecting to each other, and the process stalled there. It seems the namenode and datanode never connect to each other, and I get the error I mentioned in my initial message.
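For reference, the steps I followed after downloading were roughly the standard ones from the Cloudera documentation (commands reproduced from memory, so treat them as approximate):

# Make the downloaded installer executable and run it with root privileges
chmod u+x cloudera-manager-installer.bin
sudo ./cloudera-manager-installer.bin

# Once it finishes, the Cloudera Manager web UI listens on port 7180
# (default credentials admin/admin):
#   http://localhost:7180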
04-23-2020
09:27 AM
Any help is highly appreciated. The issue is still unresolved.
04-19-2020
09:06 PM
I am using Ubuntu 14.04. I installed from the BIN file downloaded using the link below. I am running on my laptop, with no multiple clients, though I did not check the Single User Mode box while installing the cluster. I have already shared the hosts file. I am now reinstalling the OS by formatting the partition and installing again. https://archive.cloudera.com/cm5/installer/latest/cloudera-manager-installer.bin
04-19-2020
10:14 AM
That will create a new table, but the requirement is to show the results of the SQL query tab-delimited. If we create a new table as \t-delimited and then run select * from that table, wouldn't it still display with the default "|" delimiter?
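If the "|" comes from Beeline's default table-style display (an assumption, since the client was not specified), the delimiter is a client display setting rather than part of the table, so it can be changed per session without recreating anything; the JDBC URL and file names below are placeholders:

# Beeline renders a "|"-bordered table by default; tsv2 emits plain
# tab-separated rows. The URL and file names are placeholders.
beeline -u jdbc:hive2://localhost:10000 \
        --outputformat=tsv2 \
        --showHeader=false \
        -f queries.sql > results.tsv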
04-19-2020
10:08 AM
I installed Cloudera as a single node on my laptop and completed all the steps as per the documents. The namenode and datanode appear not to be connected. I am also getting the error below when trying to put a file into HDFS.
20/04/19 12:58:49 WARN hdfs.DFSClient: DataStreamer Exception
org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /user/hive/test.txt._COPYING_ could only be replicated to 0 nodes instead of minReplication (=1). There are 0 datanode(s) running and no node(s) are excluded in this operation.
    at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1723)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:3508)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:694)
    at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.addBlock(AuthorizationProviderProxyClientProtocol.java:219)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:507)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2278)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2274)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1924)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2272)

    at org.apache.hadoop.ipc.Client.call(Client.java:1504)
    at org.apache.hadoop.ipc.Client.call(Client.java:1441)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:231)
    at com.sun.proxy.$Proxy16.addBlock(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:425)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:258)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:104)
    at com.sun.proxy.$Proxy17.addBlock(Unknown Source)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1875)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1671)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:790)
put: File /user/hive/test.txt._COPYING_ could only be replicated to 0 nodes instead of minReplication (=1). There are 0 datanode(s) running and no node(s) are excluded in this operation.
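The "0 datanode(s) running" part of the message suggests no datanode has registered with the namenode at all. A few checks that may help narrow this down, assuming the default CDH5 ports and package log locations:

# Does the namenode see any live datanodes?
sudo -u hdfs hdfs dfsadmin -report

# Is a DataNode JVM actually running?
sudo jps | grep -i datanode

# Is the datanode's data-transfer port listening? (50010 is the CDH5 default)
sudo netstat -tlnp | grep 50010

# If the process is up but not registering, the datanode log usually says why
# (the exact file name varies by install)
sudo ls /var/log/hadoop-hdfs/
sudo tail -n 100 /var/log/hadoop-hdfs/*DATANODE*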
Below are the hostname and /etc/hosts configuration:
arindam@arindam-pc-ubuntu:/etc/hadoop/conf$ hostname -f
arindam-pc-ubuntu
arindam@arindam-pc-ubuntu:/etc/hadoop/conf$ cat /etc/hosts
127.0.1.1 arindam-pc-ubuntu
127.0.0.1 localhost
# The following lines are desirable for IPv6 capable hosts
::1     ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
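One possible culprit on Ubuntu is the 127.0.1.1 entry above: Hadoop daemons advertise themselves by hostname, and when the hostname resolves to 127.0.1.1 the datanode and namenode can fail to reach each other. A possible fix is to map the hostname to the machine's real interface address instead; 192.168.1.10 below is a placeholder for the laptop's actual IP:

# After editing, /etc/hosts should look like this
# (substitute the real address shown by `ip addr` for 192.168.1.10):
#   127.0.0.1    localhost
#   192.168.1.10 arindam-pc-ubuntu
ip addr
sudo nano /etc/hosts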
Below are the details of hdfs-site.xml:
<!--Autogenerated by Cloudera Manager-->
<configuration>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:///dfs/nn</value>
  </property>
  <property>
    <name>dfs.namenode.servicerpc-address</name>
    <value>arindam-pc-ubuntu:8022</value>
  </property>
  <property>
    <name>dfs.https.address</name>
    <value>arindam-pc-ubuntu:50470</value>
  </property>
  <property>
    <name>dfs.https.port</name>
    <value>50470</value>
  </property>
  <property>
    <name>dfs.namenode.http-address</name>
    <value>arindam-pc-ubuntu:50070</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>
  <property>
    <name>dfs.blocksize</name>
    <value>134217728</value>
  </property>
  <property>
    <name>dfs.client.use.datanode.hostname</name>
    <value>true</value>
  </property>
  <property>
    <name>fs.permissions.umask-mode</name>
    <value>022</value>
  </property>
  <property>
    <name>dfs.namenode.acls.enabled</name>
    <value>false</value>
  </property>
  <property>
    <name>dfs.client.use.legacy.blockreader</name>
    <value>false</value>
  </property>
  <property>
    <name>dfs.client.read.shortcircuit</name>
    <value>false</value>
  </property>
  <property>
    <name>dfs.domain.socket.path</name>
    <value>/var/run/hdfs-sockets/dn</value>
  </property>
  <property>
    <name>dfs.client.read.shortcircuit.skip.checksum</name>
    <value>false</value>
  </property>
  <property>
    <name>dfs.client.domain.socket.data.traffic</name>
    <value>false</value>
  </property>
  <property>
    <name>dfs.datanode.hdfs-blocks-metadata.enabled</name>
    <value>true</value>
  </property>
</configuration>
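One thing that stands out for a single-node install is dfs.replication=3, which can never be satisfied with one datanode. The error above reports zero datanodes, so this is probably not the root cause, but it is worth verifying what value the client actually picks up:

# Print the effective replication factor seen by the HDFS client
hdfs getconf -confKey dfs.replication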
----------------------------------------------------------------------------------------
Below is core-site.xml
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://arindam-pc-ubuntu:8020</value>
  </property>
  <property>
    <name>fs.trash.interval</name>
    <value>1</value>
  </property>
  <property>
    <name>io.compression.codecs</name>
    <value>org.apache.hadoop.io.compress.DefaultCodec,org.apache.hadoop.io.compress.GzipCodec,org.apache.hadoop.io.compress.BZip2Codec,org.apache.hadoop.io.compress.DeflateCodec,org.apache.hadoop.io.compress.SnappyCodec,org.apache.hadoop.io.compress.Lz4Codec</value>
  </property>
  <property>
    <name>hadoop.security.authentication</name>
    <value>simple</value>
  </property>
  <property>
    <name>hadoop.security.authorization</name>
    <value>false</value>
  </property>
  <property>
    <name>hadoop.rpc.protection</name>
    <value>authentication</value>
  </property>
  <property>
    <name>hadoop.security.auth_to_local</name>
    <value>DEFAULT</value>
  </property>
  <property>
    <name>hadoop.proxyuser.oozie.hosts</name>
    <value>*</value>
  </property>
  <property>
    <name>hadoop.proxyuser.oozie.groups</name>
    <value>*</value>
  </property>
  <property>
    <name>hadoop.proxyuser.mapred.hosts</name>
    <value>*</value>
  </property>
  <property>
    <name>hadoop.proxyuser.mapred.groups</name>
    <value>*</value>
  </property>
  <property>
    <name>hadoop.proxyuser.flume.hosts</name>
    <value>*</value>
  </property>
  <property>
    <name>hadoop.proxyuser.flume.groups</name>
    <value>*</value>
  </property>
  <property>
    <name>hadoop.proxyuser.HTTP.hosts</name>
    <value>*</value>
  </property>
  <property>
    <name>hadoop.proxyuser.HTTP.groups</name>
    <value>*</value>
  </property>
  <property>
    <name>hadoop.proxyuser.hive.hosts</name>
    <value>*</value>
  </property>
  <property>
    <name>hadoop.proxyuser.hive.groups</name>
    <value>*</value>
  </property>
  <property>
    <name>hadoop.proxyuser.hue.hosts</name>
    <value>*</value>
  </property>
  <property>
    <name>hadoop.proxyuser.hue.groups</name>
    <value>*</value>
  </property>
  <property>
    <name>hadoop.proxyuser.httpfs.hosts</name>
    <value>*</value>
  </property>
  <property>
    <name>hadoop.proxyuser.httpfs.groups</name>
    <value>*</value>
  </property>
  <property>
    <name>hadoop.proxyuser.hdfs.groups</name>
    <value>*</value>
  </property>
  <property>
    <name>hadoop.proxyuser.hdfs.hosts</name>
    <value>*</value>
  </property>
  <property>
    <name>hadoop.proxyuser.yarn.hosts</name>
    <value>*</value>
  </property>
  <property>
    <name>hadoop.proxyuser.yarn.groups</name>
    <value>*</value>
  </property>
  <property>
    <name>hadoop.security.group.mapping</name>
    <value>org.apache.hadoop.security.ShellBasedUnixGroupsMapping</value>
  </property>
  <property>
    <name>hadoop.security.instrumentation.requires.admin</name>
    <value>false</value>
  </property>
  <property>
    <name>net.topology.script.file.name</name>
    <value>/etc/hadoop/conf.cloudera.yarn/topology.py</value>
  </property>
  <property>
    <name>io.file.buffer.size</name>
    <value>65536</value>
  </property>
  <property>
    <name>hadoop.ssl.enabled</name>
    <value>false</value>
  </property>
  <property>
    <name>hadoop.ssl.require.client.cert</name>
    <value>false</value>
    <final>true</final>
  </property>
  <property>
    <name>hadoop.ssl.keystores.factory.class</name>
    <value>org.apache.hadoop.security.ssl.FileBasedKeyStoresFactory</value>
    <final>true</final>
  </property>
  <property>
    <name>hadoop.ssl.server.conf</name>
    <value>ssl-server.xml</value>
    <final>true</final>
  </property>
  <property>
    <name>hadoop.ssl.client.conf</name>
    <value>ssl-client.xml</value>
    <final>true</final>
  </property>
</configuration>
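Since fs.defaultFS uses the hostname, it may also be worth confirming what address that hostname resolves to and whether the namenode RPC port is reachable on it; the commands below assume the standard HDFS client and netcat are installed:

# Which URI do clients use, and what does its hostname resolve to?
hdfs getconf -confKey fs.defaultFS
getent hosts arindam-pc-ubuntu

# Is the namenode RPC port (8020) reachable at that address?
nc -zv arindam-pc-ubuntu 8020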
Please let me know why this error occurs. It never seems to get fixed, and I am not sure why this issue appears when I have followed all the steps as per the Cloudera instructions.
04-10-2020
09:39 PM
Hi,
I am planning to run select queries saved in a .sql file. I want to run those queries in Hive and show the results of my select queries tab-delimited rather than with the default "|" delimiter. Please let me know how to display query results as tab-delimited in Hive.
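One approach that may already do what is needed: the Hive CLI separates columns with tabs by default when run non-interactively, so the results can simply be redirected to a file (queries.sql and results.tsv below are placeholder names for the .sql file and output mentioned above):

# hive -f prints query results tab-delimited by default when output
# is redirected; capture it as a .tsv file
hive -f queries.sql > results.tsv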
Labels:
- Apache Hive