Member since
01-19-2017
3679
Posts
632
Kudos Received
372
Solutions
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 752 | 06-04-2025 11:36 PM |
| | 1332 | 03-23-2025 05:23 AM |
| | 660 | 03-17-2025 10:18 AM |
| | 2385 | 03-05-2025 01:34 PM |
| | 1554 | 03-03-2025 01:09 PM |
05-12-2020
07:03 PM
Hello, I recently ran into a similar problem. It happens when I use Hive to insert data into my table. My cluster is HDP 2.7.2, a newly built cluster. When I check my NameNode log I see the errors below, but my active and standby NameNodes are both normal, with no problem at all:

```
2020-05-13 09:33:15,484 INFO ipc.Server (Server.java:run(2402)) - IPC Server handler 746 on 8020 caught an exception
java.nio.channels.ClosedChannelException
    at sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:270)
    at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:461)
    at org.apache.hadoop.ipc.Server.channelWrite(Server.java:2910)
    at org.apache.hadoop.ipc.Server.access$2100(Server.java:138)
    at org.apache.hadoop.ipc.Server$Responder.processResponse(Server.java:1223)
    at org.apache.hadoop.ipc.Server$Responder.doRespond(Server.java:1295)
    at org.apache.hadoop.ipc.Server$Connection.sendResponse(Server.java:2266)
    at org.apache.hadoop.ipc.Server$Connection.access$400(Server.java:1375)
    at org.apache.hadoop.ipc.Server$Call.sendResponse(Server.java:734)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2391)
2020-05-13 09:33:15,484 WARN ipc.Server (Server.java:processResponse(1273)) - IPC Server handler 28 on 8020, call org.apache.hadoop.hdfs.protocol.ClientProtocol.complete from 10.100.1.9:48350 Call#3590 Retry#0: output error
2020-05-13 09:33:15,485 INFO ipc.Server (Server.java:run(2402)) - IPC Server handler 28 on 8020 caught an exception
java.nio.channels.ClosedChannelException
    at sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:270)
    at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:461)
    at org.apache.hadoop.ipc.Server.channelWrite(Server.java:2910)
    at org.apache.hadoop.ipc.Server.access$2100(Server.java:138)
    at org.apache.hadoop.ipc.Server$Responder.processResponse(Server.java:1223)
    at org.apache.hadoop.ipc.Server$Responder.doRespond(Server.java:1295)
    at org.apache.hadoop.ipc.Server$Connection.sendResponse(Server.java:2266)
    at org.apache.hadoop.ipc.Server$Connection.access$400(Server.java:1375)
    at org.apache.hadoop.ipc.Server$Call.sendResponse(Server.java:734)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2391)
2020-05-13 09:33:15,484 WARN ipc.Server (Server.java:processResponse(1273)) - IPC Server handler 219 on 8020, call org.apache.hadoop.hdfs.protocol.ClientProtocol.addBlock from 10.100.1.12:52988 Call#3987 Retry#0: output error
2020-05-13 09:33:15,484 WARN ipc.Server (Server.java:processResponse(1273)) - IPC Server handler 176 on 8020, call org.apache.hadoop.hdfs.protocol.ClientProtocol.addBlock from 10.100.1.9:54838 Call#71 Retry#0: output error
2020-05-13 09:33:15,485 INFO ipc.Server (Server.java:run(2402)) - IPC Server handler 176 on 8020 caught an exception
java.nio.channels.ClosedChannelException
    at sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:270)
    at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:461)
    at org.apache.hadoop.ipc.Server.channelWrite(Server.java:2910)
    at org.apache.hadoop.ipc.Server.access$2100(Server.java:138)
    at org.apache.hadoop.ipc.Server$Responder.processResponse(Server.java:1223)
    at org.apache.hadoop.ipc.Server$Responder.doRespond(Server.java:1295)
    at org.apache.hadoop.ipc.Server$Connection.sendResponse(Server.java:2266)
    at org.apache.hadoop.ipc.Server$Connection.access$400(Server.java:1375)
    at org.apache.hadoop.ipc.Server$Call.sendResponse(Server.java:734)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2391)
2020-05-13 09:33:15,485 WARN ipc.Server (Server.java:processResponse(1273)) - IPC Server handler 500 on 8020, call org.apache.hadoop.hdfs.protocol.ClientProtocol.complete from 10.100.1.12:52988 Call#3988 Retry#0: output error
2020-05-13 09:33:15,485 INFO ipc.Server (Server.java:run(2402)) - IPC Server handler 219 on 8020 caught an exception
```
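A `ClosedChannelException` in the NameNode's IPC Responder generally means the client closed its connection before the server could write the response, often because the client-side RPC call timed out and gave up. One possible mitigation, assuming a client timeout is the cause (the value below is illustrative, not from this thread), is to raise the client RPC timeout in core-site.xml; the property name comes from Hadoop's core-default.xml:

```xml
<!-- core-site.xml (sketch): raise the client RPC timeout so slow
     NameNode responses are not abandoned; tune the value to your workload -->
<property>
  <name>ipc.client.rpc-timeout.ms</name>
  <value>120000</value>
</property>
```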
04-11-2020
03:19 AM
Hi,
NOTE: Parquet is hard-coded to write its temporary data under /tmp, even when the target directory is different.
Check /tmp and you will see the intermediate data there.
Regards
04-07-2020
01:42 PM
@SHADA Please note that this thread was closed. Can you open a new thread and attach the errors, logs, or a screenshot of the error you are encountering? Remember to be precise about the sandbox version and whether it runs on VMware, Docker, or VirtualBox. Tag me in the new thread.
04-02-2020
10:23 PM
It looks like an OU issue. The OU in AD and in Ranger should be the same for a group or a user.
04-02-2020
03:01 AM
Full Ambari installation steps on CentOS 7:

```shell
# Set the hostname and boot into the multi-user target
hostnamectl set-hostname your_hostname
systemctl get-default
systemctl set-default multi-user.target

# Add the machine to /etc/hosts: ip_of_machine name_of_machine
vim /etc/hosts

# Disable the firewall
systemctl disable firewalld

# Set HOSTNAME=machine_hostname and NETWORKING=yes
vim /etc/sysconfig/network

# Add vm.swappiness=10
vim /etc/sysctl.conf

# Change SELINUX=disabled
vim /etc/selinux/config

# Install and enable NTP
yum install ntp
systemctl enable ntpd
systemctl start ntpd

# Disable transparent huge pages at boot by adding to /etc/rc.local:
#   echo never > /sys/kernel/mm/transparent_hugepage/defrag
#   echo never > /sys/kernel/mm/transparent_hugepage/enabled
vim /etc/rc.local

# Install the Ambari repo; pick your favorite version, here 2.7.3
cd /etc/yum.repos.d
yum install wget
wget -nv http://public-repo-1.hortonworks.com/ambari/centos7/2.x/updates/2.7.3.0/ambari.repo -O /etc/yum.repos.d/ambari.repo
yum repolist

# Install Ambari
yum install ambari-server ambari-agent

# Set hostname=your_hostname, and in the [security] section add:
#   force_https_protocol=PROTOCOL_TLSv1_2
vim /etc/ambari-agent/conf/ambari-agent.ini

# Set up and start the server (don't enter advanced setup; install the JDK)
ambari-server setup
ambari-server start
```

Then open hostname:8080 (or machine_ip:8080) in your browser and enjoy. Please accept the answer after trying. Best regards.
04-02-2020
01:52 AM
You can try logging in as the admin user and restarting the DataNodes from the Actions menu on the Dashboard. That worked for me, and it may work for you too.
03-31-2020
04:33 AM
Hi Andreas Kühnert,
We are facing the same issue. We don't want to install HBase on our VMs, as we don't have the capacity to run HDFS and HBase on the same cluster. Is there any workaround for this?
Thanks in advance for the suggestion.
Ram
03-31-2020
03:06 AM
You should install ambari-server and ambari-agent on the first node, the one where you want to install the HDFS service, for example. On the other nodes, install ambari-agent only. Don't forget to change the hostname in ambari-agent.ini, and to update the hosts file with the IP and hostname of all machines.
03-26-2020
07:18 AM
Hi @Shelton, we have disabled Ranger authorization in Ambari and allowed Hive to run as the end user instead of the hive user. HiveServer2 is still not coming up:

```
2020-03-26 05:36:05,838 - call['/usr/hdp/current/zookeeper-client/bin/zkCli.sh -server <master node>:2181,<datanode1>:2181,<data node3>:2181 ls /hiveserver2 | grep 'serverUri=''] {}
2020-03-26 05:36:06,497 - call returned (1, 'Node does not exist: /hiveserver2')
2020-03-26 05:36:06,498 - Will retry 1 time(s), caught exception: ZooKeeper node /hiveserver2 is not ready yet. Sleeping for 10 sec(s)
```

Any clue on this?
03-25-2020
10:38 AM
Below are the steps we took to troubleshoot distcp:
1. It was not a problem with HDFS, Kerberos, or distcp itself, but with MapReduce.
2. We ran a sample MR job to test, and it failed with the following exception: Error: java.io.IOException: Initialization of all the collectors failed. Error in last collector was: java.io.IOException: Invalid "mapreduce.task.io.sort.mb": 3276. (The total amount of buffer memory to use while sorting files, in MB.) It was expecting a value less than 2048.
3. After changing this property, distcp ran smoothly.
I want to take a moment to thank Shelton for responding on time.
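As a sketch of the fix described above: `mapreduce.task.io.sort.mb` must stay below 2048, so it can be lowered in mapred-site.xml. The value 1024 here is an illustrative choice, not the value from the original post:

```xml
<!-- mapred-site.xml (sketch): sort buffer must be < 2048 MB -->
<property>
  <name>mapreduce.task.io.sort.mb</name>
  <value>1024</value>
</property>
```

The same override can also be passed per job on the command line, e.g. `hadoop distcp -D mapreduce.task.io.sort.mb=1024 <src> <dst>`.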