Member since: 08-02-2018
Posts: 46
Kudos Received: 1
Solutions: 1
My Accepted Solutions
Title | Views | Posted |
---|---|---|
 | 8748 | 08-09-2018 03:05 AM |
10-11-2021
08:25 AM
To resolve the issue, import the Ambari server certificate into the Ambari truststore, as follows:
STEP 1:
Get the certificate from the Ambari server:
echo | openssl s_client -showcerts -connect <AMBARI_HOST>:<AMBARI_HTTPS_PORT> 2>&1 | sed --quiet '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' > /tmp/ambari_certificate.crt
STEP 2:
Get the path of the Ambari truststore and the truststore password from the Ambari properties:
grep truststore /etc/ambari-server/conf/ambari.properties
In your ambari.properties, the relevant entries look like the following (the password is whatever value is set in your file):
ssl.trustStore.password=<refer to the value in ambari.properties>
ssl.trustStore.path=/etc/ambari-server/conf/ambari-server-truststore
STEP 3:
Import the certificate into the truststore found in Step 2:
keytool -importcert -file /tmp/ambari_certificate.crt -keystore <truststore-path-from-step-2>
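Optionally, you can verify the import before restarting; a minimal check (keytool prompts for the truststore password from Step 2 and lists all entries, among which the newly imported certificate should appear):
keytool -list -keystore <truststore-path-from-step-2>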
STEP 4:
Restart the Ambari server:
ambari-server restart
05-12-2020
07:03 PM
Hello, I recently encountered a similar problem. It happens when I use Hive to insert data into my table. My cluster is HDP 2.7.2 and it is newly built. When I check my NameNode log I find the errors below, but my active and standby NameNodes are normal; there is no problem with them at all.
2020-05-13 09:33:15,484 INFO ipc.Server (Server.java:run(2402)) - IPC Server handler 746 on 8020 caught an exception
java.nio.channels.ClosedChannelException
    at sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:270)
    at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:461)
    at org.apache.hadoop.ipc.Server.channelWrite(Server.java:2910)
    at org.apache.hadoop.ipc.Server.access$2100(Server.java:138)
    at org.apache.hadoop.ipc.Server$Responder.processResponse(Server.java:1223)
    at org.apache.hadoop.ipc.Server$Responder.doRespond(Server.java:1295)
    at org.apache.hadoop.ipc.Server$Connection.sendResponse(Server.java:2266)
    at org.apache.hadoop.ipc.Server$Connection.access$400(Server.java:1375)
    at org.apache.hadoop.ipc.Server$Call.sendResponse(Server.java:734)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2391)
2020-05-13 09:33:15,484 WARN ipc.Server (Server.java:processResponse(1273)) - IPC Server handler 28 on 8020, call org.apache.hadoop.hdfs.protocol.ClientProtocol.complete from 10.100.1.9:48350 Call#3590 Retry#0: output error
2020-05-13 09:33:15,485 INFO ipc.Server (Server.java:run(2402)) - IPC Server handler 28 on 8020 caught an exception
java.nio.channels.ClosedChannelException (same stack trace as above)
2020-05-13 09:33:15,484 WARN ipc.Server (Server.java:processResponse(1273)) - IPC Server handler 219 on 8020, call org.apache.hadoop.hdfs.protocol.ClientProtocol.addBlock from 10.100.1.12:52988 Call#3987 Retry#0: output error
2020-05-13 09:33:15,484 WARN ipc.Server (Server.java:processResponse(1273)) - IPC Server handler 176 on 8020, call org.apache.hadoop.hdfs.protocol.ClientProtocol.addBlock from 10.100.1.9:54838 Call#71 Retry#0: output error
2020-05-13 09:33:15,485 INFO ipc.Server (Server.java:run(2402)) - IPC Server handler 176 on 8020 caught an exception
java.nio.channels.ClosedChannelException (same stack trace as above)
2020-05-13 09:33:15,485 WARN ipc.Server (Server.java:processResponse(1273)) - IPC Server handler 500 on 8020, call org.apache.hadoop.hdfs.protocol.ClientProtocol.complete from 10.100.1.12:52988 Call#3988 Retry#0: output error
2020-05-13 09:33:15,485 INFO ipc.Server (Server.java:run(2402)) - IPC Server handler 219 on 8020 caught an exception
11-20-2018
02:45 PM
Thank you! It works for me.
10-25-2018
02:55 PM
Thank you so much @Akhil S Naik. I changed the mpack from HDF 3.1 to HDF 3.2, and my NiFi installation completed without upgrading HDP.
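For anyone following along, swapping in a newer management pack is typically done with ambari-server's mpack command; a hedged sketch (the tarball path is a placeholder for wherever you downloaded the HDF 3.2 mpack):
# Upgrade the registered mpack to the newer HDF version, then restart Ambari
ambari-server upgrade-mpack --mpack=/tmp/hdf-ambari-mpack-<version>.tar.gz --verbose
ambari-server restart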
10-18-2018
06:20 PM
Since I was hit by a cryptocurrency-mining virus that continuously submitted mining jobs to port 8088, I changed my YARN port to 8089, which solved it.
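For anyone applying the same workaround: the ResourceManager web UI port (8088 by default) is controlled by yarn.resourcemanager.webapp.address in yarn-site.xml; a minimal sketch, with a placeholder hostname:
<property>
  <name>yarn.resourcemanager.webapp.address</name>
  <value>rm-host.example.com:8089</value>
</property>
Note that changing the port is a stopgap: the underlying exposure is typically an unauthenticated YARN REST API reachable from the internet, so restricting access and enabling authentication is the more durable fix.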
10-17-2018
01:53 AM
I have posted the answer in a previous reply; could you please be more specific? Check this post, where I replied: http://community.cloudera.com/t5/Cloudera-Manager-Installation/Yarn-Node-Manager-unexpected-exists-occurring-after/m-p/79048#M14736
09-20-2018
03:56 PM
Please try one of the below:
1. In beeline, run: !connect 'jdbc:hive2://server1.abc.com:10000/default'
2. From the shell command line: beeline -u 'jdbc:hive2://server1.abc.com:10000/default'
See if that helps.
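If HiveServer2 requires credentials, Beeline also accepts a user name and password on the command line (a hedged example; the account below is a placeholder):
beeline -u 'jdbc:hive2://server1.abc.com:10000/default' -n your_user -p your_password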
07-27-2018
06:54 PM
1 Kudo
Ambari schedules a connection check to HiveServer2 (HS2) every 2 minutes: a Beeline client is executed to connect to the HS2 JDBC string, with a timeout on the client side. If Beeline is not able to connect within the timeout period, the Beeline process is killed. This shows your HS2 is not configured properly, as taking more than 60 seconds to connect is quite unusual. HS2 should connect within 4-30 seconds at most for it to be usable by external tools like Power BI, Alation, and Ambari Hive Views. Please read the following blogs in detail to learn how to debug the issue:
https://community.hortonworks.com/content/kbentry/202479/hiveserver2-connection-establishment-phase-in-deta.html
https://community.hortonworks.com/content/kbentry/202485/hiveserver2-configurations-deep-dive.html
Some steps to find where the bottleneck is (see the example at the end of this post):
1. Enable debug mode for Beeline.
2. Execute Beeline with the HS2 JDBC string; it will provide a detailed view of the time taken to connect to AD, ZooKeeper, HS2, and MySQL.
3. Tune HS2 parameters (start Tez session at initialization = true, disable database scan for each connection = true).
4. Bump up the heap of HS2.
This blog has all that you need:
https://community.hortonworks.com/content/kbentry/209789/making-hiveserver2-instance-handle-more-than-500-c.html
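To make steps 1 and 2 concrete, a minimal sketch (the JDBC URL is a placeholder; --verbose is a standard Beeline option, and time measures the overall connection cost):
# Time a single verbose connection attempt and trivial query against HS2
time beeline --verbose=true -u 'jdbc:hive2://your-hs2-host:10000/default' -e 'select 1;'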
02-24-2019
07:16 AM
Hi All, I'm planning to upgrade the cluster from HDP 2.6 to 3.0. Could you please provide me the steps to achieve this? I need to know the backup and restore steps in detail for the Hive, Ranger, and KMS databases before proceeding with the upgrade of the components. The cluster does not have HA or Kerberos.
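For context, this is roughly how I expect the database backups to look; a sketch assuming MySQL and the default database names (hive, ranger, rangerkms), which may differ in my environment:
# Dump each component database before starting the upgrade
mysqldump -u root -p hive > /backup/hive_backup.sql
mysqldump -u root -p ranger > /backup/ranger_backup.sql
mysqldump -u root -p rangerkms > /backup/rangerkms_backup.sql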
07-23-2018
12:13 PM
@kanna k You will have to use the Ambari REST API in conjunction with a custom application that you create to measure the total uptime of the system through restarts. You can also use Ambari Alerts + notifications to get push notifications when services are stopped or started. That way, you can have an email with a timestamp of when a service was stopped/started and calculate the total uptime based on those numbers.
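For example, a minimal sketch of polling a service's state through the REST API (assuming default admin credentials, port 8080, and placeholder cluster and host names; the state field toggles between STARTED and INSTALLED):
# Query the current state of one service (here HDFS) from Ambari
curl -u admin:admin 'http://ambari-host:8080/api/v1/clusters/MyCluster/services/HDFS?fields=ServiceInfo/state'
Your application can record a timestamp whenever the returned state changes and sum the STARTED intervals to obtain total uptime.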