Support Questions

Find answers, ask questions, and share your expertise

Datanode IPC port TIME_WAIT

Expert Contributor

Hello all,
I have many connections in TIME_WAIT on the DataNode IPC port 1019: more than 600 in TIME_WAIT and about 250 ESTABLISHED.
Is that normal?

I'm afraid this is the cause of the "index writer closed" errors in Solr (the index is stored on HDFS).
The servers are under light load and the DataNode JVM heap is not saturated.

I couldn't find any max-connection configuration for port 1019.
Any ideas?

 

Environment:

HDP 3.1.5.0-152 with HDFS 3.1.1

 

Thanks in advance

1 ACCEPTED SOLUTION

Expert Contributor

Hi @isoardi ,

 

Seeing sockets in TIME_WAIT state is normal; it is by design as sockets are closed. Unless you see tens of thousands of sockets in TIME_WAIT, which would exhaust the ephemeral ports on the host, they are fine. It is CLOSE_WAIT sockets you need to check for: they indicate that the application has not called close() on the socket.
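A quick way to see the breakdown is to count sockets by state on the IPC port. This is a sketch assuming `ss` and `awk` are available on the host, with port 1019 as in your setup:

```shell
# Count TCP sockets per state on the DataNode IPC port (1019 here).
# A large TIME_WAIT count is usually harmless; a growing CLOSE_WAIT
# count points at an application that never calls close().
ss -tan '( sport = :1019 or dport = :1019 )' \
  | awk 'NR > 1 { count[$1]++ } END { for (s in count) print s, count[s] }'
```

Re-run it a few times: TIME_WAIT entries should churn and expire on their own, while a CLOSE_WAIT count that only grows is worth investigating.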

 

You can refer to the RedHat article below for more information and for ways to clear TIME_WAIT sockets by reusing them.

 

https://access.redhat.com/solutions/24154
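As a sketch of the socket-reuse approach that article describes (assuming a Linux host; this tunable requires root and has caveats covered in the article, so review it before applying):

```shell
# Allow the kernel to reuse sockets in TIME_WAIT state for new
# outgoing connections (Linux-specific sysctl; requires root).
sysctl -w net.ipv4.tcp_tw_reuse=1

# To persist the setting across reboots, add this line to /etc/sysctl.conf:
#   net.ipv4.tcp_tw_reuse=1
```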

 

 


2 REPLIES

Expert Contributor

Hi @rki_ ,

Thanks for the explanation.

I had hoped I had found the cause of Solr's "index writer closed" error.

 

Thank you anyway
