Server sends RST to client when TCP connections exceed ~65000

I have a high-load TCP application built with Java Netty that is expected to reach 300k concurrent TCP connections.

It runs perfectly on the test server and reaches 300k connections, but after deployment to the production server it can only support 65387 connections. Once that number is reached, clients start throwing "java.io.IOException: Connection reset by peer" exceptions. I have tried many times, and every time the connection count hits 65387, clients can no longer establish new connections.
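
For reference, the server side of such an application typically boils down to a Netty 4 bootstrap along the lines of the sketch below. This is not the code from the question, only an illustration with assumed values, showing where options such as SO_BACKLOG (whose effective value is capped by net.core.somaxconn, tuned further down) are configured.

    import io.netty.bootstrap.ServerBootstrap;
    import io.netty.channel.ChannelInitializer;
    import io.netty.channel.ChannelOption;
    import io.netty.channel.EventLoopGroup;
    import io.netty.channel.nio.NioEventLoopGroup;
    import io.netty.channel.socket.SocketChannel;
    import io.netty.channel.socket.nio.NioServerSocketChannel;

    public class HighLoadServer {
        public static void main(String[] args) throws InterruptedException {
            EventLoopGroup bossGroup = new NioEventLoopGroup(1);   // accepts new connections
            EventLoopGroup workerGroup = new NioEventLoopGroup();  // serves accepted channels
            try {
                ServerBootstrap b = new ServerBootstrap();
                b.group(bossGroup, workerGroup)
                 .channel(NioServerSocketChannel.class)
                 // Accept-queue length; the kernel caps this at net.core.somaxconn.
                 .option(ChannelOption.SO_BACKLOG, 4096)
                 .childOption(ChannelOption.SO_KEEPALIVE, true)
                 .childHandler(new ChannelInitializer<SocketChannel>() {
                     @Override
                     public void initChannel(SocketChannel ch) throws Exception {
                         // application handlers would be added to ch.pipeline() here
                     }
                 });
                // The capture below shows the server listening on the http port (80).
                b.bind(80).sync().channel().closeFuture().sync();
            } finally {
                bossGroup.shutdownGracefully();
                workerGroup.shutdownGracefully();
            }
        }
    }

Nothing in a bootstrap like this imposes a limit anywhere near 65k by itself; each connection costs one file descriptor plus socket buffers, so a ceiling at exactly 65387 normally points at an OS-level limit.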

The network capture is as follows (10.95.196.27 is the server, 10.95.196.29 is the client):

16822   12:26:12.480238 10.95.196.29    10.95.196.27    TCP 74  can-ferret > http [SYN] Seq=0 Win=14600 Len=0 MSS=1460 SACK_PERM=1 TSval=872641174 TSecr=0 WS=128
16823   12:26:12.480267 10.95.196.27    10.95.196.29    TCP 66  http > can-ferret [SYN, ACK] Seq=0 Ack=1 Win=2920 Len=0 MSS=1460 SACK_PERM=1 WS=1024
16824   12:26:12.480414 10.95.196.29    10.95.196.27    TCP 60  can-ferret > http [ACK] Seq=1 Ack=1 Win=14720 Len=0
16825   12:26:12.480612 10.95.196.27    10.95.196.29    TCP 54  http > can-ferret [FIN, ACK] Seq=1 Ack=1 Win=3072 Len=0
16826   12:26:12.480675 10.95.196.29    10.95.196.27    HTTP    94  Continuation or non-HTTP traffic
16827   12:26:12.480697 10.95.196.27    10.95.196.29    TCP 54  http > can-ferret [RST] Seq=1 Win=0 Len=0

After the third step of the handshake from the client, the server sends an RST packet to the client, which is why new connections abort with the exception.

The client-side exception stack trace is as follows:

16:42:05.826 [nioEventLoopGroup-1-15] WARN  i.n.channel.DefaultChannelPipeline - An exceptionCaught() event was fired, and it reached at the end of the pipeline. It usually means the last handler in the pipeline did not handle the exception.
java.io.IOException: Connection reset by peer
    at sun.nio.ch.FileDispatcherImpl.read0(Native Method) ~[na:1.7.0_25]
    at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39) ~[na:1.7.0_25]
    at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:225) ~[na:1.7.0_25]
    at sun.nio.ch.IOUtil.read(IOUtil.java:193) ~[na:1.7.0_25]
    at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:375) ~[na:1.7.0_25]
    at io.netty.buffer.PooledUnsafeDirectByteBuf.setBytes(PooledUnsafeDirectByteBuf.java:259) ~[netty-all-4.0.0.Beta3.jar:na]
    at io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:885) ~[netty-all-4.0.0.Beta3.jar:na]
    at io.netty.channel.socket.nio.NioSocketChannel.doReadBytes(NioSocketChannel.java:226) ~[netty-all-4.0.0.Beta3.jar:na]
    at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:72) ~[netty-all-4.0.0.Beta3.jar:na]
    at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:460) ~[netty-all-4.0.0.Beta3.jar:na]
    at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:424) ~[netty-all-4.0.0.Beta3.jar:na]
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:360) ~[netty-all-4.0.0.Beta3.jar:na]
    at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:103) ~[netty-all-4.0.0.Beta3.jar:na]
    at java.lang.Thread.run(Thread.java:724) ~[na:1.7.0_25]

There are no exceptions on the server side either.

I have tried tuning the sysctl settings below to support a huge number of connections, but it did not help (a quick way to verify these values at runtime is sketched after the list):

net.core.wmem_max = 33554432
net.ipv4.tcp_rmem = 4096 4096 33554432
net.ipv4.tcp_wmem = 4096 4096 33554432
net.ipv4.tcp_mem = 786432 1048576 26777216
net.ipv4.tcp_max_tw_buckets = 360000
net.core.netdev_max_backlog = 4096
vm.min_free_kbytes = 65536
vm.swappiness = 0
net.ipv4.ip_local_port_range = 1024 65535
net.ipv4.tcp_max_syn_backlog = 4096
net.netfilter.nf_conntrack_max = 3000000
net.nf_conntrack_max = 3000000
net.core.somaxconn = 327680
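
As a quick sanity check (not part of the original setup), the sketch below reads the relevant /proc entries from inside a JVM, to confirm the tuned values actually took effect and to watch how full the connection-tracking table gets while connections ramp up. The paths assume a Linux host with the nf_conntrack module loaded; entries that do not exist on a given kernel are simply reported as "n/a".

    import java.io.IOException;
    import java.nio.charset.StandardCharsets;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;

    public class KernelLimitCheck {
        // Read a single-value /proc file, or return "n/a" if it does not exist.
        private static String read(String path) {
            try {
                Path p = Paths.get(path);
                return Files.exists(p)
                        ? new String(Files.readAllBytes(p), StandardCharsets.US_ASCII).trim()
                        : "n/a";
            } catch (IOException e) {
                return "error: " + e.getMessage();
            }
        }

        public static void main(String[] args) {
            System.out.println("somaxconn           = " + read("/proc/sys/net/core/somaxconn"));
            System.out.println("nf_conntrack_max    = " + read("/proc/sys/net/netfilter/nf_conntrack_max"));
            System.out.println("nf_conntrack_count  = " + read("/proc/sys/net/netfilter/nf_conntrack_count"));
            System.out.println("ip_local_port_range = " + read("/proc/sys/net/ipv4/ip_local_port_range"));
        }
    }

If nf_conntrack_count stalls close to nf_conntrack_max right when the failures begin, the connection-tracking table (whose default maximum is often 65536) is the bottleneck rather than the application; if every value matches the tuned settings, the limit lies elsewhere.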

The maximum number of open file descriptors has been set to 999999:

linux-152k:~ # ulimit -n
999999
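
Note that ulimit -n in an interactive shell does not necessarily reflect the limit applied to the JVM process itself, for example when the service is started from an init script with different limits. A small sketch (again, assumed code rather than something from the question) that checks the limit and the current descriptor count from inside the process:

    import java.io.File;
    import java.io.IOException;
    import java.nio.charset.StandardCharsets;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.util.List;

    public class FdLimitCheck {
        public static void main(String[] args) throws IOException {
            // The per-process limit as seen by this JVM (may differ from the shell's ulimit -n).
            List<String> limits = Files.readAllLines(
                    Paths.get("/proc/self/limits"), StandardCharsets.US_ASCII);
            for (String line : limits) {
                if (line.startsWith("Max open files")) {
                    System.out.println(line);
                }
            }

            // Number of file descriptors currently open in this process.
            String[] fds = new File("/proc/self/fd").list();
            System.out.println("open fds: " + (fds == null ? "n/a" : fds.length));
        }
    }

If the process-level limit reads 999999 as expected, file descriptors can be ruled out as the cause of a ~65k ceiling.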

The OS is SUSE Linux Enterprise Server 11 SP2 with kernel 3.0.13:

linux-152k:~ # cat /etc/SuSE-release 
SUSE Linux Enterprise Server 11 (x86_64)
VERSION = 11
PATCHLEVEL = 2
linux-152k:~ # uname -a
Linux linux-152k 3.0.13-0.27-default #1 SMP Wed Feb 15 13:33:49 UTC 2012 (d73692b) x86_64 x86_64 x86_64 GNU/Linux.

dmesg shows no error messages, CPU and memory usage stay low, and everything looks fine, except that the server keeps resetting connections from clients.

We have a test server running SUSE Linux Enterprise Server 11 SP1 with kernel 2.6.32; it works well and supports up to 300k connections.

I suspect some kernel or security limit is causing this, but I cannot find it. Any suggestions, or any way to get debug information on why the server is sending the RST? Thanks.

Best answer
Santal, I just came across the following link, and it seems to answer your question:
What is the theoretical maximum number of open TCP connections that a modern Linux box can have

Broadly, the point made there is that the familiar ~64k figure is a per-source-IP limit on ephemeral ports for outgoing connections to a single destination, not a cap on how many connections a server can accept; on the server side the ceiling comes from file descriptors, memory, and kernel table sizes instead.
