
I have a high volume netty server that keeps consuming memory. Using jmap, I've tracked it down to the fact that pipelines just seem to keep growing and growing (along with nio sockets, etc). It is like the socket isn't ever disconnecting.

My initialization of the ServerBootstrap is:

    ServerBootstrap bootstrap = new ServerBootstrap(new NioServerSocketChannelFactory(coreThreads, workThreads, Runtime.getRuntime().availableProcessors()*2));
    bootstrap.setOption("child.keepAlive", false);
    bootstrap.setOption("child.tcpNoDelay", true);
    bootstrap.setPipelineFactory(new HttpChannelPipelineFactory(this, HttpServer.IdleTimer));
    bootstrap.bind(new InetSocketAddress(host, port));

coreThreads and workThreads are java.util.concurrent.Executors.newCachedThreadPool().

IdleTimer is private static Timer IdleTimer = new HashedWheelTimer();

My pipeline factory is:

    public ChannelPipeline getPipeline() throws Exception {
        ChannelPipeline pipeline = Channels.pipeline();
        pipeline.addLast("idletimer", new HttpIdleHandler(timer));
        pipeline.addLast("decoder", new HttpRequestDecoder());
        pipeline.addLast("aggregator", new HttpChunkAggregator(65536));
        pipeline.addLast("encoder", new HttpResponseEncoder());
        pipeline.addLast("chunkwriter", new ChunkedWriteHandler());
        pipeline.addLast("http.handler", handler);
        pipeline.addLast("http.closer", new HttpClose());
        return pipeline;
    }

HttpIdleHandler is essentially the stock idle handler from the examples, except that it uses the "all" idle state (idle on both read and write). It doesn't fire that often; the timeout is 500 milliseconds (half a second), and on timeout it closes the channel. HttpClose is a simple catch-all that closes the channel for anything that reaches the end of the pipeline, in case the main handler doesn't handle it. It executes very irregularly.
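For reference, the HttpClose tail handler described above might look like the following minimal sketch against the Netty 3.x API (only the name and purpose come from the question; the body is assumed):

```java
import org.jboss.netty.channel.ChannelHandlerContext;
import org.jboss.netty.channel.MessageEvent;
import org.jboss.netty.channel.SimpleChannelUpstreamHandler;

// Tail-of-pipeline safety net: close the channel for any message
// that the main handler did not consume.
public class HttpClose extends SimpleChannelUpstreamHandler {
    @Override
    public void messageReceived(ChannelHandlerContext ctx, MessageEvent e) {
        e.getChannel().close();
    }
}
```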

Once I've sent the response in my handler (derived from SimpleChannelUpstreamHandler), I close the channel regardless of the keep-alive setting. I've verified that channels are closing by adding a listener to the ChannelFuture returned by close(); in the listener, isSuccess() is true.
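That close-after-write step can be sketched like this (a hedged illustration, not the question's actual handler; buildResponse() is a hypothetical helper, and ChannelFutureListener.CLOSE is Netty 3.x's built-in listener that closes the channel once the write completes):

```java
import org.jboss.netty.channel.ChannelFuture;
import org.jboss.netty.channel.ChannelFutureListener;
import org.jboss.netty.channel.ChannelHandlerContext;
import org.jboss.netty.channel.MessageEvent;
import org.jboss.netty.channel.SimpleChannelUpstreamHandler;
import org.jboss.netty.handler.codec.http.HttpResponse;

public class ResponseHandler extends SimpleChannelUpstreamHandler {
    @Override
    public void messageReceived(ChannelHandlerContext ctx, MessageEvent e) {
        HttpResponse response = buildResponse(); // hypothetical helper
        // Write the response, then close regardless of keep-alive.
        ChannelFuture f = e.getChannel().write(response);
        f.addListener(ChannelFutureListener.CLOSE);
    }

    private HttpResponse buildResponse() {
        return null; // placeholder; a real handler builds the HttpResponse
    }
}
```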

Some examples from the jmap output (columns are rank, number of instances, size in bytes, classname):

 3:        147168        7064064  java.util.HashMap$Entry
 4:         90609        6523848  org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext
 6:         19788        3554584  [Ljava.util.HashMap$Entry;
 8:         49893        3193152  org.jboss.netty.handler.codec.http.HttpHeaders$Entry
11:         11326        2355808  org.jboss.netty.channel.socket.nio.NioAcceptedSocketChannel
24:         11326         996688  org.jboss.netty.handler.codec.http.HttpRequestDecoder
26:         22668         906720  org.jboss.netty.util.internal.LinkedTransferQueue
28:          5165         826400  [Lorg.jboss.netty.handler.codec.http.HttpHeaders$Entry;
30:         11327         815544  org.jboss.netty.channel.AbstractChannel$ChannelCloseFuture
31:         11326         815472  org.jboss.netty.channel.socket.nio.DefaultNioSocketChannelConfig
33:         12107         774848  java.util.HashMap
34:         11351         726464  org.jboss.netty.util.HashedWheelTimer$HashedWheelTimeout
36:         11327         634312  org.jboss.netty.channel.DefaultChannelPipeline
38:         11326         634256  org.jboss.netty.handler.timeout.IdleStateHandler$State
45:         10417         500016  org.jboss.netty.util.internal.LinkedTransferQueue$Node
46:          9661         463728  org.jboss.netty.util.internal.ConcurrentIdentityHashMap$HashEntry
47:         11326         453040  org.jboss.netty.handler.stream.ChunkedWriteHandler
48:         11326         453040  org.jboss.netty.channel.socket.nio.NioSocketChannel$WriteRequestQueue
51:         11326         362432  org.jboss.netty.handler.codec.http.HttpChunkAggregator
52:         11326         362432  org.jboss.netty.util.internal.ThreadLocalBoolean
53:         11293         361376  org.jboss.netty.handler.timeout.IdleStateHandler$AllIdleTimeoutTask
57:          4150         323600  [Lorg.jboss.netty.util.internal.ConcurrentIdentityHashMap$HashEntry;
58:          4976         318464  org.jboss.netty.handler.codec.http.DefaultHttpRequest
64:         11327         271848  org.jboss.netty.channel.SucceededChannelFuture
65:         11326         271824  org.jboss.netty.handler.codec.http.HttpResponseEncoder
67:         11326         271824  org.jboss.netty.channel.socket.nio.NioSocketChannel$WriteTask
73:          5370         214800  org.jboss.netty.channel.UpstreamMessageEvent
74:          5000         200000  org.jboss.netty.channel.AdaptiveReceiveBufferSizePredictor
81:          5165         165280  org.jboss.netty.handler.codec.http.HttpHeaders
84:          1562         149952  org.jboss.netty.handler.codec.http.DefaultCookie
96:          2048          98304  org.jboss.netty.util.internal.ConcurrentIdentityHashMap$Segment
98:          2293          91720  org.jboss.netty.buffer.BigEndianHeapChannelBuffer

What am I missing? Which thread is responsible for releasing its reference to the pipeline (or socket, or channel) so that the garbage collector can reclaim this memory? There appears to be some large hashtable holding on to them (I filtered several references to hashtable entries out of the list above).


1 Answer


Unless you have a reference to a Channel, ChannelPipeline, or ChannelHandlerContext in your application, they should become unreachable as soon as the connection is closed. Please double-check whether your application is holding a reference to one of them somewhere. An anonymous inner class is often a good suspect, but a precise answer isn't possible without the heap dump file.
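For example, a pattern like the following (a hypothetical, simplified sketch; the class and method names are illustrative, not taken from the question) keeps every channel, and therefore its pipeline, reachable whenever the cleanup step is skipped:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustration of the kind of retention described above: a handler that
// registers channels in a static map but never removes them keeps every
// pipeline strongly reachable even after the connection closes.
public class RetentionSketch {
    static final Map<Integer, Object> openChannels = new ConcurrentHashMap<Integer, Object>();

    static void onOpen(int channelId, Object channel) {
        openChannels.put(channelId, channel); // added on connect...
    }

    static void onClose(int channelId) {
        // ...but if this removal is missing, the map pins the channel
        // (and its pipeline) forever, matching the jmap picture above.
        openChannels.remove(channelId);
    }

    public static void main(String[] args) {
        onOpen(1, new Object());
        onClose(1);
        System.out.println(openChannels.size()); // prints 0 when cleanup runs
    }
}
```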