We're newbies having trouble processing a UDP video stream using Netty 3.2.4. On different machines we see dropped bytes when using Netty. We have a small counter after Netty hands us the bytes, to see how many bytes are received, and the variance is more than plain UDP unreliability would account for. In our case, we also save the bytes to a file so we can play the video back; playing the file in VLC really illustrates the dropped bytes. (Packet sizes being sent were around 1000 bytes.)
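For concreteness, here is a minimal sketch of the kind of counting handler we mean, written against the Netty 3.x API; the class name and the capture-file step are placeholders for illustration, not our production code:

import java.util.concurrent.atomic.AtomicLong;

import org.jboss.netty.buffer.ChannelBuffer;
import org.jboss.netty.channel.ChannelHandlerContext;
import org.jboss.netty.channel.MessageEvent;
import org.jboss.netty.channel.SimpleChannelUpstreamHandler;

// Hypothetical handler: counts the bytes Netty delivers so the total
// can be compared against what the sender actually transmitted.
public class ByteCountingHandler extends SimpleChannelUpstreamHandler {

    private final AtomicLong bytesReceived = new AtomicLong();

    @Override
    public void messageReceived(ChannelHandlerContext ctx, MessageEvent e)
            throws Exception {
        ChannelBuffer buffer = (ChannelBuffer) e.getMessage();
        bytesReceived.addAndGet(buffer.readableBytes());
        // ... write the buffer contents to the capture file here ...
        ctx.sendUpstream(e);
    }

    public long getBytesReceived() {
        return bytesReceived.get();
    }
}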

Questions

  • Is our understanding of the Netty API correct, i.e., that the AdaptiveReceiveBufferSizePredictor cannot be used for a UDP stream listener?
  • Is there a better explanation of the behavior we're seeing?
  • Is there a better solution? Is there a way to use an adaptive predictor with UDP?

What We've Tried

...
// OIO (blocking) datagram factory; see the bullets below for why we
// chose this over NioDatagramChannelFactory.
DatagramChannelFactory datagramChannelFactory =
        new OioDatagramChannelFactory(Executors.newCachedThreadPool());
connectionlessBootstrap = new ConnectionlessBootstrap(datagramChannelFactory);
...
// Bind to the multicast port and install a fixed-size buffer predictor.
datagramChannel = (DatagramChannel) connectionlessBootstrap.bind(
        new InetSocketAddress(multicastPort));
datagramChannel.getConfig().setReceiveBufferSizePredictor(
        new FixedReceiveBufferSizePredictor(2 * 1024 * 1024));
...
  • From the documentation and Google searches, I think the correct way to do this is to use an OioDatagramChannelFactory instead of a NioDatagramChannelFactory.

  • Additionally, while we couldn't find it explicitly stated, it appears you can only use a FixedReceiveBufferSizePredictor with the OioDatagramChannelFactory (versus an AdaptiveReceiveBufferSizePredictor). We found this out by looking at the source code and noticing that AdaptiveReceiveBufferSizePredictor's previousReceiveBufferSize() method was never called from the OioDatagramWorker class, whereas it was called from the NioDatagramWorker (see the interface sketch after this list).

  • So, we originally set the FixedReceiveBufferSizePredictor to 2*1024*1024 bytes.
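For context, here is the Netty 3.x ReceiveBufferSizePredictor interface (from org.jboss.netty.channel) as we read it; the comments paraphrase the feedback loop we traced through the workers and are our own interpretation of the 3.2.x source, not official documentation:

public interface ReceiveBufferSizePredictor {

    // Asked before each read: how large a buffer should the worker
    // allocate for the next datagram? Both transports call this.
    int nextReceiveBufferSize();

    // Feedback after a read: how many bytes actually arrived. The
    // adaptive predictor adjusts its estimate here, but in 3.2.x we
    // only saw NioDatagramWorker invoke it; OioDatagramWorker never
    // does, so the adaptive predictor cannot adapt under OIO.
    void previousReceiveBufferSize(int previousReceiveBufferSize);
}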

Observed Behavior

  • Running on different machines (with different processing power), we're seeing a different number of bytes being taken in by Netty. In our case, we are streaming video via UDP, and we are able to use playback of the streamed bytes to diagnose the quality of the bytes read in (packet sizes being sent were around 1000 bytes).

  • We then experimented with different buffer sizes and found that 1024*1024 seemed to make things work better... but we really have no clue why.

  • In looking at how FixedReceiveBufferSizePredictor works, we realized that it simply causes a new buffer to be created each time a packet comes in. In our case, it would create a new buffer of 2*1024*1024 bytes whether the packet was 1000 bytes or 3 MB. Our packets were only 1000 bytes, so we didn't think that was our problem. Could any of this logic be causing a performance problem, for example the creation of a buffer each time a packet comes in? (See the sketch just below for the allocation pattern we mean.)
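To illustrate, this is a simplified paraphrase of the per-read allocation pattern we believe we saw in the 3.2.x OIO worker; it is our reconstruction for discussion, not the actual Netty source:

import java.io.IOException;
import java.net.DatagramPacket;
import java.net.DatagramSocket;

import org.jboss.netty.buffer.ChannelBuffer;
import org.jboss.netty.buffer.ChannelBuffers;
import org.jboss.netty.channel.ReceiveBufferSizePredictor;

// Hypothetical paraphrase: with a fixed predictor, every datagram costs
// a fresh allocation of the full predicted size, regardless of how
// small the packet actually is.
public class FixedBufferReadLoop {

    static ChannelBuffer readOne(DatagramSocket socket,
                                 ReceiveBufferSizePredictor predictor)
            throws IOException {
        // FixedReceiveBufferSizePredictor always returns the same value,
        // e.g. 2*1024*1024, so this allocates 2 MB per 1000-byte packet.
        byte[] buf = new byte[predictor.nextReceiveBufferSize()];
        DatagramPacket packet = new DatagramPacket(buf, buf.length);
        socket.receive(packet); // blocks in the OIO transport
        return ChannelBuffers.wrappedBuffer(buf, 0, packet.getLength());
    }
}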

Our Workaround

We then thought about ways to make the buffer size dynamic, but realized we couldn't use the AdaptiveReceiveBufferSizePredictor as noted above. We experimented and created our own MyAdaptiveReceiveBufferSizePredictor, along with the accompanying MyOioDatagramChannelFactory, *Channel, *ChannelFactory, *PipelineSink, and *Worker classes (which eventually call the MyAdaptiveReceiveBufferSizePredictor). The predictor simply doubles or halves the buffer size based on the size of the last packet. This seemed to improve things; a sketch of the predictor logic follows.
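Roughly, the predictor looks like the following; this is a minimal sketch of the double-or-halve idea under the Netty 3.x ReceiveBufferSizePredictor interface, with the bounds and initial size chosen arbitrarily for illustration:

import org.jboss.netty.channel.ReceiveBufferSizePredictor;

// Sketch of a double-or-halve predictor: grow when the last packet
// filled the buffer, shrink when it used less than half of it.
public class MyAdaptiveReceiveBufferSizePredictor
        implements ReceiveBufferSizePredictor {

    private static final int MIN_SIZE = 1024;            // illustrative floor
    private static final int MAX_SIZE = 2 * 1024 * 1024; // illustrative cap

    private int nextSize = 64 * 1024;                    // illustrative start

    @Override
    public int nextReceiveBufferSize() {
        return nextSize;
    }

    @Override
    public void previousReceiveBufferSize(int previousReceiveBufferSize) {
        if (previousReceiveBufferSize >= nextSize) {
            // The last datagram filled the buffer: double the next
            // allocation, up to the cap.
            nextSize = Math.min(nextSize * 2, MAX_SIZE);
        } else if (previousReceiveBufferSize < nextSize / 2) {
            // The last datagram used less than half the buffer: halve
            // the next allocation, down to the floor.
            nextSize = Math.max(nextSize / 2, MIN_SIZE);
        }
    }
}

The modified *Worker feeds the actual packet length back via previousReceiveBufferSize() after each read, which is the call the stock OioDatagramWorker omits.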


1 Answer


I'm not quite sure what causes your performance issues, but I found this thread.
It might be caused by the creation of ChannelBuffers for each incoming packet, in which case you'll have to wait for Milestone 4.0.0.