
I'm designing a new protocol called DITP. It is a connection-oriented protocol that uses TCP as its transport layer. With common Internet protocols, once the TCP connection is established, the server starts by sending a greeting message, to which the client responds, eventually sending its first request.

I figured out I could save one round-trip time by inverting the initial protocol transaction: the client starts by sending the greeting, followed immediately by the first request.

The following graphic compares the timing of the two protocol transactions and shows how the inversion saves one round-trip time.

[Figure: Common protocol and DITP protocol comparison (source: disnetwork.info)]

You may want to read the following blog note for a more detailed explanation: http://www.disnetwork.info/1/post/2008/08/optimizing-ditp-connection-open.html
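
To make the inversion concrete, here is a minimal Python sketch of the client side; the greeting string, port, and framing are made up for illustration and are not the actual DITP wire format.

    import socket

    # Minimal sketch of a "client speaks first" connection open, with a
    # made-up greeting and framing (not the real DITP wire format).
    def open_and_request(host: str, port: int, request: bytes) -> bytes:
        with socket.create_connection((host, port)) as sock:
            # The greeting and the first request leave in the same outbound
            # segment instead of waiting for a server greeting.
            sock.sendall(b"DITP/1.0 HELLO\r\n" + request)
            # The server's greeting (or error) comes back together with the
            # response to the first request.
            return sock.recv(65536)

    # Hypothetical usage, assuming a DITP server listens on example.net:7000:
    # reply = open_and_request("example.net", 7000, b"GET item42\r\n")

Because the greeting and the first request share one outbound segment, the first useful response can arrive after a single round trip instead of two.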

I have two questions for the network programming experts of Stack Overflow:

  1. Is this assumption correct?

  2. Why don't common protocols use this?

This method could provide a significant performance optimization for long-distance connections where latency is high and connections are established frequently. HTTP would have been a good candidate.

EDIT: Oops, big mistake. HTTP already uses the optimized method: the client sends its request directly, and there is no greeting transaction as with SMTP. See the Wikipedia Hypertext Transfer Protocol page.
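
For reference, this is what HTTP's client-speaks-first behaviour looks like over a plain socket (example.com is just a stand-in host; any public web server would do):

    import socket

    # The request goes out immediately after the TCP handshake completes;
    # no server greeting is awaited.
    with socket.create_connection(("example.com", 80)) as sock:
        sock.sendall(
            b"GET / HTTP/1.1\r\n"
            b"Host: example.com\r\n"
            b"Connection: close\r\n"
            b"\r\n"
        )
        response = b""
        while chunk := sock.recv(4096):
            response += chunk

    print(response.split(b"\r\n", 1)[0].decode())  # status line, e.g. "HTTP/1.1 200 OK"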


5 Answers

1 vote

It isn't done, largely because:

a.) The client may need to know what version of the protocol the server uses.

b.) You won't even know whether you really are talking to a server that supports the protocol.

In short, it often makes sense to know what you're talking to before spewing data at it.
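
For concreteness, here is how a server might detect both problems from the very first bytes it receives when the client speaks first. This is a rough Python sketch with a made-up message format, not anything the DITP design actually specifies:

    # The message format (a "DITP/<version>" greeting on the first line) is
    # made up for illustration.
    SUPPORTED_VERSIONS = {"1.0", "1.1"}

    def process(request: bytes) -> bytes:
        # Placeholder for real request handling.
        return b"OK " + request + b"\r\n"

    def handle_first_segment(data: bytes) -> bytes:
        greeting, _, first_request = data.partition(b"\r\n")
        if not greeting.startswith(b"DITP/"):
            # Concern (b): the peer does not even speak the protocol.
            return b"ERR not a DITP client\r\n"
        version = greeting.split(b"/", 1)[1].split()[0].decode()
        if version not in SUPPORTED_VERSIONS:
            # Concern (a): a version mismatch is only discovered after the
            # client has already sent its first request.
            return b"ERR unsupported version\r\n"
        return b"DITP/1.1 OK\r\n" + process(first_request)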

1 vote

I wonder if this design might not be said to violate Postel's Law, since it assumes things about the receiver, and thereby about what is legal to send, before knowing anything about it.

I would at least expect this principle to be the reason most protocols are designed to spend a round trip finding out more about the other end before sending data that might not be understood at all.

0 votes

If delay is your main concern, you may want to look at LPT, a protocol that is specifically designed for connections with extremely long round-trip times.

When designing a new transport protocol, you should pay attention to congestion control and to what firewalls are going to do when they encounter packets of an unknown protocol.

0 votes

The design goals of protocols like HTTP and SMTP were not speed, but rather reliability under flaky physical network conditions and frugal bandwidth utilisation. Those conditions have largely changed now with better hardware.

Your design should be looked at in light of the network conditions you are bound to encounter, the reliability required, and the latency and bandwidth utilisation of your intended application.

0 votes
  1. In theory, this is correct.
  2. Common protocols don't use this because it's inefficient. The client would have to split the data streams so they would be distinguishable, and the server would have to take care of this, for example by packing each data piece in a container (XML, JSON, BitTorrent-like, you name it). And the container is just unnecessary data overhead, slowing down the transfer.

Why wouldn't one just open several TCP sockets and send separate requests over those multiple connections? No overhead there! Oh, this is already being done, e.g. by some modern web browsers. Use Wireshark or tcpdump to inspect the packets and see for yourself.

There's more to it than that. A TCP connection takes time to set up (SYN, some time, SYN+ACK, some time, ACK...). Someone thought it was a waste to tear down the connection after each request, so modern HTTP servers and clients use Connection: keep-alive to indicate that they wish to reuse the connection.
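
To illustrate the keep-alive point, here is a small example using Python's standard http.client module, which speaks HTTP/1.1 and reuses the connection by default (example.com is just a placeholder host):

    from http.client import HTTPConnection

    # Two requests over one TCP connection: HTTP/1.1 keeps the connection
    # alive by default, so the second request skips the SYN / SYN+ACK / ACK
    # handshake entirely.
    conn = HTTPConnection("example.com", 80)

    conn.request("GET", "/")
    first = conn.getresponse()
    first.read()                  # drain the body so the socket can be reused

    conn.request("GET", "/favicon.ico")
    second = conn.getresponse()
    second.read()

    conn.close()
    print(first.status, second.status)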

I am sorry, but while I think your ideas are great, you can already find them in RFCs. Keep thinking, though; I am sure one day you'll invent something brilliant. See, e.g., here for an optimized BitTorrent client.