On one end of the spectrum, you have TCP, which guarantees that packets arrive and that they arrive in order. It's also designed for the commodity Internet, with congestion control algorithms that "play nice" with other traffic. On the other end, you have UDP, which guarantees neither the arrival nor the ordering of packets, but lets you push data at a receiver with minimal overhead. Somewhere in the middle, you have reliable UDP-based protocols, such as UDT, that offer customizable congestion control algorithms and reliability, with greater speed and flexibility than TCP.
However, what I'm looking for is the capability to send large chunks of data over UDP (larger than UDP's 64 KB datagram limit), but without concern for the reliability of each individual datagram. The idea is that the large chunk is broken into datagrams of a specified size (<= 64,000 bytes), probably with some header data stuck on the front, and sent over the network. On the receiving side, these datagrams are read in and stored. If any datagram doesn't arrive, all of the datagrams associated with that transfer are simply thrown out by the receiver.
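To make the sending side concrete, here's roughly what I picture, as a rough Python sketch. The header layout (transfer id, sequence number, total count) and the MAX_PAYLOAD value are placeholders I made up for illustration, not part of any real protocol:

```python
import socket
import struct
import uuid

# Made-up header: 16-byte transfer id, 4-byte sequence number, 4-byte total datagram count.
HEADER = struct.Struct("!16sII")
MAX_PAYLOAD = 64000 - HEADER.size  # keep each datagram <= 64,000 bytes

def send_blob(data: bytes, addr: tuple) -> None:
    """Split one large blob into datagrams and fire them at the receiver, no retransmission."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    transfer_id = uuid.uuid4().bytes
    chunks = [data[i:i + MAX_PAYLOAD] for i in range(0, len(data), MAX_PAYLOAD)]
    for seq, chunk in enumerate(chunks):
        sock.sendto(HEADER.pack(transfer_id, seq, len(chunks)) + chunk, addr)
    sock.close()
```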
Most of the "reliable UDP" implementations try to maintain reliability of each datagram, but I'm only interested in the whole, and if I don't get the whole, it doesn't matter - throw it all away and wait for the next. I'd have to dig deeper, but it might be possible with custom congestion control algorithms in UDT. However, are there any protocols with this approach?