I'm trying to create a generic TLS-over-TCP socket in C++ using OpenSSL. The socket will be used in programs that run a select loop and use non-blocking I/O.
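For context, the socket is set up roughly like the sketch below (error handling omitted; the helper name and the client-mode choice are just illustrative):

```cpp
#include <openssl/ssl.h>
#include <fcntl.h>

// A connected TCP fd is made non-blocking and wrapped in an SSL object
// that is then driven from the select loop.
SSL* make_nonblocking_ssl(SSL_CTX* ctx, int fd)
{
    // Put the underlying TCP socket into non-blocking mode.
    fcntl(fd, F_SETFL, fcntl(fd, F_GETFL, 0) | O_NONBLOCK);

    SSL* ssl = SSL_new(ctx);
    SSL_set_fd(ssl, fd);          // SSL reads/writes go through this fd
    SSL_set_connect_state(ssl);   // acting as the TLS client in this example
    return ssl;
}
```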
I'm concerned about the case where the underlying TCP socket becomes readable after the previous `SSL_get_error` call returned `SSL_ERROR_WANT_WRITE`. I can think of two situations where this may occur:
- The local application and remote application simultaneously decide to send large amounts of data. Both applications call `SSL_write` at the same time, and subsequent `SSL_get_error` calls in both applications return `SSL_ERROR_WANT_WRITE`. The TCP packets sent by both applications cross on the wire. The local application's TCP socket is now readable after the previous `SSL_get_error` call returned `SSL_ERROR_WANT_WRITE`.
- As above, except the remote OpenSSL library decides to perform SSL renegotiation in the `SSL_write` call, prior to writing any application data. This simply changes the meaning of the data received on the local application's TCP socket from encrypted application data to session renegotiation data.
In either case, how should the local application handle the data that has arrived on the socket? Should it:

- call `SSL_write`, since it is currently mid-write, or
- call `SSL_read`, as it would if the socket were idle?