I have a web application that uses WebSockets to communicate between the browser and the server. When serving over ws, everything works as intended. If I change the protocol to wss, things mostly work as expected (the majority of messages passed from client to server, or vice versa, are received), but I occasionally see one of the following errors in the Chrome console:
"Could not decode a text frame as UTF-8."
or
"Invalid frame header"
...at which point Chrome closes the connection.
I have observed this both when serving wss directly from the server (which runs on .NET and uses SuperWebSocket) and in a configuration where the server speaks plain ws and Apache's mod_proxy_wstunnel reverse-proxies it as wss. I have also set up a simple "echo" server under the same Apache configuration and don't observe the issue there, which leads me to believe there is something funny about the data we're passing to the SuperWebSocket API. (The messages that cause the error are valid UTF-8, and again, I don't see this issue when serving over ws.)
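To illustrate, here is a much-simplified sketch of the server-side send path in the proxied configuration. It is not the production code: the port number and message contents are placeholders, and the real server builds its messages from application state, but the SuperWebSocket calls shown are representative of what we use.

    // Simplified sketch only -- port and message contents are placeholders.
    // The server listens on plain ws; Apache's mod_proxy_wstunnel terminates
    // TLS in front of it when clients connect over wss.
    using SuperWebSocket;

    class SketchServer
    {
        static void Main()
        {
            var server = new WebSocketServer();
            server.Setup(8080); // plain ws; Apache handles the wss side

            server.NewMessageReceived += (session, message) =>
            {
                // We hand ordinary C# strings to Send(); SuperWebSocket
                // encodes them as UTF-8 text frames on the wire.
                session.Send("reply: " + message);
            };

            server.Start();
            System.Console.ReadLine();
            server.Stop();
        }
    }

Nothing on this side changes between the ws and wss configurations; only the Apache front end differs.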
I'm at a loss as to how a change of protocol could cause such an issue, which leads me to my question:
Are there cases where a WebSocket frame might be valid when sent without TLS but would become corrupted when sent with TLS?