As per the Closing the Connection chapter of the WebSocket protocol specification (RFC 6455):
> To Close the WebSocket Connection, an endpoint closes the underlying TCP connection. An endpoint SHOULD use a method that cleanly closes the TCP connection, as well as the TLS session, if applicable, discarding any trailing bytes that may have been received. An endpoint MAY close the connection via any means available when necessary, such as when under attack.
>
> The underlying TCP connection, in most normal cases, SHOULD be closed first by the server, so that it holds the TIME_WAIT state and not the client (as this would prevent it from re-opening the connection for 2 maximum segment lifetimes (2MSL), while there is no corresponding server impact as a TIME_WAIT connection is immediately reopened upon a new SYN with a higher seq number). In abnormal cases (such as not having received a TCP Close from the server after a reasonable amount of time) a client MAY initiate the TCP Close. As such, when a server is instructed to Close the WebSocket Connection it SHOULD initiate a TCP Close immediately, and when a client is instructed to do the same, it SHOULD wait for a TCP Close from the server.
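In practice this means a well-behaved client sends its Close frame, then waits for the server's Close (and the server-side TCP close) rather than tearing the socket down itself, only closing abruptly after a timeout. Here is a minimal sketch of that behaviour using the built-in `java.net.http.WebSocket` client from Java 11+ (the `ws://localhost:8080/echo` URL, the `PoliteCloseClient` class name and the 10-second timeout are arbitrary assumptions for illustration):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.WebSocket;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CompletionStage;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class PoliteCloseClient {
    public static void main(String[] args) throws Exception {
        // Completes when the server's Close frame arrives.
        CompletableFuture<Void> serverClosed = new CompletableFuture<>();

        WebSocket ws = HttpClient.newHttpClient()
                .newWebSocketBuilder()
                .buildAsync(URI.create("ws://localhost:8080/echo"), // hypothetical endpoint
                        new WebSocket.Listener() {
                            @Override
                            public CompletionStage<?> onClose(WebSocket webSocket,
                                                              int statusCode, String reason) {
                                serverClosed.complete(null);
                                return null; // nothing more to send back
                            }
                        })
                .join();

        // The client announces the close at the WebSocket level...
        ws.sendClose(WebSocket.NORMAL_CLOSURE, "done").join();

        // ...but waits for the server's Close instead of dropping the socket itself.
        try {
            serverClosed.get(10, TimeUnit.SECONDS);
        } catch (TimeoutException e) {
            // The "abnormal case" from the spec: no Close from the server after
            // a reasonable amount of time, so the client MAY close abruptly.
            ws.abort();
        }
    }
}
```

Note that `sendClose()` only sends the client's Close frame; the actual TCP teardown is left to the server, matching the SHOULD in the quoted text above.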
So, unless you are closing the connection yourself, you need to check the endpoint's (server's) logs in order to determine the reason for the close.
It might also be the case that the endpoint is not properly configured for high loads and cannot accept a large number of concurrent connections, or that it gets overloaded in terms of CPU, RAM or network I/O, so it is worth checking these metrics, e.g. using the JMeter PerfMon Plugin.
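For the record, the PerfMon Metrics Collector needs its ServerAgent running on the machine under test. Assuming you have unpacked the agent archive there, starting it is typically just:

```sh
cd ServerAgent
./startAgent.sh    # startAgent.bat on Windows; listens on port 4444 by default
```

Once the agent is up, the PerfMon listener in your JMeter test plan can poll it for CPU, memory and network metrics during the load test.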