First off, has anyone done a performance comparison of throughput/latency between a gRPC client-server implementation and a WebSocket+protobuf client-server implementation, or at least something similar?
To that end, I am running the example Java helloworld gRPC client-server and comparing its response latency with that of a similar WebSocket client-server. Currently both client and server are running on my local machine.
The WebSocket server is a simple while loop. The gRPC server, on the other hand, uses an asynchronous execution model; I suspect it dispatches each client request to a separate thread, which adds processing overhead. For instance, the WebSocket response latency I measure is on the order of 6-7 ms, while the gRPC example shows about 600-700 ms, even after accounting for protobuf overhead.
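For reference, this is roughly how I am measuring the gRPC round trip: a blocking stub calling the stock helloworld Greeter service in a loop, with a warm-up phase before timing. The GreeterGrpc/HelloRequest classes are from the official helloworld example; the port, iteration counts, and warm-up are just my choices, and depending on the grpc-java version `usePlaintext()` may need a boolean argument.

```java
import io.grpc.ManagedChannel;
import io.grpc.ManagedChannelBuilder;
import io.grpc.examples.helloworld.GreeterGrpc;
import io.grpc.examples.helloworld.HelloReply;
import io.grpc.examples.helloworld.HelloRequest;

public class GrpcLatencyProbe {
    public static void main(String[] args) {
        ManagedChannel channel = ManagedChannelBuilder.forAddress("localhost", 50051)
                .usePlaintext()
                .build();
        GreeterGrpc.GreeterBlockingStub stub = GreeterGrpc.newBlockingStub(channel);

        HelloRequest request = HelloRequest.newBuilder().setName("latency-test").build();

        // Warm up: connection setup and JIT compilation dominate the first calls.
        for (int i = 0; i < 1_000; i++) {
            stub.sayHello(request);
        }

        // Measure steady-state round-trip latency.
        int iterations = 10_000;
        long start = System.nanoTime();
        for (int i = 0; i < iterations; i++) {
            HelloReply reply = stub.sayHello(request);
        }
        long elapsed = System.nanoTime() - start;
        System.out.printf("avg round-trip: %.3f ms%n", elapsed / 1e6 / iterations);

        channel.shutdownNow();
    }
}
```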
To make the comparison fair, is there a way to run the gRPC server synchronously? I want to eliminate the thread creation/dispatch and other internal overhead introduced by the asynchronous handling.
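From what I have found so far, `ServerBuilder.directExecutor()` seems to make the server run application callbacks on the transport thread instead of handing them off to a separate executor, which might be the closest thing to "synchronous". A minimal sketch of the helloworld server using it, assuming the standard generated GreeterGrpc classes (is this the right approach?):

```java
import io.grpc.Server;
import io.grpc.ServerBuilder;
import io.grpc.stub.StreamObserver;
import io.grpc.examples.helloworld.GreeterGrpc;
import io.grpc.examples.helloworld.HelloReply;
import io.grpc.examples.helloworld.HelloRequest;

public class DirectExecutorHelloServer {
    static class GreeterImpl extends GreeterGrpc.GreeterImplBase {
        @Override
        public void sayHello(HelloRequest req, StreamObserver<HelloReply> responseObserver) {
            HelloReply reply = HelloReply.newBuilder()
                    .setMessage("Hello " + req.getName())
                    .build();
            responseObserver.onNext(reply);
            responseObserver.onCompleted();
        }
    }

    public static void main(String[] args) throws Exception {
        Server server = ServerBuilder.forPort(50051)
                // directExecutor() runs handler callbacks on the transport thread
                // rather than dispatching them to a separate executor, which is the
                // hand-off I am trying to eliminate.
                .directExecutor()
                .addService(new GreeterImpl())
                .build()
                .start();
        server.awaitTermination();
    }
}
```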
I do understand that gRPC involves protobuf overhead that is not present in my WebSocket client-server example. However, I can account for that by measuring the protobuf processing overhead separately.
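To isolate that cost, my plan is to time serialization and parsing of the same HelloRequest message in a tight loop, something like the sketch below (the iteration counts are arbitrary):

```java
import com.google.protobuf.InvalidProtocolBufferException;
import io.grpc.examples.helloworld.HelloRequest;

public class ProtobufOverheadProbe {
    public static void main(String[] args) throws InvalidProtocolBufferException {
        HelloRequest request = HelloRequest.newBuilder().setName("overhead-test").build();

        // Warm up so JIT-compiled code is measured, not interpreter time.
        for (int i = 0; i < 100_000; i++) {
            HelloRequest.parseFrom(request.toByteArray());
        }

        int iterations = 1_000_000;
        long start = System.nanoTime();
        for (int i = 0; i < iterations; i++) {
            byte[] bytes = request.toByteArray();              // serialize
            HelloRequest copy = HelloRequest.parseFrom(bytes); // deserialize
        }
        long elapsed = System.nanoTime() - start;
        System.out.printf("avg serialize+parse: %.3f us%n", elapsed / 1e3 / iterations);
    }
}
```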
Also, if I cannot run the gRPC server synchronously, can I at least measure the thread dispatch/asynchronous processing overhead?
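My fallback idea is to pass a wrapping Executor to `ServerBuilder.executor(...)` that records the delay between task submission and task start, on the assumption that this delay approximates the dispatch overhead per callback. A rough sketch (the class name and the reporting are mine):

```java
import java.util.concurrent.Executor;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicLong;

public class DispatchTimingExecutor implements Executor {
    private final Executor delegate = Executors.newCachedThreadPool();
    private final AtomicLong totalDispatchNanos = new AtomicLong();
    private final AtomicLong tasks = new AtomicLong();

    @Override
    public void execute(Runnable task) {
        long submitted = System.nanoTime();
        delegate.execute(() -> {
            // Time between submission and the worker thread actually starting
            // approximates the dispatch/hand-off overhead for this callback.
            totalDispatchNanos.addAndGet(System.nanoTime() - submitted);
            tasks.incrementAndGet();
            task.run();
        });
    }

    public void report() {
        long n = tasks.get();
        if (n > 0) {
            System.out.printf("avg dispatch delay: %.3f us over %d tasks%n",
                    totalDispatchNanos.get() / 1e3 / n, n);
        }
    }
}
```

I would wire it in with something like `ServerBuilder.forPort(50051).executor(new DispatchTimingExecutor()).addService(...)`, then call `report()` after the run. Is that a reasonable way to measure it?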
I am relatively new to Java, so pardon my ignorance.