Some speech-to-text services, such as Google Speech-to-Text, offer speaker differentiation via diarization, which attempts to identify and separate multiple speakers in a single audio recording. This is often needed when several speakers in a meeting room share a single microphone.
Is there an established algorithm and implementation for measuring the correctness of speaker separation?
This would be used in conjunction with Word Error Rate (WER), which is commonly used to evaluate the correctness of the baseline transcription.
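For context, the WER baseline referred to above is conventionally computed as the word-level Levenshtein (edit) distance between the reference and hypothesis transcripts, divided by the number of words in the reference. A minimal self-contained sketch (function name and sentences are illustrative, not from any particular library):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word Error Rate: (substitutions + deletions + insertions) / reference length,
    computed as a word-level Levenshtein distance via dynamic programming."""
    ref = reference.split()
    hyp = hypothesis.split()
    # d[i][j] = minimum edits to turn ref[:i] into hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # deleting all i reference words
    for j in range(len(hyp) + 1):
        d[0][j] = j  # inserting all j hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(
                d[i - 1][j] + 1,         # deletion
                d[i][j - 1] + 1,         # insertion
                d[i - 1][j - 1] + cost,  # substitution (or match)
            )
    return d[len(ref)][len(hyp)] / len(ref)

print(wer("the quick brown fox", "the quick brown dog"))  # 0.25 (1 substitution / 4 words)
```

Any diarization-correctness metric would presumably need an analogous alignment step, but between speaker-labeled time segments rather than word sequences.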