I am trying to get speaker labels through IBM watson speech to text api. In my final output I want it to display the transcript, the confidence and the speaker labels for the entire audio. My code is below:
import json
from os.path import join, dirname
from ibm_watson import SpeechToTextV1
from ibm_watson.websocket import RecognizeCallback, AudioSource
import threading
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator
import pandas as pd
authenticator = IAMAuthenticator('rXXXYYZZ')
service = SpeechToTextV1(authenticator=authenticator)
service.set_service_url('https://api.us-east.speech-to-text.watson.cloud.ibm.com')
models = service.list_models().get_result()
#print(json.dumps(models, indent=2))
model = service.get_model('en-US_BroadbandModel').get_result()
#print(json.dumps(model, indent=2))
with open(join(dirname('__file__'), 'testvoicejen.wav'),
'rb') as audio_file:
# print(json.dumps(
output = service.recognize(
audio=audio_file,
speaker_labels=True,
content_type='audio/wav',
#timestamps=True,
#word_confidence=True,
model='en-US_NarrowbandModel',
continuous=True).get_result(),
indent=2
df = pd.DataFrame([i for elts in output for alts in elts['results'] for i in alts['alternatives']])
However, the output of df is:
df
Out[22]:
timestamps ... transcript
0 [[thank, 3.88, 4.04], [you, 4.04, 4.13], [for,... ... thank you for calling my name is Britney and h...
1 [[thank, 30.21, 30.56], [you, 30.56, 30.74], [... ... thank you %HESITATION and then %HESITATION you..
As you can observe, I do get the transcript successfully, however, instead of speaker diarization or labels, I get timestamps. A speaker label would be something like the below:
from": 0.68,
"to": 1.19,
"speaker": 2
How do I get this?