Investing.com -- OpenAI has unveiled a suite of new audio models designed to power voice agents with improved speech-to-text and text-to-speech capabilities. The models, which build on OpenAI's previous agent technologies such as Operator and Deep Research, are now available to developers worldwide.
The new speech-to-text models, gpt-4o-transcribe and gpt-4o-mini-transcribe, set new accuracy benchmarks even under challenging conditions such as background noise, strong accents, and varying speech speeds. They achieve lower word error rates (WER) than the existing Whisper models, making them well suited to applications like call centers and meeting transcription.
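For developers, the transcription call mirrors the existing Whisper endpoint. The sketch below assumes the official OpenAI Python SDK, an OPENAI_API_KEY in the environment, and a placeholder local file named meeting.mp3:

```python
# Minimal transcription sketch; "meeting.mp3" is a placeholder and
# error handling is omitted for brevity.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

with open("meeting.mp3", "rb") as audio_file:
    transcript = client.audio.transcriptions.create(
        model="gpt-4o-transcribe",  # or "gpt-4o-mini-transcribe" for lower cost and latency
        file=audio_file,
    )

print(transcript.text)
```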
For text-to-speech, the new gpt-4o-mini-tts model offers unprecedented "steerability," allowing developers to specify not just what is said but how it is spoken. Developers can now create voice agents that sound like sympathetic customer service representatives or expressive storytellers, though output is currently limited to a set of preset synthetic voices.
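Steering is expressed as a natural-language instruction alongside the text itself. A minimal sketch, again assuming the OpenAI Python SDK; the "coral" preset voice and the instruction wording here are illustrative choices, not OpenAI's examples:

```python
# Steerable text-to-speech sketch; the voice name and instructions
# string are illustrative placeholders.
from openai import OpenAI

client = OpenAI()

with client.audio.speech.with_streaming_response.create(
    model="gpt-4o-mini-tts",
    voice="coral",  # one of the preset synthetic voices
    input="Your refund has been processed and should arrive in 3-5 business days.",
    instructions="Speak like a warm, sympathetic customer service representative.",
) as response:
    response.stream_to_file("reply.mp3")  # write the synthesized audio to disk
```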
The audio models are built on the GPT-4o and GPT-4o-mini architectures and were extensively pretrained on specialized audio-focused datasets. OpenAI has also refined its distillation techniques, which transfer knowledge from larger models into smaller, more efficient ones.
The models are accessible through OpenAI's API, with simplified integration paths for developers already building on its text-based models. OpenAI plans to keep improving these technologies while exploring custom voice options and expanding into other modalities, such as video, for more personalized multimodal experiences.
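In practice, that simplified integration means an existing text agent can be voice-enabled by chaining the three endpoints. The sketch below is one possible arrangement, not OpenAI's reference pattern; the file names, chat model, and prompts are placeholder assumptions:

```python
# Illustrative voice-agent loop: speech-to-text -> chat model -> text-to-speech.
# File names, the chat model, and all prompts are placeholders.
from openai import OpenAI

client = OpenAI()

# 1. Transcribe the caller's audio with the new speech-to-text model.
with open("caller_question.mp3", "rb") as f:
    question = client.audio.transcriptions.create(
        model="gpt-4o-transcribe",
        file=f,
    ).text

# 2. Generate a reply with the same chat model a text-based agent would use.
reply = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": "You are a concise support agent."},
        {"role": "user", "content": question},
    ],
).choices[0].message.content

# 3. Speak the reply back with a steerable preset voice.
with client.audio.speech.with_streaming_response.create(
    model="gpt-4o-mini-tts",
    voice="coral",
    input=reply,
    instructions="Calm, friendly support tone.",
) as speech:
    speech.stream_to_file("agent_reply.mp3")
```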