Deepgram

Deepgram offers a range of English-language voices for its text-to-speech API, each designed to produce natural-sounding speech in a variety of accents and speaking styles.

Deepgram states that its voices deliver human-like tone, rhythm, and emotion with latency under 250 ms, and that they are optimized for high-throughput applications.

Consult Deepgram's TTS models guide for more information and audio samples of the supported voices.

Voice IDs

Copy the voice ID from the Values column of Deepgram's Voice Selection reference, then prepend deepgram. to form the string used on SignalWire. For example: deepgram.aura-athena-en
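As a minimal sketch, the prefixing step above can be expressed in Python (the helper name is illustrative, not part of any SDK):

```python
def signalwire_voice(deepgram_voice_id: str) -> str:
    """Prepend the deepgram. engine prefix to a Deepgram voice ID."""
    return f"deepgram.{deepgram_voice_id}"

print(signalwire_voice("aura-athena-en"))  # deepgram.aura-athena-en
```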


Examples

Learn how to use Deepgram voices on the SignalWire platform.

Use the languages SWML method to set one or more voices for an AI agent.

```yaml
version: 1.0.0
sections:
  main:
    - ai:
        prompt:
          text: Have an open-ended conversation about flowers.
        languages:
          - name: English
            code: en-US
            voice: deepgram.aura-asteria-en
```
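SWML documents can also be written as JSON. As a sketch, the same script can be built programmatically as a Python dict and serialized, which mirrors the YAML structure above:

```python
import json

# The same SWML script as a Python dict; serializing it to JSON yields
# an equivalent document, since SWML accepts both YAML and JSON.
swml = {
    "version": "1.0.0",
    "sections": {
        "main": [
            {
                "ai": {
                    "prompt": {
                        "text": "Have an open-ended conversation about flowers."
                    },
                    "languages": [
                        {
                            "name": "English",
                            "code": "en-US",
                            "voice": "deepgram.aura-asteria-en",
                        }
                    ],
                }
            }
        ]
    },
}

print(json.dumps(swml, indent=2))
```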

Alternatively, use the say_voice parameter of the play SWML method to select a voice for basic TTS.

```yaml
version: 1.0.0
sections:
  main:
    - set:
        say_voice: "deepgram.aura-asteria-en"
    - play: "say:Greetings. This is the Asteria voice from Deepgram's Aura text-to-speech model."
```
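SignalWire fetches SWML scripts over HTTP when a call arrives. The following is a minimal sketch, using only Python's standard library, of a server that returns the script above; it assumes your SignalWire number is configured to request SWML from this server's URL, and the port and content type shown are placeholder choices, not requirements:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

# The SWML script to serve, as a YAML string.
SWML = """\
version: 1.0.0
sections:
  main:
    - set:
        say_voice: "deepgram.aura-asteria-en"
    - play: "say:Greetings. This is the Asteria voice from Deepgram's Aura text-to-speech model."
"""

class SWMLHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Respond to SignalWire's request with the SWML document.
        body = SWML.encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

# To run the sketch locally:
# HTTPServer(("0.0.0.0", 8080), SWMLHandler).serve_forever()
```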