Transcribe audio
Authorizations
Bearer authentication header of the form Bearer <token>, where <token> is your auth token.
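For example, the header can be built like this in Python (a minimal sketch; the token value is a placeholder):

```python
# Build the Authorization header from your auth token.
token = "YOUR_AUTH_TOKEN"  # placeholder; substitute your real token
headers = {"Authorization": f"Bearer {token}"}
```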
Body
The input audio file to transcribe. Common file formats such as mp3, flac, and wav are supported. Note that the audio will be resampled to 16kHz, downmixed to mono, and reformatted to 16-bit signed little-endian format before transcription. Pre-converting the file before sending it to the API can improve runtime performance.
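One way to pre-convert is to resample with ffmpeg before uploading. A minimal sketch, assuming ffmpeg is installed and using placeholder file names:

```python
import subprocess

# Resample to 16 kHz, downmix to mono, and encode as 16-bit signed
# little-endian PCM, matching the format the API converts to internally.
subprocess.run(
    [
        "ffmpeg", "-i", "input.mp3",
        "-ar", "16000",       # sample rate: 16 kHz
        "-ac", "1",           # channels: mono
        "-c:a", "pcm_s16le",  # codec: 16-bit signed little-endian PCM
        "output.wav",
    ],
    check=True,
)
```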
String name of the ASR model to use. Currently whisper-v3 is supported.
String name of the voice activity detection (VAD) model to use. Can be one of silero or whisperx-pyannet.
String name of the alignment model to use. Can be one of tdnn_ffn, mms_fa, or gentle.
The target language for transcription. The set of supported target languages can be found here.
The input prompt with which to prime transcription. This can be used, for example, to continue a prior transcription given new audio data.
Sampling temperature to use when decoding text tokens during transcription.
The format in which to return the response. Can be one of json, text, srt, verbose_json, or vtt.
The timestamp granularities to populate for this transcription. response_format must be set to "verbose_json" to use timestamp granularities. Either or both of word and segment are supported. If not present, defaults to segment.
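For instance, word-level timestamps could be requested along these lines. This is a sketch only: the endpoint URL, the multipart field names other than response_format, and the shape of the response are assumptions, not confirmed by this reference.

```python
import requests

url = "https://api.example.com/v1/audio/transcriptions"  # placeholder endpoint
headers = {"Authorization": "Bearer YOUR_AUTH_TOKEN"}

with open("speech.mp3", "rb") as f:
    resp = requests.post(
        url,
        headers=headers,
        files={"file": f},                      # assumed field name for the audio upload
        data={
            "model": "whisper-v3",
            # Timestamp granularities require verbose_json.
            "response_format": "verbose_json",
            "timestamp_granularities": "word",  # assumed field name and encoding
        },
    )

# Assumes word-level timestamps are returned under a "words" key.
for word in resp.json().get("words", []):
    print(word)
```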
Audio preprocessing mode. Currently supported:
none to skip audio preprocessing.
dynamic for arbitrary audio content with variable loudness.
soft_dynamic for speech-intense recordings such as podcasts and voice-overs.
bass_dynamic for boosting lower frequencies.
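Putting the body parameters together, a request for a podcast-style recording might look like this. Again a sketch: the endpoint URL and every field name except response_format are assumptions based on the descriptions above.

```python
import requests

url = "https://api.example.com/v1/audio/transcriptions"  # placeholder endpoint
headers = {"Authorization": "Bearer YOUR_AUTH_TOKEN"}

with open("podcast.mp3", "rb") as f:
    resp = requests.post(
        url,
        headers=headers,
        files={"file": f},                    # assumed field name for the audio upload
        data={
            "model": "whisper-v3",            # ASR model
            "vad_model": "silero",            # assumed field name for the VAD model
            "alignment_model": "tdnn_ffn",    # assumed field name for the alignment model
            "temperature": "0",               # deterministic decoding
            "response_format": "json",
            "preprocessing": "soft_dynamic",  # assumed field name; speech-intense audio
        },
    )
print(resp.json())
```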
Response