POST /audio/transcriptions
curl --request POST \
  --url https://api.fireworks.ai/inference/v1/audio/transcriptions \
  --header 'Authorization: Bearer <token>' \
  --header 'Content-Type: multipart/form-data' \
  --form 'file=<string>' \
  --form 'model=<string>' \
  --form 'language=<string>' \
  --form 'prompt=<string>' \
  --form 'response_format=<string>' \
  --form temperature=0.5
{
  "text": "<string>"
}
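A concrete invocation might look like the following sketch; the file name `audio.mp3` and the `FIREWORKS_API_KEY` environment variable are placeholders. Note that curl's `--form` needs an `@` prefix to upload a local file (rather than sending the literal string), and that curl sets the `multipart/form-data` Content-Type header, including the part boundary, automatically when `--form` is used.

```shell
# Sketch: transcribe a local audio file.
# audio.mp3 and FIREWORKS_API_KEY are placeholders.
curl --request POST \
  --url https://api.fireworks.ai/inference/v1/audio/transcriptions \
  --header "Authorization: Bearer $FIREWORKS_API_KEY" \
  --form file=@audio.mp3 \
  --form model=whisper-v3 \
  --form response_format=json
```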

Authorizations

Authorization
string
header
required

Bearer authentication header of the form Bearer <token>, where <token> is your auth token.

Body

multipart/form-data
file
string
required

The input audio file to transcribe. Common file formats such as mp3, flac, and wav are supported. Note that the audio will be resampled to 16 kHz, downmixed to mono, and reformatted to 16-bit signed little-endian format before transcription. Pre-converting the file before sending it to the API can improve runtime performance.
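Since the service resamples to 16 kHz mono 16-bit PCM regardless, converting locally first can reduce server-side work. A sketch using ffmpeg (the input and output file names are placeholders):

```shell
# Convert any input to 16 kHz, mono, 16-bit signed little-endian PCM WAV.
# input.mp3 / output.wav are placeholder names.
ffmpeg -i input.mp3 -ar 16000 -ac 1 -c:a pcm_s16le output.wav
```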

model
string
default: whisper-v3

String name of the ASR model to use. Currently "whisper-v3" is supported.

language
string | null

The target language for transcription. The set of supported target languages can be found here.

prompt
string | null

The input prompt with which to prime transcription. This can be used, for example, to continue a prior transcription given new audio data.

response_format
string
default: json

The format in which to return the response. Can be one of json, text, srt, verbose_json, or vtt.

temperature
number
default: 0

Sampling temperature to use when decoding text tokens during transcription.

Response

200 - application/json
text
string
required
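With the default `json` response format, the transcript is the `text` field of the response body. A minimal sketch of extracting it using Python's standard-library `json` module (the sample payload here is hypothetical, standing in for an actual API response):

```shell
# Hypothetical response body; extract the "text" field.
echo '{"text": "Hello world."}' |
  python3 -c 'import sys, json; print(json.load(sys.stdin)["text"])'
# prints: Hello world.
```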