Specify endpoint and API key
Using the OpenAI client
You can use the OpenAI client by initializing it with your Fireworks configuration:
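A minimal sketch, assuming the openai Python package (v1+) and the standard Fireworks endpoint; replace the placeholder key with your own:

```python
import openai

# Point the OpenAI client at the Fireworks endpoint instead of api.openai.com.
client = openai.OpenAI(
    base_url="https://api.fireworks.ai/inference/v1",
    api_key="<FIREWORKS_API_KEY>",
)
```

Using environment variables
The same configuration can be supplied through environment variables, which recent versions of the SDK read at construction time:

```python
# Assumes these variables are exported in your shell beforehand:
#   OPENAI_BASE_URL=https://api.fireworks.ai/inference/v1
#   OPENAI_API_KEY=<FIREWORKS_API_KEY>
import openai

client = openai.OpenAI()  # picks up OPENAI_BASE_URL and OPENAI_API_KEY
```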
Alternative approach
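If you're on the legacy (pre-1.0) openai package, a module-level configuration sketch (an assumption, since the SDK version isn't specified here):

```python
import openai

# Legacy (pre-1.0) openai package: configure the module globally.
openai.api_base = "https://api.fireworks.ai/inference/v1"
openai.api_key = "<FIREWORKS_API_KEY>"
```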
Usage
Use the OpenAI SDK as you normally would. Just ensure that the model parameter refers to one of the Fireworks models.
Completion
A simple completion API that doesn't modify the provided prompt in any way:
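A sketch using the client configured above (the model name is illustrative; substitute any Fireworks model):

```python
completion = client.completions.create(
    model="accounts/fireworks/models/llama-v3p1-8b-instruct",  # illustrative model name
    prompt="Say this is a test",
    max_tokens=32,
)
print(completion.choices[0].text)
```

Chat Completion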
Works best for models fine-tuned for conversation (e.g., llama*-chat variants):
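A sketch using the client configured above (again, the model name is illustrative):

```python
chat_completion = client.chat.completions.create(
    model="accounts/fireworks/models/llama-v3p1-8b-instruct",  # illustrative model name
    messages=[{"role": "user", "content": "Say this is a test"}],
)
print(chat_completion.choices[0].message.content)
```

API compatibility
Fireworks' API is largely compatible with OpenAI's; the differences are listed below.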
Differences
The following options have minor differences:

- max_tokens: behaves differently if the model's context length is exceeded. If the length of prompt or messages plus max_tokens is higher than the model's context window, max_tokens will be adjusted lower accordingly. OpenAI returns an invalid request error in this situation. Control this behavior with the context_length_exceeded_behavior parameter (see the sketch after this list):
  - truncate (default): automatically adjusts max_tokens to fit within the context window
  - error: returns an error like OpenAI does
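Since context_length_exceeded_behavior isn't part of the OpenAI SDK's typed parameters, one way to send it is the SDK's request-body passthrough (a sketch, assuming openai-python v1's extra_body):

```python
completion = client.completions.create(
    model="accounts/fireworks/models/llama-v3p1-8b-instruct",  # illustrative model name
    prompt="Summarize this document: ...",
    max_tokens=4096,
    # Fireworks-specific parameter, passed through the raw request body:
    extra_body={"context_length_exceeded_behavior": "error"},
)
```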
Token usage for streaming responses
The OpenAI API returns usage stats (the number of tokens in the prompt and completion) for non-streaming responses but not for streaming ones (see forum post). The Fireworks API returns usage stats in both cases. For streaming responses, the usage field is returned in the very last chunk of the response (i.e., the one with finish_reason set).
Note that if you're using the OpenAI SDK, the usage field won't be listed in the SDK's structure definition, but it can be accessed directly. For example:
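A streaming sketch, assuming openai-python v1, where undeclared response fields can still be read from the model's dump:

```python
stream = client.completions.create(
    model="accounts/fireworks/models/llama-v3p1-8b-instruct",  # illustrative model name
    prompt="Count to five:",
    max_tokens=32,
    stream=True,
)
for chunk in stream:
    if chunk.choices and chunk.choices[0].finish_reason is not None:
        # The final chunk (finish_reason set) carries the usage stats.
        # model_dump() exposes fields even if the SDK type doesn't declare them.
        print(chunk.model_dump().get("usage"))
```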