API Reference
BaseCompletion Objects
Base class for handling completions. This class provides shared logic for creating completions,
both synchronously and asynchronously, and both streaming and non-streaming.
Attributes:
endpoint (str) - API endpoint for the completion request.
response_class (Type) - Class used for parsing the non-streaming response.
stream_response_class (Type) - Class used for parsing the streaming response.
create
Create a completion or chat completion.
Arguments:
model (str) - Model name to use for the completion.
prompt_or_messages (Union[str, List[ChatMessage]]) - The prompt for Completion or a list of chat messages for ChatCompletion. If not specified, either prompt or messages must be provided in kwargs.
request_timeout (int, optional) - Request timeout in seconds. Defaults to 600.
stream (bool, optional) - Whether to use streaming or not. Defaults to False.
**kwargs - Additional keyword arguments.
Returns:
Union[CompletionResponse, Generator[CompletionStreamResponse, None, None]] - Depending on the stream argument, either a CompletionResponse or a generator yielding CompletionStreamResponse objects.
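As a minimal illustration of the non-streaming path, the sketch below calls create on the Completion class described later in this reference. The fireworks.client import path and the model name are assumptions; only the arguments documented above are used.

```python
import fireworks.client  # assumed import path; adjust to your installed package

# Assumes API credentials are already configured for the client.
response = fireworks.client.Completion.create(
    model="accounts/fireworks/models/llama-v2-7b",  # placeholder model name
    prompt="Say this is a test.",
    request_timeout=600,
    stream=False,
)

# With stream=False a single CompletionResponse is returned.
print(response.choices[0].text)
```

With stream=True the same call instead returns a generator of CompletionStreamResponse objects; a streaming sketch appears under CompletionStreamResponse below.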
acreate
Asynchronously create a completion.
Arguments:
model (str) - Model name to use for the completion.
request_timeout (int, optional) - Request timeout in seconds. Defaults to 600.
stream (bool, optional) - Whether to use streaming or not. Defaults to False.
**kwargs - Additional keyword arguments.
Returns:
Union[CompletionResponse, AsyncGenerator[CompletionStreamResponse, None]] - Depending on the stream argument, either a CompletionResponse or an async generator yielding CompletionStreamResponse objects.
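A minimal async sketch, again assuming the fireworks.client import path and a placeholder model name; with stream=False the awaited call resolves to a single CompletionResponse.

```python
import asyncio

import fireworks.client  # assumed import path; adjust to your installed package


async def main() -> None:
    response = await fireworks.client.Completion.acreate(
        model="accounts/fireworks/models/llama-v2-7b",  # placeholder model name
        prompt="Say this is a test.",
        stream=False,
    )
    print(response.choices[0].text)


asyncio.run(main())
```

With stream=True, the documented return type is an async generator of CompletionStreamResponse objects, which would be consumed with async for rather than a single await.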
completion
Completion Objects
Class for handling text completions.
chat_completion
ChatCompletion Objects
Class for handling chat completions.
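A minimal sketch of a chat completion request, assuming the fireworks.client import path, a placeholder chat model name, and that plain role/content dictionaries are accepted in place of ChatMessage objects.

```python
import fireworks.client  # assumed import path; adjust to your installed package

response = fireworks.client.ChatCompletion.create(
    model="accounts/fireworks/models/llama-v2-7b-chat",  # placeholder model name
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What is the capital of France?"},
    ],
)

# Each choice carries a ChatMessage in its `message` attribute.
print(response.choices[0].message.content)
```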
api
Choice Objects
A completion choice.
Attributes:
index (int) - The index of the completion choice.
text (str) - The completion response.
logprobs (float, optional) - The log probabilities of the most likely tokens.
finish_reason (str) - The reason the model stopped generating tokens. This will be "stop" if the model hit a natural stop point or a provided stop sequence, or "length" if the maximum number of tokens specified in the request was reached.
CompletionResponse Objects
The response message from a /v1/completions call.
Attributes:
id (str) - A unique identifier of the response.
object (str) - The object type, which is always "text_completion".
created (int) - The Unix time in seconds when the response was generated.
choices (List[Choice]) - The list of generated completion choices.
CompletionResponseStreamChoice Objects
A streamed completion choice.
Attributes:
index (int) - The index of the completion choice.
text (str) - The completion response.
logprobs (float, optional) - The log probabilities of the most likely tokens.
finish_reason (str) - The reason the model stopped generating tokens. This will be "stop" if the model hit a natural stop point or a provided stop sequence, or "length" if the maximum number of tokens specified in the request was reached.
CompletionStreamResponse Objects
The streamed response message from a /v1/completions call.
Attributes:
id (str) - A unique identifier of the response.
object (str) - The object type, which is always "text_completion".
created (int) - The Unix time in seconds when the response was generated.
model (str) - The model used for the completion.
choices (List[CompletionResponseStreamChoice]) - The list of streamed completion choices.
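To show how these chunks are consumed, here is a hedged streaming sketch; the fireworks.client import path, the model name, and the max_tokens keyword (passed through **kwargs) are assumptions.

```python
import fireworks.client  # assumed import path; adjust to your installed package

# With stream=True, create returns a generator of CompletionStreamResponse chunks.
stream = fireworks.client.Completion.create(
    model="accounts/fireworks/models/llama-v2-7b",  # placeholder model name
    prompt="Once upon a time",
    max_tokens=64,  # assumed generation parameter forwarded via **kwargs
    stream=True,
)

pieces = []
for chunk in stream:
    choice = chunk.choices[0]
    pieces.append(choice.text)
    if choice.finish_reason is not None:  # "stop" or "length"
        break

print("".join(pieces))
```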
Model Objects
A model deployed to the Fireworks platform.
Attributes:
id (str) - The model name.
object (str) - The object type, which is always "model".
created (int) - The Unix time in seconds when the model was created.
ListModelsResponse Objects
The response message from a /v1/models call.
Attributes:
object (str) - The object type, which is always "list".
data (List[Model]) - The list of models.
ChatMessage Objects
A chat completion message.
Attributes:
role (str) - The role of the author of this message.
content (str) - The contents of the message.
ChatCompletionResponseChoice Objects
A chat completion choice generated by a chat model.
Attributes:
index (int) - The index of the chat completion choice.
message (ChatMessage) - The chat completion message.
finish_reason (Optional[str]) - The reason the model stopped generating tokens. This will be "stop" if the model hit a natural stop point or a provided stop sequence, or "length" if the maximum number of tokens specified in the request was reached.
UsageInfo Objects
Usage statistics.
Attributes:
prompt_tokens (int) - The number of tokens in the prompt.
total_tokens (int) - The total number of tokens used in the request (prompt + completion).
completion_tokens (Optional[int]) - The number of tokens in the generated completion.
ChatCompletionResponse Objects
The response message from a /v1/chat/completions call.
Attributes:
id (str) - A unique identifier of the response.
object (str) - The object type, which is always "chat.completion".
created (int) - The Unix time in seconds when the response was generated.
model (str) - The model used for the chat completion.
choices (List[ChatCompletionResponseChoice]) - The list of chat completion choices.
usage (UsageInfo) - Usage statistics for the chat completion.
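The sketch below reads the documented fields off a response; the import path and model name are again assumptions.

```python
import fireworks.client  # assumed import path; adjust to your installed package

response = fireworks.client.ChatCompletion.create(
    model="accounts/fireworks/models/llama-v2-7b-chat",  # placeholder model name
    messages=[{"role": "user", "content": "Summarize relativity in one sentence."}],
)

# Metadata fields documented above.
print(response.id, response.object, response.created, response.model)

# Token accounting via UsageInfo.
usage = response.usage
print(usage.prompt_tokens, usage.completion_tokens, usage.total_tokens)
```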
DeltaMessage Objects
A message delta.
Attributes:
role (str) - The role of the author of this message.
content (str) - The contents of the chunk message.
ChatCompletionResponseStreamChoice Objects
A streamed chat completion choice.
Attributes:
index (int) - The index of the chat completion choice.
delta (DeltaMessage) - The message delta.
finish_reason (str) - The reason the model stopped generating tokens. This will be "stop" if the model hit a natural stop point or a provided stop sequence, or "length" if the maximum number of tokens specified in the request was reached.
ChatCompletionStreamResponse Objects
The streamed response message from a /v1/chat/completions call.
Attributes:
id (str) - A unique identifier of the response.
object (str) - The object type, which is always "chat.completion".
created (int) - The Unix time in seconds when the response was generated.
model (str) - The model used for the chat completion.
choices (List[ChatCompletionResponseStreamChoice]) - The list of streamed chat completion choices.
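A streaming chat sketch under the same assumptions (import path, model name); the incremental text arrives in delta.content.

```python
import fireworks.client  # assumed import path; adjust to your installed package

stream = fireworks.client.ChatCompletion.create(
    model="accounts/fireworks/models/llama-v2-7b-chat",  # placeholder model name
    messages=[{"role": "user", "content": "Write a haiku about the sea."}],
    stream=True,
)

# Each chunk is a ChatCompletionStreamResponse; deltas may omit content.
for chunk in stream:
    choice = chunk.choices[0]
    if choice.delta.content:
        print(choice.delta.content, end="", flush=True)
    if choice.finish_reason is not None:
        break
print()
```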
model
Model Objects
list
Returns a list of available models.
Arguments:
request_timeout (int, optional) - The request timeout in seconds. Defaults to 60.
Returns:
ListModelsResponse - A list of available models.
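A sketch of listing models. The import path below (fireworks.client.model) is hypothetical, mirroring the module name used in this reference; adjust it to your installed package.

```python
from fireworks.client import model  # hypothetical import path for the `model` module

models = model.list(request_timeout=60)

print(models.object)  # always "list"
for m in models.data:
    print(m.id, m.created)
```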
log
set_console_log_level
Controls console logging.
Arguments:
level - The minimum level that is printed to the console. Supported values: CRITICAL, FATAL, ERROR, WARN, WARNING, INFO, DEBUG.
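A one-line sketch; the fireworks.client.log import path is hypothetical, and the level is passed as a string here, matching the names listed above.

```python
from fireworks.client import log  # hypothetical import path for the `log` module

# Only messages at INFO level or above are printed to the console.
log.set_console_log_level("INFO")
```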
error
PermissionError Objects
A permission denied error.
InvalidRequestError Objects
An invalid request error.
AuthenticationError Objects
An authentication error.
RateLimitError Objects
A rate limit error.
InternalServerError Objects
An internal server error.
ServiceUnavailableError Objects
A service unavailable error.
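The error classes above map naturally onto exception handling around a request. The import paths and model name below are assumptions; the exception names are the ones documented in this section.

```python
import fireworks.client  # assumed import path; adjust to your installed package
from fireworks.client.error import (  # hypothetical path for the `error` module
    AuthenticationError,
    InvalidRequestError,
    RateLimitError,
    ServiceUnavailableError,
)

try:
    response = fireworks.client.Completion.create(
        model="accounts/fireworks/models/llama-v2-7b",  # placeholder model name
        prompt="Hello",
    )
    print(response.choices[0].text)
except AuthenticationError:
    print("Check that your API key is configured correctly.")
except InvalidRequestError as exc:
    print(f"The request was malformed: {exc}")
except RateLimitError:
    print("Rate limited; retry with backoff.")
except ServiceUnavailableError:
    print("Service temporarily unavailable; retry later.")
```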