Fireworks AI is the best platform for building AI product experiences with open-source AI models. You can run and customize AI models with just a few lines of code, and the API gives you access to popular open-source models such as Llama and DeepSeek. The example below generates text through an OpenAI-compatible chat completions API endpoint. In this guide, you will create an API key, set up your development environment, and call the Fireworks API.

Get an API key

Sign up or log in to your Fireworks account. Generate an API key by navigating to the API Keys page and clicking ‘Create API key’. Store the API key in a safe location.

Set up your developer environment & call the Fireworks API

  • Python (Fireworks)
  • Python (OpenAI)
  • JavaScript (OpenAI)
  • cURL
This is the recommended way to get started: the Python (Fireworks) tab uses our Fireworks Build SDK for the best performance and developer experience. See our Client-side performance optimization guide for more details.
1. Install SDK

Before installing, ensure that you have a supported version of Python installed. Optionally, you may want to set up a virtual environment as well.
pip install --upgrade fireworks-ai
The Fireworks Build SDK provides a declarative way to work with Fireworks resources and is OpenAI API Compatible.
2. Configure API Key

Set the API key as an environment variable for your platform.

macOS / Linux
Depending on your shell, edit either ~/.bash_profile for Bash or ~/.zshrc for Zsh, for example:
vim ~/.bash_profile
Add a new line to the file with the following:
export FIREWORKS_API_KEY="<API_KEY>"
After saving the file, apply the changes by restarting your terminal session or by running source on the file you edited:
source ~/.bash_profile
You can verify that the variable has been set correctly by running echo $FIREWORKS_API_KEY.
Windows
Open Command Prompt by searching for it in the Windows search bar or by pressing Win + R, typing cmd, and pressing Enter. Then run:
setx FIREWORKS_API_KEY "<API_KEY>"
To verify that the variable has been set correctly, you can close and reopen Command Prompt and type:
echo %FIREWORKS_API_KEY%
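Before calling the SDK, you can confirm from Python that the key is actually visible to your process. A minimal sketch, assuming the key was exported as shown above (the load_api_key helper is hypothetical, not part of the SDK):

```python
import os

def load_api_key(env=os.environ):
    """Return the Fireworks API key from the environment, or raise a clear error."""
    key = env.get("FIREWORKS_API_KEY")
    if not key:
        raise RuntimeError("FIREWORKS_API_KEY is not set; export it as shown above.")
    return key
```

Failing fast with a descriptive error here is easier to debug than an authentication failure deep inside an API call.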
3. Send the first API request

You can quickly instantiate the LLM class and call the Fireworks API. The Build SDK handles deployment management automatically.
from fireworks import LLM

# Basic usage - SDK automatically selects optimal deployment type
llm = LLM(model="llama4-maverick-instruct-basic", deployment_type="auto")

response = llm.chat.completions.create(
    messages=[{"role": "user", "content": "Say this is a test"}]
)

print(response.choices[0].message.content)
You can also pass the API key directly to the LLM constructor: LLM(model="llama4-maverick-instruct-basic", deployment_type="auto", api_key="<FIREWORKS_API_KEY>")
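Because the endpoint is OpenAI-compatible, you can also reach it over plain HTTP with no SDK at all. A stdlib-only sketch, assuming the chat completions URL and the fully qualified model name accounts/fireworks/models/llama4-maverick-instruct-basic follow Fireworks' published conventions (the helper names here are hypothetical):

```python
import json
import os
import urllib.request

FIREWORKS_CHAT_URL = "https://api.fireworks.ai/inference/v1/chat/completions"

def build_chat_request(model: str, prompt: str) -> dict:
    """Build the JSON body expected by the OpenAI-compatible chat endpoint."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

def send_chat_request(body: dict, api_key: str) -> dict:
    """POST the body to the Fireworks endpoint and return the parsed JSON reply."""
    req = urllib.request.Request(
        FIREWORKS_CHAT_URL,
        data=json.dumps(body).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

body = build_chat_request(
    "accounts/fireworks/models/llama4-maverick-instruct-basic",
    "Say this is a test",
)
# Only send the request when a key is configured; otherwise just show the payload.
if os.environ.get("FIREWORKS_API_KEY"):
    reply = send_chat_request(body, os.environ["FIREWORKS_API_KEY"])
    print(reply["choices"][0]["message"]["content"])
else:
    print(json.dumps(body, indent=2))
```

Seeing the raw request body also clarifies what the Build SDK and the OpenAI client are constructing on your behalf in the tabs above.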

Explore further
