Uploading a custom model
In addition to the predefined set of models already available on Fireworks and models you fine-tune on the Fireworks platform, you can also upload your own custom models. Both custom base models and LoRA addons are supported.
Custom LoRA addons
Requirements
Your custom LoRA addon must contain the following files:
- adapter_config.json - The Hugging Face adapter configuration file.
- adapter_model.bin or adapter_model.safetensors - The saved addon file.

The adapter_config.json must contain the following fields:

- r - The number of LoRA ranks. Must be an integer between 4 and 64, inclusive.
- target_modules - A list of target modules. Currently the following target modules are supported:
  - q_proj
  - k_proj
  - v_proj
  - o_proj
  - up_proj or w1
  - down_proj or w2
  - gate_proj or w3
  - block_sparse_moe.gate

Additional fields may be specified but are ignored.
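For example, a minimal adapter_config.json for an addon trained with rank 16 on the attention projections could look like this (any extra PEFT-generated fields are simply ignored):

```json
{
  "r": 16,
  "target_modules": [
    "q_proj",
    "k_proj",
    "v_proj",
    "o_proj"
  ]
}
```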
Enabling chat completions
To enable the chat completions API for your LoRA addon, include a fireworks.json file in the model directory containing the conversation (chat template) configuration.
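As a rough sketch only, assuming a Jinja-style conversation template wrapped in a conversation_config object (the field names here are illustrative, not authoritative; check the Fireworks reference for the exact schema):

```json
{
  "conversation_config": {
    "style": "jinja",
    "args": {
      "template": "{% for message in messages %}{{ message['role'] }}: {{ message['content'] }}\n{% endfor %}assistant:"
    }
  }
}
```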
Uploading the model
To upload a LoRA addon, run the following command. The MODEL_ID is an arbitrary resource ID to refer to the model within Fireworks.
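A sketch of the upload step, assuming the firectl CLI; the path and base model name below are placeholders, and flag names may differ across firectl versions:

```bash
# Upload the addon files and associate them with a base model (names are placeholders).
firectl create model MODEL_ID /path/to/addon/files/ \
  --base-model accounts/fireworks/models/llama-v3p1-8b-instruct
```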
NOTE: Only some base models support LoRA addons.
Custom base models
Requirements
Fireworks currently supports the following model architectures:
- Gemma
- Phi, Phi-3
- Llama 1, 2, 3, 3.1
- LLaVa
- Mistral & Mixtral
- Qwen2
- StableLM
- Starcoder (GPTBigCode) & Starcoder2
- DeepSeek V1 & V2
- GPT NeoX
The model files you will need to provide depend on the model architecture. In general, you will need the following files:
- Model configuration: config.json. Fireworks does not support the quantization_config option in config.json.
- Model weights, in one of the following formats:
  - *.safetensors
  - *.bin
- Weights index: *.index.json
- Tokenizer file(s), e.g.:
  - tokenizer.model
  - tokenizer.json
  - tokenizer_config.json

If the requisite files are not present, model deployment may fail.
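For reference, a typical directory for a model with sharded safetensors weights might look like this (file names follow the standard Hugging Face layout; the shard count will vary):

```
my-model/
├── config.json
├── model-00001-of-00002.safetensors
├── model-00002-of-00002.safetensors
├── model.safetensors.index.json
├── tokenizer.json
├── tokenizer.model
└── tokenizer_config.json
```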
Enabling chat completions
To enable the chat completions API for your custom base model, ensure your tokenizer_config.json contains a chat_template field. See the Hugging Face guide on Templates for Chat Models for details.
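For illustration, a minimal chat_template entry in tokenizer_config.json using a generic Jinja template (the template below is a placeholder; use the prompt format your model was trained with):

```json
{
  "chat_template": "{% for message in messages %}{{ '<|' + message['role'] + '|>\n' + message['content'] + '\n' }}{% endfor %}{% if add_generation_prompt %}{{ '<|assistant|>\n' }}{% endif %}"
}
```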
Uploading the model
To upload a custom base model, run the following command.
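A sketch, again assuming the firectl CLI, with MODEL_ID and the file path as placeholders:

```bash
# Upload the model files under the chosen resource ID.
firectl create model MODEL_ID /path/to/model/files/
```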
Deploying
A model cannot be used for inference until it is deployed. See the Deploying models guide to deploy the model.
Publishing
By default, all models you create are only visible to and deployable by users within your account. To publish a model so
anyone with a Fireworks account can deploy it, you can create it with the --public
flag. This will allow it to show up
in public model lists.
To unpublish the model, update it with the public flag disabled.
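A possible sketch, assuming firectl's update model command accepts the same flag (verify the exact syntax against the firectl reference):

```bash
# Hypothetical: clear the public flag on an existing model.
firectl update model MODEL_ID --public=false
```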