In addition to the predefined models available on Fireworks and the models you fine-tune on the Fireworks platform, you can upload your own custom models. Both custom base models and PEFT (parameter-efficient fine-tuning) addons are supported.

Custom PEFT addons

Requirements

Your custom PEFT addon must contain the following files:

  • adapter_config.json - The Hugging Face adapter configuration file.
  • adapter_model.bin or adapter_model.safetensors - The saved addon file.

The adapter_config.json must contain the following fields:

  • r - The LoRA rank. Must be an integer between 4 and 64, inclusive.
  • target_modules - A list of target modules. Currently the following target modules are supported:
    • q_proj
    • k_proj
    • v_proj
    • o_proj
    • up_proj or w1
    • down_proj or w2
    • gate_proj or w3
    • block_sparse_moe.gate

Additional fields may be specified but are ignored.
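
For reference, a minimal adapter_config.json satisfying these requirements could look like the following. The rank and module list are illustrative; use the values your addon was actually trained with.

{
  "r": 16,
  "target_modules": [
    "q_proj",
    "k_proj",
    "v_proj",
    "o_proj"
  ]
}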

Enabling chat completions

To enable the chat completions API for your PEFT addon, add a fireworks.json file to your model directory containing:

{
  "conversation_config": {
    "style": "jinja",
    "args": {
      "template": "<YOUR_JINJA_TEMPLATE>"
    }
  }
}
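
The template value is a Jinja template that renders the chat messages into a single prompt string (assuming, as with Hugging Face chat templates, that the template receives the messages list). As an illustration only, a simple template that prefixes each message with its role might look like the example below; a real template should reproduce the exact prompt format the addon was trained on.

{
  "conversation_config": {
    "style": "jinja",
    "args": {
      "template": "{% for message in messages %}{{ message.role }}: {{ message.content }}\n{% endfor %}assistant:"
    }
  }
}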

Uploading the model

To upload a PEFT addon, run the following command. The MODEL_ID is an arbitrary resource ID to refer to the model within Fireworks.

NOTE: Only some base models support PEFT addons.

firectl create model <MODEL_ID> /path/to/files/ --base-model "accounts/fireworks/models/<BASE_MODEL_ID>"
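
For example, assuming an addon trained on top of the Llama 3.1 8B Instruct base model (the model ID my-peft-addon is arbitrary, and the base model shown is only an example; check that your base model supports addons):

firectl create model my-peft-addon /path/to/files/ --base-model "accounts/fireworks/models/llama-v3p1-8b-instruct"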

Custom base models

Requirements

Fireworks supports a specific set of model architectures. The model files you need to provide depend on the model architecture; in general, you will need the following files:

  • Model configuration: config.json.
    Fireworks does not support the quantization_config option in config.json.
  • Model weights, in one of the following formats:
    • *.safetensors
    • *.bin
  • Weights index: *.index.json
  • Tokenizer file(s), e.g.
    • tokenizer.model
    • tokenizer.json
    • tokenizer_config.json

If the requisite files are not present, model deployment may fail.

Enabling chat completions

To enable the chat completions API for your custom base model, ensure your tokenizer_config.json contains a chat_template field. See the Hugging Face guide on Templates for Chat Models for details.
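
For example, a tokenizer_config.json might include a chat_template entry alongside the tokenizer's existing settings (omitted here). The template string below is purely illustrative; use the prompt format your model expects.

{
  "chat_template": "{% for message in messages %}<|im_start|>{{ message.role }}\n{{ message.content }}<|im_end|>\n{% endfor %}<|im_start|>assistant\n"
}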

Uploading the model

To upload a custom base model, run the following command.

firectl create model <MODEL_ID> /path/to/files/
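
For example, using an arbitrary model ID:

firectl create model my-custom-model /path/to/files/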

Deploying

A model cannot be used for inference until it is deployed. See the Deploying models guide to deploy the model.