## Confirm model support for fine-tuning

Look for the `Tunable` tag in the model library, or check that the model reports `Tunable: true`. Some models are not tunable (`Tunable: false`) but still list support for LoRA (`Supports Lora: true`). This means that you can tune a LoRA for such a base model on a separate platform and upload it to Fireworks for inference; consult the importing fine-tuned models guide for more information.

## Prepare a dataset
Datasets must be in `.jsonl` format, with each line containing the messages of one conversation. Each message has two fields:

- `role`: one of `system`, `user`, or `assistant`. A message with the `system` role is optional, but if specified, it must be the first message of the conversation.
- `content`: a string representing the message content.

## Create and upload a dataset
You can create and upload a dataset using `firectl`, the RESTful API, the builder SDK, or the UI. In the UI, click `Create Dataset` and follow the wizard. The UI is better suited to smaller datasets (under 500 MB), while `firectl` may work better for larger ones. Ensure the dataset ID conforms to the resource ID restrictions.

## Launch a fine-tuning job
You can launch a fine-tuning job from the UI: in the `Fine-Tuning` tab, click `Fine-Tune a Model` and follow the wizard from there. You can even pick a LoRA model as the starting point for continued training.

In the UI, once the job is created, it will appear in the list of jobs; click it to view the job details and monitor its progress. With `firectl`, you can monitor the progress of the tuning job from the command line.
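Whichever tool you use, monitoring amounts to polling the job state until it reaches a terminal value. The loop below is a generic sketch of that pattern; `fetch_job_state` and the state names are hypothetical stand-ins, not a real Fireworks API.

```python
import time

# Hypothetical terminal states; the real job lifecycle may differ.
TERMINAL_STATES = {"COMPLETED", "FAILED", "CANCELLED"}

def wait_for_job(fetch_job_state, job_id: str, poll_seconds: float = 30.0) -> str:
    """Poll fetch_job_state(job_id) until it returns a terminal state."""
    while True:
        state = fetch_job_state(job_id)
        if state in TERMINAL_STATES:
            return state
        time.sleep(poll_seconds)

# Example with a stubbed state sequence standing in for real API calls.
states = iter(["PENDING", "RUNNING", "RUNNING", "COMPLETED"])
final = wait_for_job(lambda job_id: next(states), "my-job", poll_seconds=0)
print(final)  # COMPLETED
```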
To serve your fine-tuned model serverlessly, confirm the base model supports it: look for the `Serverless` tag in the model library, or check the model's `Deployed Model Refs` for a Fireworks-owned deployment (`accounts/fireworks/deployments/{SOME_DEPLOYMENT_ID}`) with `Supports LoRA: true`. If this is the case, then you can use that deployment to serve your fine-tuned model.
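The check above can be automated by scanning a model's listed deployment refs. This is a sketch only: the ref pattern is an assumption based on the `accounts/fireworks/deployments/{SOME_DEPLOYMENT_ID}` form quoted above, and the helper is not an official API.

```python
import re

# Matches a Fireworks-owned deployment ref, e.g.
# accounts/fireworks/deployments/abc123 (pattern assumed from the docs above).
FIREWORKS_DEPLOYMENT = re.compile(r"^accounts/fireworks/deployments/[A-Za-z0-9-]+$")

def supports_serverless_lora(deployed_model_refs, supports_lora: bool) -> bool:
    """True if a Fireworks-owned deployment exists and the model supports LoRA."""
    has_fw_deployment = any(FIREWORKS_DEPLOYMENT.match(ref) for ref in deployed_model_refs)
    return has_fw_deployment and supports_lora

refs = ["accounts/my-account/deployments/dep1", "accounts/fireworks/deployments/dep2"]
print(supports_serverless_lora(refs, supports_lora=True))   # True
print(supports_serverless_lora(refs, supports_lora=False))  # False
```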
For hyperparameters such as `epochs` and `learning rate`, we recommend using the default settings and changing them only if results are not as desired. All tuning options must be specified via command-line flags, as in the example below:

- `evaluation_dataset`: the ID of a separate dataset to use for evaluation. It must be pre-uploaded via `firectl`.
For more details, see the references for:

- the Python builder SDK
- the RESTful API
- `firectl`