Create Supervised Fine-tuning Job
Authorizations
Bearer authentication header of the form `Bearer <token>`, where `<token>` is your auth token.
Path Parameters
The account ID.
Query Parameters
ID of the supervised fine-tuning job. A random UUID will be generated if not specified.
Body
The name of the dataset used for training.
JobState represents the state an asynchronous job can be in. Available options: JOB_STATE_UNSPECIFIED, JOB_STATE_CREATING, JOB_STATE_RUNNING, JOB_STATE_COMPLETED, JOB_STATE_FAILED, JOB_STATE_CANCELLED, JOB_STATE_DELETING, JOB_STATE_WRITING_RESULTS, JOB_STATE_VALIDATING, JOB_STATE_ROLLOUT, JOB_STATE_EVALUATION
The model ID to be assigned to the resulting fine-tuned model. If not specified, the job ID will be used.
The name of the base model to be fine-tuned. Only one of 'base_model' or 'warm_start_from' should be specified.
The PEFT addon model in Fireworks format to be fine-tuned from. Only one of 'base_model' or 'warm_start_from' should be specified.
Whether to stop training early if the validation loss does not improve.
The number of epochs to train for.
The learning rate used for training.
The maximum context length to use with the model.
The rank of the LoRA layers.
The Weights & Biases team/user account for logging training progress.
The name of a separate dataset to use for evaluation.
Whether to run the fine-tuning job in turbo mode.
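The body fields above can be assembled into a request payload before calling the endpoint. The sketch below is a minimal, hypothetical helper: only 'base_model' and 'warm_start_from' are named explicitly in this page, so the other JSON field names (`dataset`, `epochs`, `learning_rate`, `lora_rank`) are assumptions about the wire format, and it enforces the documented rule that exactly one of the two model sources is given.

```python
import json

def build_sft_job_body(dataset, base_model=None, warm_start_from=None,
                       epochs=None, learning_rate=None, lora_rank=None):
    """Assemble a JSON body for a supervised fine-tuning job.

    NOTE: only 'base_model' and 'warm_start_from' are confirmed field
    names; the remaining keys are illustrative assumptions.
    """
    # The docs say only one of 'base_model' or 'warm_start_from'
    # should be specified, so reject zero or both.
    if (base_model is None) == (warm_start_from is None):
        raise ValueError("Specify exactly one of 'base_model' or 'warm_start_from'.")

    body = {"dataset": dataset}
    if base_model is not None:
        body["base_model"] = base_model
    else:
        body["warm_start_from"] = warm_start_from

    # Optional tuning knobs are included only when set, letting the
    # service fall back to its defaults otherwise.
    for key, value in {"epochs": epochs,
                       "learning_rate": learning_rate,
                       "lora_rank": lora_rank}.items():
        if value is not None:
            body[key] = value
    return json.dumps(body)

print(build_sft_job_body("my-dataset", base_model="some-base-model", epochs=2))
```

The resulting JSON string would be sent as the request body; omitted optional fields are simply left out rather than sent as null.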
Response
The name of the dataset used for training.
JobState represents the state an asynchronous job can be in. Available options: JOB_STATE_UNSPECIFIED, JOB_STATE_CREATING, JOB_STATE_RUNNING, JOB_STATE_COMPLETED, JOB_STATE_FAILED, JOB_STATE_CANCELLED, JOB_STATE_DELETING, JOB_STATE_WRITING_RESULTS, JOB_STATE_VALIDATING, JOB_STATE_ROLLOUT, JOB_STATE_EVALUATION
The email address of the user who initiated this fine-tuning job.
The model ID to be assigned to the resulting fine-tuned model. If not specified, the job ID will be used.
The name of the base model to be fine-tuned. Only one of 'base_model' or 'warm_start_from' should be specified.
The PEFT addon model in Fireworks format to be fine-tuned from. Only one of 'base_model' or 'warm_start_from' should be specified.
Whether to stop training early if the validation loss does not improve.
The number of epochs to train for.
The learning rate used for training.
The maximum context length to use with the model.
The rank of the LoRA layers.
The Weights & Biases team/user account for logging training progress.
The name of a separate dataset to use for evaluation.
Whether to run the fine-tuning job in turbo mode.
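Since the job runs asynchronously, a client typically polls the returned job until its JobState reaches a terminal value. The sketch below is an assumption-laden illustration: which states count as terminal is not stated on this page, so treating JOB_STATE_COMPLETED, JOB_STATE_FAILED, and JOB_STATE_CANCELLED as terminal is a guess, and `fetch_state` stands in for whatever call retrieves the job.

```python
import time

# Assumed terminal states; the page lists the enum values but does not
# say which ones end the lifecycle.
TERMINAL_STATES = {"JOB_STATE_COMPLETED", "JOB_STATE_FAILED", "JOB_STATE_CANCELLED"}

def wait_for_job(fetch_state, poll_interval=0.0):
    """Call fetch_state() repeatedly until it reports a terminal JobState."""
    while True:
        state = fetch_state()
        if state in TERMINAL_STATES:
            return state
        time.sleep(poll_interval)

# Simulated lifecycle standing in for repeated GET requests to the job.
states = iter(["JOB_STATE_CREATING", "JOB_STATE_RUNNING",
               "JOB_STATE_WRITING_RESULTS", "JOB_STATE_COMPLETED"])
print(wait_for_job(lambda: next(states)))  # → JOB_STATE_COMPLETED
```

In real use, `fetch_state` would issue the job-retrieval request and return the state field from the response, with a non-zero `poll_interval` to avoid hammering the API.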