POST /v1/accounts/{account_id}/supervisedFineTuningJobs
Create Supervised Fine-tuning Job
curl --request POST \
  --url https://api.fireworks.ai/v1/accounts/{account_id}/supervisedFineTuningJobs \
  --header 'Authorization: Bearer <token>' \
  --header 'Content-Type: application/json' \
  --data '
{
  "dataset": "<string>",
  "displayName": "<string>",
  "awsS3Config": {
    "credentialsSecret": "<string>",
    "iamRoleArn": "<string>"
  },
  "azureBlobStorageConfig": {
    "credentialsSecret": "<string>",
    "managedIdentityClientId": "<string>",
    "tenantId": "<string>"
  },
  "outputModel": "<string>",
  "baseModel": "<string>",
  "warmStartFrom": "<string>",
  "jinjaTemplate": "<string>",
  "earlyStop": true,
  "epochs": 123,
  "learningRate": 123,
  "maxContextLength": 123,
  "loraRank": 123,
  "wandbConfig": {
    "enabled": true,
    "apiKey": "<string>",
    "project": "<string>",
    "entity": "<string>",
    "runId": "<string>"
  },
  "evaluationDataset": "<string>",
  "isTurbo": true,
  "evalAutoCarveout": true,
  "nodes": 123,
  "batchSize": 123,
  "mtpEnabled": true,
  "mtpNumDraftTokens": 123,
  "mtpFreezeBaseModel": true,
  "metricsFileSignedUrl": "<string>",
  "gradientAccumulationSteps": 123,
  "learningRateWarmupSteps": 123,
  "batchSizeSamples": 123,
  "optimizerWeightDecay": 123,
  "purpose": "PURPOSE_UNSPECIFIED"
}
'
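The same request can be built in Python with only the standard library. This is a minimal sketch: the account ID, API key, and body values are placeholders, and the helper name is ours, not part of any Fireworks SDK.

```python
import json
import urllib.request

API_BASE = "https://api.fireworks.ai/v1"

def build_sft_job_request(account_id: str, api_key: str, body: dict) -> urllib.request.Request:
    """Build (but do not send) the create-job POST request."""
    url = f"{API_BASE}/accounts/{account_id}/supervisedFineTuningJobs"
    return urllib.request.Request(
        url,
        data=json.dumps(body).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# Placeholder values; sending requires a real account ID and API key.
req = build_sft_job_request(
    "my-account",
    "MY_API_KEY",
    {"dataset": "my-dataset", "baseModel": "<base-model-name>"},
)
# resp = urllib.request.urlopen(req)  # uncomment to actually send
```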
{
  "dataset": "<string>",
  "name": "<string>",
  "displayName": "<string>",
  "createTime": "2023-11-07T05:31:56Z",
  "completedTime": "2023-11-07T05:31:56Z",
  "awsS3Config": {
    "credentialsSecret": "<string>",
    "iamRoleArn": "<string>"
  },
  "azureBlobStorageConfig": {
    "credentialsSecret": "<string>",
    "managedIdentityClientId": "<string>",
    "tenantId": "<string>"
  },
  "state": "JOB_STATE_UNSPECIFIED",
  "status": {
    "code": "OK",
    "message": "<string>"
  },
  "createdBy": "<string>",
  "outputModel": "<string>",
  "baseModel": "<string>",
  "warmStartFrom": "<string>",
  "jinjaTemplate": "<string>",
  "earlyStop": true,
  "epochs": 123,
  "learningRate": 123,
  "maxContextLength": 123,
  "loraRank": 123,
  "wandbConfig": {
    "enabled": true,
    "apiKey": "<string>",
    "project": "<string>",
    "entity": "<string>",
    "runId": "<string>",
    "url": "<string>"
  },
  "evaluationDataset": "<string>",
  "isTurbo": true,
  "evalAutoCarveout": true,
  "updateTime": "2023-11-07T05:31:56Z",
  "nodes": 123,
  "batchSize": 123,
  "mtpEnabled": true,
  "mtpNumDraftTokens": 123,
  "mtpFreezeBaseModel": true,
  "jobProgress": {
    "percent": 123,
    "epoch": 123,
    "totalInputRequests": 123,
    "totalProcessedRequests": 123,
    "successfullyProcessedRequests": 123,
    "failedRequests": 123,
    "outputRows": 123,
    "inputTokens": 123,
    "outputTokens": 123,
    "cachedInputTokenCount": 123
  },
  "metricsFileSignedUrl": "<string>",
  "trainerLogsSignedUrl": "<string>",
  "gradientAccumulationSteps": 123,
  "learningRateWarmupSteps": 123,
  "batchSizeSamples": 123,
  "estimatedCost": {
    "currencyCode": "<string>",
    "units": "<string>",
    "nanos": 123
  },
  "optimizerWeightDecay": 123,
  "purpose": "PURPOSE_UNSPECIFIED"
}


Authorizations

Authorization
string
header
required

Bearer authentication using your Fireworks API key. Format: Bearer <API_KEY>

Path Parameters

account_id
string
required

The account ID.

Query Parameters

supervisedFineTuningJobId
string

ID of the supervised fine-tuning job. A random UUID will be generated if not specified.
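To pin the job ID instead of letting the server generate a UUID, append it as a query parameter. A short sketch; the job ID value here is a hypothetical example:

```python
from urllib.parse import urlencode

base = "https://api.fireworks.ai/v1/accounts/my-account/supervisedFineTuningJobs"
# Optional: fix the job ID client-side rather than accepting a server-generated UUID.
url = f"{base}?{urlencode({'supervisedFineTuningJobId': 'my-sft-job'})}"
```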

Body

application/json
dataset
string
required

The name of the dataset used for training.

displayName
string
awsS3Config
object

The AWS configuration for S3 dataset access.

azureBlobStorageConfig
object

The Azure configuration for Azure Blob Storage dataset access.

outputModel
string

The model ID to be assigned to the resulting fine-tuned model. If not specified, the job ID will be used.

baseModel
string

The name of the base model to be fine-tuned. Only one of 'base_model' or 'warm_start_from' should be specified.

warmStartFrom
string

The PEFT addon model in Fireworks format to be fine-tuned from. Only one of 'base_model' or 'warm_start_from' should be specified.
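The base_model/warm_start_from constraint can be checked client-side before submitting. A minimal sketch, assuming exactly one of the two is required; the helper name is ours, not part of the API:

```python
def validate_tuning_source(body: dict) -> None:
    """Require exactly one of 'baseModel' or 'warmStartFrom' in the request body."""
    has_base = bool(body.get("baseModel"))
    has_warm = bool(body.get("warmStartFrom"))
    if has_base == has_warm:  # both set, or neither set
        raise ValueError("Specify exactly one of 'baseModel' or 'warmStartFrom'.")
```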

jinjaTemplate
string
earlyStop
boolean

Whether to stop training early if the validation loss does not improve.

epochs
integer<int32>

The number of epochs to train for.

learningRate
number<float>

The learning rate used for training.

maxContextLength
integer<int32>

The maximum context length to use with the model.

loraRank
integer<int32>

The rank of the LoRA layers.

wandbConfig
object

The Weights & Biases team/user account for logging training progress.

evaluationDataset
string

The name of a separate dataset to use for evaluation.

isTurbo
boolean

Whether to run the fine-tuning job in turbo mode.

evalAutoCarveout
boolean

Whether to auto-carve the dataset for eval.

nodes
integer<int32>

Deprecated: multi-node scheduling is now handled by the cookbook orchestrator in V2 workflows. This field is ignored for V2 jobs and will be removed in a future release.

batchSize
integer<int32>
mtpEnabled
boolean

Deprecated: MTP is not supported in V2 training. These fields are retained for V1 Helm-based SFT backward compatibility only.

mtpNumDraftTokens
integer<int32>

Deprecated: see mtp_enabled.

mtpFreezeBaseModel
boolean

Deprecated: see mtp_enabled.

metricsFileSignedUrl
string
gradientAccumulationSteps
integer<int32>
learningRateWarmupSteps
integer<int32>
batchSizeSamples
integer<int32>

The number of samples per gradient batch.

optimizerWeightDecay
number<float>

Weight decay (L2 regularization) for optimizer.

purpose
enum<string>
default:PURPOSE_UNSPECIFIED

Scheduling purpose for this job.

Available options:
PURPOSE_UNSPECIFIED,
PURPOSE_PILOT

Response

200 - application/json

A successful response.

dataset
string
required

The name of the dataset used for training.

name
string
read-only
displayName
string
createTime
string<date-time>
read-only
completedTime
string<date-time>
read-only
awsS3Config
object

The AWS configuration for S3 dataset access.

azureBlobStorageConfig
object

The Azure configuration for Azure Blob Storage dataset access.

state
enum<string>
default:JOB_STATE_UNSPECIFIED
read-only

JobState represents the state an asynchronous job can be in.

  • JOB_STATE_PAUSED: Job is paused, typically due to account suspension or manual intervention.
  • JOB_STATE_DELETED: Job has been deleted.
Available options:
JOB_STATE_UNSPECIFIED,
JOB_STATE_CREATING,
JOB_STATE_RUNNING,
JOB_STATE_COMPLETED,
JOB_STATE_FAILED,
JOB_STATE_CANCELLED,
JOB_STATE_DELETING,
JOB_STATE_WRITING_RESULTS,
JOB_STATE_VALIDATING,
JOB_STATE_DELETING_CLEANING_UP,
JOB_STATE_PENDING,
JOB_STATE_EXPIRED,
JOB_STATE_RE_QUEUEING,
JOB_STATE_CREATING_INPUT_DATASET,
JOB_STATE_IDLE,
JOB_STATE_CANCELLING,
JOB_STATE_EARLY_STOPPED,
JOB_STATE_PAUSED,
JOB_STATE_DELETED
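When polling a job until it finishes, it helps to classify states as terminal or not. Which states are terminal is our reading of the enum names above, not something the reference states explicitly:

```python
# Terminal states inferred from the enum names (an assumption, not
# stated explicitly by the API reference):
TERMINAL_STATES = {
    "JOB_STATE_COMPLETED",
    "JOB_STATE_FAILED",
    "JOB_STATE_CANCELLED",
    "JOB_STATE_EXPIRED",
    "JOB_STATE_EARLY_STOPPED",
    "JOB_STATE_DELETED",
}

def is_terminal(state: str) -> bool:
    """True if the job will make no further progress in this state."""
    return state in TERMINAL_STATES
```

A polling loop would fetch the job, check `is_terminal(job["state"])`, and sleep with backoff otherwise.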
status
object
read-only

Mimics google.rpc.Status (https://github.com/googleapis/googleapis/blob/master/google/rpc/status.proto).
createdBy
string
read-only

The email address of the user who initiated this fine-tuning job.

outputModel
string

The model ID to be assigned to the resulting fine-tuned model. If not specified, the job ID will be used.

baseModel
string

The name of the base model to be fine-tuned. Only one of 'base_model' or 'warm_start_from' should be specified.

warmStartFrom
string

The PEFT addon model in Fireworks format to be fine-tuned from. Only one of 'base_model' or 'warm_start_from' should be specified.

jinjaTemplate
string
earlyStop
boolean

Whether to stop training early if the validation loss does not improve.

epochs
integer<int32>

The number of epochs to train for.

learningRate
number<float>

The learning rate used for training.

maxContextLength
integer<int32>

The maximum context length to use with the model.

loraRank
integer<int32>

The rank of the LoRA layers.

wandbConfig
object

The Weights & Biases team/user account for logging training progress.

evaluationDataset
string

The name of a separate dataset to use for evaluation.

isTurbo
boolean

Whether to run the fine-tuning job in turbo mode.

evalAutoCarveout
boolean

Whether to auto-carve the dataset for eval.

updateTime
string<date-time>
read-only

The update time for the supervised fine-tuning job.

nodes
integer<int32>

Deprecated: multi-node scheduling is now handled by the cookbook orchestrator in V2 workflows. This field is ignored for V2 jobs and will be removed in a future release.

batchSize
integer<int32>
mtpEnabled
boolean

Deprecated: MTP is not supported in V2 training. These fields are retained for V1 Helm-based SFT backward compatibility only.

mtpNumDraftTokens
integer<int32>

Deprecated: see mtp_enabled.

mtpFreezeBaseModel
boolean

Deprecated: see mtp_enabled.

jobProgress
object
read-only

Job progress.

metricsFileSignedUrl
string
trainerLogsSignedUrl
string
read-only

The signed URL for the trainer logs file (stdout/stderr). Only populated if the account has trainer log reading enabled.

gradientAccumulationSteps
integer<int32>
learningRateWarmupSteps
integer<int32>
batchSizeSamples
integer<int32>

The number of samples per gradient batch.

estimatedCost
object
read-only

The estimated cost of the job.
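The estimatedCost object (currencyCode, string units, integer nanos) appears to follow the google.type.Money shape, where nanos are billionths of a unit. A sketch for combining the two fields into one decimal amount, under that assumption:

```python
from decimal import Decimal

def money_to_decimal(cost: dict) -> Decimal:
    """Combine integer 'units' with 'nanos' (1e-9 fractions) into one Decimal."""
    units = int(cost.get("units") or 0)
    nanos = cost.get("nanos") or 0
    return Decimal(units) + Decimal(nanos) / Decimal(10**9)

money_to_decimal({"currencyCode": "USD", "units": "3", "nanos": 500_000_000})
# → Decimal('3.5')
```

Note that for negative amounts, google.type.Money requires units and nanos to share a sign, so the simple sum above still holds.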

optimizerWeightDecay
number<float>

Weight decay (L2 regularization) for optimizer.

purpose
enum<string>
default:PURPOSE_UNSPECIFIED

Scheduling purpose for this job.

Available options:
PURPOSE_UNSPECIFIED,
PURPOSE_PILOT