List Fine-tuning Jobs
curl --request GET \
--url https://api.fireworks.ai/v1/accounts/{account_id}/fineTuningJobs \
--header 'Authorization: Bearer <token>'
{
"fineTuningJobs": [
{
"name": "<string>",
"displayName": "<string>",
"createTime": "2023-11-07T05:31:56Z",
"state": "STATE_UNSPECIFIED",
"dataset": "<string>",
"datasets": [
{
"dataset": "<string>"
}
],
"status": {
"code": "OK",
"message": "<string>"
},
"createdBy": "<string>",
"containerVersion": "<string>",
"modelId": "<string>",
"legacyJob": {},
"textCompletion": {
"inputTemplate": "<string>",
"outputTemplate": "<string>"
},
"textClassification": {
"text": "<string>",
"label": "<string>"
},
"conversation": {
"jinjaTemplate": "<string>"
},
"draftModelData": {
"deploymentName": "<string>",
"jinjaTemplate": "<string>",
"cleanupDeployment": true
},
"draftModel": {},
"genie": {
"pipelineName": "<string>"
},
"baseModel": "<string>",
"warmStartFrom": "<string>",
"epochs": 123,
"learningRate": 123,
"loraRank": 123,
"loraTargetModules": [
"<string>"
],
"batchSize": 123,
"microBatchSize": 123,
"maskToken": "<string>",
"padToken": "<string>",
"wandbUrl": "<string>",
"wandbEntity": "<string>",
"wandbApiKey": "<string>",
"wandbProject": "<string>",
"evaluation": true,
"evaluationSplit": 123,
"evaluationDataset": "<string>",
"dependentJobs": [
"<string>"
]
}
],
"nextPageToken": "<string>",
"totalSize": 123
}
Authorizations
Bearer authentication header of the form Bearer <token>, where <token> is your auth token.
Path Parameters
account_id: The account ID.
Query Parameters
page_size: The maximum number of fine-tuning jobs to return. The maximum page_size is 200; values above 200 will be coerced to 200. If unspecified, the default is 50.
page_token: A page token, received from a previous ListFineTuningJobs call. Provide this to retrieve the subsequent page. When paginating, all other parameters provided to ListFineTuningJobs must match the call that provided the page token.
filter: Only jobs satisfying the provided filter (if specified) will be returned. See https://google.aip.dev/160 for the filter grammar.
order_by: A comma-separated list of fields to order by, e.g. "foo,bar". The default sort order is ascending. To specify a descending order for a field, append a " desc" suffix, e.g. "foo desc,bar". Subfields are specified with a "." character, e.g. "foo.bar". If not specified, the default order is by "name".
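Putting the query parameters together, a list request URL can be built as in the sketch below. This is illustrative only: the helper name is hypothetical, and the wire-format parameter names (pageSize, pageToken, filter, orderBy) are assumed to follow the camelCase convention of the response fields — verify them against the live API.

```python
from urllib.parse import urlencode

def build_list_url(account_id, page_size=50, page_token=None,
                   filter_expr=None, order_by=None):
    """Hypothetical helper: build a ListFineTuningJobs request URL."""
    base = f"https://api.fireworks.ai/v1/accounts/{account_id}/fineTuningJobs"
    # Values above 200 are coerced to 200 by the server; mirror that here.
    params = {"pageSize": min(page_size, 200)}
    if page_token:
        params["pageToken"] = page_token
    if filter_expr:
        params["filter"] = filter_expr   # AIP-160 filter grammar
    if order_by:
        params["orderBy"] = order_by     # e.g. "createTime desc"
    return f"{base}?{urlencode(params)}"

url = build_list_url("my-account", page_size=20, order_by="createTime desc")
```

The resulting URL would then be fetched with the Authorization header shown in the curl example above.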
Response
- CREATING: The fine-tuning job is being created.
- PENDING: The fine-tuning job is scheduled and is waiting for resource allocation.
- RUNNING: The fine-tuning job is running.
- COMPLETED: The fine-tuning job has finished successfully.
- FAILED: The fine-tuning job has failed.
- DELETING: The fine-tuning job is being deleted.
Available options: STATE_UNSPECIFIED, CREATING, PENDING, RUNNING, COMPLETED, FAILED, DELETING
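As an example of consuming the state field, a client might partition a listing into finished and still-active jobs. The choice of which states count as terminal here (COMPLETED and FAILED) is an assumption, not something the API defines:

```python
# Assumed terminal states; CREATING/PENDING/RUNNING/DELETING are treated as active.
TERMINAL_STATES = {"COMPLETED", "FAILED"}

def split_by_terminal(jobs):
    """Split jobs into (finished, active) by their `state` field."""
    finished = [j for j in jobs if j.get("state") in TERMINAL_STATES]
    active = [j for j in jobs if j.get("state") not in TERMINAL_STATES]
    return finished, active

finished, active = split_by_terminal([
    {"name": "job-a", "state": "COMPLETED"},
    {"name": "job-b", "state": "RUNNING"},
])
```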
The name of the dataset used for training. A dataset ID may also be supplied, in which case the ID will be normalized into the fully qualified dataset name using the parent of the job.
The list of datasets to be used for training. The dataset IDs may also be supplied, in which case the ID will be normalized into the fully qualified dataset name using the parent of the job.
The status code.
Available options: OK, CANCELLED, UNKNOWN, INVALID_ARGUMENT, DEADLINE_EXCEEDED, NOT_FOUND, ALREADY_EXISTS, PERMISSION_DENIED, UNAUTHENTICATED, RESOURCE_EXHAUSTED, FAILED_PRECONDITION, ABORTED, OUT_OF_RANGE, UNIMPLEMENTED, INTERNAL, UNAVAILABLE, DATA_LOSS
A developer-facing error message in English.
The email address of the user who created this fine-tuning job.
The model ID to generate for training jobs.
If not specified, defaults to the base model's conversation_config.template, if it exists.
The model deployment from which responses are generated.
If not specified, defaults to the base model's conversation_config.template, if it exists.
The boolean flag to clean up the deployment after the data generation job is done.
The name of the base model.
The PEFT addon model in Fireworks format to warm-start a fine-tuning job from.
The number of epochs to train for.
The learning rate used for training.
The LoRA rank used for training.
The LoRA target modules used for training.
The batch size used for training.
The per-accelerator batch size used for training.
The token to mask out prompts, used by draft model data generation.
The token for padding, used by draft model data generation.
The Weights & Biases URL to see training progress.
The Weights & Biases entity where training progress should be reported. If unspecified, then progress will not be reported to W&B.
The Weights & Biases API key associated with the entity. Required if and only if wandb_entity is specified.
The Weights & Biases project where training progress should be reported. Required if and only if wandb_entity is specified.
The split of the training dataset to take for evaluation.
The dataset to use for evaluation.
A token, which can be sent as page_token to retrieve the next page. If this field is omitted, there are no subsequent pages.
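The nextPageToken/page_token pair supports the usual list-pagination loop. A minimal sketch, with the HTTP call stubbed out so the control flow is clear (a real client would GET the endpoint with the Authorization header and decode the JSON body):

```python
def list_all_jobs(fetch_page):
    """Collect fine-tuning jobs across all pages.

    `fetch_page` is a stand-in for the HTTP call: it takes a page token
    (None for the first page) and returns the decoded JSON response.
    """
    jobs, token = [], None
    while True:
        resp = fetch_page(token)
        jobs.extend(resp.get("fineTuningJobs", []))
        token = resp.get("nextPageToken")
        if not token:  # an omitted token means there are no subsequent pages
            break
    return jobs

# Stubbed two-page response purely for illustration.
pages = {
    None: {"fineTuningJobs": [{"name": "job-1"}], "nextPageToken": "t1"},
    "t1": {"fineTuningJobs": [{"name": "job-2"}]},
}
all_jobs = list_all_jobs(lambda tok: pages[tok])
```

Remember that all other parameters must stay identical across calls in one pagination sequence.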