Creates a supervised fine-tuning job on the Fireworks AI platform with the provided configuration.

firectl create sftj [flags]

Example

firectl create sftj \
--base-model llama-v3p1-8b-instruct \
--dataset cancerset \
--output-model my-tuned-model \
--job-id my-fine-tuning-job \
--learning-rate 0.0001 \
--epochs 2 \
--early-stop \
--evaluation-dataset my-eval-set
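
This creates a job named my-fine-tuning-job that fine-tunes llama-v3p1-8b-instruct on the cancerset dataset for 2 epochs at a learning rate of 0.0001, evaluates against my-eval-set with early stopping enabled, and writes the result to the model my-tuned-model.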

Flags

      --base-model string              (required) The base model used for fine-tuning, e.g. mistralai/Mixtral-8x7B-Instruct-v0.1.
      --dataset string                 (required) The ID of the dataset for the fine-tuning job.

      --display-name string            (optional) The display name of the fine-tuning job.
      --draft-base-model string        (optional) The Hugging Face base model for the draft model.
      --epochs int                     (optional) The number of epochs to train for.
      --evaluation-dataset string      (optional) The evaluation dataset for the supervised fine-tuning job.
      --job-id string                  (optional) The ID of the fine-tuning job.
      --learning-rate float            (optional) The learning rate used for training.
      --lora-rank int32                (optional) The LoRA rank used for training.
      --early-stop                     (optional) Enable early stopping for the supervised fine-tuning job.

      --quiet                          If set, only errors will be printed.
  -h, --help                           Help for sftj.

      --wandb-api-key string           (optional) A Weights & Biases API key associated with the entity.
      --wandb-entity string            (optional) The Weights & Biases entity where training progress should be reported.
      --wandb-project string           (optional) The Weights & Biases project where training progress should be reported.
      --wandb-run-id string            [WANDB_RUN_ID] The Weights & Biases run ID. Implies --wandb.
      --wandb                          Enable Weights & Biases reporting.
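
As a sketch of how the Weights & Biases flags above combine with the required flags, the following uses only flags documented in this section; the dataset, entity, and project names are placeholders, not values from the original docs:

firectl create sftj \
--base-model llama-v3p1-8b-instruct \
--dataset my-dataset \
--epochs 1 \
--wandb \
--wandb-entity my-team \
--wandb-project my-project \
--wandb-api-key $WANDB_API_KEY

Passing --wandb-entity, --wandb-project, and --wandb-api-key directs training progress for the job to the given Weights & Biases project; --wandb-run-id can also be set (or supplied via the WANDB_RUN_ID environment variable) to attach the job to a specific run, and it implies --wandb.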