What this is

Managed SFT jobs are the shortest path for supervised adaptation when you do not need a custom per-step objective loop.

Workflow

  1. Create or upload the dataset and validate that it is ready for training.
  2. Launch the supervised fine-tuning job with a training configuration.
  3. Monitor the job until it reaches a terminal state, then hand the trained model off to deployment.

End-to-end examples

Create dataset and upload training data

# `fw` is an assumed, already-initialized Fireworks SDK client.
dataset = fw.datasets.create(dataset_id="sft-dataset", dataset={"exampleCount": "12000"})
fw.datasets.upload(dataset_id="sft-dataset", file="/path/to/sft_data.jsonl")
fw.datasets.validate_upload(dataset_id="sft-dataset", body={})  # confirm readiness before training
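Dataset validation assumes each JSONL line is a complete training example. The chat-style `messages` schema below is an assumption to verify against your account's dataset requirements; this sketch only builds and sanity-checks a tiny in-memory JSONL payload.

```python
import json

# Hypothetical SFT examples in chat-style JSONL; verify the exact schema
# your dataset validation expects.
records = [
    {"messages": [
        {"role": "user", "content": "What is supervised fine-tuning?"},
        {"role": "assistant", "content": "Training a base model on labeled prompt/response pairs."},
    ]},
    {"messages": [
        {"role": "user", "content": "Name one SFT hyperparameter."},
        {"role": "assistant", "content": "The learning rate."},
    ]},
]

# Serialize one example per line, the JSONL convention.
jsonl = "\n".join(json.dumps(rec) for rec in records)

# Sanity-check: every line parses and ends with an assistant turn.
parsed = [json.loads(line) for line in jsonl.splitlines()]
assert all(rec["messages"][-1]["role"] == "assistant" for rec in parsed)
print(len(parsed))  # prints 2
```

Write the serialized lines to a `.jsonl` file and point `fw.datasets.upload` at it.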

Launch and monitor SFT job

import time  # used by the polling loop below

# `fw` is an assumed, already-initialized Fireworks SDK client.
job = fw.supervised_fine_tuning_jobs.create(
    dataset_id="sft-dataset",
    training_config={
        "base_model": "accounts/fireworks/models/qwen3-8b",
        "max_context_length": 4096,
        "learning_rate": 2e-5,
    },
)
job_id = job.name.split("/")[-1]  # extract the job id from the full resource name
# Poll until the job reaches a terminal state.
while True:
    state = str(fw.supervised_fine_tuning_jobs.get(supervised_fine_tuning_job_id=job_id).state)
    if state in {"COMPLETED", "FAILED", "CANCELLED"}:
        break
    time.sleep(15)
if state != "COMPLETED":
    raise RuntimeError(f"SFT job ended in state={state}")
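The polling loop above runs until a terminal state but never times out. A minimal sketch of the same monitoring pattern with a deadline, demonstrated against a stubbed state getter rather than a live API (the helper name and defaults are illustrative):

```python
import time

def wait_for_terminal(get_state, timeout_s=3600, poll_s=15,
                      terminal=frozenset({"COMPLETED", "FAILED", "CANCELLED"})):
    """Poll get_state() until a terminal state is reached; raise on timeout."""
    deadline = time.monotonic() + timeout_s
    while True:
        state = get_state()
        if state in terminal:
            return state
        if time.monotonic() >= deadline:
            raise TimeoutError(f"job still in state={state} after {timeout_s}s")
        time.sleep(poll_s)

# Demo with a stubbed state sequence instead of a live API call.
states = iter(["RUNNING", "RUNNING", "COMPLETED"])
print(wait_for_terminal(lambda: next(states), poll_s=0))  # prints COMPLETED
```

In the real loop, `get_state` would wrap the `fw.supervised_fine_tuning_jobs.get(...)` call shown above.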

Deploy resulting model

# Reuse the assumed, already-initialized Fireworks SDK client `fw`.
fw.deployments.create(
    deployment_id="sft-serving",
    base_model="accounts/<ACCOUNT_ID>/models/<TRAINED_MODEL_ID>",
    min_replica_count=0,  # allow the deployment to scale to zero when idle
    max_replica_count=1,
)
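Once the deployment exists, it is worth smoke-testing with a chat request before promoting it. The request shape below follows the common OpenAI-compatible chat schema, which is an assumption to verify against your provider's API reference; this sketch only builds the JSON payload and leaves the actual HTTP call to your preferred client.

```python
import json

# Hypothetical smoke-test payload for the deployed model; the model string
# reuses the placeholders from the deployment step above.
payload = {
    "model": "accounts/<ACCOUNT_ID>/models/<TRAINED_MODEL_ID>",
    "messages": [
        {"role": "user", "content": "Reply with the single word: ready"}
    ],
    "max_tokens": 16,
    "temperature": 0.0,
}

# Round-trip through JSON to confirm the payload serializes cleanly.
body = json.dumps(payload)
assert json.loads(body)["messages"][0]["role"] == "user"
```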

Operational guidance

  • Managed SFT jobs optimize a supervised objective without requiring custom per-step loss code.
  • Evaluate against a fixed held-out set before promoting a trained model to production.
  • If you need custom objective functions, switch to the service-mode Training SDK loops instead of managed SFT jobs. Note that service-mode trainer jobs currently support full-parameter tuning only (lora_rank=0).
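To make the held-out evaluation step concrete, here is a minimal exact-match scoring sketch. The helper name, metric, and sample data are all illustrative; real evaluations usually use task-appropriate metrics rather than strict string equality.

```python
def exact_match_accuracy(predictions, references):
    """Fraction of predictions that exactly match the held-out reference."""
    assert len(predictions) == len(references)
    hits = sum(p.strip() == r.strip() for p, r in zip(predictions, references))
    return hits / len(references)

# Hypothetical tuned-model outputs vs. a fixed held-out set.
preds = ["Paris", "4", "blue"]
refs = ["Paris", "5", "blue"]
print(exact_match_accuracy(preds, refs))  # prints 0.6666666666666666
```

Gate promotion on the score clearing a threshold you fix before training, so the decision is not tuned to the result.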