
Training Shapes

A training shape is the user-facing launch input for trainer jobs: one is required to launch any trainer job, and in most cases you simply pick a shared shape ID such as accounts/fireworks/trainingShapes/qwen3-8b-128k-h200 and pass it to the SDK, which resolves the pinned version for you.
The fireworks account is the shared public shape catalog, so shapes published under accounts/fireworks/trainingShapes/<shape> can be referenced by all users. You do not need to know the versioned shape reference, image tag, GPU layout, or linked deployment shape ahead of time; the SDK resolves those details internally.

What You Need To Know

For most users, the workflow is:
  1. Pick a training shape ID from the available shapes list below. In most cases this should be the full shared path accounts/fireworks/trainingShapes/<shape>.
  2. Call resolve_training_profile(shape_id).
  3. Pass profile.training_shape_version into TrainerJobConfig.training_shape_ref.
That is the only shape-specific value you choose yourself.

What A Training Shape Controls

When you specify a training shape, it provides the trainer with:
  • GPU and node layout: acceleratorType, acceleratorCount, nodeCount
  • Model limits: maxSupportedContextLength
  • Trainer runtime: trainerImageTag
  • Linked serving setup: deploymentShapeVersion
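To make the list above concrete, here is a minimal illustrative sketch of a resolved shape as a plain dataclass. The field names mirror this list and the values come from the qwen3-8b-128k-h200 entry in the catalog below; the actual SDK objects may be structured differently, and the image tag and deployment shape are placeholders the SDK fills in for you.

```python
from dataclasses import dataclass

# Illustrative stand-in for a resolved training shape; field names mirror
# the list above, not necessarily the SDK's own classes.
@dataclass(frozen=True)
class TrainingShapeSketch:
    accelerator_type: str
    accelerator_count: int
    node_count: int
    max_supported_context_length: int
    trainer_image_tag: str
    deployment_shape_version: str

# Values for qwen3-8b-128k-h200 per the catalog on this page; the last two
# fields are placeholders resolved by the SDK, not values you supply.
shape = TrainingShapeSketch(
    accelerator_type="H200",
    accelerator_count=4,
    node_count=1,
    max_supported_context_length=128_000,
    trainer_image_tag="<resolved by the SDK>",
    deployment_shape_version="<resolved by the SDK>",
)
```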

What You Can And Can’t Change

You can still configure normal training-loop fields such as:
  • base_model
  • lora_rank
  • learning_rate
  • gradient_accumulation_steps
  • display_name
  • hot_load_deployment_id
  • Deployment replica counts (min_replica_count / max_replica_count)
Shape-owned infra is locked. Do not try to override accelerator_type, accelerator_count, node_count, custom_image_tag, or the linked deployment shape.
For field-level behavior and dataclass details, see the SDK reference for TrainerJobManager and DeploymentManager.
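One way to picture the split between user-tunable fields and shape-owned infrastructure is a simple guard over a config dict. This is an illustrative sketch built from the two lists above, not how the SDK itself enforces the rule:

```python
# Fields you may set on a trainer job vs. fields the shape owns.
# Names are taken from this page; the real SDK may enforce this differently.
USER_TUNABLE = {
    "base_model", "lora_rank", "learning_rate",
    "gradient_accumulation_steps", "display_name",
    "hot_load_deployment_id", "min_replica_count", "max_replica_count",
}
SHAPE_OWNED = {
    "accelerator_type", "accelerator_count", "node_count",
    "custom_image_tag", "deployment_shape_version",
}

def check_overrides(overrides: dict) -> None:
    """Raise if a config dict tries to set a shape-owned infra field."""
    locked = SHAPE_OWNED & overrides.keys()
    if locked:
        raise ValueError(f"shape-owned fields are locked: {sorted(locked)}")
    unknown = overrides.keys() - USER_TUNABLE
    if unknown:
        raise ValueError(f"unrecognized fields: {sorted(unknown)}")

check_overrides({"lora_rank": 16, "learning_rate": 1e-4})  # fine
```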

Using a Training Shape

The only shape-specific input you provide is the shape ID:
  1. You provide the shape ID (e.g. accounts/fireworks/trainingShapes/qwen3-8b-128k-h200) — no version needed.
  2. The SDK resolves the latest validated version via resolve_training_profile().
  3. You pass the resolved version to TrainerJobConfig:
from fireworks.training.sdk import TrainerJobManager, TrainerJobConfig

mgr = TrainerJobManager(api_key=api_key, account_id=account_id)
shape_id = "accounts/fireworks/trainingShapes/qwen3-8b-128k-h200"

# This is the only shape-specific value you choose
profile = mgr.resolve_training_profile(shape_id)
# profile.training_shape_version → "accounts/fireworks/trainingShapes/qwen3-8b-128k-h200/versions/s0q58a4p"

# Pass the resolved version to the trainer config
config = TrainerJobConfig(
    base_model="accounts/fireworks/models/qwen3-8b",
    training_shape_ref=profile.training_shape_version,
)
endpoint = mgr.create_and_wait(config)
Use the full training shape ID including the account prefix (for example accounts/fireworks/trainingShapes/qwen3-8b-128k-h200). The fireworks account is the shared public account for training shapes, and you do not need to hand-write a versioned training_shape_ref yourself.

Available Training Shapes

Below is a list of the current platform training shapes available under the shared public fireworks account. During Reinforcement Fine-Tuning (RFT), two kinds of models are often deployed: a policy model (whose weights are updated by the trainer) and a reference model (which runs forward passes only).
  • Policy Trainer Shapes: These shapes are used for standard Supervised Fine-Tuning (SFT) or as the active policy model during Reinforcement Learning (RL).
  • Forward-Only / Reference Shapes: These shapes are used for reference models in RL pipelines. They do not require optimizer states or backward passes, and thus often require fewer resources.

Qwen3 (Dense)

Qwen3 4B

Model: accounts/fireworks/models/qwen3-4b
  • Policy trainer: (65k, 1x H200)
  • Forward-only / reference: (65k, 1x H200)

Qwen3 8B

Model: accounts/fireworks/models/qwen3-8b
  • Policy trainer: (128k, 4x H200)
  • Forward-only / reference: (128k, 4x H200)

Qwen3 32B

Model: accounts/fireworks/models/qwen3-32b
  • Policy trainer: (65k, 8x B200)
  • Forward-only / reference: (65k, 4x B200)

Qwen3 (Mixture-of-Experts)

Qwen3 30B A3B

Model: accounts/fireworks/models/qwen3-30b-a3b-instruct-2507
  • Policy trainer: (131k, 8x)
  • Forward-only / reference: (131k, 4x)

Qwen3 235B

Model: accounts/fireworks/models/qwen3-235b-a22b-instruct-2507
  • Policy trainer: (128k, 8x B200)
  • Forward-only / reference: (128k, 8x B200)

Qwen3 VL

Qwen3 VL 8B

Model: accounts/fireworks/models/qwen3-vl-8b-instruct
  • Policy trainer: (65k, 4x H200)
  • Forward-only / reference: None

Llama 3

Llama 70B

Model: accounts/fireworks/models/llama-v3p3-70b-instruct
  • Policy trainer: (128k, 8x B200)
  • Forward-only / reference: (128k, 4x B200)

Kimi

Kimi 2.5 Text-Only

Model: accounts/fireworks/models/kimi-k2p5
  • Policy trainer (text only): (256k, 8x B200)
  • Forward-only (text only) / reference: (256k, 8x B200)
  • LoRA trainer (text only): (80k, 8x B300)
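For quick lookups, the catalog above can be captured as plain data. This is a partial, hand-maintained mapping that mirrors the list on this page (it is not an SDK API, and layouts may change as shapes are updated); entries whose GPU type is not fully listed above are omitted:

```python
# (max context, GPU layout) per role, transcribed from the catalog above.
# None means no shape of that role is currently listed for the model.
CATALOG = {
    "qwen3-4b":                      {"policy": ("65k", "1x H200"),  "reference": ("65k", "1x H200")},
    "qwen3-8b":                      {"policy": ("128k", "4x H200"), "reference": ("128k", "4x H200")},
    "qwen3-32b":                     {"policy": ("65k", "8x B200"),  "reference": ("65k", "4x B200")},
    "qwen3-235b-a22b-instruct-2507": {"policy": ("128k", "8x B200"), "reference": ("128k", "8x B200")},
    "qwen3-vl-8b-instruct":          {"policy": ("65k", "4x H200"),  "reference": None},
    "llama-v3p3-70b-instruct":       {"policy": ("128k", "8x B200"), "reference": ("128k", "4x B200")},
}

def layout(model: str, role: str = "policy"):
    """Return the (max context, GPU layout) pair for a model and role,
    or None if no shape of that role is listed."""
    return CATALOG.get(model, {}).get(role)
```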