Overview
DeploymentManager manages the lifecycle of inference deployments that serve as sampling and weight sync targets during training. For on-policy training (GRPO), the deployment is hotloaded with the latest policy weights.
Constructor
DeploymentManager supports separate URLs for control-plane, inference, and hotload traffic:
| Parameter | Type | Default | Description |
|---|---|---|---|
| api_key | str | — | Fireworks API key |
| account_id | str | — | Fireworks account ID |
| base_url | str | "https://api.fireworks.ai" | Control-plane URL for deployment CRUD |
| inference_url | str \| None | None | Gateway URL for inference completions (defaults to base_url) |
| hotload_api_url | str \| None | None | Gateway URL for hotload operations (defaults to base_url) |
| additional_headers | dict \| None | None | Extra HTTP headers |
| verify_ssl | bool \| None | None | SSL verification override |
Both inference_url and hotload_api_url fall back to base_url when unset. Separate URLs are useful when the control-plane and gateway have different endpoints (e.g. personal dev gateways).
Methods
create_or_get(config, force_recreate=False)
Create a new deployment or retrieve an existing one. Set force_recreate=True to delete and recreate the deployment if it already exists. Returns a DeploymentInfo.
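The create-or-reuse behavior can be sketched with an in-memory stand-in (the registry dict and helper below are illustrative, not the real SDK implementation):

```python
# In-memory sketch of create_or_get semantics; the dict plays the control plane.
registry: dict[str, dict] = {}

def create_or_get(deployment_id: str, config: dict, force_recreate: bool = False) -> dict:
    if deployment_id in registry and not force_recreate:
        return registry[deployment_id]      # reuse the existing deployment
    registry.pop(deployment_id, None)       # force_recreate: delete first
    registry[deployment_id] = {"deployment_id": deployment_id, **config, "state": "CREATING"}
    return registry[deployment_id]

first = create_or_get("grpo-exp-1", {"base_model": "my-base-model"})
again = create_or_get("grpo-exp-1", {"base_model": "my-base-model"})
assert again is first  # second call returned the existing deployment
```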
wait_for_ready(deployment_id, timeout_s=600, poll_interval_s=15)
Poll until the deployment is ready to serve. Returns a DeploymentInfo.
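The polling pattern can be illustrated with a self-contained sketch (wait_until_ready and the stub states are hypothetical stand-ins; raising TimeoutError here is an assumption for the sketch, not documented SDK behavior):

```python
import time

def wait_until_ready(get_state, timeout_s: float = 600, poll_interval_s: float = 15) -> str:
    """Generic poll loop mirroring wait_for_ready: re-check the state until it
    is READY or the timeout elapses (illustrative, not the SDK implementation)."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        state = get_state()
        if state == "READY":
            return state
        time.sleep(poll_interval_s)
    raise TimeoutError(f"deployment not ready after {timeout_s}s")

# Stub control plane: reports CREATING twice, then READY.
states = iter(["CREATING", "CREATING", "READY"])
print(wait_until_ready(lambda: next(states), timeout_s=1, poll_interval_s=0.01))  # READY
```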
get(deployment_id)
Inspect deployment status. Returns a DeploymentInfo, or None if not found.
hotload_and_wait(deployment_id, base_model, snapshot_identity, ...)
Load a checkpoint onto the deployment and wait for completion. Additional keyword arguments (such as incremental_snapshot_metadata) control how the snapshot is loaded.
warmup(model)
Send a warmup request to the deployment after weight sync.
scale_to_zero(deployment_id)
Release GPU resources without deleting the deployment. This sets minReplicaCount and maxReplicaCount to 0.
delete(deployment_id)
Delete a deployment entirely.
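The difference between the two teardown paths can be sketched with an in-memory stand-in (illustrative only; the real calls go through the control plane):

```python
# scale_to_zero keeps the deployment record but drops both replica counts to 0;
# delete removes the deployment entirely.
deployments = {"grpo-exp-1": {"minReplicaCount": 0, "maxReplicaCount": 1, "state": "READY"}}

def scale_to_zero(deployment_id: str) -> None:
    d = deployments[deployment_id]
    d["minReplicaCount"] = 0
    d["maxReplicaCount"] = 0  # no replicas may run, but config/metadata survive

def delete(deployment_id: str) -> None:
    deployments.pop(deployment_id)  # deployment is gone entirely

scale_to_zero("grpo-exp-1")
assert deployments["grpo-exp-1"]["maxReplicaCount"] == 0
delete("grpo-exp-1")
assert "grpo-exp-1" not in deployments
```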
DeploymentConfig
DeploymentManager.create_or_get(...) accepts a DeploymentConfig dataclass:
When deployment_shape is set, treat the shape as the source of truth for deployment hardware and serving configuration. In normal user-facing flows, you should not try to override shape-owned hardware fields separately.
| Field | Type | Default | Description |
|---|---|---|---|
| deployment_id | str | — | Stable deployment identifier |
| base_model | str | — | Base model name. Must match the trainer’s base model for weight sync compatibility. |
| deployment_shape | str \| None | None | Deployment shape resource name. In normal shape-based flows, this owns the deployment’s hardware and serving config. |
| region | str \| None | None | Region for the deployment. Leave unset when the deployment shape already determines placement. |
| min_replica_count | int | 0 | Minimum replicas (set 0 to scale to zero when idle) |
| max_replica_count | int | 1 | Maximum replicas for autoscaling |
| accelerator_type | str | "NVIDIA_H200_141GB" | Accelerator type. In normal shape-based flows, leave this unset and let deployment_shape own the hardware choice. |
| hot_load_bucket_type | str \| None | "FW_HOSTED" | Weight sync storage backend |
| disable_speculative_decoding | bool | False | Disable speculative decoding |
| extra_args | list[str] \| None | None | Extra serving arguments |
| extra_values | dict \| None | None | Extra deployment values |
DeploymentInfo
Returned by create_or_get, wait_for_ready, and get:
| Field | Type | Description |
|---|---|---|
| deployment_id | str | Deployment identifier |
| name | str | Full resource name |
| state | str | Deployment state (e.g. "READY", "CREATING") |
| hot_load_bucket_url | str \| None | URL for weight sync storage |
| inference_model | str \| None | Model string for completions API (accounts/{account}/deployments/{id}) |
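The documented inference_model format can be built by hand (hypothetical helper and values, for illustration):

```python
# Builds the documented accounts/{account}/deployments/{id} model string.
def inference_model(account: str, deployment_id: str) -> str:
    return f"accounts/{account}/deployments/{deployment_id}"

print(inference_model("my-team", "grpo-exp-1"))
# accounts/my-team/deployments/grpo-exp-1
```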
Deployment shape and training shapes
When using a training shape, the linked deployment shape is determined by the training shape and cannot be changed. The training shape’s deploymentShapeVersion locks the GPU type, node count, and serving engine configuration for the inference deployment.
The one thing you can adjust is the replica count. Use min_replica_count and max_replica_count to scale up throughput for sampling during RL loops.
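For example, an RL sampling run might leave all hardware choices to the shape and tune only the replica counts (the field names mirror the DeploymentConfig table; the values are illustrative):

```python
# Shape-owned fields (accelerator, region, engine config) stay unset; only the
# replica counts are overridden for sampling throughput.
rl_sampling_overrides = {
    "min_replica_count": 1,  # keep one replica warm during the RL loop
    "max_replica_count": 4,  # allow scale-out for sampling bursts
}
assert rl_sampling_overrides["max_replica_count"] >= rl_sampling_overrides["min_replica_count"]
```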
Operational guidance
- Keep deployment IDs stable per experiment family for easier rollbacks.
- Use min_replica_count=0 for development to avoid idle GPU costs.
- Create the deployment before the trainer so the trainer can be linked at creation time via hot_load_deployment_id.
- Use deployment_shape when the control plane has a pre-validated shape for your model.
- Do not treat shape-owned hardware as a user-facing override surface. In normal flows, leave accelerator_type and placement decisions to the deployment shape and only tune replica counts.
- Use scale_to_zero after training as a lighter alternative to delete.
Related guides
- DeploymentSampler — sample from the deployment
- WeightSyncer — automated checkpoint + weight sync lifecycle
- Cleanup — resource cleanup