404
Page Not Found
We couldn't find the page you were looking for. Perhaps you were looking for one of these pages?
Does Fireworks support custom base models?
There’s a model I would like to use that isn’t available on Fireworks. Can I request it?
Uploading a custom base model