Fine-tuning service
Overview of Fireworks.ai fine-tuning capabilities and supported models.
Service availability
Q: Does Fireworks offer a fine-tuning service?
Yes, Fireworks offers a fine-tuning service. See our fine-tuning guide for detailed information about our capabilities; fine-tuning jobs can also be managed programmatically via the REST API.
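As a rough illustration, a fine-tuning job can be submitted over HTTP. The endpoint path, field names, and identifiers below are assumptions for illustration only; consult the fine-tuning guide and REST API reference for the actual schema.

```python
# Hypothetical sketch of submitting a fine-tuning job over the REST API.
# The endpoint path and request fields are illustrative assumptions --
# see the fine-tuning guide / REST API reference for the real schema.
import os
import requests

API_KEY = os.environ["FIREWORKS_API_KEY"]
ACCOUNT_ID = "my-account"  # assumed account identifier

response = requests.post(
    f"https://api.fireworks.ai/v1/accounts/{ACCOUNT_ID}/fineTuningJobs",  # assumed path
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "baseModel": "accounts/fireworks/models/llama-v3-8b-instruct",  # assumed base model id
        "dataset": f"accounts/{ACCOUNT_ID}/datasets/my-training-set",   # assumed dataset reference
    },
    timeout=30,
)
response.raise_for_status()
print(response.json())
```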
Model support
Q: What models are supported for fine-tuning? Is Llama 3 supported for fine-tuning?
Yes. Llama 3 (8B and 70B) is supported for fine-tuning with LoRA adapters, which can then be deployed for inference via our serverless and on-demand options.
Capabilities include:
- LoRA adapter training for flexible model adjustments
- Serverless deployment support for scalable, cost-effective usage
- On-demand deployment options for high-performance inference
- A variety of base model options to suit different use cases
For a complete list of models available for fine-tuning, refer to our documentation.
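Once a LoRA adapter is deployed (serverless or on-demand), it can be queried like any other model through the chat completions endpoint. A minimal sketch follows; the account and model names are placeholders, and the exact model identifier for your deployment will come from your own account.

```python
# Minimal sketch of calling a deployed fine-tuned (LoRA) model for inference.
# Account and model names are placeholders; replace them with your own identifiers.
import os
import requests

API_KEY = os.environ["FIREWORKS_API_KEY"]

response = requests.post(
    "https://api.fireworks.ai/inference/v1/chat/completions",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "model": "accounts/my-account/models/my-llama3-8b-lora",  # placeholder fine-tuned model
        "messages": [{"role": "user", "content": "Summarize our support policy."}],
        "max_tokens": 128,
    },
    timeout=30,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```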
Additional information
If you experience any issues with fine-tuning or deployment, you can:
- Contact support through Discord at discord.gg/fireworks-ai
- Reach out to your account representative (Enterprise customers)
- Email inquiries@fireworks.ai