Document Inlining (Deprecated)
Document Inlining has been deprecated and is no longer available. This feature has been removed from the Fireworks platform.
Overview
Document Inlining was a feature that allowed LLMs to process images and PDFs through our chat completions API by appending `#transform=inline` to document URLs. This feature has been deprecated and is no longer supported.
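For reference, the deprecated usage pattern looked roughly like the sketch below. It assumes the OpenAI-compatible Python client pointed at the Fireworks endpoint; the model id and document URL are illustrative. This call no longer works.

```python
# Deprecated pattern -- shown for reference only; the #transform=inline
# suffix is no longer supported and this request will not be processed
# as a document. Model id and URL are illustrative placeholders.
import openai

client = openai.OpenAI(
    base_url="https://api.fireworks.ai/inference/v1",
    api_key="<FIREWORKS_API_KEY>",
)

response = client.chat.completions.create(
    model="accounts/fireworks/models/<your-text-model>",  # placeholder
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Summarize this document."},
            {
                "type": "image_url",
                # Document Inlining was triggered by appending #transform=inline
                # to the document URL.
                "image_url": {"url": "https://example.com/report.pdf#transform=inline"},
            },
        ],
    }],
)
print(response.choices[0].message.content)
```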
Migration recommendations
- For image processing: Use Vision Language Models (VLMs) like Qwen2.5-VL 32B Instruct (see the sketch below).
- For PDF processing: Use dedicated PDF processing libraries combined with text-based LLMs (see the second sketch below).
- For structured extraction: Leverage our structured responses capabilities.
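For image inputs, a minimal migration sketch using the OpenAI-compatible chat completions API with a VLM might look like the following. The model id is an assumption based on Fireworks' usual `accounts/fireworks/models/...` naming and should be confirmed against the Model Library.

```python
# Minimal sketch: query a Vision Language Model directly through the
# OpenAI-compatible chat completions API instead of inlining a document.
import openai

client = openai.OpenAI(
    base_url="https://api.fireworks.ai/inference/v1",
    api_key="<FIREWORKS_API_KEY>",
)

response = client.chat.completions.create(
    model="accounts/fireworks/models/qwen2p5-vl-32b-instruct",  # assumed model id; verify in the Model Library
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Describe the contents of this image."},
            {"type": "image_url", "image_url": {"url": "https://example.com/chart.png"}},
        ],
    }],
)
print(response.choices[0].message.content)
```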
For assistance with migration, please contact our support team or visit our Discord community.
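For PDFs, one possible approach is to extract text with a dedicated PDF library and pass it to a text-based LLM. The sketch below assumes the pypdf library for extraction and an illustrative Fireworks text model id; both are examples, not requirements.

```python
# Minimal sketch: extract PDF text locally, then send it to a text model
# via the OpenAI-compatible chat completions API.
from pypdf import PdfReader
import openai

# 1. Extract text from the PDF with a dedicated PDF library (pypdf here).
reader = PdfReader("report.pdf")
document_text = "\n".join(page.extract_text() or "" for page in reader.pages)

# 2. Pass the extracted text to a text-based LLM.
client = openai.OpenAI(
    base_url="https://api.fireworks.ai/inference/v1",
    api_key="<FIREWORKS_API_KEY>",
)
response = client.chat.completions.create(
    model="accounts/fireworks/models/<your-text-model>",  # placeholder model id
    messages=[
        {"role": "user", "content": f"Summarize the following document:\n\n{document_text}"},
    ],
)
print(response.choices[0].message.content)
```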