Fine-tuning
Why can’t I deploy my fine-tuned Llama 3.1 LoRA adapter?
If you encounter an error when deploying your adapter, the likely cause is the `fireworks.json` file, which sets the base model to Llama 3.1 70B Instruct by default.
Workaround:
- Download the model weights.
- Modify the base model to `accounts/fireworks/models/llama-v3p1-8b-instruct` (see the sketch after this list).
- Follow the instructions in the documentation to upload and deploy the model.
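
A minimal sketch of the second step, assuming the downloaded `fireworks.json` stores the base model under a `base_model` key (verify the actual field name in your copy before uploading):

```python
# Hypothetical sketch: point the adapter's fireworks.json at the 8B base model.
# Assumes the file stores the base model under a "base_model" key; check the
# downloaded file for the exact field name before re-uploading.
import json
from pathlib import Path

path = Path("fireworks.json")  # inside the downloaded model directory
config = json.loads(path.read_text())
config["base_model"] = "accounts/fireworks/models/llama-v3p1-8b-instruct"
path.write_text(json.dumps(config, indent=2))
```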