Access issues

Q: Why am I getting “Model not found” errors when trying to access my fine-tuned model?

If you’re unable to access your fine-tuned model, try these troubleshooting steps:

First steps:

  • Attempt to access the model through both the playground and the API.
  • Check if the error occurs for all users on the account.
  • Ensure your API key is valid.

Common causes:

  • User email previously associated with a deleted account
  • API key permissions issues
  • Access conflicts due to multiple accounts

Debug process:

  1. Verify the API key’s validity using:
    curl -v -H "Authorization: Bearer $FIREWORKS_API_KEY" https://api.fireworks.ai/verifyApiKey
    
  2. Check if the issue persists across different API keys.
  3. Identify which specific users/emails are affected.
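Steps 1 and 2 can be combined into a small shell helper. This is a sketch: `check_key` is an illustrative name, and the endpoint is the verification URL shown above.

```shell
# Illustrative helper (check_key is a hypothetical name): prints the HTTP
# status code returned by the key-verification endpoint for a given key.
check_key() {
  curl -s -o /dev/null -w "%{http_code}" \
    -H "Authorization: Bearer $1" \
    https://api.fireworks.ai/verifyApiKey
}

# Compare results across keys to see whether the problem follows one key
# or affects the whole account:
#   check_key "$FIREWORKS_API_KEY"
#   check_key "$OTHER_API_KEY"
```

If one key verifies and another does not, the problem is key-specific; if all keys fail, the issue is more likely at the account level.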

Getting help:

  • Contact support with:
    • Your account ID
    • API key verification results
    • A list of affected users/emails
    • Results from both playground and API tests

Note: If you have multiple accounts, ensure that access permissions are checked across all of them.


Troubleshooting firectl deployment

Q: Why am I getting “invalid id” errors when using firectl commands like create deployment or list deployments?

This error typically occurs when your account ID is not properly configured.

Common symptoms

  • Error message: invalid id: id must be at least 1 character long
  • Affects multiple commands, including:
    • firectl create deployment
    • firectl list deployments

Steps to resolve

  1. Run firectl whoami to check which account ID is currently configured.
  2. If it is not the correct account, run firectl signin to sign in to the right one.
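The failure mode above amounts to running deployment commands with an empty account ID. A minimal guard along those lines (hypothetical wrapper name; in practice the account would come from firectl whoami):

```shell
# Hypothetical guard: refuse to proceed when no account ID is configured,
# which is the condition behind "invalid id: id must be at least 1
# character long".
require_account() {
  local account="$1"   # in practice: account="$(firectl whoami)"
  if [ -z "$account" ]; then
    echo "no account configured; run: firectl signin" >&2
    return 1
  fi
  echo "using account: $account"
}
```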

LoRA deployment issues

Q: Why can’t I deploy my fine-tuned Llama 3.1 LoRA adapter?

If you encounter the following error:

Invalid LoRA weight model.layers.0.self_attn.q_proj.lora_A.weight shape: torch.Size([16, 4096]), expected (16, 8192)

This happens because the fireworks.json file sets the base model to Llama 3.1 70B Instruct by default: the adapter was trained against the 8B model (hidden size 4096), so its LoRA weights cannot match the 70B model's hidden size of 8192.

Workaround:

  1. Download the model weights.
  2. Modify the base model in fireworks.json to accounts/fireworks/models/llama-v3p1-8b-instruct.
  3. Follow the instructions in the documentation to upload and deploy the model.
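After step 2, the base-model entry in fireworks.json should point at the 8B model, roughly as below. This is a sketch only: other fields in the file are omitted, and the exact key name may differ in your version of the file.

```json
{
  "base_model": "accounts/fireworks/models/llama-v3p1-8b-instruct"
}
```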

Additional information

If you experience any issues during these processes, you can: