Fine-tuning models
Llama 3.2 1B Instruct, Llama 3.2 3B Instruct, Llama 3.1 8B Instruct and Llama 3.1 70B Instruct are now supported!
We utilize LoRA (Low-Rank Adaptation) for efficient and effective fine-tuning of large language models. LoRA is used for fine-tuning all models except our 70B models, which use QLoRA (quantized LoRA) to improve training speeds. Take advantage of this opportunity to enhance your models with our cutting-edge technology!
Introduction
Fine-tuning a model with a dataset can be useful for several reasons:
- Enhanced Precision: It allows the model to adapt to the unique attributes and trends within the dataset, leading to significantly improved precision and effectiveness.
- Domain Adaptation: While many models are developed with general data, fine-tuning them with specialized, domain-specific datasets ensures they are finely attuned to the specific requirements of that field.
- Bias Reduction: General models may carry inherent biases. Fine-tuning with a well-curated, diverse dataset aids in reducing these biases, fostering fairer and more balanced outcomes.
- Contemporary Relevance: Information evolves rapidly, and fine-tuning with the latest data keeps the model current and relevant.
- Customization for Specific Applications: This process allows for the tailoring of the model to meet unique objectives and needs, an aspect not achievable with standard models.
In essence, fine-tuning a model with a specific dataset is a pivotal step in ensuring its enhanced accuracy, relevance, and suitability for specific applications. Let’s walk through fine-tuning a model!
Fine-tuned model inference on Serverless is slower than base model inference on Serverless. For use cases that need low latency, we recommend using on-demand deployments. For on-demand deployments, fine-tuned model inference speeds are significantly closer to base model speeds (but still slightly slower). If you are only using one LoRA, merging the fine-tuned weights into the base model on an on-demand deployment will provide speed identical to base model inference. If you have an enterprise use case that needs fast fine-tuned models, please contact us!
Installing firectl
firectl is the command-line interface (CLI) utility used to manage and deploy various resources on the Fireworks AI Platform. Use firectl to manage fine-tuning jobs and their resulting models.
Please visit the Firectl Getting Started Guide for instructions on installing and using firectl.
Preparing your dataset
To fine-tune a model, we need to first upload a dataset. Once uploaded, this dataset can be used to create one or more fine-tuning jobs. A dataset consists of a single JSONL file, where each line is a separate training example.
Limits:
- Minimum number of examples is 1.
- Maximum number of examples is 3,000,000.
Format:
- Each line of the file must be a valid JSON object.
For the rest of this tutorial, we will use the databricks/databricks-dolly-15k dataset as an example. Each record in this dataset consists of a category, an instruction, an optional context, and the expected response. Here are a few sample records (abridged):
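```json
{"instruction": "When did Virgin Australia start operating?", "context": "Virgin Australia, the trading name of Virgin Australia Airlines Pty Ltd, is an Australian-based airline. It commenced services on 31 August 2000 as Virgin Blue.", "response": "Virgin Australia commenced services on 31 August 2000 as Virgin Blue.", "category": "closed_qa"}
{"instruction": "Which is a species of fish? Tope or Rope", "context": "", "response": "Tope", "category": "classification"}
```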
To create a dataset, run:
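```bash
# Example - the dataset ID "my-dataset" and the file path are placeholders;
# see `firectl create dataset --help` for the exact arguments.
firectl create dataset my-dataset path/to/dataset.jsonl
```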
and you can check the dataset with:
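```bash
# Uses the dataset ID chosen above; see `firectl get dataset --help` if the syntax differs.
firectl get dataset my-dataset
```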
To use an existing Hugging Face dataset, please refer to the conversion script in the Appendix below. Datasets are private and cannot be viewed by other accounts.
Starting your tuning job
Fireworks supports three types of fine-tuning depending on the modeling objective:
- Text completion - used to train a text generation model
- Text classification - used to train a text classification model
- Conversation - used to train a chat/conversation model
There are two ways to specify settings for your tuning job. You can create a settings YAML file and/or specify them using command-line flags. If a setting is present in both, the command-line flag takes precedence.
To start a job, run:
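```bash
# Example - the settings file path is a placeholder; settings may also be passed as flags.
# Check `firectl create fine-tuning-job --help` for the full list of options.
firectl create fine-tuning-job --settings-file path/to/settings.yaml
```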
firectl will return the fine-tuning job ID.
Starting from a base model or a PEFT addon model
When creating a fine-tuning job, you can start tuning from a base model, or from a model you tuned earlier (PEFT addon):
- Base model: Use the base_model parameter to start from a pre-trained base model.
- PEFT addon model: Use the warm_start_from parameter to start from an existing PEFT addon model.
You must specify either base_model or warm_start_from in your settings file or command-line flags.
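For illustration, the relevant lines of a settings file might look like the sketch below (the model names are placeholders; use the full resource name of the model you want to start from):
```yaml
# Option 1: start from a pre-trained base model
base_model: accounts/fireworks/models/llama-v3p1-8b-instruct

# Option 2: warm start from an existing PEFT addon model (specify one of the two, not both)
# warm_start_from: accounts/my-account/models/my-previous-addon
```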
The following sections provide examples of a settings file for the given tasks.
Text completion
To train a text completion model, you need to define an input template and output template from your JSON fields. To directly use a field as inputs or outputs, simply set the input and output templates as the field names.
You can also add additional text to the input and output templates. For example, the settings below demonstrate training on the context, instruction, and response fields with added text around the fields. We won’t use the category field at all.
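A sketch of such a settings file is shown below. The input and output templates reference dataset fields in braces; the exact top-level key names (e.g. text_completion) are assumptions, so confirm them against the fine-tuning settings reference:
```yaml
base_model: accounts/fireworks/models/llama-v3p1-8b-instruct
dataset: my-dataset
text_completion:
  input_template: "### GIVEN THE CONTEXT: {context} ### INSTRUCTION: {instruction} ### RESPONSE IS: "
  output_template: "ANSWER: {response}"
```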
Conversation
To train a conversation model, the dataset must conform to the schema expected by the Chat Completions API. Each JSON object of the dataset must contain a single array field called messages. Each message is an object containing two fields:
- role - one of “system”, “user”, or “assistant”.
- content - the content of the message.
A message with the “system” role is optional, but if specified, must be the first message of the conversation. Subsequent messages start with “user” and alternate between “user” and “assistant”. For example:
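```json
{"messages": [{"role": "system", "content": "You are a helpful assistant."}, {"role": "user", "content": "What color is the sky?"}, {"role": "assistant", "content": "On a clear day, the sky appears blue."}]}
```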
The settings file for tuning a conversation model looks like:
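```yaml
# Sketch - the exact key names may differ; consult the fine-tuning settings reference.
base_model: accounts/fireworks/models/llama-v3p1-8b-instruct
dataset: my-dataset
conversation: {}
```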
Or, you can optionally pass in a Jinja template that controls how the messages are rendered; the template string is supplied in the settings file. An example template string looks like:
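```jinja
{#- Illustrative generic template: iterate over the messages and wrap each one in role
    markers. A real template should use the chat control tokens of the base model. -#}
{%- for message in messages %}
<|{{ message['role'] }}|>
{{ message['content'] }}
{%- endfor %}
```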
Note: For conversation tuning, polished default Jinja templates are provided for models that are recommended for chat tuning (see the conversation recommended column in the model spec section) to guarantee quality. For other models, a generic default template is still used if no template is provided to override it, but the tuned model quality might not be optimal.
Text classification
In this example, we’ll only be training on the instruction and category fields. We won’t use the context and response fields at all.
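A sketch of the corresponding settings file (the text_classification section and its field names are assumptions; confirm them against the settings reference):
```yaml
base_model: accounts/fireworks/models/llama-v3p1-8b-instruct
dataset: my-dataset
text_classification:
  text: instruction
  label: category
```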
Checking the job status
You can monitor the progress of the tuning job by running:
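```bash
# Uses the fine-tuning job ID returned when the job was created;
# see `firectl get --help` if the syntax differs.
firectl get fine-tuning-job <fine-tuning-job-id>
```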
Once the job successfully completes, a model will be created in your account. You can see a list of models by running:
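```bash
# Lists the models in your account.
firectl list models
```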
Or if you specified a model ID when creating the fine-tuning job, you can get the model directly:
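```bash
firectl get model <model-id>
```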
Deploying and using a model
Before using your fine-tuned model for inference, you must deploy it. Please refer to our guides on Deploying a model and Querying text models for detailed instructions.
Some base models may not support serverless addons. To check:
- Run firectl -a fireworks get <base-model-id>
- Look under Deployed Model Refs to see if a fireworks-owned deployment exists, e.g. accounts/fireworks/deployments/3c7a68b0
- If so, then it is supported
If the base model doesn’t support serverless addons, you will need to use an on-demand deployment to deploy it.
Additional tuning options
Evaluation
By default, the fine-tuning job will not run any post-training evaluation. If enabled:
- For classification tasks, we measure the number of examples that match the expected label.
- For conversation and text completion tasks, we use perplexity to measure how well the model generates responses.
You can enable model evaluation by specifying one of two options:
- evaluation_split: The percentage of the dataset to use for evaluation.
Sample usage:
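```yaml
# Hold out a portion of the training dataset for evaluation. Whether the value is
# expressed as a fraction or a percentage should be confirmed against the settings reference.
evaluation_split: 0.1
```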
- evaluation_dataset: The ID of a separate dataset to use for evaluation.
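Sample usage might look like the following (the dataset may need to be referenced by its ID or by its full resource name):
```yaml
evaluation_dataset: my-evaluation-dataset
```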
Epochs
Epochs is the number of epochs (i.e. passes over the training data) the job should train for. Non-integer values are supported. If not specified, a reasonable default number will be chosen for you.
Note: the maximum value of (number of dataset examples * epochs) is 3,000,000.
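For example, as a settings-file entry (here and in the option examples below, the key names simply mirror the option names and are assumptions; check the settings reference or firectl help for the exact spelling):
```yaml
epochs: 2.5
```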
Learning rate
The learning rate used in training can be configured. If not specified, a reasonable default value will be chosen.
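A sample entry:
```yaml
learning_rate: 0.0001
```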
Warmup Steps
The number of steps to warm up the learning rate. If not specified, a reasonable default value will be chosen.
LR Scheduler Type (Enterprise accounts only)
The learning rate scheduler type can be configured. If not specified, a reasonable default value will be chosen.
Supported values: linear, cosine, cosine_with_restarts, polynomial, constant, constant_with_warmup
Batch size
The batch size used in training can be configured as a positive integer that is a power of 2 and less than 1024. If not specified, a reasonable default value will be chosen.
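A sample entry (a power of 2, per the constraint above):
```yaml
batch_size: 32
```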
Micro Batch Size (Enterprise accounts only)
Micro batch size is the number of examples to process in each GPU instance. If not specified, a reasonable default value will be chosen.
LoRA Rank
LoRA rank refers to the dimensionality of the trainable matrices in Low-Rank Adaptation fine-tuning, balancing model adaptability and computational efficiency when fine-tuning large language models. The LoRA rank used in training can be configured as a positive integer with a maximum value of 32. If not specified, a reasonable default value will be chosen.
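A sample entry (within the maximum of 32):
```yaml
lora_rank: 16
```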
LoRA Alpha (Enterprise accounts only)
The LoRA alpha parameter controls the effective learning rate of the LoRA updates by scaling the trainable matrices during fine-tuning. A higher alpha value increases the impact of the LoRA updates, while a lower value makes the updates more conservative. If not specified, the system will use an optimized default value.
LoRA Target Modules (Enterprise accounts only)
The LoRA target modules parameter specifies the layers of the model to apply LoRA to. If not specified, the system will use an optimized default value.
Training progress and monitoring
The fine-tuning service integrates with Weights & Biases to provide observability into the tuning process. To use this feature, you must have a Weights & Biases account and have provisioned an API key.
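To enable the integration when creating a job, pass your Weights & Biases details to firectl. The flag names below are assumptions; confirm them with firectl create fine-tuning-job --help:
```bash
firectl create fine-tuning-job \
  --settings-file settings.yaml \
  --wandb-entity my-org \
  --wandb-api-key xxx \
  --wandb-project "My Fine-tuning Project"
```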
Model ID
By default, the fine-tuning job will generate a random unique ID for the model. This ID is used to refer to the model at inference time. You can optionally specify a custom ID, within the [ID constraints](https://docs.fireworks.ai/getting-started/concepts#resource-names-and-ids).
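For example (assuming a flag along these lines; confirm with firectl help):
```bash
firectl create fine-tuning-job --settings-file settings.yaml --model-id my-custom-model
```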
Job ID
By default, the fine-tuning job will generate a random unique ID for the fine-tuning job. You can optionally choose a custom ID.
Downloading model weights
We are opening model weights download to everyone now! Simply follow the command below:
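```bash
# Exact arguments may differ; see `firectl download model --help`.
firectl download model <model-id>
```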
Appendix
Supported base models
The following base models are supported for parameter-efficient fine-tuning (PEFT) and can be deployed as PEFT add-ons on Fireworks serverless and on-demand deployments, using the default parameters below. Serverless deployment is only available for a subset of fine-tuned models - run “[get <model id>](https://docs.fireworks.ai/models/overview#introduction)” or check the [models page](https://fireworks.ai/models) to see if there’s an active serverless deployment.
The cut-off length is the maximum limit on the sum of input tokens and generated output tokens.
Hugging Face dataset to JSONL
To convert a Hugging Face dataset to the JSONL format supported by our fine-tuning service, you can use the following Python script:
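```python
# Sketch of a conversion script. It assumes the Hugging Face `datasets` library and uses
# databricks/databricks-dolly-15k as an illustrative source; adjust the field mapping
# (and the target schema, e.g. conversation vs. text completion) to your own dataset.
import json

from datasets import load_dataset

HF_DATASET = "databricks/databricks-dolly-15k"
OUTPUT_PATH = "dataset.jsonl"

dataset = load_dataset(HF_DATASET, split="train")

with open(OUTPUT_PATH, "w") as f:
    for example in dataset:
        # Fold the optional context into the user turn and map the record onto the
        # Chat Completions-style "messages" schema used for conversation fine-tuning.
        user_content = example["instruction"]
        if example.get("context"):
            user_content = f"{example['context']}\n\n{example['instruction']}"
        record = {
            "messages": [
                {"role": "user", "content": user_content},
                {"role": "assistant", "content": example["response"]},
            ]
        }
        f.write(json.dumps(record) + "\n")

print(f"Wrote {len(dataset)} examples to {OUTPUT_PATH}")
```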
Support
We’d love to hear what you think! Please connect with the team, ask questions, and share your feedback in the #fine-tuning Discord channel.
Pricing
We charge based on the total number of tokens processed (dataset tokens * number of epochs). Please see our Pricing page for details.