
What this is

Implement your objective as a Python function and run it through forward_backward_custom; the loss computation happens in your code, so algorithm design remains in your control.

How to use these APIs

  • TrainingClient.forward_backward_custom: run your custom objective and backpropagation for each batch.
  • TrainingClient.optim_step: apply the optimizer update using the gradients accumulated by the previous call; the sketch after this list shows how the two calls combine into one training step.
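
Together, the two calls form one training step per batch. The loop below is a minimal sketch under stated assumptions: training_client and batches are created elsewhere, objective is the function defined in the example further down, and AdamParams is shown with only a learning rate (the full parameter set appears in that example).

import tinker

# Assumptions: training_client and batches are created elsewhere;
# objective is defined in the "Custom objective step" example below.
for batch in batches:
    # Run the custom objective and backpropagation for this batch.
    training_client.forward_backward_custom(batch, objective).result()
    # Apply one optimizer update using the accumulated gradients.
    training_client.optim_step(tinker.AdamParams(learning_rate=1e-5)).result()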

End-to-end examples

Custom objective step

import tinker  # training_client and batch are assumed to exist already

def objective(data, logprobs_list):
    # compute_custom_loss is your own code and must return a scalar tensor.
    loss = compute_custom_loss(data=data, logprobs_list=logprobs_list)
    # The dict is reported as per-step metrics alongside the loss.
    return loss, {"loss": float(loss.item())}

# Run the objective and backpropagation for one batch...
training_client.forward_backward_custom(batch, objective).result()
# ...then apply the optimizer update.
training_client.optim_step(
    tinker.AdamParams(learning_rate=1e-5, beta1=0.9, beta2=0.999, eps=1e-8, weight_decay=0.01)
).result()
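
compute_custom_loss above stands in for your own objective. As one illustration only, a weighted negative log-likelihood might look like the sketch below; reading per-token weights from loss_fn_inputs["weights"] is an assumption about how your data was built, not a fixed schema.

import torch

# Illustration only. Assumes each datum carries per-token weights under
# loss_fn_inputs["weights"]; adapt the lookup to your own batch layout.
def compute_custom_loss(data, logprobs_list):
    loss = torch.zeros(())  # scalar accumulator
    for datum, logprobs in zip(data, logprobs_list):
        weights = torch.as_tensor(datum.loss_fn_inputs["weights"])
        # Weighted negative log-likelihood over this datum's tokens.
        loss = loss - (weights * logprobs).sum()
    return loss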

Common pitfalls

  • Token-weight misalignment: when per-token weights drift out of step with the logprobs they scale, the objective silently computes something other than what you intended.
  • Missing per-step diagnostics: without a recorded loss history, training instability is hard to attribute to a specific step. The sketch after this list guards against both.
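
Both pitfalls can be caught cheaply inside the objective itself. A minimal sketch, reusing the assumed weights layout from the illustration above:

import math
import torch

step_metrics = []  # per-step history, so a spike can be traced to its step

def objective(data, logprobs_list):
    # Guard 1: per-token weights must align one-to-one with logprobs.
    for datum, logprobs in zip(data, logprobs_list):
        weights = torch.as_tensor(datum.loss_fn_inputs["weights"])  # assumed layout
        assert weights.shape == logprobs.shape, "token-weight misalignment"
    loss = compute_custom_loss(data=data, logprobs_list=logprobs_list)
    # Guard 2: record diagnostics every step and fail fast on bad values.
    metrics = {"loss": float(loss.item())}
    step_metrics.append(metrics)
    if math.isnan(metrics["loss"]):
        raise RuntimeError("NaN loss; inspect step_metrics for the lead-up")
    return loss, metrics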