LLM selection resets to default on each new subtask

Symptoms:

When a task is started in Reasoning with a chosen Large Language Model (LLM), the selection may revert to the default model whenever a new subtask is initiated. In such cases, the chosen model does not persist for the entire duration of the task.

Cause:

This is a known issue: the model selection state is not correctly maintained across subtasks, so the system falls back to the default model instead of retaining the user's choice.

While users can manually switch models at any point, changing models mid-task can introduce complications. If the newly selected model has a smaller context window (i.e., accepts fewer tokens) than the previous one, the switch can cause errors or loss of context, because the new model cannot process the entire history of the task.
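
As a rough illustration of the mismatch, the sketch below counts the tokens in an accumulated task history and compares the total against each model's context window. The tiktoken library, the model names, and the window sizes here are assumptions chosen for the example, not values taken from this product.

```python
# Sketch: check whether the accumulated task history fits a model's context
# window. Model names and window sizes are illustrative assumptions.
import tiktoken

CONTEXT_WINDOWS = {
    "large-model": 128_000,  # hypothetical model with a large window
    "small-model": 8_000,    # hypothetical model with a small window
}

def history_tokens(messages: list[str]) -> int:
    """Count tokens across the full task history with a common encoding."""
    enc = tiktoken.get_encoding("cl100k_base")
    return sum(len(enc.encode(m)) for m in messages)

def fits(model: str, messages: list[str]) -> bool:
    """Return True if the entire task history fits the model's window."""
    return history_tokens(messages) <= CONTEXT_WINDOWS[model]

# A long-running task's history may satisfy fits("large-model", history)
# while failing fits("small-model", history) -- the failure mode above.
```

A history that comfortably fits the larger window can overflow the smaller one, which is exactly the error-or-truncation scenario described above.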

Recommendations:

  • Avoid Switching Models Mid-Task: For the most stable experience, select a model at the beginning of a task and keep it until completion.

  • Check Token Limits: If you must switch models, compare the token capacity of the current and the new model, and opt for a model with an equal or larger context window to prevent potential issues (see the sketch after this list).

  • Revert if Necessary: If you encounter errors after switching, switch back to the original model, or to another model whose context window is large enough for the task's history.
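
For users driving the task through an API rather than the UI, the token-limit check can be made a precondition of any model switch. The sketch below builds on the earlier one; the client object and its switch_model call are hypothetical names used purely for illustration.

```python
def safe_switch(client, new_model: str, messages: list[str]) -> None:
    """Switch models only when the task history fits the new window.

    `client.switch_model` is a hypothetical API call used for illustration;
    fits() and history_tokens() come from the earlier sketch.
    """
    if not fits(new_model, messages):
        raise ValueError(
            f"{new_model}'s context window is too small for the current "
            f"history ({history_tokens(messages)} tokens); keep the current "
            "model or pick one with an equal or larger window."
        )
    client.switch_model(new_model)
```

Failing fast here keeps the original model active, which matches the "revert if necessary" recommendation: a rejected switch is cheaper to recover from than a mid-task context overflow.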

Status:

Our engineering team is aware that model selection does not persist across subtasks and is working on a fix. Future updates will ensure that the chosen LLM remains active throughout the entire task lifecycle, including all subtasks.