# Release server/v2.2.1
See CLI 2.2.1 release notes.
If you have a Claude Pro or Max subscription, Plandex can use it when calling Anthropic models. You can use it in either Integrated Models Mode on Plandex Cloud, or in BYO Key Mode (whether on Cloud or self-hosting).
Assuming you're using Anthropic models (which the default model pack does), you'll be asked if you want to connect your Claude subscription the first time you run Plandex. Follow the instructions to connect.
Learn more in the docs.

Fixed an issue with custom models and providers.
See CLI 2.2.0 release notes.
This is a big release that is mainly focused on Plandex's model provider and model config system. It significantly increases model provider flexibility, makes custom model configuration much easier, reduces costs on Cloud, and adds built-in support for Ollama.
## `set-model` and `set-model default` commands

`set-model` has been simplified to work with the new system. If run without arguments, you'll be prompted to either select a built-in or custom model pack, or to directly edit the current plan's model config inline as JSON. You can also pass it a model pack name (`set-model daily-driver`) or jump straight to the JSON settings with `set-model --json`.

`set-model default` works the same way, but configures the default model settings for new plans.

## `models custom` command

`models custom` is a new all-in-one command for managing custom providers, models, and model packs in one place. It replaces the `models add`, `models delete`, `model-packs create`, `model-packs update`, and `model-packs delete` commands.

## `models` and `models default` commands

The `models` and `models default` commands now show simplified output by default, with a new `--all` flag to show all properties.

## Model and model pack updates

- Added `mistral/devstral-small`, with both OpenRouter and Ollama providers.
- …cloud variants for OpenRouter and local variants for Ollama.
- Added `deepseek/r1`, from 8b to 70b, available with the Ollama provider.
- The `gemini-exp` model pack has been removed. In its place there's now a new `gemini-planner` model pack, which uses Gemini 2.5 Pro for planning and context selection and the default models for other roles, as well as a new `google` model pack, which uses either Gemini 2.5 Pro or Gemini 2.5 Flash for all roles.
- An `o3-planner` model pack has been added, which uses OpenAI o3-medium for planning and context selection, and the default models for other roles.
- The `gemini-preview` model pack has been removed, and a new `gemini-planner` model pack has been added, which uses Gemini 2.5 Pro for planning and context selection, and the default models for other roles.
- The `deepseek/r1` model has been updated to use the latest model identifier (`deepseek/deepseek-r1-0528`) on OpenRouter.

## Other fixes and improvements

- Fixed an issue that caused statements like `export const foo = 'bar'` to be omitted from map files. Also improved TypeScript mapping support for some other constructs, like `declare global`, `namespace`, and `enum` blocks, and improved handling of arrow functions. Thanks to @mnahkies for the PR identifying this.
- `plandex checkout` now has a `--yes`/`-y` flag to auto-confirm creating a new branch if it doesn't exist, so the command can be used in scripts with no user interaction.
- `plandex tell`, `plandex continue`, and `plandex build` all now support a `--skip-menu` flag to skip the interactive menu that appears when the response finishes and changes are pending. There's also a new `skip-changes-menu` config setting that can be set to `true` to skip this menu by default.

See CLI 2.1.6+1 release notes.
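Taken together, the command and flag changes above lend themselves to non-interactive use. A minimal sketch, assuming `plandex` is on your PATH and a plan is active (only the commands and flags named above are used; the specific pack name, branch name, and prompt are illustrative):

```shell
# Switch the current plan to a built-in model pack, skipping the interactive picker
plandex set-model daily-driver

# Or edit the current plan's model config directly as JSON
plandex set-model --json

# Show all model properties instead of the simplified default output
plandex models --all

# Script-friendly flow: auto-confirm branch creation, then run a prompt
# without the post-response changes menu
plandex checkout feature-branch --yes
plandex tell "add input validation to the signup form" --skip-menu
```

Setting the `skip-changes-menu` config option to `true` achieves the same effect as passing `--skip-menu` on every invocation.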
See CLI 2.1.6 release notes.
- Fixed models in the `daily-driver` model pack that weren't correctly updated to Sonnet 4 in 2.1.6.
- A new `strong-opus` model pack is now available. It uses Claude Opus 4 for planning and coding, and is otherwise the same as the `strong` pack. Use it with `\set-model strong-opus` to try it out.
- The `opus-4-planner` model pack that was introduced in 2.1.5 has been renamed to `opus-planner`, but the old name is still supported. This model pack uses Claude Opus 4 for planning, and the default models for other roles.

See CLI 2.1.5 release notes.
See CLI 2.1.1 release notes.
- OpenAI models can now be used with just an `OPENROUTER_API_KEY` set. A separate OpenAI account is no longer required.
- You can still set an `OPENAI_API_KEY` environment variable in addition to `OPENROUTER_API_KEY`. This will cause OpenAI models to make direct calls to OpenAI, which is slightly faster and cheaper.
- A `gemini-preview` model pack has been added, which uses Gemini 2.5 Pro Preview for planning and coding, and default models for other roles. You can use this pack by running the REPL with the `--gemini-preview` flag (`plandex --gemini-preview`), or with `\set-model gemini-preview` from inside the REPL. Because this model is still in preview, a fallback to Gemini 1.5 Pro is used on failure.
- …`\set-model` or a custom model pack.
- OpenAI's o4-mini model is now available with high, medium, and low reasoning effort levels. o3-mini has been replaced by the corresponding o4-mini models across all model packs, with a fallback to o3-mini on failure. This improves Plandex's file edit reliability and performance with no increase in costs. o4-mini-medium is also the new default planning model for the `cheap` model pack.
- OpenAI's o3 model is now available with high, medium, and low reasoning effort levels. Note that if you're using Plandex in BYO key mode, OpenAI requires an organization verification step before you can use o3.
- o3 now handles planning in the `strong` model pack, replacing o1. Due to the verification requirements for o3, the `strong` pack falls back to o4-mini-high for planning if o3 is not available.
- …`coder` role, effectively increasing the context limit for the implementation phase from 200k to 1M tokens.
- …`coder` model in the `cheap` model pack, and is also the new main planning and coding model in the `openai` model pack.
- …`reasoning` AND `strong` model packs, reasoning is no longer included by default. This clears up some issues that were caused when output with specific formatting that Plandex takes action on was duplicated between the reasoning and the main output. It also feels a bit more relaxed to keep the reasoning behind the scenes, even though there can be a longer wait for the initial output.
- …(`\billing` from the REPL to open the dashboard).

See CLI 2.1.0 release notes.
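In BYO key mode, the two key setups described above come down to which environment variables are exported before launching Plandex. A minimal sketch (variable names are as given above; the placeholder key values and shell syntax are illustrative):

```shell
# OpenRouter-only setup: all models, including OpenAI's, are called via OpenRouter
export OPENROUTER_API_KEY="your-openrouter-key"

# Optional: also set a direct OpenAI key, so OpenAI models bypass OpenRouter
# (slightly faster and cheaper per the notes above)
export OPENAI_API_KEY="your-openai-key"

# Start the REPL with the gemini-preview model pack enabled
plandex --gemini-preview
```

The same pack can be selected later from inside the REPL with `\set-model gemini-preview`.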
A `\` command that matches only a single option will default to that command. If multiple commands could match, you'll be given a list of options. For input that begins with a `\` but doesn't match any command, there is now a confirmation step. This helps prevent accidentally sending mistyped commands to the model and burning tokens.