/migrations
# move from old versions to new safely
claude-code 1.x → 2.0 [breaking]
Major architecture rewrite with new configuration format
Update installation method: npm install replaced with direct binary download
# before
npm install -g @anthropic-ai/claude-code

# after: download the binary from GitHub releases
curl -L https://github.com/anthropics/claude-code/releases/latest/download/claude-code-macos -o /usr/local/bin/claude-code
chmod +x /usr/local/bin/claude-code
// npm package deprecated in v2.0. Binary installation is now recommended.
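The download URL above is macOS-specific. A minimal sketch for picking the right release asset per platform; only `claude-code-macos` appears in this guide, so the Linux and Windows asset names here are assumptions for illustration:

```python
import platform

# Release asset name per OS. Only "claude-code-macos" is confirmed by the
# step above; the Linux and Windows names are illustrative assumptions.
ASSETS = {
    "Darwin": "claude-code-macos",
    "Linux": "claude-code-linux",
    "Windows": "claude-code-windows.exe",
}

def release_asset(system: str = "") -> str:
    """Return the release binary name for the given (or current) OS."""
    system = system or platform.system()
    try:
        return ASSETS[system]
    except KeyError:
        raise SystemExit(f"unsupported platform: {system}")
```

Substitute the result into the `curl` URL in place of `claude-code-macos`.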
Migrate settings.json structure: Permissions model changed to granular tool-based system
// before
{
  "autoApprove": true,
  "permissions": "all"
}

// after
{
  "permissions": {
    "allow": [
      "Read(*)",
      "Write(*)",
      "Bash(npm *)"
    ]
  }
}
// Old autoApprove flag removed. Use permission wildcards instead.
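A sketch of the settings migration as a script. Mapping the old blanket `autoApprove` onto these particular wildcards is an illustrative assumption; tighten the `allow` list for a real project:

```python
import json

def migrate_settings(old: dict) -> dict:
    """Translate pre-2.0 flags into the granular permissions block.

    Treating autoApprove / "permissions": "all" as equivalent to the broad
    wildcard list below is an assumption made for illustration.
    """
    new = {k: v for k, v in old.items()
           if k not in ("autoApprove", "permissions")}
    if old.get("autoApprove") or old.get("permissions") == "all":
        new["permissions"] = {"allow": ["Read(*)", "Write(*)", "Bash(npm *)"]}
    else:
        new["permissions"] = {"allow": []}
    return new

print(json.dumps(migrate_settings({"autoApprove": True,
                                   "permissions": "all"}), indent=2))
```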
Update CLAUDE.md location: Project instructions moved from root to .claude/
# before
./CLAUDE.md

# after
./.claude/CLAUDE.md
// All Claude Code files now live in .claude/ directory
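The move can be scripted; a minimal sketch that creates `.claude/` if needed and relocates the file:

```python
from pathlib import Path

def move_claude_md(root: Path) -> Path:
    """Move <root>/CLAUDE.md into <root>/.claude/, creating the directory."""
    src = root / "CLAUDE.md"
    dest = root / ".claude" / "CLAUDE.md"
    dest.parent.mkdir(exist_ok=True)
    src.rename(dest)  # rename keeps content and mode bits intact
    return dest
```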
claude-code 2.0 → 2.1 [safe]
Skills system and MCP integration
Add MCP tool configuration: Enable Model Context Protocol servers
# before: no MCP support

# after: .claude/settings.json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/allowed"]
    }
  }
}
// MCP servers extend Claude with custom tools. Optional feature.
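When `.claude/settings.json` already exists, add the server entry without clobbering other keys. A sketch (the helper name is hypothetical):

```python
import json
from pathlib import Path

def add_mcp_server(settings_path: Path, name: str,
                   command: str, args: list) -> None:
    """Merge one entry into "mcpServers", preserving existing settings."""
    settings = (json.loads(settings_path.read_text())
                if settings_path.exists() else {})
    settings.setdefault("mcpServers", {})[name] = {
        "command": command, "args": args,
    }
    settings_path.write_text(json.dumps(settings, indent=2))
```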
Migrate to new skills directory: Custom commands now use .claude/skills/
# before
.claude/commands/my-command.md

# after
.claude/skills/my-skill/SKILL.md
// Skills have dedicated directories for better organization
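If you have several commands to convert, the rename pattern can be looped. A sketch, assuming each command file's stem becomes the skill directory name:

```python
from pathlib import Path

def migrate_commands_to_skills(claude_dir: Path) -> list:
    """Copy each .claude/commands/<name>.md to .claude/skills/<name>/SKILL.md."""
    created = []
    for cmd in sorted((claude_dir / "commands").glob("*.md")):
        skill_dir = claude_dir / "skills" / cmd.stem
        skill_dir.mkdir(parents=True, exist_ok=True)
        target = skill_dir / "SKILL.md"
        target.write_text(cmd.read_text())  # copy; delete originals once verified
        created.append(target)
    return created
```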
aider 0.50 → 0.60 [safe]
Enhanced chat modes and improved diff formatting
Enable architect mode: New mode for large refactoring tasks
# before
aider --edit-format whole

# after
aider --architect
// Architect mode uses planning before implementation
Configure model settings file: Persistent model configuration
# before
aider --model gpt-4 --no-stream

# after: .aider.conf.yml
model: gpt-4
stream: false
// Reduces repetitive CLI flags. (.aider.conf.yml holds CLI options; .aider.model.settings.yml is for per-model tuning.)
aider 0.40 → 0.50 [breaking]
Git integration improvements and breaking config changes
Update auto-commits flag: Flag renamed for clarity
# before
aider --no-auto-commit

# after
aider --no-auto-commits
// Plural form for consistency
Migrate diff format: Default diff format changed
# before: whole-file diffs by default

# after
aider --edit-format udiff
// udiff is now default. Use --edit-format whole for old behavior.
cline 2.x → 3.0 [breaking]
VSCode extension rewrite with new API integration
Update extension settings: Settings namespace changed
// before
{
  "cline.apiKey": "...",
  "cline.model": "claude-3-opus"
}

// after
{
  "cline.api.key": "...",
  "cline.api.model": "claude-sonnet-4-5"
}
// Settings now nested under api namespace
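The rename is mechanical, so it can be applied to a settings dict in one pass. A sketch covering the two keys shown above (any other `cline.*` keys are left untouched):

```python
# Old flat keys -> new api-namespaced keys, per the step above.
RENAMES = {
    "cline.apiKey": "cline.api.key",
    "cline.model": "cline.api.model",
}

def migrate_cline_settings(settings: dict) -> dict:
    """Return a copy of a VSCode settings dict with cline keys renamed."""
    return {RENAMES.get(k, k): v for k, v in settings.items()}
```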
Configure custom instructions: New global instructions file
# before: instructions lived in VSCode settings

# after: create .cline/instructions.md in the workspace root
# Project-specific instructions
Always use TypeScript strict mode
Prefer functional patterns
// Workspace-level instructions override global settings
codex-cli 0.9 → 1.0 [breaking]
Stable release with config file standardization
Rename config file: Configuration moved to standard location
# before
~/.codex/config.yaml

# after
~/.config/codex/config.yaml
// Follows XDG Base Directory specification
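Under the XDG spec, `~/.config` is only the fallback: `XDG_CONFIG_HOME` takes precedence when set. A sketch that resolves the new location accordingly:

```python
import os
from pathlib import Path

def new_config_path() -> Path:
    """Resolve the 1.0 config location per the XDG Base Directory spec:
    use $XDG_CONFIG_HOME if set, otherwise fall back to ~/.config."""
    base = os.environ.get("XDG_CONFIG_HOME") or str(Path.home() / ".config")
    return Path(base) / "codex" / "config.yaml"
```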
Update API key environment variable: Standardized environment variable naming
# before
export OPENAI_KEY=sk-...

# after
export CODEX_API_KEY=sk-...
// Tool-specific env var to avoid conflicts
Migrate model selection syntax: New model specification format
# before
codex --model gpt-4

# after
codex --model openai:gpt-4
// Provider prefix now required for clarity
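If you wrap codex in scripts, old bare model names can be qualified before invocation. A sketch; defaulting unprefixed names to `openai` is an assumption based on the example above:

```python
def qualify_model(model: str, default_provider: str = "openai") -> str:
    """Prefix a bare model name with a provider (assumed default: openai);
    names that already carry a provider prefix pass through unchanged."""
    return model if ":" in model else f"{default_provider}:{model}"
```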
cursor 0.39 → 0.40 [safe]
Composer agent mode and enhanced context
Enable Composer agent mode: New autonomous coding mode
# before: standard chat mode only

# after
# Open Composer (Cmd+Shift+I)
# Enable 'Agent' mode in Composer settings
// Agent mode allows multi-file edits with minimal supervision
Configure .cursorrules: Project-specific instructions
# before: no project instructions

# after: .cursorrules
Use Bun instead of npm
Prefer Tailwind utility classes
Always write tests for new features
// Cursor reads .cursorrules automatically on project load
cursor 0.35 → 0.39 [breaking]
Model provider changes and API updates
Update model selection: GPT-4 Turbo deprecated
# before
gpt-4-turbo

# after
gpt-4o
// GPT-4o is faster and more capable than Turbo
gemini-cli 0.1 → 0.2 [safe]
Enhanced prompt caching and multimodal support
Enable prompt caching: Reduce API costs with caching
# before
gemini-cli chat

# after
gemini-cli chat --cache
// Caching can cut costs by up to 90% for repeated context
Configure model preference: Set default model in config
# before
gemini-cli --model gemini-2.0-flash-exp

# after: ~/.gemini-cli/config.json
{
  "defaultModel": "gemini-2.5-pro"
}
// Gemini 2.5 Pro has better code generation capabilities