OpenAI Codex · Episode 5

CLI Commands & Resuming Sessions

Learn how to manage Codex sessions — list past sessions, resume interrupted work, use dry-run mode for safe testing, and select models for different tasks.

Session Management

Every time you run Codex, it creates a session that records the context — which files were read, what changes were made, and the full conversation history. This means you can always pick up where you left off.

Viewing Past Sessions

If you accidentally close your terminal or need to revisit previous work, list your session history:

codex sessions list

This outputs a table of previous sessions with their IDs, starting prompts, and timestamps:

ID            Started              Prompt
──────────────────────────────────────────────────────────
abc123def4    2026-03-01 14:30     "Add pagination to the API"
xyz789ghi0    2026-03-01 11:15     "Fix auth middleware bug"
mno456pqr1    2026-02-28 09:45     "Refactor database layer"
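
If you script around this output, you can pull a session ID out of the table instead of copying it by hand. The sketch below is a hypothetical helper that assumes the table layout shown above (header row, separator, then one session per line); it pipes in sample output so it runs standalone, but in practice you would pipe `codex sessions list` instead.

```shell
# Extract the most recent session ID from session-list output.
# The table format (two header lines, ID in the first column) is an
# assumption based on the example above.
latest_id=$(printf '%s\n' \
  'ID            Started              Prompt' \
  '──────────────────────────────────────────────────────────' \
  'abc123def4    2026-03-01 14:30     "Add pagination to the API"' \
  'xyz789ghi0    2026-03-01 11:15     "Fix auth middleware bug"' \
  | awk 'NR > 2 { print $1; exit }')

echo "$latest_id"   # → abc123def4
```

With real output, the same `awk` filter lets you chain straight into a resume: `codex resume "$(codex sessions list | awk 'NR > 2 { print $1; exit }')"`.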

Resuming a Session

To continue a previous session with full context intact:

# Resume by session ID
codex resume abc123def4

The agent retains complete context of everything it read and modified in the original session. This is especially useful for:

  • Continuing multi-step refactoring work
  • Following up on a review with additional changes
  • Recovering from an accidental terminal close

Dry-Run Mode

Want to test a prompt without actually modifying any files? Use the --dry-run flag:

# See what changes would be made without applying them
codex do "Refactor the API to use async/await" --dry-run

Dry-run mode shows you exactly what the agent would do — which files it would edit, what code it would write, and what commands it would run — without touching anything.

💡 Tip: Always use --dry-run first when running destructive operations like database migrations or file deletions.
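
The same guard pattern is easy to build into your own scripts. This illustrative sketch (not part of Codex itself) wraps destructive commands in a `run` function that only prints them when a `DRY_RUN` flag is set:

```shell
# A plain-shell dry-run guard: when DRY_RUN=1, print the command
# instead of executing it. Flip DRY_RUN to 0 to apply for real.
DRY_RUN=1

run() {
  if [ "$DRY_RUN" -eq 1 ]; then
    echo "[dry-run] $*"
  else
    "$@"
  fi
}

run rm -rf build/   # → [dry-run] rm -rf build/
```

Routing every destructive call through one function means a single variable controls whether the script previews or applies its changes, which mirrors what `--dry-run` does for the agent.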

Model Selection

Different tasks benefit from different AI models. Override the default model using the --model flag:

# Use a faster model for simple tasks
codex do "Add console.log statements for debugging" --model gpt-4.1-mini

# Use the most powerful model for complex reasoning
codex do "Solve this dynamic programming algorithm" --model o3

# Use the default model (balanced speed and quality)
codex do "Write unit tests for the auth module"

When to Use Each Model

Model           Best For                            Speed
──────────────────────────────────────────────────────────────────
gpt-4.1-mini    Simple edits, formatting, docs      ⚡ Fast
gpt-4.1         General coding (default)            🔄 Balanced
o3              Complex algorithms, architecture    🐢 Slower but smarter
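
If you find yourself picking from this table often, you can encode the choice in a small wrapper. The function below is a hypothetical sketch: the task categories and the mapping to model names are assumptions for illustration, not part of the Codex CLI.

```shell
# Map a rough task category to a model, following the table above.
# In a real script you would then call:
#   codex do "$prompt" --model "$(pick_model "$category")"
pick_model() {
  case "$1" in
    docs|format|rename)       echo "gpt-4.1-mini" ;;  # simple edits
    algorithm|architecture)   echo "o3" ;;            # heavy reasoning
    *)                        echo "gpt-4.1" ;;       # balanced default
  esac
}

pick_model docs        # → gpt-4.1-mini
pick_model algorithm   # → o3
pick_model tests       # → gpt-4.1
```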

What's Next

In the next episode, we'll learn about the AGENTS.md file — the secret weapon that keeps Codex aligned with your team's coding standards, framework choices, and architectural decisions.