Codex config.toml: Where It Lives, What Wins, and Safe Examples

14 min read · AI Development Tools

Codex config.toml is layered. Use `~/.codex/config.toml` for personal defaults, `.codex/config.toml` for trusted repo policy, flags or `--config` for one run, and keep secrets out of committed project files.

Codex config.toml is a layered control file, not one magic file. Use ~/.codex/config.toml for personal defaults, .codex/config.toml for trusted repository policy, CLI flags or --config for one command, and profiles for repeatable CLI presets.

Start with the narrowest layer that owns the setting. A model preference can live in user config, a conservative repo sandbox can live in project config, an experiment should usually be a one-run override, and MCP setup is often safer through codex mcp add before you hand-edit mcp_servers.

Keep secrets, bearer tokens, and auth.json out of committed project files. OpenAI's Codex docs were rechecked on April 21, 2026 for config locations, precedence, MCP options, auth storage, and sandbox behavior; local command examples are tied to the installed codex-cli 0.121.0 help output checked that day.

Start Here: Which Codex Config Layer Should Own the Setting?

The useful first move is not "open a config file." It is "choose the owner."

| You want to change | Best owner | Why |
|---|---|---|
| Your default model, approval habit, notification command, or credential store | `~/.codex/config.toml` | The setting should follow you, not the repository |
| Repo-specific instructions, conservative sandbox policy, or team MCP defaults | `.codex/config.toml` in a trusted project | The setting should travel with the project and stay reviewable |
| A one-off model, sandbox, web-search, or nested setting | CLI flag or `--config` | The experiment should not mutate persistent files |
| A repeatable CLI mode such as "review" or "fast local edit" | `[profiles.<name>]` | The preset can be selected explicitly with `--profile` |
| A new MCP server | `codex mcp add` first, then TOML only when needed | The helper reduces syntax mistakes and keeps env handling explicit |

If you are still deciding between subscription, API key, and usage routes, solve that before deep config work. The adjacent Codex API key vs subscription guide separates those billing and feature routes. If the problem is quota rather than configuration, use the Codex usage limits guide.

Precedence: What Wins When Settings Conflict

*Figure: Codex config.toml precedence ladder, with policy layers stacked from one-run overrides down to built-in defaults.*

OpenAI's Codex configuration docs currently describe this precedence order, from strongest to weakest:

| Rank | Layer | Practical consequence |
|---|---|---|
| 1 | CLI flags and `--config` | A single command can override the files below it |
| 2 | Profile selected with `--profile <name>` | A named preset can override normal file defaults |
| 3 | Project `.codex/config.toml` | Trusted project settings override user settings; the closest project file to the current working directory wins |
| 4 | User `~/.codex/config.toml` | Your personal default layer |
| 5 | System `/etc/codex/config.toml` | Managed or machine-wide defaults |
| 6 | Built-in defaults | What Codex uses when no higher layer sets the key |

Project config has one important gate: Codex loads .codex/config.toml only for trusted projects. If a project file looks correct but nothing changes, trust and working directory are the first two things to check.

Precedence also explains why a file edit can appear to fail. A profile may still be selected, a shell alias may include --config, or the current command may be running from a subfolder where a closer .codex/config.toml wins.
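A few shell checks can rule those layers out quickly before you touch the file again; this sketch assumes a git repository and uses a one-run override (shown later in this article) as the tiebreaker:

```shell
# Run from the repo root to rule out a closer subfolder .codex/config.toml
cd "$(git rev-parse --show-toplevel)"

# Check whether a shell alias is silently adding flags to every run
type codex

# A one-run override sits at the top of the precedence ladder, so if this
# changes behavior, some lower layer was winning before
codex --config model='"gpt-5.4"'
```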

User Config: Safe Personal Defaults

User config is the right home for preferences that should follow you across projects. Typical examples are model choice, approval habits, sandbox preference, notification behavior, and credential-storage preference.

A small personal file is usually better than a pasted sample:

```toml
#:schema https://developers.openai.com/codex/config-schema.json

model = "gpt-5.4"
approval_policy = "on-request"
sandbox_mode = "workspace-write"
allow_login_shell = false
cli_auth_credentials_store = "keyring"
```

That file says five things:

  • Use a current Codex-capable model by default.
  • Ask before higher-risk operations.
  • Let Codex write inside the workspace rather than the whole machine.
  • Avoid login shell behavior unless you explicitly need it.
  • Prefer OS keyring storage for CLI authentication credentials.

Do not treat user config as a place to paste access tokens. OpenAI's Codex authentication docs describe auth.json as local auth state that can contain access tokens, and it should not be shared or committed. API keys and bearer tokens belong in environment variables, a credential store, or another private secret manager.
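One pattern that keeps the literal token out of every file is to export it for the session from wherever you store secrets; `get-secret` below is a hypothetical stand-in for your password manager's CLI, not a real command:

```shell
# Export the token for this shell session only; config.toml never sees the value
export DOCS_MCP_TOKEN="$(get-secret docs-mcp-token)"   # hypothetical lookup command

# Codex inherits the variable, so config can reference it indirectly,
# e.g. via bearer_token_env_var = "DOCS_MCP_TOKEN"
codex
```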

Project Config: Shared Policy, Not Private State

*Figure: the Codex config.toml safety split, separating private user defaults from shared project policy.*

Project config is for policy the repository can honestly own. Good candidates include repo-specific instructions, a conservative sandbox baseline, a required MCP server name, or a setting that prevents teammates from running Codex in the wrong folder.

Keep it boring:

```toml
#:schema https://developers.openai.com/codex/config-schema.json

approval_policy = "untrusted"
sandbox_mode = "workspace-write"
allow_login_shell = false
developer_instructions = """
Follow this repository's AGENTS.md.
Keep generated article assets under public/posts/{lang}/{slug}/img/.
Do not commit local credentials, auth state, or provider tokens.
"""
```

That is a repo policy. It does not expose a token. It does not grant the entire machine. It tells future runs what the repository expects.

Be much more careful with project-level danger-full-access or approval_policy = "never". OpenAI's sandboxing docs describe danger-full-access as disabling sandboxing, and pairing it with no approval prompts is effectively a high-trust automation mode. That can be acceptable in an already isolated disposable environment. It is a poor default for a shared project file.

If a team needs looser permissions, document the reason beside the setting and make sure the repo context actually justifies it. A local preference can be broad; a committed policy should be narrow.
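When a looser committed policy really is justified, the justification can live next to the keys it excuses. A sketch, using only keys from this article; the policy-doc path in the comment is a hypothetical example:

```toml
# .codex/config.toml for the disposable CI runner only.
# Broad access is justified because the container is rebuilt per run and
# holds no credentials; see docs/codex-policy.md (hypothetical) for the review note.
sandbox_mode = "danger-full-access"
approval_policy = "never"
```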

MCP Servers: Use the Helper Before Hand-Editing TOML

Codex supports Model Context Protocol servers in both the CLI and IDE, and MCP configuration lives in config.toml. The fastest safe path is the CLI helper:

```bash
codex mcp add docs --env DOCS_TOKEN="$DOCS_TOKEN" -- node ./mcp/docs-server.mjs
```

Use the helper when the server is command-based and you want Codex to write the correct shape for the current CLI. Then inspect it:

```bash
codex mcp get docs --json
```

When you do need direct TOML, keep secrets indirect:

```toml
[mcp_servers.docs]
command = "node"
args = ["./mcp/docs-server.mjs"]
env_vars = ["DOCS_TOKEN"]
startup_timeout_sec = 10
tool_timeout_sec = 60
required = true
```

For an HTTP server, use an environment-backed bearer token rather than a literal secret:

```toml
[mcp_servers.internal_docs]
url = "https://mcp.example.com"
bearer_token_env_var = "DOCS_MCP_TOKEN"
enabled = true
required = true
```

Two rules prevent most MCP config mistakes:

  • Put server credentials in environment variables, not in a committed TOML file.
  • Keep `enabled_tools` or `disabled_tools` tight when a server exposes more tools than the repo or task needs.
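
As a sketch of the second rule, an allowlist might look like this; the tool names are hypothetical and depend on what the server actually exposes:

```toml
[mcp_servers.docs]
command = "node"
args = ["./mcp/docs-server.mjs"]
env_vars = ["DOCS_TOKEN"]
# Hypothetical tool names; everything not listed stays off
enabled_tools = ["search_docs", "read_page"]
```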

MCP failures often look like general Codex failures. Before changing models or sandbox policy, check whether the server starts, whether the env var exists in the Codex process environment, whether the working directory is correct, and whether the server was marked required.
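Those checks translate into a few commands; the server path and env var name are the ones used in the earlier examples:

```shell
# Does the stored stanza match what you think it is?
codex mcp get docs --json

# Does the server start on its own, outside Codex?
node ./mcp/docs-server.mjs

# Is the variable present in the environment that launches Codex?
env | grep DOCS_TOKEN
```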

Model, Provider, and Base URL Defaults

The root keys model, review_model, model_provider, and provider-related sections control which model route Codex prefers. Use persistent defaults only when the route should be stable.

A normal personal default is simple:

```toml
model = "gpt-5.4"
review_model = "gpt-5.4"
```

If you are testing a model for one command, prefer a flag:

```bash
codex --model gpt-5.4-mini
```

The openai_base_url key is useful when the built-in OpenAI provider should route through a proxy, router, or data-residency endpoint. Keep that setting private unless the repository itself owns the route:

```toml
openai_base_url = "https://gateway.example.com/v1"
```

Do not mix this with billing claims. A base URL changes where the built-in provider sends requests; it does not determine whether ChatGPT subscription usage, API-key billing, or workspace policy applies. For that route choice, use the Codex API key vs subscription guide.
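To trial a gateway before persisting it, the same key can be set for one run with `--config`; the gateway URL is a placeholder:

```shell
# Value is parsed as TOML, so the inner quotes are required for a string
codex --config openai_base_url='"https://gateway.example.com/v1"'
```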

Sandbox and Approval Settings

Codex safety has two related knobs:

  • sandbox_mode controls what Codex can touch.
  • approval_policy controls when Codex asks before continuing.

The common sandbox modes are:

| Mode | Use when | Avoid when |
|---|---|---|
| `read-only` | Codex should inspect, plan, or review without writing | You expect it to create or patch files |
| `workspace-write` | Codex should edit the current workspace | The task needs broad filesystem or network access |
| `danger-full-access` | The environment is already isolated and the task needs broad access | You are in a normal personal or shared project |

The common approval policies are:

| Policy | What it implies |
|---|---|
| `untrusted` | More actions require approval; useful for unfamiliar projects |
| `on-request` | Codex can work normally but asks when it needs more authority |
| `never` | Codex should not ask; only safe when the environment and task are already constrained |

For most local development, workspace-write with on-request is a reasonable starting point. For a shared project policy, untrusted or a conservative workspace-write setup is easier to defend. For a disposable automation workspace, broader modes can be explicit and isolated.
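These starting points map directly onto the dedicated flags, so an unfamiliar repo can be inspected without touching any config file; this assumes the installed CLI accepts both flags together:

```shell
# Unfamiliar project: read-only inspection with frequent approval prompts
codex --sandbox read-only --ask-for-approval untrusted

# Normal local development baseline
codex --sandbox workspace-write --ask-for-approval on-request
```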

If you are configuring Codex for visual desktop work rather than file work, the Codex Computer Use guide covers the separate app, permission, and tool-control lane.

One-Run Overrides and Profiles

Do not edit persistent files for every experiment. Codex supports one-run overrides through flags and --config.

Dedicated flags are clearer when they exist:

```bash
codex --model gpt-5.4-mini
codex --sandbox read-only
codex --ask-for-approval on-request
```

For arbitrary keys, use --config. Values are parsed as TOML, so strings need TOML string quoting:

```bash
codex --config model='"gpt-5.4"'
codex --config sandbox_workspace_write.network_access=true
codex --config 'shell_environment_policy.include_only=["PATH","HOME"]'
```

Profiles are better when a preset is repeatable:

```toml
[profiles.review]
model = "gpt-5.4"
approval_policy = "on-request"
sandbox_mode = "read-only"

[profiles.local_edit]
model = "gpt-5.4-mini"
approval_policy = "on-request"
sandbox_mode = "workspace-write"
```

Then run:

```bash
codex --profile review
```

OpenAI currently describes profiles as experimental and not supported in the IDE extension. That makes profiles a CLI convenience, not a universal Codex workspace policy.

Troubleshooting: Why Your Codex Config Did Not Apply

*Figure: Codex config.toml troubleshooting flow linking trust, working directory, overrides, profiles, TOML syntax, and MCP checks.*

Use this order before rewriting the file again:

| Check | What to inspect | Why it matters |
|---|---|---|
| Current command | Shell aliases, CLI flags, `--config`, `--profile` | Higher-precedence inputs may be winning |
| Current directory | `pwd`, repo root, subfolder config files | The closest trusted project config wins |
| Project trust | Whether Codex trusts the project | Untrusted project config is skipped |
| TOML syntax | Tables, quoted strings, arrays, schema hint | A syntax error can prevent the intended key from loading |
| Key support | Official config reference and local `codex --help` | Keys and flags can change with CLI versions |
| MCP server | `codex mcp get <name> --json` | The server stanza may not match the running config |
| Auth state | `auth.json`, keyring, env vars | Authentication is adjacent to config, not the same file |

The most common mistake is editing the right-looking file from the wrong layer. A user config change can be overridden by a selected profile. A project config can be skipped until the project is trusted. A one-run --config value can make the persistent file look broken even when it is fine.

When the issue is MCP-specific, reduce the server to the smallest working shape. Confirm the command or URL, env var names, cwd if needed, startup timeout, and whether required = true is stopping the run.
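A smallest-working-shape reduction of the earlier stdio server drops everything optional first, then adds keys back one at a time:

```toml
# Step 1: can Codex start the server at all?
[mcp_servers.docs]
command = "node"
args = ["./mcp/docs-server.mjs"]

# Step 2, once that works: re-add env_vars, timeouts, and
# required = true one at a time to find the key that breaks the run.
```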

Copy-Safe Patterns

Use these as starting points, not as a complete reference.

Personal Default With Conservative Write Access

```toml
#:schema https://developers.openai.com/codex/config-schema.json

model = "gpt-5.4"
approval_policy = "on-request"
sandbox_mode = "workspace-write"
allow_login_shell = false
cli_auth_credentials_store = "keyring"
```

Review-Only Profile

```toml
[profiles.review]
model = "gpt-5.4"
sandbox_mode = "read-only"
approval_policy = "on-request"
```

Trusted Project Policy

```toml
# .codex/config.toml
approval_policy = "untrusted"
sandbox_mode = "workspace-write"
allow_login_shell = false
```

HTTP MCP Server With Env-Backed Token

```toml
[mcp_servers.docs]
url = "https://mcp.example.com"
bearer_token_env_var = "DOCS_MCP_TOKEN"
tool_timeout_sec = 60
```

The full official reference is still worth keeping open when you add less common keys such as model_context_window, model_auto_compact_token_limit, custom model_providers, apps, agents, memories, or granular approval fields.

FAQ

Where is the Codex configuration file?

The user file is ~/.codex/config.toml. Trusted projects can also use .codex/config.toml. Managed machines may have /etc/codex/config.toml.

Does the Codex IDE extension use the same config?

OpenAI's Codex docs describe CLI and IDE config as sharing the same layers for the main config file. Profiles are the important caveat: OpenAI currently marks profiles as experimental and not supported in the IDE extension.

Should API keys go in config.toml?

No. Keep API keys, bearer tokens, and access tokens out of committed config. Use environment variables, OS keyring, private user config where appropriate, or the authentication flow Codex expects. Do not commit auth.json.

Why did my project .codex/config.toml not apply?

Check trust first. Project config only loads for trusted projects. Then check current working directory, closer subfolder config files, selected profiles, CLI flags, and one-run --config overrides.

When should I use --config instead of editing the file?

Use --config for experiments, one-off model changes, temporary sandbox tweaks, or nested keys you do not want to persist. If the value becomes a normal habit, move it into user config or a profile.

Is danger-full-access ever appropriate?

Only when the environment is already isolated and the task needs broad authority. It should not be a casual default in a shared repo. For normal work, start with workspace-write plus an approval policy that can stop risky actions.

How do I check current MCP config?

Use the MCP CLI helper. On the checked local install, codex mcp get <name> --json is available, and codex mcp add supports stdio and URL-based server setup. Treat that as installed-version syntax and verify codex mcp --help on your machine.

Where is the official schema?

OpenAI publishes the Codex config schema at https://developers.openai.com/codex/config-schema.json. Add #:schema https://developers.openai.com/codex/config-schema.json near the top of TOML files if your editor can use schema hints.
