Codex config.toml is a layered control file, not one magic file. Use ~/.codex/config.toml for personal defaults, .codex/config.toml for trusted repository policy, CLI flags or --config for one command, and profiles for repeatable CLI presets.
Start with the narrowest layer that owns the setting. A model preference can live in user config, a conservative repo sandbox can live in project config, an experiment should usually be a one-run override, and MCP setup is often safer through codex mcp add before you hand-edit mcp_servers.
Keep secrets, bearer tokens, and auth.json out of committed project files. OpenAI's Codex docs were rechecked on April 21, 2026 for config locations, precedence, MCP options, auth storage, and sandbox behavior; local command examples are tied to the installed codex-cli 0.121.0 help output checked that day.
Start Here: Which Codex Config Layer Should Own the Setting?
The useful first move is not "open a config file." It is "choose the owner."
| You want to change | Best owner | Why |
|---|---|---|
| Your default model, approval habit, notification command, or credential store | ~/.codex/config.toml | The setting should follow you, not the repository |
| Repo-specific instructions, conservative sandbox policy, or team MCP defaults | .codex/config.toml in a trusted project | The setting should travel with the project and stay reviewable |
| A one-off model, sandbox, web-search, or nested setting | CLI flag or --config | The experiment should not mutate persistent files |
| A repeatable CLI mode such as "review" or "fast local edit" | [profiles.<name>] | The preset can be selected explicitly with --profile |
| A new MCP server | codex mcp add first, then TOML only when needed | The helper reduces syntax mistakes and keeps env handling explicit |
If you are still deciding between subscription, API key, and usage routes, solve that before deep config work. The adjacent Codex API key vs subscription guide separates those billing and feature routes. If the problem is quota rather than configuration, use the Codex usage limits guide.
Precedence: What Wins When Settings Conflict

OpenAI's Codex configuration docs currently describe this precedence order, from strongest to weakest:
| Rank | Layer | Practical consequence |
|---|---|---|
| 1 | CLI flags and --config | A single command can override the files below it |
| 2 | Profile selected with --profile <name> | A named preset can override normal file defaults |
| 3 | Project .codex/config.toml | Trusted project settings override user settings; the closest project file to the current working directory wins |
| 4 | User ~/.codex/config.toml | Your personal default layer |
| 5 | System /etc/codex/config.toml | Managed or machine-wide defaults |
| 6 | Built-in defaults | What Codex uses when no higher layer sets the key |
Project config has one important gate: Codex loads .codex/config.toml only for trusted projects. If a project file looks correct but nothing changes, trust and working directory are the first two things to check.
Precedence also explains why a file edit can appear to fail. A profile may still be selected, a shell alias may include --config, or the current command may be running from a subfolder where a closer .codex/config.toml wins.
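The merge behaves like a last-layer-wins dictionary update. The sketch below is an illustration of that mental model, not Codex's actual loader; the layer contents are invented for the example:

```python
# Illustration of layered config precedence (not Codex's real implementation):
# layers are merged weakest-first, so the strongest layer wins per key.
def effective_config(*layers):
    """Merge config layers from weakest to strongest; later layers win."""
    merged = {}
    for layer in layers:  # weakest first: builtin, user, project, CLI
        merged.update(layer)
    return merged

builtin = {"model": "default-model", "sandbox_mode": "read-only"}
user    = {"model": "gpt-5.4", "sandbox_mode": "workspace-write"}
project = {"sandbox_mode": "read-only"}   # trusted repo policy
cli     = {"model": "gpt-5.4-mini"}       # one-run --model flag

print(effective_config(builtin, user, project, cli))
# model comes from the CLI flag, sandbox_mode from the project file
```

This is why an edit to user config can "fail": a stronger layer already sets the same key.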
User Config: Safe Personal Defaults
User config is the right home for preferences that should follow you across projects. Typical examples are model choice, approval habits, sandbox preference, notification behavior, and credential-storage preference.
A small personal file is usually better than a pasted sample:
```toml
#:schema https://developers.openai.com/codex/config-schema.json

model = "gpt-5.4"
approval_policy = "on-request"
sandbox_mode = "workspace-write"
allow_login_shell = false
cli_auth_credentials_store = "keyring"
```
That file says five things:
- Use a current Codex-capable model by default.
- Ask before higher-risk operations.
- Let Codex write inside the workspace rather than the whole machine.
- Avoid login shell behavior unless you explicitly need it.
- Prefer OS keyring storage for CLI authentication credentials.
Do not treat user config as a place to paste access tokens. OpenAI's Codex authentication docs describe auth.json as local auth state that can contain access tokens, and it should not be shared or committed. API keys and bearer tokens belong in environment variables, a credential store, or another private secret manager.
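One hedged pattern for keeping the secret out of files entirely is to export it from a private secret manager in the shell that launches Codex. The example assumes a `pass`-style store and a hypothetical entry name; substitute your own tooling:

```shell
# Load the token from a private secret manager into the environment;
# committed TOML then only references the variable name, never the value.
export DOCS_MCP_TOKEN="$(pass show mcp/docs-token)"
codex  # the Codex process inherits DOCS_MCP_TOKEN
```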
Project Config: Shared Policy, Not Private State

Project config is for policy the repository can honestly own. Good candidates include repo-specific instructions, a conservative sandbox baseline, a required MCP server name, or a setting that prevents teammates from running Codex in the wrong folder.
Keep it boring:
```toml
#:schema https://developers.openai.com/codex/config-schema.json

approval_policy = "untrusted"
sandbox_mode = "workspace-write"
allow_login_shell = false

developer_instructions = """
Follow this repository's AGENTS.md.
Keep generated article assets under public/posts/{lang}/{slug}/img/.
Do not commit local credentials, auth state, or provider tokens.
"""
```
That is a repo policy. It does not expose a token. It does not grant the entire machine. It tells future runs what the repository expects.
Be much more careful with project-level danger-full-access or approval_policy = "never". OpenAI's sandboxing docs describe danger-full-access as disabling sandboxing, and pairing it with no approval prompts is effectively a high-trust automation mode. That can be acceptable in an already isolated disposable environment. It is a poor default for a shared project file.
If a team needs looser permissions, document the reason beside the setting and make sure the repo context actually justifies it. A local preference can be broad; a committed policy should be narrow.
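When a committed policy genuinely needs broad access, a comment beside the key keeps the justification reviewable. The values and names below are illustrative only, not a recommendation:

```toml
# Broad access is justified here: this repo only runs inside a disposable
# CI container that is destroyed after every job. Reviewed 2026-04.
sandbox_mode = "danger-full-access"
approval_policy = "never"
```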
MCP Servers: Use the Helper Before Hand-Editing TOML
Codex supports Model Context Protocol servers in both the CLI and IDE, and MCP configuration lives in config.toml. The fastest safe path is the CLI helper:
```bash
codex mcp add docs --env DOCS_TOKEN="$DOCS_TOKEN" -- node ./mcp/docs-server.mjs
```
Use the helper when the server is command-based and you want Codex to write the correct shape for the current CLI. Then inspect it:
```bash
codex mcp get docs --json
```
When you do need direct TOML, keep secrets indirect:
```toml
[mcp_servers.docs]
command = "node"
args = ["./mcp/docs-server.mjs"]
env_vars = ["DOCS_TOKEN"]
startup_timeout_sec = 10
tool_timeout_sec = 60
required = true
```
For an HTTP server, use an environment-backed bearer token rather than a literal secret:
```toml
[mcp_servers.internal_docs]
url = "https://mcp.example.com"
bearer_token_env_var = "DOCS_MCP_TOKEN"
enabled = true
required = true
```
Two rules prevent most MCP config mistakes:
- Put server credentials in environment variables, not in a committed TOML file.
- Keep enabled_tools or disabled_tools tight when a server exposes more tools than the article, repo, or task needs.
MCP failures often look like general Codex failures. Before changing models or sandbox policy, check whether the server starts, whether the env var exists in the Codex process environment, whether the working directory is correct, and whether the server was marked required.
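A quick env-var preflight catches the most common of those failures. The check below assumes the server expects DOCS_TOKEN, as in the earlier stdio example:

```shell
# Confirm the variable exists in the shell that will launch Codex;
# an empty value here means the MCP server starts without its token.
if [ -z "${DOCS_TOKEN:-}" ]; then
  echo "DOCS_TOKEN is not set in this shell"
fi
```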
Model, Provider, and Base URL Defaults
The root keys model, review_model, model_provider, and provider-related sections control which model route Codex prefers. Use persistent defaults only when the route should be stable.
A normal personal default is simple:
```toml
model = "gpt-5.4"
review_model = "gpt-5.4"
```
If you are testing a model for one command, prefer a flag:
```bash
codex --model gpt-5.4-mini
```
The openai_base_url key is useful when the built-in OpenAI provider should route through a proxy, router, or data-residency endpoint. Keep that setting private unless the repository itself owns the route:
```toml
openai_base_url = "https://gateway.example.com/v1"
```
Do not mix this with billing claims. A base URL changes where the built-in provider sends requests. It does not by itself determine whether ChatGPT subscription usage, API-key billing, or workspace policy applies. For that route choice, use the Codex API key vs subscription guide.
Sandbox and Approval Settings
Codex safety has two related knobs:
- sandbox_mode controls what Codex can touch.
- approval_policy controls when Codex asks before continuing.
The common sandbox modes are:
| Mode | Use when | Avoid when |
|---|---|---|
| read-only | Codex should inspect, plan, or review without writing | You expect it to create or patch files |
| workspace-write | Codex should edit the current workspace | The task needs broad filesystem or network access |
| danger-full-access | The environment is already isolated and the task needs broad access | You are in a normal personal or shared project |
The common approval policies are:
| Policy | What it implies |
|---|---|
| untrusted | More actions require approval; useful for unfamiliar projects |
| on-request | Codex can work normally but asks when it needs more authority |
| never | Codex should not ask; only safe when the environment and task are already constrained |
For most local development, workspace-write with on-request is a reasonable starting point. For a shared project policy, untrusted or a conservative workspace-write setup is easier to defend. For a disposable automation workspace, broader modes can be explicit and isolated.
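That local starting point is two lines of user config; a minimal sketch of the pairing described above:

```toml
# Local development default: edits stay inside the workspace,
# and Codex asks before actions that need more authority.
sandbox_mode = "workspace-write"
approval_policy = "on-request"
```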
If you are configuring Codex for visual desktop work rather than file work, the Codex Computer Use guide covers the separate app, permission, and tool-control lane.
One-Run Overrides and Profiles
Do not edit persistent files for every experiment. Codex supports one-run overrides through flags and --config.
Dedicated flags are clearer when they exist:
```bash
codex --model gpt-5.4-mini
codex --sandbox read-only
codex --ask-for-approval on-request
```
For arbitrary keys, use --config. Values are parsed as TOML, so strings need TOML string quoting:
```bash
codex --config model='"gpt-5.4"'
codex --config sandbox_workspace_write.network_access=true
codex --config 'shell_environment_policy.include_only=["PATH","HOME"]'
```
Profiles are better when a preset is repeatable:
```toml
[profiles.review]
model = "gpt-5.4"
approval_policy = "on-request"
sandbox_mode = "read-only"

[profiles.local_edit]
model = "gpt-5.4-mini"
approval_policy = "on-request"
sandbox_mode = "workspace-write"
```
Then run:
```bash
codex --profile review
```
OpenAI currently describes profiles as experimental and not supported in the IDE extension. That makes profiles a CLI convenience, not a universal Codex workspace policy.
Troubleshooting: Why Your Codex Config Did Not Apply

Use this order before rewriting the file again:
| Check | What to inspect | Why it matters |
|---|---|---|
| Current command | Shell aliases, CLI flags, --config, --profile | Higher-precedence inputs may be winning |
| Current directory | pwd, repo root, subfolder config files | The closest trusted project config wins |
| Project trust | Whether Codex trusts the project | Untrusted project config is skipped |
| TOML syntax | Tables, quoted strings, arrays, schema hint | A syntax error can prevent the intended key from loading |
| Key support | Official config reference and local codex --help | Keys and flags can change with CLI versions |
| MCP server | codex mcp get <name> --json | The server stanza may not match the running config |
| Auth state | auth.json, keyring, env vars | Authentication is adjacent to config, not the same file |
The most common mistake is editing the right-looking file from the wrong layer. A user config change can be overridden by a selected profile. A project config can be skipped until the project is trusted. A one-run --config value can make the persistent file look broken even when it is fine.
When the issue is MCP-specific, reduce the server to the smallest working shape. Confirm the command or URL, env var names, cwd if needed, startup timeout, and whether required = true is stopping the run.
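That reduction can be very small; the shape below mirrors the earlier docs-server example with every optional key stripped, so you can add keys back one at a time:

```toml
# Smallest working stdio shape: command plus args, nothing else.
[mcp_servers.docs]
command = "node"
args = ["./mcp/docs-server.mjs"]
```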
Copy-Safe Patterns
Use these as starting points, not as a complete reference.
Personal Default With Conservative Write Access
```toml
#:schema https://developers.openai.com/codex/config-schema.json

model = "gpt-5.4"
approval_policy = "on-request"
sandbox_mode = "workspace-write"
allow_login_shell = false
cli_auth_credentials_store = "keyring"
```
Review-Only Profile
```toml
[profiles.review]
model = "gpt-5.4"
sandbox_mode = "read-only"
approval_policy = "on-request"
```
Trusted Project Policy
```toml
# .codex/config.toml
approval_policy = "untrusted"
sandbox_mode = "workspace-write"
allow_login_shell = false
```
HTTP MCP Server With Env-Backed Token
```toml
[mcp_servers.docs]
url = "https://mcp.example.com"
bearer_token_env_var = "DOCS_MCP_TOKEN"
tool_timeout_sec = 60
```
The full official reference is still worth keeping open when you add less common keys such as model_context_window, model_auto_compact_token_limit, custom model_providers, apps, agents, memories, or granular approval fields.
FAQ
Where is the Codex configuration file?
The user file is ~/.codex/config.toml. Trusted projects can also use .codex/config.toml. Managed machines may have /etc/codex/config.toml.
Does the Codex IDE extension use the same config?
OpenAI's Codex docs describe CLI and IDE config as sharing the same layers for the main config file. Profiles are the important caveat: OpenAI currently marks profiles as experimental and not supported in the IDE extension.
Should API keys go in config.toml?
No. Keep API keys, bearer tokens, and access tokens out of committed config. Use environment variables, OS keyring, private user config where appropriate, or the authentication flow Codex expects. Do not commit auth.json.
Why did my project .codex/config.toml not apply?
Check trust first. Project config only loads for trusted projects. Then check current working directory, closer subfolder config files, selected profiles, CLI flags, and one-run --config overrides.
When should I use --config instead of editing the file?
Use --config for experiments, one-off model changes, temporary sandbox tweaks, or nested keys you do not want to persist. If the value becomes a normal habit, move it into user config or a profile.
Is danger-full-access ever appropriate?
Only when the environment is already isolated and the task needs broad authority. It should not be a casual default in a shared repo. For normal work, start with workspace-write plus an approval policy that can stop risky actions.
How do I check current MCP config?
Use the MCP CLI helper. On the checked local install, codex mcp get <name> --json is available, and codex mcp add supports stdio and URL-based server setup. Treat that as installed-version syntax and verify codex mcp --help on your machine.
Where is the official schema?
OpenAI publishes the Codex config schema at https://developers.openai.com/codex/config-schema.json. Add #:schema https://developers.openai.com/codex/config-schema.json near the top of TOML files if your editor can use schema hints.
