If you are asking for a Codex daily token limit, the useful correction is that OpenAI does not publish one universal per-day token number for Codex. The current answer depends on which contract you are actually using: included ChatGPT-plan usage, a credit-based Business or Enterprise seat, or an API key with usage-based billing.
That split matters because the limits are described in different ways. Plus and Pro currently use five-hour and weekly activity windows rather than one daily token cap. Some Business and Enterprise seats now use token-based credits after the April 2, 2026 pricing change. API-key usage is a separate contract again, with token pricing, model context limits, and tiered RPM or TPM limits.
So the first thing to check is not one more copied quota screenshot. Check the Codex usage dashboard or /status, confirm whether you are looking at a subscriber plan, a workspace credit contract, or API billing, and only then compare the current numbers that apply to that branch.
Evidence note: OpenAI's Codex help page, pricing page, rate card, GPT-5.3-Codex API docs, and the April 2, 2026 team-pricing post were rechecked on April 8, 2026, because these limits and pricing contracts are still moving.
Start Here: Which Codex Limit Contract Applies to You?
Before you compare plans or repeat someone else's quota estimate, sort yourself into the right contract. The phrase "Codex token limit per day" sounds like one product question, but OpenAI currently answers it through three different systems.
| If you are using | What actually governs limits | First place to check | What usually happens after you hit the included limit |
|---|---|---|---|
| ChatGPT Plus or Pro | Included usage windows by activity and model | Codex usage dashboard and /status | You can buy additional credits, or move some extra local tasks to an API key |
| ChatGPT Business or Enterprise | Either included workspace limits or token-based credits, depending on seat type and migration status | Workspace contract plus the current Codex pricing or rate-card surface | Flexible-pricing workspaces can buy more workspace credits |
| An API key | Token pricing, context window, and account-tier RPM or TPM limits | API model docs and account-tier rate-limit surface | You are already on usage-based billing, so the question becomes throughput and spend, not included headroom |
The practical consequence is simple. There is no one number worth memorizing until you know which branch you are in. If you are on Plus or Pro, the right answer is not a token cap at all. If you are on a team seat, the answer may depend on whether your workspace still uses legacy included limits or already uses token-based credits. If you are on an API key, the useful contract is throughput and token pricing, not a subscriber-style quota chart.
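If it helps, the table above collapses into a small lookup: given your branch, it names what governs the limit and what to check first. This is a sketch of the article's framing only; the branch keys and strings are editorial labels, not OpenAI terms.

```python
# Sketch of the branch-sorting logic from the table above.
# Branch names and strings are this article's framing, not official OpenAI terms.

CONTRACTS = {
    "plus_or_pro": {
        "governed_by": "included usage windows by activity and model",
        "check_first": "Codex usage dashboard and /status",
        "after_limit": "buy additional credits, or move extra local tasks to an API key",
    },
    "business_or_enterprise": {
        "governed_by": "included workspace limits or token-based credits, by seat and migration state",
        "check_first": "workspace contract plus the current Codex pricing or rate card",
        "after_limit": "flexible-pricing workspaces can buy more workspace credits",
    },
    "api_key": {
        "governed_by": "token pricing, context window, account-tier RPM/TPM limits",
        "check_first": "API model docs and account-tier rate-limit surface",
        "after_limit": "already usage-based: manage throughput and spend",
    },
}

def first_check(branch: str) -> str:
    """Return the first surface to consult for a given contract branch."""
    return CONTRACTS[branch]["check_first"]

print(first_check("plus_or_pro"))  # Codex usage dashboard and /status
```

The point of writing it this way is that the key is the branch, not the plan name on its own: "Business" alone does not select a row until you know the seat type.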
This is also why the article needs to be narrower than a general Codex overview. If what you really want is the broader product picture, read our OpenAI Codex in March 2026 guide. This page is only about limits, monitoring surfaces, and what changes after the limit starts to matter.
What Plus and Pro Actually Include Right Now

For Plus and Pro, OpenAI's current Codex pricing page does not publish one daily token cap. It publishes ranges in five-hour windows for local messages and cloud tasks, plus weekly limits for code reviews. That is the first correction most people need.
As of April 8, 2026, the current pricing page shows this local-message shape:
| Model | Plus local messages / 5h | Pro local messages / 5h | Business local messages / 5h |
|---|---|---|---|
| GPT-5.4 | 33 to 168 | 223 to 1120 | 15 to 60 |
| GPT-5.4-mini | 110 to 560 | 743 to 3733 | 40 to 200 |
| GPT-5.3-Codex | 45 to 225 | 300 to 1500 | 20 to 90 |
The same page also lists separate cloud-task and code-review limits. For GPT-5.3-Codex, the current official anchors are:

| Plan | Cloud tasks / 5h | Code reviews / week |
|---|---|---|
| Plus | 10 to 60 | 10 to 25 |
| Pro | 50 to 400 | 100 to 250 |
| Business | 5 to 40 | 15 to 30 |

OpenAI is already framing Codex usage by model and activity, not by one fixed pool that every surface burns the same way.
The most important part is not the raw range itself but how to read it. OpenAI says the actual amount you get depends on task size, complexity, and whether work runs locally or in the cloud. So even on Plus or Pro, the safe mental model is not "I get X messages a day." It is "I get a current five-hour usage window whose effective size changes with the kind of work I ask Codex to do."
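To see why a five-hour window is not a daily cap, it helps to run the conversion arithmetic once. The sketch below assumes windows reset back-to-back and every window is fully used, which is a theoretical ceiling rather than a promise; the 45-to-225 Plus range for GPT-5.3-Codex comes from the table above.

```python
# Rough daily ceiling implied by a five-hour local-message window.
# Assumes windows reset back-to-back (24h / 5h = 4.8 windows per day)
# and that every window is fully used -- a theoretical ceiling only.

WINDOW_HOURS = 5

def daily_ceiling(per_window_low: int, per_window_high: int) -> tuple[float, float]:
    """Convert a per-5h message range into a theoretical per-day range."""
    return (per_window_low * 24 / WINDOW_HOURS, per_window_high * 24 / WINDOW_HOURS)

# Plus, GPT-5.3-Codex: 45 to 225 local messages per 5h (from the table above)
low, high = daily_ceiling(45, 225)
print(f"theoretical daily range: {low:.0f} to {high:.0f} messages")  # 216 to 1080
```

Even this crude arithmetic shows why the range, not a single daily number, is the honest unit: the low and high ends of one plan differ by a factor of five before workload shape is even considered.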
That matters when people compare their experience to screenshots on social platforms or old blog posts. If one person was doing short local tasks on GPT-5.4-mini and another was running heavier cloud work on GPT-5.3-Codex, the numbers can both be real without describing the same contract. Treat the official range as a dated shape, not as a promise that every session will land at the same point inside it.
Business appears in the pricing table too, but do not read that as proof that all Business seats are just bigger or smaller versions of Plus and Pro. Those Business numbers are a snapshot of the included-usage branch, not a universal truth about every team seat. That is where the April 2026 pricing shift starts to matter.
Why Business and Enterprise Docs Now Mix Messages and Tokens

If current Codex docs feel inconsistent, that is not just bad wording. OpenAI is actively running more than one contract shape at once.
The current Codex rate card says that, as of April 2, 2026, new and existing ChatGPT Business customers and new ChatGPT Enterprise customers use a token-based credit model for Codex instead of the older per-message approach. But the same rate card also says that new and existing Plus or Pro users, along with existing Enterprise and Edu customers, remain on the legacy included contract until migration finishes.
That is the reason one OpenAI page can look message-based while another looks token-based. Both can be true. They are just true for different seat types and migration states.
This is where many readers make the wrong leap. They see a Business plan table on the pricing page, then see token-based credits on the rate card, and conclude that OpenAI must be contradicting itself. The cleaner reading is that "Business" and "Enterprise" are no longer enough by themselves. You also need to know whether you are looking at a standard ChatGPT seat with included usage, a flexible-pricing workspace, or one of the newer token-metered Codex seat setups. For example, the current included-usage Business snapshot on the pricing page still shows 20 to 90 GPT-5.3-Codex local messages per five hours, but that is not the same thing as the token-metered Codex-only seat contract described in the April 2 update.
OpenAI's April 2, 2026 product post makes that split even sharper. It says Codex-only seats for Business and Enterprise can have no fixed rate limits, with usage billed on token consumption instead. That is not the same claim as "all Business seats are unlimited." It is a seat-type statement, not a global truth about every team account.
So if you are on a workspace and trying to answer "What is my Codex daily limit?", the right question is narrower: are you on legacy included usage, a flexible-pricing workspace that can buy more credits, or a token-metered team seat where fixed rate limits no longer describe the contract at all? Once you ask that question, the OpenAI docs stop looking contradictory and start looking segmented.
What Changes If You Use an API Key
The API branch is a different contract again. OpenAI's GPT-5.3-Codex API docs currently describe:
- a 400,000-token context window
- 128,000 max output tokens
- per-token pricing
- account-tier rate limits measured in RPM and TPM
That is why an API key should not be read through a subscriber-plan lens. If you are on the API, the useful limits are throughput, context, output, and spend. There is still no universal daily token number that tells the whole story, because OpenAI is not framing the API around a single daily cap.
The tiered rate-limit table makes this concrete. As of April 8, 2026, the GPT-5.3-Codex model page shows:
- Tier 1: 500 RPM and 500,000 TPM
- Tier 2: 5,000 RPM and 1,000,000 TPM
- Tier 3: 5,000 RPM and 2,000,000 TPM
- Tier 4: 10,000 RPM and 4,000,000 TPM
- Tier 5: 15,000 RPM and 40,000,000 TPM
That does not mean every API user can instantly consume Tier 5 throughput. It means the governing model is account tier plus token throughput, not one subscriber-style quota wall.
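One way to internalize why the API branch has no single daily token number is to compute the ceiling each TPM tier implies. The sketch below simply multiplies TPM by minutes per day; real throughput lands well below it because of RPM caps, burstiness, and retries.

```python
# Theoretical daily token ceiling implied by a TPM (tokens-per-minute) limit.
# Real throughput is lower: RPM caps, burstiness, and retries all intervene.

MINUTES_PER_DAY = 24 * 60  # 1440

TIER_TPM = {  # from the GPT-5.3-Codex tier table above
    1: 500_000,
    2: 1_000_000,
    3: 2_000_000,
    4: 4_000_000,
    5: 40_000_000,
}

def daily_token_ceiling(tier: int) -> int:
    """Upper bound on tokens per day if the TPM limit were fully saturated."""
    return TIER_TPM[tier] * MINUTES_PER_DAY

for tier in TIER_TPM:
    print(f"Tier {tier}: up to {daily_token_ceiling(tier):,} tokens/day in theory")
```

Tier 1 alone implies a theoretical ceiling of 720 million tokens per day, which is exactly why OpenAI frames the API around throughput and spend instead of a subscriber-style daily quota.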
The reader consequence is important. If you are trying to estimate whether Codex can support burstier local tasks after your included plan usage runs out, an API key can be the right overflow route because it turns the conversation into usage-based billing. But it does not make the contract disappear. You are trading included limits for spend clarity and explicit rate buckets.
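On that branch, the practical skill is handling the rate buckets gracefully rather than chasing a quota number. Here is a minimal, generic exponential-backoff sketch; `RateLimitError` and `request_fn` are stand-ins for whatever your HTTP client or SDK actually raises and calls, not confirmed OpenAI SDK names.

```python
# Minimal exponential-backoff sketch for the RPM/TPM buckets described above.
# RateLimitError is a stand-in for a 429 rate-limit response from your client
# or SDK; request_fn stands in for your actual API call.

import random
import time

class RateLimitError(Exception):
    """Stand-in for a 429 rate-limit response."""

def call_with_backoff(request_fn, max_retries: int = 5, base: float = 1.0):
    """Retry request_fn on RateLimitError with jittered exponential backoff."""
    for attempt in range(max_retries):
        try:
            return request_fn()
        except RateLimitError:
            # Sleep base, 2*base, 4*base, ... plus jitter so parallel
            # workers do not retry in lockstep.
            time.sleep(base * 2 ** attempt + random.random() * base)
    raise RuntimeError("rate limit persisted after retries")
```

A pattern like this matters more than any single quota number on the API branch: because the contract is throughput, well-behaved clients back off and resume instead of treating a 429 as a hard wall.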
If your next question is how to start the API side as cheaply as possible, read our OpenAI API key free trial guide. That page is about onboarding and cheapest legitimate entry. This page is about why the API branch does not answer the same question as your ChatGPT plan usage.
What To Check After You Hit the Limit

Once you know which surface you are on, the post-limit decision gets much simpler.
For Plus and Pro, OpenAI's current pricing page says you can buy additional credits after you hit included usage limits. It also says that switching some work to GPT-5.4-mini can help included limits last longer. That makes the default overflow logic much cleaner than the older "wait or upgrade" framing. If the interruption is temporary, additional credits may be enough. If the work is light but frequent, changing the model mix may buy time before you spend more.
For Business, Edu, and Enterprise plans with flexible pricing, OpenAI says you can buy additional workspace credits. That is a workspace-level answer, not a personal-subscriber answer. If you are on a team contract, check whether the workspace is already designed to handle overflow through credits before you assume you need an API workaround.
For extra local tasks, OpenAI also explicitly points to running them with an API key at standard API rates. That route is useful when you want more predictable usage-based economics for overflow instead of expanding an included plan. It is not automatically better. It is better when the workload shape fits API economics more naturally than subscriber headroom.
The other thing to check early is the official monitor. OpenAI's current help page says the right place to watch Codex plan usage is the usage dashboard and /status. That sounds mundane, but it clears up a lot of false certainty. People often compare the wrong screen, the wrong plan table, or the wrong product surface, then conclude that OpenAI must be hiding one true universal quota. In practice, the faster fix is usually to look at the correct surface sooner.
The Simplest Current Mental Model
If you want one rule of thumb that survives most doc drift, use this one:
Codex limits are currently a contract question, not a single-number question.
On Plus and Pro, think in five-hour and weekly activity windows. On Business and Enterprise, think in seat type, migration state, and whether the workspace uses credits. On the API, think in token pricing, context, output, and account-tier throughput. If you blur those together, every page will look inconsistent. If you separate them, the current OpenAI docs make much more sense.
That is also why this page deserves to exist next to our other Codex articles instead of duplicating them. The March 2026 overview explains what Codex became as a product. The API key guide explains how to start the API path cheaply. This page answers the narrower operational question that people actually search in messy language: not "what can Codex do?" but "which limit contract am I actually on, and what should I check next?"
FAQ
Does OpenAI Codex have a daily token limit?
Not one universal one. Current OpenAI pages describe Codex through plan-specific five-hour and weekly activity windows, team credit contracts, or API-key throughput and pricing tiers. The right answer depends on the branch you are using.
Are Plus and Pro limits really daily?
No. OpenAI's current Codex pricing page describes current Plus and Pro usage in five-hour windows for local and cloud activity, plus weekly review limits. That is different from a single daily token cap.
Why do Business and Enterprise pages talk about both messages and tokens?
Because OpenAI is running both contract shapes at once. The April 2, 2026 rate-card change moved new Business and new Enterprise customers toward token-based credits, while some legacy included contracts still remain in place during migration.
Does an API key use the same limits as my ChatGPT plan?
No. The API branch has its own contract: token pricing, context and output limits, and account-tier RPM or TPM limits. It should not be read as if it were your subscriber-plan quota.
Where should I check current usage first?
For included Codex plan usage, start with the Codex usage dashboard and /status. Those are the official surfaces OpenAI points to for current usage monitoring.
What is the cleanest next move after I hit the limit?
That depends on the contract. Plus and Pro users can buy additional credits. Flexible-pricing workspaces can buy more workspace credits. Some extra local tasks can move to an API key at standard API rates. The safe first move is to identify the branch correctly before you buy anything.
The bottom line: the useful answer to "Codex token limit per day" is not one quota number. It is that OpenAI Codex now uses three different limit contracts, and the first real step is to identify which one you are actually on.
