The GPT-Image-2 API question now has two different answers, depending on who owns the route. For first-party OpenAI work, the April 21 official-doc check did not verify a public OpenAI gpt-image-2 model row, so start with the documented GPT Image routes: Image API for direct generation and edits, or the Responses API image tool when image generation sits inside a larger assistant flow.
For laozhang.ai, the provider route is available now: gpt-image-2 API access at $0.03 per call. That is useful when you want OpenAI-compatible access, payment convenience, or a supported gateway, but it is not the same thing as OpenAI publishing an official gpt-image-2 model row or official OpenAI price.
| Route | Use it when | Stop rule |
|---|---|---|
| OpenAI Image API | You need direct first-party image generation or editing. | Use the public GPT Image model names OpenAI documents, not a provider alias. |
| Responses API image tool | Image generation is part of a broader assistant, tool, or multi-step workflow. | Keep image generation inside the Responses flow only when orchestration matters. |
| laozhang.ai provider route | You want gpt-image-2 API access through laozhang.ai at $0.03/call. | Treat it as a laozhang.ai provider contract, not OpenAI first-party proof; record endpoint, billing unit, and failure-charge behavior before traffic. |
LaoZhang.ai tested call matrix
The LaoZhang.ai route uses the OpenAI-compatible base URL:
```bash
export LAOZHANG_API_KEY="sk-your-key"
export BASE_URL="https://api.laozhang.ai/v1"
export MODEL="gpt-image-2"
```
Do not use a model-level URL such as /v1beta/models/...:generateContent. Use the gateway root, then call /chat/completions, /images/generations, or /images/edits.
| Scenario | Endpoint | Input shape | Default return | response_format=url | Tested |
|---|---|---|---|---|---|
| Text to image | /v1/chat/completions | JSON messages | Markdown image URL in choices[0].message.content | Not tested | Pass |
| Image edit | /v1/chat/completions | image_url.url with a CDN image URL | Markdown image URL in choices[0].message.content | Not tested | Pass |
| Image edit | /v1/chat/completions | image_url.url with a base64 data URL | Markdown image URL in choices[0].message.content | Not tested | Pass |
| Text to image | /v1/images/generations | JSON prompt | data[0].b64_json | data[0].url | Pass |
| Image edit | /v1/images/edits | multipart image upload | data[0].b64_json | data[0].url | Pass |
The key parsing detail is that the Images API b64_json value was observed as a full data URL, such as data:image/png;base64,.... Strip the MIME prefix before decoding.
Quick route answer
The safest default is still to choose the route owner first.
Use OpenAI direct when first-party support, model documentation, billing traceability, and policy clarity matter more than a flat provider price. In current public OpenAI docs, the callable GPT Image family is documented through names such as gpt-image-1.5, gpt-image-1, and gpt-image-1-mini, not through a standalone public gpt-image-2 row.
Use the Responses API image-generation tool when image output is part of a larger model workflow. If your app needs text reasoning, tool calls, state, or a multi-step assistant flow around the image request, Responses can keep that orchestration inside one API surface.
Use laozhang.ai as a provider contract. That route now matters because it gives gpt-image-2 API access at $0.03/call through laozhang.ai and has tested text-to-image plus image-edit coverage across Chat Completions and Images API shapes. The key is to label it correctly: provider route first, not official OpenAI proof.
Chat Completions route
The Chat Completions route is useful when you want the image result returned as a Markdown image link in the assistant message.
```bash
curl -X POST "$BASE_URL/chat/completions" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $LAOZHANG_API_KEY" \
  --data-raw '{
    "model": "gpt-image-2",
    "messages": [
      {
        "role": "user",
        "content": "Generate a simple validation image: a white mug on a gray table with a blue sticker that says BASE-17."
      }
    ],
    "stream": false
  }'
```
The tested response shape puts the image URL inside choices[0].message.content as Markdown:
```json
{
  "choices": [
    {
      "message": {
        "role": "assistant",
        "content": "![image](https://cdn.example.com/generated.png)\n\n"
      },
      "finish_reason": "stop"
    }
  ]
}
```
For image editing through Chat Completions, pass image input inside content as type: "image_url". Both CDN URLs and base64 data URLs were tested.
```json
{
  "model": "gpt-image-2",
  "messages": [
    {
      "role": "user",
      "content": [
        {
          "type": "text",
          "text": "Use the provided image as the source. Keep the mug, desk, camera angle, and BASE-17 text. Only change the blue sticker to red and add a thin gold rim to the cup."
        },
        {
          "type": "image_url",
          "image_url": { "url": "https://example.com/source.png" }
        }
      ]
    }
  ],
  "stream": false
}
```
For base64 input, set image_url.url to a full data URL:
```bash
IMAGE_DATA_URL="data:image/png;base64,PUT_BASE64_HERE"
```
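Building that data URL by hand is error-prone, so here is a minimal Python sketch. The file name `source.png` is a stand-in, and a few fake bytes are written first so the example is self-contained; in practice, point it at your real source image and match the MIME type to the actual format.

```python
import base64

# Hypothetical stand-in: write a few fake bytes so the example runs as-is.
# Replace this with your real source image.
with open("source.png", "wb") as f:
    f.write(b"\x89PNG\r\n\x1a\nfake-image-bytes")

# Read the file and assemble the full data URL expected by image_url.url.
with open("source.png", "rb") as f:
    encoded = base64.b64encode(f.read()).decode("ascii")
image_data_url = f"data:image/png;base64,{encoded}"
```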
Images API route
The Images API route is better when you want the OpenAI-compatible /images/generations or /images/edits shape.
For text-to-image:
```bash
curl -X POST "$BASE_URL/images/generations" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $LAOZHANG_API_KEY" \
  --data-raw '{
    "model": "gpt-image-2",
    "prompt": "Generate a clean validation image: a white card on a wooden desk with IMG-42 printed in the lower right corner."
  }'
```
The default tested return is data[0].b64_json:
```json
{
  "data": [
    { "b64_json": "data:image/png;base64,..." }
  ],
  "created": 1776789852
}
```
Set response_format to url when you want data[0].url instead:
```bash
curl -X POST "$BASE_URL/images/generations" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $LAOZHANG_API_KEY" \
  --data-raw '{
    "model": "gpt-image-2",
    "prompt": "Generate a clean validation image: a white card on a wooden desk with IMG-42 printed in the lower right corner.",
    "response_format": "url"
  }'
```
For image editing, use multipart upload:
```bash
curl -X POST "$BASE_URL/images/edits" \
  -H "Authorization: Bearer $LAOZHANG_API_KEY" \
  -F "model=gpt-image-2" \
  -F "prompt=Use the provided image as the source. Keep the desk, card, and IMG-42 label; only add a thin red border to the card." \
  -F "image=@source.png"
```
The same endpoint can return a URL:
```bash
curl -X POST "$BASE_URL/images/edits" \
  -H "Authorization: Bearer $LAOZHANG_API_KEY" \
  -F "model=gpt-image-2" \
  -F "prompt=Use the provided image as the source. Keep the desk, card, and IMG-42 label; only add a thin blue border to the card." \
  -F "image=@source.png" \
  -F "response_format=url"
```
In the April 22 test, response_format was confirmed on /v1/images/generations and /v1/images/edits. It was not tested on /v1/chat/completions.
Parsing the returns
For Chat Completions, extract the Markdown image link:
```python
import re

content = response["choices"][0]["message"]["content"]
match = re.search(r"!\[[^\]]*\]\((https?://[^)\s]+)\)", content)
image_url = match.group(1) if match else None
```
For Images API b64_json, handle the data URL prefix before decoding:
```python
import base64

value = response["data"][0]["b64_json"]
if value.startswith("data:"):
    value = value.split(",", 1)[1]
with open("output.png", "wb") as f:
    f.write(base64.b64decode(value))
```
For Images API URL output:
```python
image_url = response["data"][0]["url"]
```
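Both Images API shapes can be handled by one small helper. This is a sketch against the response shapes observed above, not a guaranteed provider contract; confirm which fields your route actually returns for each `response_format`.

```python
import base64

def extract_image(response):
    """Return ("url", str) or ("bytes", bytes) from an Images API response.

    Sketch based on the observed shapes: data[0].url when
    response_format=url, otherwise data[0].b64_json, which may carry a
    full data-URL prefix that must be stripped before decoding.
    """
    item = response["data"][0]
    if item.get("url"):
        return "url", item["url"]
    value = item["b64_json"]
    if value.startswith("data:"):
        value = value.split(",", 1)[1]
    return "bytes", base64.b64decode(value)
```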
Test edits with a small, visible change. A good edit prompt says: keep the subject, composition, camera angle, and key text; change only one visible detail. That makes it easier to tell real image editing from a new text-to-image generation that merely resembles the source.
What official OpenAI docs prove today

For official OpenAI work, the proof standard is first-party material: developer docs, endpoint specs, model lists, pricing rows, or official release notes. Provider pages and community posts can explain market demand, but they do not define what a normal OpenAI API account can call.
The current public OpenAI image-generation guide points developers to two main routes:
| Official route | Best fit | What to check |
|---|---|---|
| Image API | Direct image generation, edits, and variations | Supported model names, image input rules, size, quality, output format, and organization verification requirements |
| Responses API image tool | Assistant workflows where image generation is one step in a broader interaction | Tool configuration, response handling, stored state, cost tracking, and whether the workflow really needs orchestration |
The important detail is that official OpenAI naming controls the code path. A page can call a product “GPT Image 2,” “GPT-Image-2,” or “latest image model,” but production code should use the model ID and endpoint that your actual route accepts.
That rule protects two things. First, it prevents a deploy from failing because the model string is not accepted by OpenAI's public API. Second, it keeps analytics honest. If your app calls a provider alias, your logs should say that. If your app calls OpenAI directly, your logs should say that instead.
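A deploy-time guard makes that rule enforceable. The sketch below hard-codes the GPT Image names cited earlier (gpt-image-1.5, gpt-image-1, gpt-image-1-mini) as an allowlist; that list is an assumption frozen at writing time, so keep it synced with the current official model list rather than trusting this copy.

```python
# Allowlist mirroring the documented GPT Image names cited above.
# Keep in sync with the current official model list.
DOCUMENTED_OPENAI_IMAGE_MODELS = {"gpt-image-1.5", "gpt-image-1", "gpt-image-1-mini"}

def check_route(provider: str, model: str) -> None:
    """Fail fast when an OpenAI-direct route is given an undocumented model id."""
    if provider == "openai" and model not in DOCUMENTED_OPENAI_IMAGE_MODELS:
        raise ValueError(
            f"model {model!r} is not on the documented OpenAI list; "
            "route it as a provider call or update the allowlist"
        )
```

Provider routes are deliberately left unchecked here: the provider contract, not OpenAI's docs, defines which labels they accept.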
Image API or Responses API?
The Image API is the cleaner route when the image is the product output. A user gives a prompt, a reference image, or an edit instruction, and the application returns an image. That workflow benefits from a direct image endpoint because the request, cost, retry, and output handling stay easy to reason about.
The Responses API image tool is better when image generation is only one part of a larger model interaction. For example, a design assistant may read a user brief, ask a clarifying question, call other tools, then generate an image. A coding assistant might decide when a diagram is needed as part of a broader response. In that shape, the image tool is part of the model's plan, not just a separate image call.
The practical split looks like this:
| Workflow | Better starting route | Reason |
|---|---|---|
| One prompt to one generated image | Image API | Direct, simpler to test, easier to cost |
| Image edit with uploaded reference | Image API | Clear image input and output handling |
| Assistant decides whether image generation is needed | Responses API image tool | Keeps reasoning and tool use in one flow |
| App needs text explanation plus generated image | Responses API image tool | The image is part of a multi-step response |
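For the one-prompt-to-one-image row, a first-party request can start from the documented Image API shape. The sketch below assembles the request without sending it; gpt-image-1 availability, the `size` parameter, and your key handling are assumptions to check against the current Image API reference.

```python
import json
import urllib.request

# Hedged sketch: assemble (but do not send) a direct Image API request.
payload = {
    "model": "gpt-image-1",
    "prompt": "A white mug on a gray table with a blue sticker that says BASE-17.",
    "size": "1024x1024",
}
req = urllib.request.Request(
    "https://api.openai.com/v1/images/generations",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Authorization": "Bearer YOUR_OPENAI_API_KEY",  # substitute a real key
        "Content-Type": "application/json",
    },
    method="POST",
)
# response = urllib.request.urlopen(req)  # uncomment once the key is real
```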
If you are only trying to “use GPT-Image-2 API,” start by describing the workflow instead. The right route follows from the product job. The model label comes after that.
Where laozhang.ai fits
laozhang.ai belongs in the provider-route lane for this topic. It now offers a gpt-image-2 API route at $0.03/call, so it can be a practical option when the reader's problem is access, payment, integration convenience, or support around an OpenAI-compatible API.
That still should not be framed as proof that OpenAI has published a public gpt-image-2 API model row. It proves a laozhang.ai provider contract: a callable provider route, provider billing, and provider support responsibility.
Before using the laozhang.ai route in production, record these items so your cost and support trail stays auditable:
| Check | Why it matters |
|---|---|
| Accepted model name | Tells you exactly which gpt-image-2 label the laozhang.ai route accepts |
| Endpoint shape | Confirms whether the request is Image API compatible, chat-completions style, or another provider-specific route |
| Billing unit | Separates per call, per image, token-based, credit-based, or quality/size-based pricing |
| Failure charging | Determines whether invalid requests, moderation blocks, timeouts, or provider failures cost money |
| Input and output limits | Prevents surprises around image size, edits, reference images, batch behavior, and response format |
| Support and logs | Determines who can help when requests fail or outputs drift |
That checklist is more useful than a yes/no label. A provider route can be valuable when it solves the buyer's actual access problem. It just needs to be evaluated as its own contract.
How to read laozhang.ai $0.03 per call

laozhang.ai $0.03/call is a provider price, not OpenAI's official GPT-Image-2 API price. It is still useful, but compare it only after the billing unit is named.
OpenAI's official image costs can include text input, image input, cached input, output tokens, and per-image output estimates. In the public image guide, a 1024 x 1024 GPT Image 1.5 output is listed at roughly $0.009 (low), $0.034 (medium), and $0.133 (high), but that table is not the whole bill for every workflow. The total request can still depend on prompt length, image inputs, quality, size, and route.
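As rough arithmetic on those listed per-image estimates (ignoring token-side input costs, which the flat quote may or may not mirror), a flat $0.03 call only undercuts the medium and high tiers:

```python
# Illustrative comparison using the per-image estimates quoted above.
# Token-side input costs are ignored, so this is a floor, not a full bill.
openai_1024_estimates = {"low": 0.009, "medium": 0.034, "high": 0.133}
provider_flat = 0.03

cheaper_route = {
    quality: ("provider" if provider_flat < price else "openai")
    for quality, price in openai_1024_estimates.items()
}
print(cheaper_route)  # low quality stays cheaper on the per-image estimate
```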
The laozhang.ai route simplifies the buyer's experience into one flat call price. That can be easier for budgeting, but the production checklist still needs different questions:
- Is $0.03 charged per request, per successful image, per output, or per generated variant?
- Does it include input image handling?
- Does it change by size, quality, region, queue, or model alias?
- Are failed or blocked calls charged?
- Is the quote public documentation, a dashboard price, a private quote, or a promotion?
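Those answers feed directly into a budget estimate. A minimal sketch, assuming flat per-call pricing and a configurable failure-charging rule (both of which are exactly the unverified assumptions the questions above are meant to settle):

```python
def estimated_monthly_cost(calls: int, price_per_call: float = 0.03,
                           failure_rate: float = 0.0,
                           failures_charged: bool = True) -> float:
    """Rough monthly budget under a flat per-call price.

    failure_rate and failures_charged encode the unanswered provider
    questions; verify both before trusting the number.
    """
    charged_calls = calls if failures_charged else calls * (1 - failure_rate)
    return round(charged_calls * price_per_call, 2)
```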
Those details decide whether the number is useful. Treat laozhang.ai $0.03/call as a provider offer worth testing. Do not describe it as OpenAI's official GPT-Image-2 API price.
For the detailed official price mapping, use the sibling pricing guide: GPT-Image-2 API Pricing. That page keeps OpenAI's public pricing surfaces separate from cheaper third-party access pages.
Production checklist

Before shipping, make the route auditable. That means the application should know more than a model string.
Use separate configuration fields for route owner and model label:
| Config field | Example value | Why it exists |
|---|---|---|
| image_provider | openai or laozhang | Keeps support and billing ownership visible |
| image_route | image_api, responses_tool, or provider route name | Explains the request shape |
| image_model | gpt-image-1.5 or provider-accepted label | Keeps the actual callable value separate |
| billing_unit | token, image, call, credit, or quote | Prevents price comparisons from collapsing |
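In code, those fields can live in one explicit config object. A sketch with names taken from the table (the example values are illustrative; adapt to your own config system):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ImageRouteConfig:
    image_provider: str  # "openai" or "laozhang": who owns billing and support
    image_route: str     # "image_api", "responses_tool", or a provider route name
    image_model: str     # the label the route actually accepts
    billing_unit: str    # "token", "image", "call", "credit", or "quote"

# Illustrative values for each route owner.
provider_route = ImageRouteConfig("laozhang", "images_generations", "gpt-image-2", "call")
openai_route = ImageRouteConfig("openai", "image_api", "gpt-image-1.5", "token")
```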
Then run one small billing test before real traffic. Save the request ID, endpoint, model label, image size, quality, output count, response time, and billed amount. If the provider route uses a flat call price, verify whether a failed request is charged. If the route is OpenAI direct, verify whether the cost matches the current official pricing layer you intended to use.
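The billing test is easier to audit if each probe writes one structured record. A sketch using JSON Lines; the field names follow the list above, and the values shown are illustrative placeholders, not real request data:

```python
import json
import time

def log_billing_probe(path: str, **fields) -> dict:
    """Append one billing-test record as a JSON line; returns the record."""
    record = {"ts": int(time.time()), **fields}
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record

record = log_billing_probe(
    "billing_probes.jsonl",
    request_id="req-123",  # illustrative placeholder
    endpoint="/v1/images/generations",
    model_label="gpt-image-2",
    size="1024x1024",
    output_count=1,
    billed_amount_usd=0.03,
    failed=False,
)
```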
Finally, keep a stop rule in the release notes. If a public OpenAI gpt-image-2 model row appears later, do not silently swap every route. Run a controlled comparison first: same prompt, same size, same quality, same input image if applicable, and separate cost logs for old and new routes.
What would change the answer
The answer changes when first-party OpenAI material changes. A real public OpenAI API launch should leave at least one of these signals:
- a public model list or guide page that names gpt-image-2
- an endpoint example that accepts gpt-image-2
- a pricing row that names the model or route
- an official changelog, help note, or developer announcement that says the API route is public
Until then, GPT-Image-2 API is best treated as a market-visible route question. It is valid to track. It is not enough to justify unsupported production code.
If your question is specifically release timing, use GPT-Image-2 API Release Date. If your question is the official OpenAI cost baseline versus cheaper third-party pages, use GPT-Image-2 API Pricing.
FAQ
Can I call gpt-image-2 in the public OpenAI API today?
Do not assume so. As of April 21, 2026, no first-party public OpenAI gpt-image-2 API model row was verified. Use the current documented GPT Image routes unless your own OpenAI contract or official docs say otherwise.
What should I use for official OpenAI image generation?
Use the Image API for direct generation and edits. Use the Responses API image-generation tool when image generation is part of a larger assistant or multi-step workflow. The route choice comes before the model label.
Is laozhang.ai $0.03/call official OpenAI pricing?
No. It is laozhang.ai provider-route pricing for gpt-image-2 API access, not official OpenAI pricing. Record the accepted model name, endpoint, billing unit, and failure-charge rule before comparing it with OpenAI's official pricing.
When does laozhang.ai make sense?
It can make sense when you need an OpenAI-compatible provider route, access convenience, local payment or support help, or a flat $0.03/call route that fits your budget. It should not be used as proof of first-party OpenAI model availability.
What should a production app log?
Log the provider, route, model label, endpoint, request size, quality, output count, billed unit, failure status, and request ID. Without route ownership in logs, future model comparisons and cost reviews become unreliable.
Where should I check detailed pricing?
Use GPT-Image-2 API Pricing for the official OpenAI pricing baseline and third-party price interpretation. Use this route guide when the primary question is what to call and which contract to trust.
Bottom line
Choose the route before trusting the label. For first-party support, build on the current OpenAI GPT Image routes through the Image API or the Responses API image tool. For laozhang.ai, use the provider route when gpt-image-2 at $0.03/call solves the access, billing, or support job.
That boundary is enough to move forward today without pretending the public OpenAI model list says more than it does. It also keeps the future switch clean if OpenAI later publishes a public gpt-image-2 API row.
