If you want official Nano Banana Pro API access today, the model you want is gemini-3-pro-image-preview. There is no separate Nano Banana Pro developer portal to unlock. There are simply two first-party Google routes to the same model: Google AI Studio plus the Gemini Developer API if you want the fastest API-key start, and Vertex AI if you need Cloud IAM, governed billing, batch execution, or provisioned capacity.
That distinction matters because the real decision here is operational. You need to decide where to integrate the model, how to authenticate it, whether the route is actually paid, and whether AI Studio and Vertex AI solve the same job or two different ones. The clean answer is that they expose the same model under different operating contracts.
All model IDs, preview status, pricing rows, and billing rules below were rechecked against official Google documentation on April 1, 2026.
TL;DR
| If this is your real job | Start here | Why | Biggest caveat |
|---|---|---|---|
| You want the fastest first request with the official Google stack | AI Studio + Gemini Developer API | API key setup is fast, prompt iteration is easy, and the SDK path is short | Nano Banana Pro still has no free API tier |
| You are building a team app inside Google Cloud | Vertex AI | Better fit for IAM, service accounts, billing controls, auditability, and Cloud operations | Setup is heavier than the API-key route |
| You need batch jobs or provisioned capacity | Vertex AI | The Pro model page explicitly lists Batch and Provisioned Throughput support on Vertex | This is an infrastructure choice, not just a UI preference |
| You want the best image quality but are not sure whether Pro is overkill | Start with the official Pro route only if the workload really needs it | Pro is the premium image model for higher-fidelity assets, strong text rendering, and complex layouts | Many workloads are cheaper on other Gemini image models |
The practical rule is simple: start in AI Studio when the main risk is not knowing how to get the first request working; choose Vertex AI when the main risk is operating the system at scale or under governance requirements.
What The Official Nano Banana Pro API Actually Is
The name invites the wrong expectation. Nano Banana Pro is the Gemini image model gemini-3-pro-image-preview, not a separate product portal. It is still marked Preview, and Google positions it as the high-end image route for demanding generation and editing jobs, including stronger text rendering, more complex layouts, and output up to 4K.
The important part is that the model identity stays the same even when the access surface changes. AI Studio does not give you a different Nano Banana Pro than Vertex AI does. Vertex AI does not turn the model into an enterprise-only variant. In both cases, the model you are calling is still gemini-3-pro-image-preview.
Google's own Vertex AI model page also gives a nuance that many secondary posts miss: although the model is Preview, Google says customers may choose to use it for production or commercial purposes under the applicable Pre-GA terms. That is a much more useful framing than the usual false binary of "Preview means toy" versus "Preview means production-safe by default." The honest reading is: production is allowed, but the Pre-GA contract and preview-era change risk still apply.
If you want the broader family map after this article, including when Nano Banana 2 is the better default, read our Nano Banana AI image generation API guide and Nano Banana Pro vs Nano Banana 2 comparison. This piece is narrower. It answers the official Pro access question directly.
AI Studio Or Vertex AI: What Actually Changes
This is the operational choice that actually changes the outcome. Both routes reach the same model, so what changes is not the core creative capability but the surrounding authentication, billing surface, operational model, and scale controls.

AI Studio plus the Gemini Developer API is the fastest official path when the job is straightforward integration. You create an API key in Google AI Studio, test prompts in Google's own interface if you want, then call the Gemini Developer API from code. This is the right default when you are an individual developer, a small team, or anyone trying to get from "I think I want Pro" to "I have a working request" with minimal ceremony.
Vertex AI is the better fit when Nano Banana Pro becomes part of a real Cloud workload instead of a quick integration. This is where IAM, project-level governance, application default credentials, batch processing, and provisioned throughput start to matter. Google's Vertex AI model page explicitly lists Standard PayGo, Flex PayGo, Batch prediction, and Provisioned Throughput for Gemini 3 Pro Image. That is the clearest operational reason to choose Vertex: not because the model becomes more "official," but because the surrounding contract becomes more production-friendly.
The easiest way to keep the distinction straight is:
- choose AI Studio when the problem is developer velocity
- choose Vertex AI when the problem is Cloud operations
That is why the surfaces are not redundant. They are different answers to different risks.
Route 1: AI Studio And The Gemini Developer API
For many readers, this is the correct starting point. Google AI Studio is where the Gemini Developer API key workflow lives. It is also the easiest place to validate prompts before you wire them into code. That does not mean AI Studio is "just a playground." It means the official API-key route and the official testing UI are intentionally close to each other.
The main caveat is billing. Google's current Gemini Developer API pricing page shows Free Tier: Not available for gemini-3-pro-image-preview. Google's billing FAQ also says that AI Studio remains free to use unless you link a paid API key for access to paid features, and once you do, usage on that key is charged. So the safe mental model is:
- AI Studio as a product surface can be free to open and use
- Nano Banana Pro API usage is still a paid contract
- "I can see it in AI Studio" does not mean "I have a free Pro API tier"
When this route is right
Choose AI Studio and the Gemini Developer API when:
- you want the shortest path to a first working request
- API-key auth is acceptable for the workload
- you are still iterating on prompts and output style
- you do not yet need Cloud IAM, batch pipelines, or organization-level controls
Minimal JavaScript example
Install the current SDK:
```bash
npm install @google/genai
```
Then send a request with your API key:
```javascript
import { GoogleGenAI } from "@google/genai";
import fs from "node:fs";

const ai = new GoogleGenAI({ apiKey: process.env.GEMINI_API_KEY });

const response = await ai.models.generateContent({
  model: "gemini-3-pro-image-preview",
  contents:
    "Create a clean product hero image for a mechanical keyboard on a dark studio background.",
  config: {
    responseModalities: ["IMAGE"],
    imageConfig: {
      aspectRatio: "16:9",
      imageSize: "2K",
    },
  },
});

for (const part of response.candidates[0].content.parts) {
  if (part.inlineData) {
    fs.writeFileSync(
      "nano-banana-pro-output.png",
      Buffer.from(part.inlineData.data, "base64"),
    );
  }
}
```
This is the most direct official path: create the key in AI Studio, export GEMINI_API_KEY, and call generateContent. Google's image docs also note a small but easy-to-miss implementation detail: K must be uppercase in 1K, 2K, and 4K.
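Because a lowercase size string is such an easy bug to ship, it can be worth normalizing the value before the request. This is a hypothetical helper of our own (normalizeImageSize is not an SDK function), shown only as a sketch of the guard:

```javascript
// Hypothetical helper (not part of @google/genai): the API expects an
// uppercase K in "1K", "2K", and "4K", so normalize before sending.
function normalizeImageSize(size) {
  const normalized = String(size).trim().toUpperCase();
  if (!["1K", "2K", "4K"].includes(normalized)) {
    throw new Error(`Unsupported imageSize: ${size}`);
  }
  return normalized;
}

console.log(normalizeImageSize("2k")); // "2K"
```

Feeding the result into imageConfig.imageSize keeps the lowercase-K mistake out of production requests.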
If you prefer raw HTTP first, the same route works with the Gemini Developer API endpoint and x-goog-api-key authentication. The only thing that changes is transport, not the contract.
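As a sketch of that raw-HTTP route: the endpoint path and x-goog-api-key header below follow the Gemini Developer API convention, and the prompt is purely illustrative. The curl call is guarded so nothing is sent unless a key is actually configured:

```shell
# Build the request body (illustrative prompt).
cat > request.json <<'EOF'
{
  "contents": [{ "parts": [{ "text": "Create a clean product hero image." }] }],
  "generationConfig": {
    "responseModalities": ["IMAGE"],
    "imageConfig": { "aspectRatio": "16:9", "imageSize": "2K" }
  }
}
EOF

# Only send the request when a key is configured.
if [ -n "$GEMINI_API_KEY" ]; then
  curl -s "https://generativelanguage.googleapis.com/v1beta/models/gemini-3-pro-image-preview:generateContent" \
    -H "x-goog-api-key: $GEMINI_API_KEY" \
    -H "Content-Type: application/json" \
    -d @request.json
fi
```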
Route 2: Vertex AI For Governed Production Use
Vertex AI is the better answer once the integration stops being "one developer with an API key" and starts being "a Cloud workload that other people must operate safely." That can happen earlier than many teams expect. The moment you need IAM, service accounts, project-level billing separation, batch work, or a path toward provisioned capacity, Vertex AI stops looking like overkill and starts looking like the correct home.

The important conceptual shift is authentication. With the Gemini Developer API route, your code is centered around an API key. With Vertex AI, your code is centered around Cloud auth. Google's current image-generation guide shows the GenAI SDK configured for Vertex with GOOGLE_CLOUD_PROJECT, GOOGLE_CLOUD_LOCATION, and GOOGLE_GENAI_USE_VERTEXAI=True, which is the cleanest sign that you are now in the Google Cloud operating model rather than the standalone API-key model.
When this route is right
Choose Vertex AI when:
- the app belongs to a team instead of one developer
- you need IAM, audit, and project governance
- you want batch processing or provisioned throughput
- service-account or ADC-based auth is a better fit than distributing API keys
- Cloud billing and operational ownership matter as much as prompt quality
Minimal Node.js example on Vertex
Install the same SDK:
```bash
npm install @google/genai
```
Set the environment variables shown in Google's Vertex docs:
```bash
export GOOGLE_CLOUD_PROJECT=your-project-id
export GOOGLE_CLOUD_LOCATION=global
export GOOGLE_GENAI_USE_VERTEXAI=True
```
Then call the same model through Vertex:
```javascript
import fs from "node:fs";
import { GoogleGenAI, Modality } from "@google/genai";

const client = new GoogleGenAI({
  vertexai: true,
  project: process.env.GOOGLE_CLOUD_PROJECT,
  location: process.env.GOOGLE_CLOUD_LOCATION || "global",
});

const response = await client.models.generateContent({
  model: "gemini-3-pro-image-preview",
  contents:
    "Create a premium launch poster for a smart watch, crisp typography, dark editorial lighting.",
  config: {
    responseModalities: [Modality.IMAGE],
  },
});

for (const part of response.candidates[0].content.parts) {
  if (part.inlineData) {
    fs.writeFileSync(
      "vertex-nano-banana-pro-output.png",
      Buffer.from(part.inlineData.data, "base64"),
    );
  }
}
```
Notice what did not change: the model string. Notice what did change: the client setup and the surrounding auth assumptions. That is the whole article in code form.
Pricing, Preview Status, And The Most Common Contract Mistake
Google's current pricing pages are unusually helpful here because they make the paid boundary explicit. On the Gemini Developer API pricing page, gemini-3-pro-image-preview has no free tier, with image output priced at $0.134 per 1K/2K image and $0.24 per 4K image. On the Vertex AI pricing page, the Standard output math lands on the same per-image equivalents, while Flex/Batch pricing cuts that to $0.067 for 1K/2K and $0.12 for 4K.
That means two useful things are true at once:
- AI Studio is not the cheap route because it is "more basic."
- Vertex AI is not automatically the expensive route just because it lives inside Cloud.
The bigger difference is operational. AI Studio optimizes for a fast API-key workflow. Vertex optimizes for Cloud-native operation and larger-scale workload patterns.
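As a back-of-envelope check on those numbers (prices are the figures quoted above; always recheck Google's pricing pages before budgeting), the Flex/Batch discount is exactly half per image, which compounds quickly at volume:

```javascript
// Per-image output prices quoted in this article (USD).
const PRICE = {
  standard: { "1K/2K": 0.134, "4K": 0.24 },
  flexOrBatch: { "1K/2K": 0.067, "4K": 0.12 },
};

// Total output cost for a job of imageCount images at one size tier.
function jobCost(tier, size, imageCount) {
  return PRICE[tier][size] * imageCount;
}

// A 10,000-image 2K batch: roughly $1,340 Standard vs $670 Flex/Batch.
console.log(jobCost("standard", "1K/2K", 10000).toFixed(2)); // "1340.00"
console.log(jobCost("flexOrBatch", "1K/2K", 10000).toFixed(2)); // "670.00"
```

If your workload can tolerate batch latency, this halving is the single biggest pricing lever on the Vertex route.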
There is also a contract mistake that shows up constantly in this topic: people see that AI Studio itself can still be used without a bill in some contexts and then conclude that Nano Banana Pro API access must therefore be free. That is not what Google's docs say. The billing FAQ is more precise. AI Studio remains free to use unless you link a paid API key for access to paid features. Nano Banana Pro is one of the model paths where the actual API usage is paid.
Can You Start In AI Studio And Move To Vertex Later?
Yes, and for many teams that is the most rational progression. Start in AI Studio when the task is learning prompt behavior, validating output quality, and shipping the first thin integration. Move to Vertex AI when the architecture starts asking for Cloud-native controls rather than just a working endpoint.

The migration is easier than it sounds because the model identity does not change. You are not abandoning Nano Banana Pro for something else. You are changing the surrounding contract:
- from API key auth to Cloud auth
- from AI Studio project/key management to Cloud project/IAM operations
- from quick integration defaults to more explicit operational control
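That contract change can be captured in a few lines. The clientOptions helper below is a hypothetical sketch of our own (not an SDK utility): it derives GoogleGenAI constructor options from the environment, so flipping the Vertex flag moves you between routes without touching any request code:

```javascript
// The model ID is identical on both official routes.
const MODEL = "gemini-3-pro-image-preview";

// Hypothetical helper: pick GoogleGenAI constructor options from env.
// Setting GOOGLE_GENAI_USE_VERTEXAI=True switches from API-key auth
// to Cloud-project auth; nothing else in the app needs to change.
function clientOptions(env) {
  if (env.GOOGLE_GENAI_USE_VERTEXAI === "True") {
    return {
      vertexai: true,
      project: env.GOOGLE_CLOUD_PROJECT,
      location: env.GOOGLE_CLOUD_LOCATION || "global",
    };
  }
  return { apiKey: env.GEMINI_API_KEY };
}
```

The point of the sketch is that MODEL never appears inside the branch: migration changes the client's operating contract, not the model you call.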
That is why it is usually a mistake to start every experiment in Vertex AI, and it is also a mistake to assume you should stay on the AI Studio path forever. The right answer depends on where the operational weight lives today.
Choose In 30 Seconds
If you still want the shortest decision rule possible, use this one.
Start with AI Studio and the Gemini Developer API if your real question is:
- "How do I get the first request working fast?"
- "How do I validate prompts before building more infrastructure?"
- "Can I use the official Google path without setting up Cloud operations first?"
Start with Vertex AI if your real question is:
- "How do I make this fit our Google Cloud environment?"
- "How do I give a team governed access instead of passing around API keys?"
- "How do I plan for batch or provisioned capacity?"
And if your real question is actually whether Pro is the right model at all, do not force the route decision before the model decision. Read Nano Banana Pro vs Nano Banana 2 first. For many workloads, that is where the bigger cost and architecture win sits.
