
Nano Banana Pro API Guide in 2026: Official Vertex AI and AI Studio Paths

15 min read · API Guide


If you want official Nano Banana Pro API access today, the model you want is gemini-3-pro-image-preview. There is no separate Nano Banana Pro developer portal to unlock. There are simply two first-party Google routes to the same model: Google AI Studio plus the Gemini Developer API if you want the fastest API-key start, and Vertex AI if you need Cloud IAM, governed billing, batch execution, or provisioned capacity.

That distinction matters because the real decision here is operational. You need to decide where to integrate the model, how to authenticate it, whether the route is actually paid, and whether AI Studio and Vertex AI solve the same job or two different ones. The clean answer is that they expose the same model under different operating contracts.

All model IDs, preview status, pricing rows, and billing rules below were rechecked against official Google documentation on April 1, 2026.

TL;DR

| If this is your real job | Start here | Why | Biggest caveat |
| --- | --- | --- | --- |
| You want the fastest first request with the official Google stack | AI Studio + Gemini Developer API | API key setup is fast, prompt iteration is easy, and the SDK path is short | Nano Banana Pro still has no free API tier |
| You are building a team app inside Google Cloud | Vertex AI | Better fit for IAM, service accounts, billing controls, auditability, and Cloud operations | Setup is heavier than the API-key route |
| You need batch jobs or provisioned capacity | Vertex AI | The Pro model page explicitly lists Batch and Provisioned Throughput support on Vertex | This is an infrastructure choice, not just a UI preference |
| You want the best image quality but are not sure whether Pro is overkill | Start with the official Pro route only if the workload really needs it | Pro is the premium image model for higher-fidelity assets, strong text rendering, and complex layouts | Many workloads are cheaper on other Gemini image models |

The practical rule is simple: start in AI Studio when the main risk is not knowing how to get the first request working; choose Vertex AI when the main risk is operating the system at scale or under governance requirements.

What The Official Nano Banana Pro API Actually Is

The name invites the wrong expectation. Nano Banana Pro is the Gemini image model gemini-3-pro-image-preview, not a separate product portal. It is still marked Preview, and Google positions it as the high-end image route for demanding generation and editing jobs, including stronger text rendering, more complex layouts, and output up to 4K.

The important part is that the model identity stays the same even when the access surface changes. AI Studio does not give you a different Nano Banana Pro than Vertex AI does. Vertex AI does not turn the model into an enterprise-only variant. In both cases, the article's core target is still gemini-3-pro-image-preview.

Google's own Vertex AI model page also gives a nuance that many secondary posts miss: although the model is Preview, Google says customers may choose to use it for production or commercial purposes under the applicable Pre-GA terms. That is a much more useful framing than the usual false binary of "Preview means toy" versus "Preview means production-safe by default." The honest reading is: production is allowed, but the Pre-GA contract and preview-era change risk still apply.

If you want the broader family map after this article, including when Nano Banana 2 is the better default, read our Nano Banana AI image generation API guide and Nano Banana Pro vs Nano Banana 2 comparison. This piece is narrower. It answers the official Pro access question directly.

AI Studio Or Vertex AI: What Actually Changes

This is the operational choice that actually changes the outcome. Both routes get you to the same model; what changes is not the core creative capability but the surrounding authentication, billing surface, operational model, and scale controls.

[Image: Comparison board showing when to choose AI Studio and when to choose Vertex AI for Nano Banana Pro]

AI Studio plus the Gemini Developer API is the fastest official path when the job is straightforward integration. You create an API key in Google AI Studio, test prompts in Google's own interface if you want, then call the Gemini Developer API from code. This is the right default when you are an individual developer, a small team, or anyone trying to get from "I think I want Pro" to "I have a working request" with minimal ceremony.

Vertex AI is the better fit when Nano Banana Pro becomes part of a real Cloud workload instead of a quick integration. This is where IAM, project-level governance, application default credentials, batch processing, and provisioned throughput start to matter. Google's Vertex AI model page explicitly lists Standard PayGo, Flex PayGo, Batch prediction, and Provisioned Throughput for Gemini 3 Pro Image. That is the clearest operational reason to choose Vertex: not because the model becomes more "official," but because the surrounding contract becomes more production-friendly.

The easiest way to keep the distinction straight is:

  • choose AI Studio when the problem is developer velocity
  • choose Vertex AI when the problem is Cloud operations

That is why the surfaces are not redundant. They are different answers to different risks.

Route 1: AI Studio And The Gemini Developer API

For many readers, this is the correct starting point. Google AI Studio is where the Gemini Developer API key workflow lives. It is also the easiest place to validate prompts before you wire them into code. That does not mean AI Studio is "just a playground." It means the official API-key route and the official testing UI are intentionally close to each other.

The main caveat is billing. Google's current Gemini Developer API pricing page shows "Free Tier: Not available" for gemini-3-pro-image-preview. Google's billing FAQ also says that AI Studio remains free to use unless you link a paid API key for access to paid features, and once you do, usage on that key is charged. So the safe mental model is:

  • AI Studio as a product surface can be free to open and use
  • Nano Banana Pro API usage is still a paid contract
  • "I can see it in AI Studio" does not mean "I have a free Pro API tier"

When this route is right

Choose AI Studio and the Gemini Developer API when:

  • you want the shortest path to a first working request
  • API-key auth is acceptable for the workload
  • you are still iterating on prompts and output style
  • you do not yet need Cloud IAM, batch pipelines, or organization-level controls

Minimal JavaScript example

Install the current SDK:

```bash
npm install @google/genai
```

Then send a request with your API key:

```javascript
import { GoogleGenAI } from "@google/genai";
import fs from "node:fs";

const ai = new GoogleGenAI({ apiKey: process.env.GEMINI_API_KEY });

const response = await ai.models.generateContent({
  model: "gemini-3-pro-image-preview",
  contents:
    "Create a clean product hero image for a mechanical keyboard on a dark studio background.",
  config: {
    responseModalities: ["IMAGE"],
    imageConfig: {
      aspectRatio: "16:9",
      imageSize: "2K",
    },
  },
});

for (const part of response.candidates[0].content.parts) {
  if (part.inlineData) {
    fs.writeFileSync(
      "nano-banana-pro-output.png",
      Buffer.from(part.inlineData.data, "base64"),
    );
  }
}
```

This is the most direct official path: create the key in AI Studio, export GEMINI_API_KEY, and call generateContent. Google's image docs also note a small but easy-to-miss implementation detail: K must be uppercase in 1K, 2K, and 4K.
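Because that casing rule is easy to trip over when the size comes from user input or a config file, a tiny normalizer can guard the request before it goes out. This helper is a convenience sketch of this article, not part of the SDK; the fallback to "1K" for unrecognized values is a conservative choice of the sketch, not documented API behavior.

```javascript
// Normalize a user-supplied size like "2k" to the uppercase form the
// API expects ("1K", "2K", "4K"). Unrecognized values fall back to
// "1K" here as a conservative default (this sketch's choice).
function normalizeImageSize(size) {
  const s = String(size).toUpperCase();
  return ["1K", "2K", "4K"].includes(s) ? s : "1K";
}
```

You would then pass `normalizeImageSize(userSize)` as `imageSize` in `imageConfig` instead of the raw string.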

If you prefer raw HTTP first, the same route works with the Gemini Developer API endpoint and x-goog-api-key authentication. The only thing that changes is transport, not the contract.
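As a sketch of that raw-HTTP shape, the helper below assembles the URL, headers, and JSON body by hand. The endpoint path and field names follow the public generateContent REST convention; treat the exact preview model path and `generationConfig` fields as things to verify against the current REST reference rather than guarantees.

```javascript
// Build the pieces of a raw generateContent call: URL, headers, and
// JSON body. Sending it is then a plain fetch/curl POST.
function buildRequest(apiKey, prompt) {
  return {
    url:
      "https://generativelanguage.googleapis.com/v1beta/models/" +
      "gemini-3-pro-image-preview:generateContent",
    headers: {
      "Content-Type": "application/json",
      "x-goog-api-key": apiKey, // API-key auth header for the Developer API
    },
    body: JSON.stringify({
      contents: [{ parts: [{ text: prompt }] }],
      generationConfig: {
        responseModalities: ["IMAGE"],
        imageConfig: { aspectRatio: "16:9", imageSize: "2K" },
      },
    }),
  };
}

// Usage sketch:
//   const req = buildRequest(process.env.GEMINI_API_KEY, "A product hero image");
//   const res = await fetch(req.url, { method: "POST", headers: req.headers, body: req.body });
```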

Route 2: Vertex AI For Governed Production Use

Vertex AI is the better answer once the integration stops being "one developer with an API key" and starts being "a Cloud workload that other people must operate safely." That can happen earlier than many teams expect. The moment you need IAM, service accounts, project-level billing separation, batch work, or a path toward provisioned capacity, Vertex AI stops looking like overkill and starts looking like the correct home.

[Image: Authentication split showing API key for AI Studio and Cloud auth for Vertex AI]

The important conceptual shift is authentication. With the Gemini Developer API route, your code is centered around an API key. With Vertex AI, your code is centered around Cloud auth. Google's current image-generation guide shows the GenAI SDK configured for Vertex with GOOGLE_CLOUD_PROJECT, GOOGLE_CLOUD_LOCATION, and GOOGLE_GENAI_USE_VERTEXAI=True, which is the cleanest sign that you are now in the Google Cloud operating model rather than the standalone API-key model.

When this route is right

Choose Vertex AI when:

  • the app belongs to a team instead of one developer
  • you need IAM, audit, and project governance
  • you want batch processing or provisioned throughput
  • service-account or ADC-based auth is a better fit than distributing API keys
  • Cloud billing and operational ownership matter as much as prompt quality

Minimal Node.js example on Vertex

Install the same SDK:

```bash
npm install @google/genai
```

Set the environment variables shown in Google's Vertex docs:

```bash
export GOOGLE_CLOUD_PROJECT=your-project-id
export GOOGLE_CLOUD_LOCATION=global
export GOOGLE_GENAI_USE_VERTEXAI=True
```

Then call the same model through Vertex:

```javascript
import fs from "node:fs";
import { GoogleGenAI, Modality } from "@google/genai";

const client = new GoogleGenAI({
  vertexai: true,
  project: process.env.GOOGLE_CLOUD_PROJECT,
  location: process.env.GOOGLE_CLOUD_LOCATION || "global",
});

const response = await client.models.generateContent({
  model: "gemini-3-pro-image-preview",
  contents:
    "Create a premium launch poster for a smart watch, crisp typography, dark editorial lighting.",
  config: {
    responseModalities: [Modality.IMAGE],
  },
});

for (const part of response.candidates[0].content.parts) {
  if (part.inlineData) {
    fs.writeFileSync(
      "vertex-nano-banana-pro-output.png",
      Buffer.from(part.inlineData.data, "base64"),
    );
  }
}
```

Notice what did not change: the model string. Notice what did change: the client setup and the surrounding auth assumptions. That is the whole article in code form.

Pricing, Preview Status, And The Most Common Contract Mistake

Google's current pricing pages are unusually helpful here because they make the paid boundary explicit. On the Gemini Developer API pricing page, gemini-3-pro-image-preview has no free tier, with image output priced at $0.134 per 1K/2K image and $0.24 per 4K image. On the Vertex AI pricing page, the Standard output math lands on the same per-image equivalents, while Flex/Batch pricing cuts that to $0.067 for 1K/2K and $0.12 for 4K.

That means two useful things are true at once:

  • AI Studio is not the cheap route because it is "more basic."
  • Vertex AI is not automatically the expensive route just because it lives inside Cloud.

The bigger difference is operational. AI Studio optimizes for a fast API-key workflow. Vertex optimizes for Cloud-native operation and larger-scale workload patterns.

There is also a contract mistake that shows up constantly in this topic: people see that AI Studio itself can still be used without a bill in some contexts and then conclude that Nano Banana Pro API access must therefore be free. That is not what Google's docs say. The billing FAQ is more precise. AI Studio remains free to use unless you link a paid API key for access to paid features. Nano Banana Pro is one of the model paths where the actual API usage is paid.
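To make the billing boundary concrete, here is a tiny cost estimator using the per-image rates quoted above. The numbers are a snapshot of the rates cited in this article, not a live price feed, and the function itself is an illustration, not anything Google ships.

```javascript
// Per-image output rates quoted in this article (snapshot, not a live
// price feed): Developer API standard vs Vertex Flex/Batch.
const RATES = {
  developerApi: { "1K": 0.134, "2K": 0.134, "4K": 0.24 },
  vertexFlexBatch: { "1K": 0.067, "2K": 0.067, "4K": 0.12 },
};

// Estimate monthly image-output spend for a route, output size, and
// expected image count. Throws on an unknown route/size combination.
function estimateMonthlyCost(route, size, imagesPerMonth) {
  const rate = RATES[route]?.[size];
  if (rate === undefined) throw new Error(`unknown route/size: ${route}/${size}`);
  return rate * imagesPerMonth;
}
```

At 10,000 2K images a month, that puts the Developer API route at roughly $1,340 against roughly $670 on Vertex Flex/Batch, which is exactly the kind of gap that makes the route decision an infrastructure question rather than a UI preference.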

Can You Start In AI Studio And Move To Vertex Later?

Yes, and for many teams that is the most rational progression. Start in AI Studio when the task is learning prompt behavior, validating output quality, and shipping the first thin integration. Move to Vertex AI when the architecture starts asking for Cloud-native controls rather than just a working endpoint.

[Image: Workflow diagram showing the first official Nano Banana Pro request from prompt to model to output image]

The migration is easier than it sounds because the model identity does not change. You are not abandoning Nano Banana Pro for something else. You are changing the surrounding contract:

  • from API key auth to Cloud auth
  • from AI Studio project/key management to Cloud project/IAM operations
  • from quick integration defaults to more explicit operational control
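That contract change can be sketched as a small route selector that reads the same environment variables Google's docs use and returns plain constructor options for the GoogleGenAI client. The function is a sketch of this article, not an SDK feature; the point it encodes is that only the client options change, never the model string.

```javascript
// Pick the official route from the environment, mirroring the env
// variables in Google's Vertex docs. Returns plain options suitable
// for the GoogleGenAI constructor.
function routeConfig(env) {
  if (env.GOOGLE_GENAI_USE_VERTEXAI === "True") {
    return {
      vertexai: true,
      project: env.GOOGLE_CLOUD_PROJECT,
      location: env.GOOGLE_CLOUD_LOCATION || "global",
    };
  }
  // Default: the API-key route via the Gemini Developer API.
  return { apiKey: env.GEMINI_API_KEY };
}

// Identical on both routes; migration never touches this string.
const MODEL = "gemini-3-pro-image-preview";
```

Usage is then `new GoogleGenAI(routeConfig(process.env))`, so flipping a deployment from the API-key path to Vertex is an environment change rather than a code rewrite.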

That is why it is usually a mistake to start every experiment in Vertex AI, and it is also a mistake to assume you should stay on the AI Studio path forever. The right answer depends on where the operational weight lives today.

Choose In 30 Seconds

If you still want the shortest decision rule possible, use this one.

Start with AI Studio and the Gemini Developer API if your real question is:

  • "How do I get the first request working fast?"
  • "How do I validate prompts before building more infrastructure?"
  • "Can I use the official Google path without setting up Cloud operations first?"

Start with Vertex AI if your real question is:

  • "How do I make this fit our Google Cloud environment?"
  • "How do I give a team governed access instead of passing around API keys?"
  • "How do I plan for batch or provisioned capacity?"

And if your real question is actually whether Pro is the right model at all, do not force the route decision before the model decision. Read Nano Banana Pro vs Nano Banana 2 first. For many workloads, that is where the bigger cost and architecture win sits.
