
How to Use Seedance 2.0 API in 2026: What Actually Works Today

15 min read · AI Video Generation

As of March 27, 2026, the official Seedance 2.0 API is still not generally available. This guide shows what Volcengine and fal.ai actually support today, how to use the working Ark video API with Seedance 1.5 Pro, and how to structure your code so switching to Seedance 2.0 later is a model-ID change instead of a rewrite.


If you search for "Seedance 2.0 API" today, you will find a lot of pages that sound more certain than the official documentation actually is. The important distinction is simple: Seedance 2.0 the model is real and available for manual testing, but the official Seedance 2.0 API is still not generally open as of March 27, 2026. Volcengine's own create-task API page says Seedance 2.0 currently supports only the Ark experience center's free-quota testing and "暂不支持 API 调用" ("does not yet support API calls"). At the same time, Volcengine's video-generation API itself is live, and the current officially callable models are still the Seedance 1.5 and 1.0 family. That means the practical question is not "Which official Seedance 2.0 endpoint do I call right now?" but "How do I build a migration-safe video generation integration now without wiring my product to assumptions the vendor has not published?"

This guide answers that directly. It shows what the official sources currently say, what path works today if you need code, and how to prepare for Seedance 2.0 GA without rewriting your queueing, polling, or storage logic later.

TL;DR

  • As of March 27, 2026, the official Volcengine create-task doc explicitly says Seedance 2.0 is not yet available for API calls and is limited to the Ark experience center's free quota.
  • Volcengine's Ark video generation API is live, but the callable official model shown in current docs and Volcengine's March community guidance is doubao-seedance-1-5-pro-251215, not a public Seedance 2.0 API model.
  • fal.ai's own Seedance 2.0 page still says "Coming soon."
  • If you need a production-shaped integration today, the safest official path is to build your submit-poll-download flow against Ark + Seedance 1.5 Pro, keep model configurable, and swap only after Volcengine publishes a real 2.0 API contract.
  • If a third-party provider claims Seedance 2.0 access already, verify the exact model ID, native audio behavior, retention window, failure billing, and commercial terms before you depend on it.

[Figure: Route matrix for the Seedance 2.0 API — official console-only testing, live Ark video API for Seedance 1.5/1.0, fal.ai coming soon, and provider claims that require verification]

What the official sources actually say right now

The cleanest way to think about the current situation is to separate model availability from API availability.

The official ByteDance Seed model page describes Seedance 2.0 as a multimodal audio-video generation model that accepts text, image, audio, and video inputs. That confirms the product and its capability direction. But the official Volcengine create-task API doc adds the detail that matters for developers: Seedance 2.0 is not yet callable through the API. The official Volcengine virtual avatar library doc repeats the same limitation even while describing Seedance 2.0 testing inside the Ark experience center and showing that the console can expose template code samples. In other words, the API surface exists, the model exists, but the official public contract for calling Seedance 2.0 through that API is still gated.

Volcengine's own March 2026 developer-community article about OpenClaw and Seedance is useful here because it lists the currently supported video model IDs for real integrations and says Seedance 2.0 support will arrive after API GA. It names doubao-seedance-1-5-pro-251215 as the default current model and explicitly points people to the Ark experience center for Seedance 2.0 experimentation in the meantime. That is not the same thing as a formal API spec, but it lines up with the formal API docs rather than contradicting them.

fal.ai tells a similar story. Its official Seedance 2.0 page currently says "Coming soon" and explains that API access will be available once the launch happens. So even one of the most credible model-serving platforms is still positioning Seedance 2.0 as not yet live for general API use.

That gives us a much better answer than the stale "the API is delayed but basically available everywhere" framing you still see in search results. The official evidence says:

| Route | Current state on March 27, 2026 | What it is good for |
| --- | --- | --- |
| Ark experience center | Seedance 2.0 is available for manual testing inside the console | Prompt testing, asset testing, template exploration |
| Official Ark video API | Live, but currently documented around Seedance 1.5 and 1.0 model IDs | Production-shaped backend integration today |
| fal.ai Seedance 2.0 | Coming soon | Something to monitor, not something to hard-depend on yet |
| Third-party "Seedance 2.0 API" claims | Provider-specific and uneven | Possible fast access, but only after careful verification |

The practical gap in current search results is that many pages talk about Seedance 2.0 features and then jump straight into generic API examples without first answering whether the official API is actually open. For this query, that missing reality check is the most important part of the tutorial.

The safest working path today: Volcengine Ark video API with Seedance 1.5 Pro

If you need working code now, the most defensible official route is not to invent an unofficial seedance-2.0 integration. It is to use the current Ark video-generation API with the latest officially callable Seedance model, which today is Seedance 1.5 Pro.

That path matters because it lets you solve the hard engineering parts now: authentication, async job submission, status polling or callbacks, result persistence, expiry handling, and retry behavior. Those are the parts most teams actually need to productionize. When Volcengine eventually opens official Seedance 2.0 API access, the safest possible migration is for your code to swap a model ID and then add any newly documented input modes or parameters, instead of forcing a full rewrite of your job pipeline.

The current official integration shape looks like this:

| Item | Current official value |
| --- | --- |
| Submit endpoint | `POST https://ark.cn-beijing.volces.com/api/v3/contents/generations/tasks` |
| Query endpoint | `GET https://ark.cn-beijing.volces.com/api/v3/contents/generations/tasks/{id}` |
| Auth | `Authorization: Bearer $ARK_API_KEY` |
| Current callable official model | `doubao-seedance-1-5-pro-251215` |
| Current official 2.0 status | Experience center only, no public API calls yet |
| Common inputs | `content` array with `text`, `image_url`, or `draft_task` objects |
| Useful parameters | `resolution`, `ratio`, `duration`, `generate_audio`, `return_last_frame`, `callback_url`, `service_tier`, `execution_expires_after` |
| Query statuses | `queued`, `running`, `succeeded`, `failed`, `expired`, `cancelled` |
| Result retention | `video_url` is cleared after 24 hours; query history covers the last 7 days |

Two details from the official docs deserve extra attention.

First, Ark's video API is asynchronous by design. You submit a task, receive an id, then either poll or handle a callback. This is not a minor implementation detail. It affects queueing, timeouts, background workers, user messaging, and how you manage temporary result URLs.

Second, the docs already expose production-friendly controls that many generic tutorials ignore. callback_url lets you avoid constant polling, return_last_frame lets you chain adjacent clips, service_tier lets you express latency-versus-cost preferences, and execution_expires_after gives you a way to make timeout handling explicit instead of accidental.
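To make those controls concrete, here is a sketch of a submit payload that uses them. The parameter names come from the official docs listed above, but the example values are placeholders: the callback URL is hypothetical, and the accepted values for `service_tier` and `execution_expires_after` should be confirmed against the current create-task doc before use.

```json
{
  "model": "doubao-seedance-1-5-pro-251215",
  "content": [{ "type": "text", "text": "Your prompt here." }],
  "generate_audio": true,
  "return_last_frame": true,
  "callback_url": "https://example.com/ark/callbacks",
  "service_tier": "default",
  "execution_expires_after": 3600
}
```

With `callback_url` set, Ark notifies your endpoint on state changes, so your workers only need polling as a fallback rather than as the primary mechanism.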

Step 1: Submit a video generation task

The first request creates the job and returns a task ID. This example uses the currently supported official Ark model ID rather than pretending an official Seedance 2.0 API model is already open.

```bash
export ARK_API_KEY="your-ark-api-key"

curl -X POST "https://ark.cn-beijing.volces.com/api/v3/contents/generations/tasks" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer ${ARK_API_KEY}" \
  -d '{
    "model": "doubao-seedance-1-5-pro-251215",
    "content": [
      {
        "type": "text",
        "text": "A cinematic tracking shot of a woman in a red coat crossing a rainy night street, neon reflections on the pavement, natural ambient city sound, realistic motion."
      }
    ],
    "resolution": "720p",
    "ratio": "16:9",
    "duration": 5,
    "generate_audio": true,
    "return_last_frame": true,
    "watermark": false
  }'
```

The response is small by design. A typical success case returns a JSON object with only the task ID:

```json
{ "id": "cgt-2026xxxx-xxxx" }
```

The model-specific input rules matter here. For example, Seedance 1.5 Pro supports audio generation, while some older Seedance variants do not. The create-task doc also distinguishes text-only, first-frame image-to-video, first-and-last-frame image-to-video, reference-image flows, and draft-task promotion to final video. If you are building an internal abstraction layer, those capabilities should live in configuration, not hardcoded assumptions.
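As an illustration of why those rules belong in configuration, here is a sketch of a first-frame image-to-video request following the same content-array pattern as the text-only example. The image URL is a placeholder, and the exact nesting and role marker for frame images should be confirmed against the create-task doc for the model you are calling.

```json
{
  "model": "doubao-seedance-1-5-pro-251215",
  "content": [
    { "type": "text", "text": "The camera pulls back slowly from the subject." },
    {
      "type": "image_url",
      "image_url": { "url": "https://example.com/first-frame.png" },
      "role": "first_frame"
    }
  ]
}
```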

Step 2: Query task status until it finishes

Once you have the task ID, query the task endpoint until the status reaches succeeded, failed, expired, or cancelled.

```bash
curl -X GET "https://ark.cn-beijing.volces.com/api/v3/contents/generations/tasks/cgt-2026xxxx-xxxx" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer ${ARK_API_KEY}"
```

When the job succeeds, the response includes the final output URL plus the generation metadata that is useful for logging, billing inspection, or debugging:

```json
{
  "id": "cgt-2026xxxx-xxxx",
  "model": "doubao-seedance-1-5-pro-251215",
  "status": "succeeded",
  "content": {
    "video_url": "https://ark-content-generation-cn-beijing.tos-cn-beijing.volces.com/..."
  },
  "usage": {
    "completion_tokens": 108900,
    "total_tokens": 108900
  },
  "resolution": "720p",
  "ratio": "16:9",
  "duration": 5,
  "framespersecond": 24,
  "generate_audio": true,
  "draft": false
}
```

The official query-task doc notes two operational details that are easy to miss when people copy minimal examples:

  • The generated video_url is only retained for 24 hours, so download or re-store it promptly.
  • The query endpoint only covers the most recent 7 days of task history.

Those two rules should directly shape your backend. If you are building user-facing video generation, do not treat the vendor URL as permanent storage.
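A minimal sketch of that re-storage step, using only the standard library. The 24-hour TTL comes from the query-task doc; the helper names and the local-file destination are illustrative — in production you would stream to object storage you control instead.

```python
import shutil
import urllib.request
from datetime import datetime, timedelta

# The query-task doc says video_url is cleared after 24 hours, so compute a
# hard deadline from task creation time and fetch the file well before it.
RESULT_TTL = timedelta(hours=24)


def download_deadline(created_at: datetime) -> datetime:
    """Latest safe moment to fetch the temporary Ark result URL."""
    return created_at + RESULT_TTL


def persist_video(video_url: str, dest_path: str) -> str:
    """Stream the short-lived result URL into storage you control."""
    with urllib.request.urlopen(video_url, timeout=120) as resp:
        with open(dest_path, "wb") as fh:
            shutil.copyfileobj(resp, fh)
    return dest_path
```

Run this as soon as a task reaches `succeeded`; treating the vendor URL as a cache key rather than a storage location keeps the 24-hour rule out of your product logic.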

[Figure: Migration-safe Ark integration flow — submit a job, poll or receive callbacks, store the video before URL expiry, and keep the model ID configurable for a future Seedance 2.0 swap]

Python example: small, production-shaped client

The Python example below implements the official Ark lifecycle with a configurable model ID. That single design choice is what makes this code useful for a future Seedance 2.0 GA instead of locking you into today's latest callable model forever.

```python
import os
import time

import requests

ARK_BASE = "https://ark.cn-beijing.volces.com/api/v3/contents/generations/tasks"


class ArkVideoClient:
    def __init__(self, api_key: str, model: str = "doubao-seedance-1-5-pro-251215"):
        self.model = model
        self.session = requests.Session()
        self.session.headers.update(
            {
                "Authorization": f"Bearer {api_key}",
                "Content-Type": "application/json",
            }
        )

    def submit(self, prompt: str) -> str:
        payload = {
            "model": self.model,
            "content": [{"type": "text", "text": prompt}],
            "resolution": "720p",
            "ratio": "16:9",
            "duration": 5,
            "generate_audio": True,
            "return_last_frame": True,
            "watermark": False,
        }
        resp = self.session.post(ARK_BASE, json=payload, timeout=60)
        resp.raise_for_status()
        return resp.json()["id"]

    def wait(self, task_id: str, poll_interval: int = 4, timeout: int = 300) -> dict:
        deadline = time.time() + timeout
        while time.time() < deadline:
            resp = self.session.get(f"{ARK_BASE}/{task_id}", timeout=30)
            resp.raise_for_status()
            task = resp.json()
            status = task["status"]
            if status == "succeeded":
                return task
            if status in {"failed", "expired", "cancelled"}:
                raise RuntimeError(task.get("error", {}).get("message", status))
            time.sleep(poll_interval)
        raise TimeoutError(f"Task {task_id} did not finish within {timeout}s")


if __name__ == "__main__":
    client = ArkVideoClient(api_key=os.environ["ARK_API_KEY"])
    task_id = client.submit(
        "Realistic rainy alley at night, camera tracks forward, footsteps and city ambience."
    )
    result = client.wait(task_id)
    print(result["content"]["video_url"])
```

The key point is not the syntax. It is the separation of concerns. submit() knows how to create the job. wait() knows how to interpret Ark's status contract. If Seedance 2.0 later becomes available under the same async pattern, the migration surface becomes much smaller.

Node.js example: the same Ark flow with fetch

If your stack is JavaScript or TypeScript, keep the exact same shape: configurable model, submit once, then poll until terminal state.

```ts
const BASE_URL =
  "https://ark.cn-beijing.volces.com/api/v3/contents/generations/tasks";

async function submitVideo(
  apiKey: string,
  prompt: string,
  model = "doubao-seedance-1-5-pro-251215"
) {
  const response = await fetch(BASE_URL, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${apiKey}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      model,
      content: [{ type: "text", text: prompt }],
      resolution: "720p",
      ratio: "16:9",
      duration: 5,
      generate_audio: true,
      return_last_frame: true,
      watermark: false,
    }),
  });
  if (!response.ok) {
    throw new Error(`Submit failed: ${response.status} ${await response.text()}`);
  }
  const task = await response.json();
  return task.id as string;
}

async function waitForVideo(apiKey: string, taskId: string, timeoutMs = 300000) {
  const deadline = Date.now() + timeoutMs;
  while (Date.now() < deadline) {
    const response = await fetch(`${BASE_URL}/${taskId}`, {
      headers: { Authorization: `Bearer ${apiKey}` },
    });
    if (!response.ok) {
      throw new Error(`Status check failed: ${response.status} ${await response.text()}`);
    }
    const task = await response.json();
    if (task.status === "succeeded") return task;
    if (["failed", "expired", "cancelled"].includes(task.status)) {
      throw new Error(task.error?.message ?? task.status);
    }
    await new Promise((resolve) => setTimeout(resolve, 4000));
  }
  throw new Error(`Timed out waiting for ${taskId}`);
}
```

This pattern is intentionally boring. That is a feature, not a flaw. When official Seedance 2.0 finally gets a public contract, boring code migrates better than clever code.

How to prepare for official Seedance 2.0 GA without rewriting later

The most useful thing you can do now is not to guess at hidden endpoints. It is to make your current integration tolerant of model churn.

Keep the model ID in configuration, not in business logic. The biggest future difference may be as small as swapping from doubao-seedance-1-5-pro-251215 to a documented 2.0 model ID, or as large as adding extra content modalities. Either way, hardcoding the model name in five places only makes launch day harder.

Separate job orchestration from model capability flags. Your queue, callback handler, timeout rules, storage pipeline, and user notifications should not need to know whether the active model supports audio, draft mode, last-frame return, or reference-image counts. Those belong in a capability map or provider config.
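One way to sketch that capability map, with a loud caveat: only `doubao-seedance-1-5-pro-251215` and its audio and last-frame flags come from the current docs. The map structure, the `build_payload` helper, and the commented-out 2.0 entry are illustrative placeholders to fill in once Volcengine publishes a real model ID and schema.

```python
# Hypothetical capability map: flags live in config, not in business logic.
MODEL_CAPABILITIES = {
    "doubao-seedance-1-5-pro-251215": {
        "generate_audio": True,
        "return_last_frame": True,
    },
    # "doubao-seedance-2-0-<pending>": {...},  # add after the official API GA
}


def build_payload(model: str, prompt: str) -> dict:
    """Assemble a submit payload from capability flags for the active model."""
    caps = MODEL_CAPABILITIES[model]
    payload = {
        "model": model,
        "content": [{"type": "text", "text": prompt}],
    }
    if caps.get("generate_audio"):
        payload["generate_audio"] = True
    if caps.get("return_last_frame"):
        payload["return_last_frame"] = True
    return payload
```

On launch day, supporting a new model becomes a config entry plus whatever new flags its documentation adds, rather than a sweep through your orchestration code.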

Use the Ark experience center to test prompt logic and asset workflows now. The official virtual avatar library page is helpful precisely because it shows Seedance 2.0 already works in the console for template experimentation, asset IDs, and quick manual testing. That lets you validate creative direction even though the official API contract is still closed.

Treat "same endpoint later" as an inference, not a guarantee. Based on the current Ark docs, it is reasonable to infer that the async submit-query lifecycle is more likely to stay stable than the model-specific payload rules. But that is still an inference from today's docs, not an official migration promise. When 2.0 GA arrives, re-check the create-task page before you flip traffic.

How to vet third-party "Seedance 2.0 API" claims

This is where most current search coverage is weakest. Search results are crowded with wrapper pages, affiliate roundups, and providers that mix Seedance 2.0 marketing language with code examples that actually look like generic video jobs. If you really do need third-party access before official GA, the right question is not "Which listicle ranks first?" It is "What evidence shows this provider is exposing the model and behavior I think I am buying?"

Use this checklist before you integrate:

| Verify this first | Why it matters |
| --- | --- |
| Exact model ID or route name | Distinguishes real 2.0 access from recycled 1.5 or generic video endpoints |
| Whether output includes native audio | Seedance 2.0's headline capability is audio-video joint generation; silent output is not the same product behavior |
| Result retention window | Official Ark URLs expire after 24 hours; many providers also use short-lived result links |
| Failure billing policy | Async video generation is expensive enough that "you pay on failed jobs" changes integration economics fast |
| Callback or polling contract | Your backend needs terminal states, retry rules, and a stable schema |
| Commercial and likeness terms | Especially important if a provider markets character references, voice sync, or real-person workflows |

The safest pattern is to test any provider with a short prompt suite before you commit: one text-to-video prompt, one image-conditioned prompt if supported, one audio-sensitive prompt if the provider claims native audio, and one failure case to inspect error behavior. A provider that cannot answer those questions cleanly is not yet a production dependency, even if its landing page says "Seedance 2.0 API."
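That prompt suite can be sketched as a small probe runner. Everything here is hypothetical scaffolding: `generate` is a per-provider adapter you write yourself (payload in, terminal task dict out, or an exception), and the result fields it inspects mirror the Ark response shape rather than any provider's guaranteed schema.

```python
from typing import Callable


def run_provider_probes(generate: Callable[[dict], dict]) -> dict:
    """Run a minimal acceptance suite against a claimed Seedance 2.0 provider."""
    results = {}

    # 1. Plain text-to-video: record the reported model ID, not the marketing name.
    task = generate({"content": [{"type": "text", "text": "A dog runs on a beach."}]})
    results["model_id"] = task.get("model", "<missing>")

    # 2. Audio-sensitive prompt: silent output from a "2.0" provider is a red flag.
    task = generate(
        {
            "content": [{"type": "text", "text": "Waves crash with loud surf sound."}],
            "generate_audio": True,
        }
    )
    results["audio_reported"] = bool(task.get("generate_audio"))

    # 3. Deliberate failure: an empty prompt should surface a structured error,
    #    and its billing line tells you whether failed jobs cost money.
    try:
        generate({"content": [{"type": "text", "text": ""}]})
        results["failure_behavior"] = "accepted-empty-prompt"
    except Exception as exc:
        results["failure_behavior"] = f"error: {exc}"

    return results
```

Running this once per candidate provider gives you a comparable record of model ID, audio behavior, and error handling before any real traffic depends on them.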

[Figure: Provider verification matrix for Seedance 2.0 claims — check model ID, audio behavior, retention, billing, and licensing before depending on any unofficial API route]

FAQ

Is the official Seedance 2.0 API available today?

Not as a generally documented public API. As of March 27, 2026, Volcengine's official create-task doc says Seedance 2.0 currently supports only Ark experience-center testing within the free quota and does not yet support API calls.

Can I still use Seedance 2.0 anywhere officially?

Yes, but officially that means the Ark experience center, not a public 2.0 API contract. The experience center is useful for testing prompts, templates, asset references, and manual workflows while you wait for GA.

What should I build against if I need code now?

Build against the official Ark video-generation API using the currently supported Seedance 1.5 Pro model ID. That gives you a real async video pipeline now and keeps your migration surface smaller later.

Does the official Ark video API already support audio?

Yes, for Seedance 1.5 Pro. The current docs expose generate_audio for that model and the query API returns whether audio was generated.

Should I wait for Seedance 2.0 GA before doing any backend work?

Only if your product depends specifically on 2.0-only behavior and you cannot ship with today's official models. Otherwise, the job orchestration, storage, retry, callback, and observability work can all be done now against Ark's existing official video API.

What should I monitor for the real launch?

Monitor the official Volcengine create-task API doc, the virtual avatar library / experience-center docs, Volcengine's official developer announcements, and the fal.ai Seedance 2.0 page. The signal you care about is not marketing copy. It is a documented callable model ID and a stable request schema.
