
Visualize the Future: Crafted by AI, Inspired by You

© Copyright 2026 Pixio. All Rights Reserved.

Gen-4 Act-Two · Pixio video system · Built for directed motion

Gen-4 Act-Two

Character-driven Runway video: drive a character with a reference image and motion direction for consistent, animated characters.

Pixio read

This model gets stronger as the shot becomes more explicit. Give it a subject, a move, a frame, and a mood so the output feels directed instead of guessed.

Open in Pixio · Study the workflow

Best results start with a directed prompt or a strong first frame.

Why creators use it
  • Strong first frames win
  • Camera language matters
  • Built for short-form motion
  • Prompt: Direction-first input
  • Image: Reference-ready control
  • Motion: Workflow behavior
  • Short-form: Production fit
Pixio briefing

How to get the best out of Gen-4 Act-Two

Prompt to Motion
Best when you want to direct the whole shot from language.
New scenes, camera intent, atmosphere-first ideation.
Image to Video
Best when the first frame or reference look needs to stay locked.
Keyframes, product shots, character continuity, style anchoring.
Scale to Finals
Best when the clip already works and you want more control instead of a reroll.
Continuations, polish passes, cleanup, stronger finals.
Basic Info

Gen-4 Act-Two on Pixio is Runway’s character-driven video model. You provide a reference image of a character (or person) and a text prompt that describes how they should move or act; the model generates video that keeps the character consistent across the clip. Use it when you need a specific character or spokesperson to perform an action or deliver a scene—talking, gesturing, or moving—without character drift.

Use this when

  • You have a character reference (photo, illustration, or design) and need them to perform in video—talking, gesturing, walking, or acting.
  • You want character consistency—same face, look, and proportions across the generated clip.
  • You need motion and expression driven by a text prompt (e.g. “waves at camera”, “explains product with hand gestures”).
  • You’re building spokesperson, avatar, or character animation content without full lip-sync or voice (pair with Act-One or voice tools for speech).

Modes in Pixio

Mode | Input | Best for
Character to Video | One character reference image + prompt | Character performs the described action; consistency from reference

Options

Option | Values | Notes
Reference | One image (character/person) | Clear face and body; front or three-quarter view works best
Duration | Depends on backend | Check Pixio for limits
Prompt | Action, expression, camera | Describe what the character does, not their appearance

Credits

Credits depend on duration and plan; check the model card in Pixio for current rates.
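Putting the options together, a job spec might look like the sketch below. The field names (`model`, `reference_image`, `prompt`, `duration_seconds`) are illustrative assumptions, not Pixio's documented API; check the API reference for the real schema.

```python
def build_act_two_job(reference_url: str, action_prompt: str,
                      duration_seconds: int = 5) -> dict:
    """Assemble one Act-Two job: a single reference image plus an action prompt.

    All field names here are hypothetical placeholders, not Pixio's real schema.
    """
    return {
        "model": "gen-4-act-two",              # assumed model identifier
        "reference_image": reference_url,      # clear face/body, front or 3/4 view
        "prompt": action_prompt,               # action and expression, not appearance
        "duration_seconds": duration_seconds,  # limits depend on the backend
    }

job = build_act_two_job("https://example.com/character.png",
                        "Waves at camera with a friendly smile.")
```

Note the split of responsibilities: the reference image carries appearance, the prompt carries only the action.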

Why Act-Two fits character-driven video

Act-Two is built for one character in, one character out: the reference image defines who we see, and the prompt defines what they do. The model keeps the character’s look consistent while animating motion and expression. Use it for spokesperson clips, character moments, or when you need a specific person or character to perform an action. For talking heads with lip-sync and voice, combine with Runway Act-One or other voice/lip-sync tools.

Learn in the Academy

Step-by-step lessons, hands-on prompts, and a quiz to master Gen-4 Act-Two.

Open course

Use in Pixio

Open Pixio Generate and try Gen-4 Act-Two right now.

  • Prompting (directed shot language): subject, action, camera, environment, lighting, style.
  • Iteration (short passes first): tighten rhythm before spending on finals.
  • Reference (use when needed): reference frames help when identity and composition must survive.
Practical playbook
Use these heuristics to get cleaner, more controllable outputs without wasting runs.
Prompt architecture
Build the output like a creative brief.
[Subject] + [Action] + [Camera Movement] + [Environment] + [Lighting] + [Style]
Prompt demo
A runner turns into a rain-soaked alley, camera tracking low beside them, reflected neon in the puddles, late-night city atmosphere, cinematic contrast, tense and propulsive pacing.

A strong video prompt gives the scene a subject, a move, camera behavior, and a mood to hold onto.
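The six-slot brief above can be assembled programmatically. A minimal sketch (the function name and field order are illustrative, not part of any Pixio API):

```python
def build_shot_prompt(subject: str, action: str, camera: str,
                      environment: str, lighting: str, style: str) -> str:
    """Join the six brief fields into one directed shot prompt."""
    return ", ".join([subject, action, camera, environment, lighting, style])

prompt = build_shot_prompt(
    "A runner",
    "turns into a rain-soaked alley",
    "camera tracking low beside them",
    "reflected neon in the puddles",
    "late-night city atmosphere",
    "cinematic contrast",
)
```

Filling every slot is the point: an empty slot is a decision you are leaving to the model.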

Modes and controls
Direct the whole scene
Prompt to Motion

Start from language and push for camera intent, pacing, atmosphere, and shot design in one move.

Talking head with lip-sync and voice
Act-One

Pair with Act-One or voice tools when the character needs to speak.

Prompt structure

Describe the character’s action and expression, not their look. The reference image defines appearance.

  • "Waves at camera with a friendly smile."
  • "Nods thoughtfully, then gestures toward the product."
  • "Walks slowly toward camera, neutral expression."

Keep one clear action per prompt.
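The one-action rule can even be linted mechanically. A toy heuristic (splitting on sequencing cues like ", then" and ";"; purely illustrative, not a Pixio feature):

```python
import re

def count_actions(prompt: str) -> int:
    """Rough action count: split on sequencing cues such as ', then' or ';'."""
    clauses = re.split(r",\s*then\s+|;\s*", prompt.strip())
    return sum(1 for c in clauses if c.strip(" ."))

single = count_actions("Waves at camera with a friendly smile.")               # 1 action
double = count_actions("Nods thoughtfully, then gestures toward the product.") # 2 actions
```

If the count comes back above one, split the idea into separate clips.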

When to use Gen-4 Act-Two vs other models

Scenario | Best choice
Character-driven clip from one reference | Gen-4 Act-Two
Talking head + lip-sync + voice | Fabric, Character 3, OmniHuman, or Act-One + voice
General image-to-video (no character focus) | Gen-4 (Image to Video) or Seedance 2 Pro
Restyle existing video | Gen-4 Aleph

Tips

  • Use a clear reference—face and body visible, good lighting, front or three-quarter view.
  • Prompt = action and expression; avoid re-describing the character’s look.
  • One action per clip for best consistency.
  • Combine with Act-One or voice tools when you need speech and lip-sync.
Open Generate
1. Start with a strong first frame when consistency matters more than surprise.
2. Keep each prompt focused on one primary motion direction.
3. Use shorter runs for iteration, then scale up for finals.
4. For narratives, structure the idea as Shot 1 / Shot 2 / Shot 3 instead of one flat blob.
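Structuring a narrative as shots rather than one flat blob can be as simple as numbering the beats. A sketch (the helper name is hypothetical):

```python
def to_shot_briefs(beats: list[str]) -> list[str]:
    """Turn narrative beats into numbered shot prompts instead of one flat blob."""
    return [f"Shot {i}: {beat}" for i, beat in enumerate(beats, start=1)]

briefs = to_shot_briefs([
    "runner enters the rain-soaked alley",
    "camera tracks low beside them",
    "they stop under the flickering neon sign",
])
```

Each brief then becomes its own generation, keeping one action per clip.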

Lock the look first
Image to Video

Start from a frame or reference when consistency matters more than improvisation.

Keep the motion usable
Final Pass

Continue or refine the clip without throwing away the visual language you already established.

Best use cases
1. Gen-4 Act-Two works well when the prompt needs motion, framing, and visual direction, not just subject matter.
2. Use it for sequences that need a strong first frame, continuity, or a clearly controlled camera idea.
3. Treat each generation like a shot brief instead of a loose caption to get more cinematic outputs.

Pixio workflow
Step 01
Anchor the shot

Start with either a directed text brief or a strong frame, depending on how locked the look already is.

Step 02
Direct the move

Write the motion like a director: subject, action, camera behavior, environment, lighting, and tone.

Step 03
Scale to finals

Iterate fast on shorter runs, then move to stronger finals once the rhythm feels right.
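The iterate-short-then-scale step can be organized as a simple render plan: draft every shot cheaply, then rerun only the approved ones at final length. A sketch with illustrative durations (real limits depend on the Pixio backend):

```python
def render_plan(shots: list[dict], draft_seconds: int = 3,
                final_seconds: int = 8) -> list[dict]:
    """Draft every shot short and cheap; rerun only approved shots at final length."""
    plan = []
    for shot in shots:
        plan.append({"prompt": shot["prompt"], "seconds": draft_seconds, "pass": "draft"})
        if shot.get("approved"):
            plan.append({"prompt": shot["prompt"], "seconds": final_seconds, "pass": "final"})
    return plan

plan = render_plan([
    {"prompt": "Shot 1: runner enters the alley", "approved": True},
    {"prompt": "Shot 2: camera tracks low beside them", "approved": False},
])
```

Here the two shots yield two drafts plus one final pass, so credits go to the clip that already works.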

Best paired with
Nano Banana Pro

Use it to build a stronger first frame, then hand that frame to the video model for motion and continuity.

Pixio utilities

Pair it with frame extraction, merge tools, or image prep so the motion workflow stays clean end to end.