• Tools
  • Pricing
  • Workflows
  • All Models
    Maker Mode
  • Gallery
  • Academy
  • Documentation
  • API
  • Status
  • Blog

Visualize the Future: Crafted by AI, Inspired by You

© Copyright 2026 Pixio. All Rights Reserved.

Privacy Policy | Terms of Service | Refund Policy
Gen-4 Aleph (Video to Video) · Pixio video system · Built for directed motion

Gen-4 Aleph (Video to Video)

Runway Gen-4 video-to-video: transform or restyle existing video—change look, style, or content while preserving motion and timing.

Pixio read

This model gets stronger as the shot becomes more explicit. Give it a subject, a move, a frame, and a mood so the output feels directed instead of guessed.

Open in Pixio · Study the workflow

Best results start with a directed prompt or a strong first frame.

Why creators use it

  • Strong first frames win
  • Camera language matters
  • Built for short-form motion
  • Prompt: Direction-first input
  • Frame: Reference-ready control
  • Edit: Workflow behavior
  • Short-form: Production fit
Pixio briefing

How to get the best out of Gen-4 Aleph (Video to Video)

  • Prompt to Motion: best when you want to direct the whole shot from language. New scenes, camera intent, atmosphere-first ideation.
  • Reference Control: best when the first frame or reference look needs to stay locked. Keyframes, product shots, character continuity, style anchoring.
  • Video Edit: best when the clip already works and you want more control instead of a reroll. Continuations, polish passes, cleanup, stronger finals.
Basic Info

Gen-4 Aleph (Video to Video) on Pixio is Runway's video-to-video model: you input existing video and a text prompt (and optionally a reference image) to restyle or transform the clip. Change the look (e.g. cartoon, painting), add or remove objects, adjust lighting, or guide the mood—while preserving motion and timing. Use it when you have footage and want to change its style or content, not generate from a single still.

Use this when

  • You have existing video and want to change its style (e.g. oil painting, anime, sketch) with a text prompt or stylized reference frame.
  • You need to add, remove, or replace objects or change lighting while keeping the original motion.
  • You want Gen-4-level consistency—coherent characters, objects, and locations across the transformed clip.
  • You're building a post pipeline with Runway: generate with Gen-4 image-to-video, then restyle or edit with Aleph, then 4K upscale.

Modes in Pixio

Mode                   Input                                         Best for
Video to Video         Existing video + prompt (± reference image)   Restyle, content edit, lighting; motion preserved
Restyled first frame   Video + restyled first frame as reference     Whole clip follows the new look from frame one
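The two modes differ only in what you attach alongside the clip. As a rough illustration (the payload fields below are hypothetical, not Pixio's actual API schema), a video-to-video request might be assembled like this:

```python
# Hypothetical payload shapes for the two modes above.
# Field names are illustrative only -- check Pixio's API docs for the real schema.

def build_request(video_path, prompt, reference_image=None):
    """Assemble a video-to-video request; attach a restyled first frame to lock the look."""
    payload = {
        "model": "gen-4-aleph",
        "mode": "video-to-video",
        "video": video_path,
        "prompt": prompt,
    }
    if reference_image is not None:
        # Restyled-first-frame mode: the whole clip follows this look from frame one.
        payload["reference_image"] = reference_image
    return payload

# Plain restyle driven by the prompt alone:
basic = build_request("runner.mp4", "Oil painting, visible brushstrokes, warm tones.")

# Restyled first frame as reference:
anchored = build_request("runner.mp4", "Match the reference look.", "frame_001_styled.png")
```

Either way the clip's motion and timing are preserved; only the look or content changes.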

Options

Option     Values                                           Notes
Duration   ~5s per run (typical max)                        Chain runs for longer sequences
Credits    Higher per second (e.g. ~15/sec in some plans)   Check Pixio for current rates
Reference  Optional image                                   Stylized first frame or style reference

Credits

Credits depend on duration and plan; video-to-video typically costs more per second than image-to-video. Check the model card in Pixio for current rates.
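For budgeting, cost scales roughly with clip length. A back-of-the-envelope estimate, assuming the ~15 credits/sec figure mentioned above (rates vary by plan, so treat this as a sketch, not a quote):

```python
# Rough credit estimate for chained video-to-video runs.
# The default rate is the "~15/sec in some plans" figure -- verify current rates in Pixio.

def estimate_credits(total_seconds, credits_per_second=15, max_run_seconds=5):
    """Return (total credits, number of ~5s chained runs needed)."""
    runs = -(-total_seconds // max_run_seconds)  # ceiling division
    return total_seconds * credits_per_second, runs

credits, runs = estimate_credits(12)
print(credits, runs)  # 180 credits across 3 chained runs
```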

Why video-to-video fits restyle and edit

Gen-4 Aleph (Video to Video) doesn't create video from a still—it takes your clip and changes how it looks or what's in it. The prompt (and optional reference image) drives the transformation; motion and timing stay the same. Use it for artistic restyles (cartoon, painting, etc.), object/lighting edits, or to align footage with a new look. Combine with Gen-4 image-to-video for new shots and Gen-4 Upscale for 4K.

Learn in the Academy

Step-by-step lessons, hands-on prompts, and a quiz to master Gen-4 Aleph (Video to Video).

Open course

Use in Pixio

Open Pixio Generate and try Gen-4 Aleph (Video to Video) right now.

  • Prompting: Directed shot language. Subject, action, camera, environment, lighting, style.
  • Iteration: Short passes first. Tighten rhythm before spending on finals.
  • Reference: Optional. Reference frames help when identity and composition must survive.
Practical playbook
Use these heuristics to get cleaner, more controllable outputs without wasting runs.
Prompt architecture
Build the output like a creative brief.
[Subject] + [Action] + [Camera Movement] + [Environment] + [Lighting] + [Style]
Prompt demo
A runner turns into a rain-soaked alley, camera tracking low beside them, reflected neon in the puddles, late-night city atmosphere, cinematic contrast, tense and propulsive pacing.

A strong video prompt gives the scene a subject, a move, camera behavior, and a mood to hold onto.
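When iterating on many shots, the brief structure above can be composed programmatically. A minimal sketch (the helper below is hypothetical, not a Pixio utility):

```python
# Build a directed prompt from the [Subject] + [Action] + [Camera Movement]
# + [Environment] + [Lighting] + [Style] brief structure. Illustrative helper only.

def build_shot_prompt(subject, action, camera, environment, lighting, style):
    """Join the six brief slots into one directed prompt string."""
    return ", ".join([subject, action, camera, environment, lighting, style])

prompt = build_shot_prompt(
    subject="A runner",
    action="turns into a rain-soaked alley",
    camera="camera tracking low beside them",
    environment="reflected neon in the puddles, late-night city atmosphere",
    lighting="cinematic contrast",
    style="tense and propulsive pacing",
)
print(prompt)
```

Keeping the slots explicit makes it easy to vary one element (say, the camera move) between runs while holding the rest of the shot constant.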

Modes and controls
Direct the whole scene
Prompt to Motion

Start from language and push for camera intent, pacing, atmosphere, and shot design in one move.


Prompt structure

  • Restyle: "Oil painting, visible brushstrokes, warm tones."
  • Content: "Remove the person in the background; keep the main subject."
  • Mood: "Darker lighting, film noir contrast."

Keep to one clear direction per run (style, or one content change).
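One way to enforce the one-direction-per-run rule when batching prompts is a simple guard. A sketch, with illustrative keyword lists you would tune to your own prompt vocabulary:

```python
# Guard that a prompt carries only one edit direction (style, content, or mood).
# Keyword lists are illustrative, not exhaustive.

EDIT_KEYWORDS = {
    "style": ["painting", "anime", "sketch", "cartoon", "brushstrokes"],
    "content": ["remove", "add", "replace"],
    "mood": ["lighting", "noir", "darker", "contrast"],
}

def edit_directions(prompt):
    """Return which edit categories a prompt touches."""
    text = prompt.lower()
    return [cat for cat, words in EDIT_KEYWORDS.items()
            if any(w in text for w in words)]

print(edit_directions("Oil painting, visible brushstrokes, warm tones."))  # ['style']
```

A prompt that touches more than one category is a candidate for splitting into two runs.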

When to use Gen-4 Aleph (Video to Video) vs other models

Scenario                          Best choice
Restyle or edit existing video    Gen-4 Aleph (Video to Video)
Generate new video from image     Gen-4 (Image to Video) or Gen-4 Turbo
Cinema-grade from keyframe        Seedance 2 Pro
4K upscale                        Gen-4 Upscale

Tips

  • Short input clips (~5s) per run work best; chain for longer edits.
  • Restyled first frame as reference keeps the whole clip visually consistent.
  • One edit type per prompt (style only, or one content change).
  • Pair with Gen-4 Upscale for 4K delivery after restyle.
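The ~5s-per-run limit means a longer edit is a chain of segments. A sketch of the slicing arithmetic only (the actual trimming and merging would use Pixio's frame-extraction and merge utilities, or any video tool):

```python
# Split a longer clip into ~5s segments for chained video-to-video runs.
# Returns (start, end) times in seconds; the last segment may be shorter.

def chain_segments(total_seconds, max_run_seconds=5):
    segments = []
    start = 0
    while start < total_seconds:
        end = min(start + max_run_seconds, total_seconds)
        segments.append((start, end))
        start = end
    return segments

print(chain_segments(12))  # [(0, 5), (5, 10), (10, 12)]
```

Reusing the restyled last frame of one segment as the reference for the next helps the chained runs stay visually consistent.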
  1. Start with a strong first frame when consistency matters more than surprise.
  2. Keep each prompt focused on one primary motion direction.
  3. Use shorter runs for iteration, then scale up for finals.
  4. For narratives, structure the idea as Shot 1 / Shot 2 / Shot 3 instead of one flat blob.

Lock the look first
Reference Control

Start from a frame or reference when consistency matters more than improvisation.

Keep the motion usable
Video Edit

Continue or refine the clip without throwing away the visual language you already established.

Best use cases

  1. Gen-4 Aleph (Video to Video) works well when the prompt needs motion, framing, and visual direction, not just subject matter.
  2. Use it for sequences that need a strong first frame, continuity, or a clearly controlled camera idea.
  3. Treat each generation like a shot brief instead of a loose caption to get more cinematic outputs.

Pixio workflow
Step 01
Anchor the shot

Start with either a directed text brief or a strong frame, depending on how locked the look already is.

Step 02
Direct the move

Write the motion like a director: subject, action, camera behavior, environment, lighting, and tone.

Step 03
Scale to finals

Iterate fast on shorter runs, then move to stronger finals once the rhythm feels right.

Best paired with
Nano Banana Pro

Use it to build a stronger first frame, then hand that frame to the video model for motion and continuity.

Pixio utilities

Pair it with frame extraction, merge tools, or image prep so the motion workflow stays clean end to end.