SDL Scripting
The Scene Description Language (SDL) is a JSON format for defining video templates programmatically. Every movie template in ImpossibleFX — whether built in the visual editor or via code — is internally represented as an SDL document. By writing SDL directly, you can generate templates dynamically, version-control them as code, and automate template creation at scale.
SDL is a JSON serialization of Protocol Buffer messages. The root object is a Movie, which contains encoding settings, an array of scenes, and optional audio. Each scene contains visual tracks (layers) that are composited together, and each track has a content source (an image provider) and optional transformations.
SDL scripting requires familiarity with dynamic movie templates. If you’re new to ImpossibleFX, start with the Quickstart and Rendering Videos guides first.
When to use SDL
- Generating templates dynamically — Create template variations based on data, such as different layouts for different product categories or locales.
- Extending existing templates — Download a template built in the visual editor, add layers, modify animations, or inject logic, then re-upload it.
- Version control — Store templates as JSON files in your repository alongside your application code.
- Batch template creation — Generate hundreds of template variations from a single script.
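The batch-creation workflow can be sketched in a few lines of Python. Nothing below is part of SDL itself — the script simply builds the minimal Movie structure described in this guide as a plain dict, once per variant, and serializes each one to JSON ready for upload:

```python
import json

def make_template(width: int, height: int, color: dict) -> dict:
    """Build a minimal SDL Movie: one scene, one solid-color track."""
    return {
        "params": {
            "vparams": {
                "width": width,
                "height": height,
                "videocodec": "VIDEO_X264",
                "videoframerate": {"num": 25, "den": 1},
            }
        },
        "scenes": [
            {
                "numframes": 75,  # 3 seconds at 25 fps
                "tracks": [
                    {
                        "content": {
                            "type": "emptyimage",
                            "width": width,
                            "height": height,
                            "color": color,
                        }
                    }
                ],
            }
        ],
    }

# One variant per output format; the background color is an arbitrary placeholder.
variants = {
    "landscape": make_template(1920, 1080, {"red": 20, "green": 20, "blue": 30}),
    "square": make_template(1080, 1080, {"red": 20, "green": 20, "blue": 30}),
}
for name, movie in variants.items():
    with open(f"{name}.json", "w", encoding="utf-8") as f:
        json.dump(movie, f, indent=2)
```

Each generated file can then be uploaded with the Project API call shown at the end of this guide.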
Movie structure
An SDL document is a single Movie object. Here is the hierarchy:
```
Movie
├── params (StreamParams)
│   ├── vparams (VideoParams) — width, height, codec, bitrate, framerate
│   └── aparams (AudioParams) — codec, bitrate, sample rate
├── scenes[] (Scene)
│   ├── tracks[] (VisualTrack)
│   │   ├── content (ImageProvider) — the visual source
│   │   └── transformations[] (Transformation) — effects applied to content
│   └── audio (Audio)
│       └── audiotracks[] (AudioTrack)
└── audio (Audio) — movie-level audio track
```
Minimal example
A movie with a single scene showing a solid color background for 3 seconds at 25 fps:
```json
{
  "params": {
    "vparams": {
      "width": 1920,
      "height": 1080,
      "videocodec": "VIDEO_X264",
      "videobitrate": 4000,
      "videoframerate": { "num": 25, "den": 1 }
    }
  },
  "scenes": [
    {
      "numframes": 75,
      "tracks": [
        {
          "content": {
            "type": "emptyimage",
            "width": 1920,
            "height": 1080,
            "color": { "red": 20, "green": 20, "blue": 30 }
          }
        }
      ]
    }
  ]
}
```
Stream parameters
StreamParams defines the encoding settings for the rendered video. It contains vparams (video) and optionally aparams (audio).
Video parameters
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| width | Integer | Yes | — | Width of the rendered video in pixels. |
| height | Integer | Yes | — | Height of the rendered video in pixels. |
| videocodec | String | No | "VIDEO_X264" | Video codec. Common values: VIDEO_X264, VIDEO_VP8, VIDEO_GIF, VIDEO_PRORES. |
| videobitrate | Integer | No | 2000 | Video bitrate in kilobits per second. |
| videoframerate | Fractional | No | 30/1 | Frame rate as a fraction (e.g. 25/1 for 25 fps, 30000/1001 for 29.97 fps). |
| videogopsize | Integer | No | 30 | Number of frames in a group of pictures. |
| videorc | String | No | "VRC_BITRATE" | Rate control method: VRC_BITRATE, VRC_QUANTIZER, or VRC_RATEFACTOR. |
| videocpueffort | Number | No | 10.0 | Speed/quality tradeoff (0.0 = fastest, 50.0 = balanced, 90.0 = best quality). |
Audio parameters
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| audiocodec | String | No | "AUDIO_NONE" | Audio codec: AUDIO_AAC, AUDIO_MP3, AUDIO_VORBIS, AUDIO_PCM, or AUDIO_NONE. |
| audioabr | Integer | No | 80000 | Audio bitrate in bits per second. |
| audiosamplerate | Integer | No | 44100 | Audio sample rate in Hz. |
| audiochannels | Integer | No | 2 | Number of audio channels (1 = mono, 2 = stereo). |
```json
{
  "params": {
    "vparams": {
      "width": 1920,
      "height": 1080,
      "videocodec": "VIDEO_X264",
      "videobitrate": 4000,
      "videoframerate": { "num": 25, "den": 1 }
    },
    "aparams": {
      "audiocodec": "AUDIO_AAC",
      "audioabr": 128000,
      "audiosamplerate": 44100
    }
  }
}
```
Scenes
Scenes are sequential segments of a movie. The total movie length is the sum of all scene durations. Each scene contains visual tracks (layers) composited on top of each other, and optionally scene-specific audio.
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| numframes | Integer | No | — | Number of frames in this scene. If omitted, the scene length is determined from its content (e.g., the duration of a video track). |
| tracks | VisualTrack[] | No | — | Array of visual layers composited in order (first track is the bottom layer). |
| audio | Audio | No | — | Scene-specific audio. |
| editoverlap | Integer | No | — | Number of frames to overlap with the previous scene for transitions (typically a negative value, e.g. -25). |
| type | String | No | "normal" | Scene type: normal, scenebased (defers to a SceneView), or reference (references a scene definition). |
| enablevariable | StringVariable | No | — | Dynamically enable or disable this scene based on a variable value. |
A movie with two scenes and a 1-second crossfade transition:
```json
{
  "scenes": [
    {
      "numframes": 75,
      "tracks": [{ "content": { "type": "video", "source": { "path": "intro.mp4" } } }]
    },
    {
      "numframes": 150,
      "editoverlap": -25,
      "tracks": [{ "content": { "type": "video", "source": { "path": "main.mp4" } } }]
    }
  ]
}
```
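The enablevariable field makes a scene conditional at render time. The following is a hedged sketch — the show_outro parameter name is invented for illustration, and the exact values the engine treats as "enabled" are not specified in this guide:

```json
{
  "numframes": 100,
  "enablevariable": {
    "type": "map",
    "key": "show_outro",
    "defaultvalue": "true"
  },
  "tracks": [{ "content": { "type": "video", "source": { "path": "outro.mp4" } } }]
}
```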
Visual tracks (layers)
VisualTracks form the compositing stack within a scene. Each track has optional content (an ImageProvider), positioning, opacity, blend mode, and an array of transformations. Tracks are composited in order — the first track is the bottom layer.
If a track has no content, its transformations are applied directly to the composite of all previous tracks (an “adjustment track” pattern, similar to adjustment layers in Photoshop).
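As a sketch of this pattern, the contentless second track below applies a blur to the composited output of the background track beneath it (the blur's strength parameters are omitted here, so it uses whatever defaults apply):

```json
{
  "tracks": [
    {
      "content": { "type": "video", "source": { "path": "background.mp4" } }
    },
    {
      "transformations": [
        { "type": "blur" }
      ]
    }
  ]
}
```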
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| content | ImageProvider | No | — | The visual source for this track. |
| numframes | Integer | No | -1 | Duration of this track in frames. -1 means it lasts for the entire scene. |
| offset | Integer | No | 0 | Start offset in frames from the beginning of the scene. |
| x | Integer | No | 0 | Horizontal position offset for compositing. |
| y | Integer | No | 0 | Vertical position offset for compositing. |
| centerx | Boolean | No | false | Center the content horizontally. |
| centery | Boolean | No | false | Center the content vertically. |
| opacity | Number | No | 1.0 | Opacity for compositing (0.0 = transparent, 1.0 = opaque). |
| opacityfunction | Function | No | — | Animate opacity over time. |
| blendmode | BlendMode | No | "normal" | Blend mode: normal, screen, softlight, hardlight, overlay, multiply, lineardodge, linearburn, subtract, difference, lighten, darken. |
| transformations | Transformation[] | No | — | Array of transformations applied to the content (or to the background buffer if no content is set). |
| enablevariable | StringVariable | No | — | Dynamically enable or disable this track. |
Example — a background video with a centered logo on top at 50% opacity:
```json
{
  "tracks": [
    {
      "content": {
        "type": "video",
        "source": { "path": "background.mp4" }
      }
    },
    {
      "content": {
        "type": "stillimage",
        "source": { "path": "logo.png" }
      },
      "centerx": true,
      "centery": true,
      "opacity": 0.5
    }
  ]
}
```
Image providers
An ImageProvider is the content source for a visual track. The type field determines what kind of content is provided.
Static images
| Type | Description |
|---|---|
| stillimage | A single image loaded from a file (PNG, JPEG, TIFF, etc.). |
| emptyimage | A solid color rectangle with specified dimensions. |
| http | An image loaded from a URL at render time. |
| gradient | A linear, radial, or conic gradient. |
```json
{
  "type": "stillimage",
  "source": { "path": "photo.jpg" }
}
```
Video and sequences
| Type | Description |
|---|---|
| video | A video file (MP4, MOV, WebM, etc.) decoded frame by frame. |
| imagesequence | A sequence of numbered image files. |
| livevideo | An HTTP live video stream. |
| scenebased | Content generated by rendering another Scene or SceneView. |
A video provider that skips the first 30 frames of its source file:

```json
{
  "type": "video",
  "source": { "path": "hero-video.mp4" },
  "contentoffset": 30
}
```
Text rendering
| Type | Description |
|---|---|
| textsimple | Rendered text with font, color, size, alignment, and optional outlines/textures. |
| textcurved | Text rendered between two curved paths. |
| textquad | Text rendered into a moving quadrilateral. |
| textmultiline | Multi-line text rendered into a moving quadrilateral with automatic line wrapping. |
Text providers accept a text field as a StringVariable, which lets you inject dynamic content at render time:
```json
{
  "type": "textsimple",
  "source": { "path": "fonts/Helvetica.ttf" },
  "text": {
    "type": "map",
    "key": "name",
    "defaultvalue": "World"
  },
  "fontsize_d": 64,
  "color": { "red": 255, "green": 255, "blue": 255 },
  "width": 1920,
  "height": 200,
  "xalignment": "centered",
  "yalignment": "middle"
}
```
Charts and shapes
| Type | Description |
|---|---|
| piechart | An animated pie chart. |
| barchart | An animated bar chart. |
| linechart | An animated line chart. |
| bezier | An animated bezier curve. |
| path | An animated filled or stroked bezier path. |
Other types
| Type | Description |
|---|---|
| manipulatedimage | A single-frame image built from compositing multiple VisualTracks. |
| masksource | An image provider that acts as a matte for compositing. |
| string | Image data from a variable (RFC 2397 data URL or SVG XML). |
Transformations
Transformations modify the visual output of a track. They are applied in order and fall into three categories based on how they affect image dimensions:
Neutral size — output size matches input:
| Type | Description |
|---|---|
| blur | Gaussian blur. |
| motionblur | Directional motion blur. |
| intensity | Adjust saturation. |
| rotate | Rotate the image. |
| invert | Invert colors. |
| morph | Image morphing (MLS or TPS). |
| mask | Apply an alpha mask. |
| circularblur | Circular blur effect. |
| radialblur | Radial blur effect. |
Specified size — output dimensions set by transformation parameters:
| Type | Description |
|---|---|
| scaling | Scale to exact dimensions. |
| scalingletter | Scale preserving aspect ratio (letterbox). |
| scalingaspect | Scale to width or height preserving aspect ratio. |
| scalingcrop | Scale and crop to exact dimensions. |
| crop | Crop a region from the image. |
| facedetect | Detect a face and crop around it. |
Context size — output dimensions determined by the movie/scene context:
| Type | Description |
|---|---|
| warp | Warp image to quad tracking data (perspective transform). |
| gridwarp | Warp image to grid tracking data. |
| affine | 2D affine transformation (translate, rotate, scale, skew). |
| animate | Animate position, scale, and rotation. |
| pointpaste | Place image at a tracking point. |
| flip | Flip the image horizontally or vertically. |
| colortwist | 3D color matrix transformation. |
| colorboost | Boost color saturation. |
| splittoning | Colorize specific range, grayscale the rest. |
| tiltshift | Depth of field simulation. |
| glow | Glow effect. |
| vignette | Darken edges with circular falloff. |
| comic | Cartoon/comic effect. |
| textshadow | Drop shadow. |
| cubelut | Apply a 3D color lookup table (.cube or .acv file). |
Example — scale an image to fit and apply a vignette:
```json
{
  "transformations": [
    {
      "type": "scalingcrop",
      "scale": [1920, 1080]
    },
    {
      "type": "vignette",
      "aperture": 0.6,
      "brightness": 0.3
    }
  ]
}
```
Audio
The Audio container holds an array of AudioTrack objects that are mixed together. Audio can be defined at the movie level (plays across all scenes) or at the scene level.
Common AudioTrack types:
| Type | Description |
|---|---|
| video_mix | Extract audio from a video file, mix with other tracks. |
| video_replace | Extract audio from a video file, replace other tracks. |
| mix | Mix a WAV audio file with other tracks. |
| replace | Replace other tracks with a WAV audio file. |
| empty | Silent track (useful as a spacer). |
| changevolume | Adjust volume of all preceding tracks. |
| scenebased | Audio from a SceneView. |
| backtoback | Play inner audio tracks sequentially. |
Key fields:
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| type | String | No | "preencoded" | Audio track type. |
| source | FileLocation | No | — | Audio source file (WAV, MP3, or video file with audio track). |
| volume | Number | No | 1.0 | Volume level for mixing (0.0 = silent, 1.0 = full). |
| volumefunction | Function | No | — | Animate volume over time (for fade in/out effects). |
| gain | Number | No | 0 | Gain adjustment in decibels. |
| offset | Integer | No | 0 | Start offset in frames from the beginning of the scene. |
| contentoffset | Integer | No | 0 | Skip this many frames into the audio source before playback. |
| numframes | Integer | No | — | Duration in frames (omit to use full source length). |
| levelmode | String | No | "legacy" | Clipping mode: legacy, hardclip, softclip, or softclip4x. |
Example — background music at 60% volume that fades in linearly from silence over the track's duration:

```json
{
  "audio": {
    "audiotracks": [
      {
        "type": "video_mix",
        "source": { "path": "music.mp4" },
        "volume": 0.6,
        "volumefunction": {
          "type": "linear",
          "param1": 0.0,
          "param2": 1.0
        }
      }
    ]
  }
}
```
Variables
Variables are the mechanism for injecting dynamic content into templates at render time. The most common type is map, which reads a value from the render request’s query parameters.
Common variable types
| Type | Description |
|---|---|
| map | Reads a value from the render request parameters. This is the primary variable type for personalized videos. |
| constant | A fixed string value. |
| http | Fetches a value from a URL at render time. |
| datasource | Reads from a data source (for batch rendering). |
| condition | Returns different values based on a condition. |
| logical | Boolean logic (and/or/not) combining other variables. |
| counter | An incrementing counter (useful for animated numbers). |
| format | A template string that combines other variables. |
| transform | Applies string transformations (uppercase, lowercase, trim, etc.). |
| structureddata | Extracts values from JSON using JSONPath expressions. |
Using variables in templates
A map variable with key "name" reads the name parameter from the render request. At render time, if you pass ?name=Alex, the variable resolves to "Alex":
```json
{
  "type": "map",
  "key": "name",
  "defaultvalue": "Friend"
}
```
A condition variable that shows different text based on a parameter:
```json
{
  "type": "condition",
  "conditionvariable": {
    "type": "map",
    "key": "gender"
  },
  "conditionvalue": "female",
  "truevalue": "her",
  "falsevalue": "his"
}
```
A format variable that combines multiple values:
```json
{
  "type": "format",
  "formatstring": "Welcome to {company}, {name}!",
  "formatvariables": [
    { "type": "map", "key": "company" },
    { "type": "map", "key": "name" }
  ]
}
```
Functions (animation curves)
Functions define how values change over time. They are used for animating opacity, volume, font size, spacing, and other numeric properties. A Function takes a completion value (0.0 at the start of the containing element, 1.0 at the end) and returns a result.
Basic function types
| Type | Description |
|---|---|
| linear | Linear interpolation from param1 to param2. |
| constant | Always returns param1. |
| power | Power curve — param1 is the exponent. Useful for ease-in effects. |
| sin | Sine wave. |
| sigmoid | S-curve (smooth ease-in-ease-out). |
| overshoot | Goes past the target and bounces back. |
| undershoot | Pulls back before moving to the target. |
| inverted | Reverses the direction (1.0 to 0.0). |
| timed | Maps a time range to a 0-1 completion value. param1 = start, param2 = end. |
| keyframe | Uses keyframe points for custom curves. |
Composition
Functions can be composed using add, subtract, multiply with an innerfunction and metafunctionparam:
```json
{
  "type": "multiply",
  "innerfunction": { "type": "linear", "param1": 0, "param2": 1 },
  "metafunctionparam": { "type": "timed", "param1": 0.0, "param2": 0.2 }
}
```
Example — fade in opacity over the first 20% of the track:

```json
{
  "opacity": 0.0,
  "opacityfunction": {
    "type": "linear",
    "param1": 0.0,
    "param2": 1.0,
    "innerfunction": {
      "type": "timed",
      "param1": 0.0,
      "param2": 0.2
    }
  }
}
```
Keyframe animation
For complex curves, use the keyframe type with an array of points:
```json
{
  "type": "keyframe",
  "keyframes": [
    { "x": 0.0, "y": 0.0 },
    { "x": 0.3, "y": 1.0 },
    { "x": 0.7, "y": 0.8 },
    { "x": 1.0, "y": 1.0 }
  ]
}
```
Colors
Colors can be static, dynamic (from a variable), or animated (keyframed).
Static color
RGBA values from 0-255:
```json
{ "red": 0, "green": 212, "blue": 255, "alpha": 255 }
```
Dynamic color
Read a hex color from a render parameter:
```json
{
  "type": "stringvariable",
  "string": {
    "type": "map",
    "key": "brandcolor",
    "defaultvalue": "#00D4FF"
  }
}
```
Animated color
Transition between colors using keyframes:
```json
{
  "type": "keyframe",
  "keyframes": [
    { "completion": 0.0, "color": { "red": 255, "green": 0, "blue": 0 } },
    { "completion": 1.0, "color": { "red": 0, "green": 0, "blue": 255 } }
  ],
  "keyframefunction": { "type": "sigmoid" }
}
```
File locations
The FileLocation message specifies where to find a file. It supports variable substitution with special tokens:
| Token | Replacement |
|---|---|
| $variable | Replaced with the resolved value of the variable field (a StringVariable). |
| $session | Replaced with the current session or render token ID. |
| $frame | Replaced with the current frame number (zero-padded). |
```json
{
  "path": "photos/$variable.jpg",
  "variable": {
    "type": "map",
    "key": "photo_id"
  },
  "padding": 5
}
```
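The $frame token pairs naturally with the imagesequence provider. A hedged sketch — it assumes padding zero-pads $frame the same way it pads $variable above, so the sequence files would be named frame_00000.png, frame_00001.png, and so on:

```json
{
  "type": "imagesequence",
  "source": {
    "path": "frames/frame_$frame.png",
    "padding": 5
  }
}
```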
Complete example
A 10-second welcome video with a background video, a personalized text overlay with fade-in animation, a logo, and background music:
```json
{
  "params": {
    "vparams": {
      "width": 1920,
      "height": 1080,
      "videocodec": "VIDEO_X264",
      "videobitrate": 5000,
      "videoframerate": { "num": 25, "den": 1 }
    },
    "aparams": {
      "audiocodec": "AUDIO_AAC",
      "audioabr": 128000,
      "audiosamplerate": 44100
    }
  },
  "scenes": [
    {
      "numframes": 250,
      "tracks": [
        {
          "content": {
            "type": "video",
            "source": { "path": "background.mp4" }
          },
          "transformations": [
            { "type": "scalingcrop", "scale": [1920, 1080] }
          ]
        },
        {
          "content": {
            "type": "textsimple",
            "source": { "path": "fonts/Montserrat-Bold.ttf" },
            "text": {
              "type": "format",
              "formatstring": "Welcome, {0}!",
              "formatvariables": [
                { "type": "map", "key": "name", "defaultvalue": "Friend" }
              ]
            },
            "fontsize_d": 72,
            "color": { "red": 255, "green": 255, "blue": 255 },
            "width": 1600,
            "height": 200,
            "xalignment": "centered",
            "yalignment": "middle"
          },
          "centerx": true,
          "y": 440,
          "opacity": 0.0,
          "opacityfunction": {
            "type": "linear",
            "param1": 0.0,
            "param2": 1.0,
            "innerfunction": {
              "type": "timed",
              "param1": 0.0,
              "param2": 0.15
            }
          }
        },
        {
          "content": {
            "type": "stillimage",
            "source": { "path": "logo.png" }
          },
          "x": 60,
          "y": 60
        }
      ],
      "audio": {
        "audiotracks": [
          {
            "type": "video_mix",
            "source": { "path": "music.mp4" },
            "volume": 0.4,
            "volumefunction": {
              "type": "linear",
              "param1": 0.0,
              "param2": 1.0,
              "innerfunction": {
                "type": "timed",
                "param1": 0.0,
                "param2": 0.08
              }
            }
          }
        ]
      }
    }
  ]
}
```
Uploading SDL via API
Use the Project API to upload SDL templates:
```shell
curl -X PUT -u YOUR_API_KEY:YOUR_API_SECRET \
  -H "Content-Type: application/json" \
  --data-binary @welcome-video.json \
  https://api-eu-west-1.impossible.io/v1/sdl/PROJECT_UID/welcome-video
```
After uploading, publish the project if needed, then render the video using the movie name you chose.
Further reading
- SDL Reference — Complete field-by-field reference for every SDL message, field, and enum.
- Rendering Videos — How to render your published templates.
- Project API — Managing projects and uploading SDL via API.
- Batch Rendering — Render thousands of personalized videos from data.