Transformations
Transformations are small workers that run inside a component. They help you reshape streams without writing custom code for every step.
Use transformations to keep components focused on a single task and to avoid boilerplate code.
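To keep the behaviors concrete, the sketches on this page model streams as Python iterators: a stream is an iterator of messages, and a transformation is a function that consumes one or more iterators and yields output messages. These sketches illustrate the semantics under that assumption; they are not the platform's API. As a preview, Constant (described below) reduces to a one-value generator:

def constant(value):
    # Emit a single predefined value; the pipeline's trigger decides
    # whether this fires once or repeatedly.
    yield value

print(list(constant(42)))  # [42]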
Constant
Emits a single, predefined static value — continuously or once — depending on how the pipeline is triggered. This is useful for injecting fixed data like default parameters, sentinel values, or test constants.
Output
42
🧠 Example: Injecting a Fixed Threshold
You have a Filter Threshold component that compares incoming scores against a constant value. Instead of hardcoding the threshold in the component's logic, you create a Constant node that always emits:
0.85
This makes the threshold visible, configurable, and testable as part of the pipeline, which is ideal for tuning or experimentation.
✅ Use Constant whenever you want to introduce a static piece of information into your dataflow.
Convert Value
Converts one atomic type to another following standard C++ type conversion rules. Useful when a downstream component expects a different type than the one being emitted.
Input
1 // Int32
Output
1.0 // Double
🧠 Example: Precision Upgrade for Calculations
Suppose a sensor outputs Int32 values, but your analytics component needs floating-point precision for further calculations. Instead of modifying the sensor component, insert a Convert Value node that transforms:
Int32 → Double
This ensures downstream logic receives data in the expected format without coupling it to upstream type definitions.
✅ Use Convert Value for safe, explicit type bridging between mismatched atomic types.
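A rough Python analogue of the conversion above (the platform follows C++ conversion rules; this sketch only mirrors the widening Int32 → Double case, with Python's float standing in for Double):

def convert_value(stream, target_type):
    # Convert each incoming message to the target atomic type.
    for value in stream:
        yield target_type(value)

print(list(convert_value(iter([1, 2, 3]), float)))  # [1.0, 2.0, 3.0]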
Delay By One
Delays a stream by one message, emitting a user-defined initial value first. This is useful when the downstream logic needs access to the previous message in a stream.
Input stream
1
Output stream
0
1
🧠 Example: Calculating Delta from Previous Value
You're monitoring a sensor stream and want to compute the change (delta) between each current and previous reading.
To do this, you:
- Use Delay By One to create a stream of previous values.
- Connect both the current and the delayed stream into a subtraction component.
This lets you compute:
delta = current - previous
✅ Delay By One is essential for stateful computations like moving averages, change detection, or temporal comparisons.
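The delta example can be sketched in the iterator model; zip pairs each current reading with its delayed predecessor. The readings and the initial value 0 are made up for illustration:

def delay_by_one(stream, initial):
    # Emit the user-defined initial value first, then every message
    # one step late.
    yield initial
    yield from stream

readings = [10, 12, 9]
previous = delay_by_one(iter(readings), initial=0)
print([cur - prev for cur, prev in zip(readings, previous)])  # [10, 2, -3]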
Filter
Emits a value only if its paired Bool input is true. The first input must be a Bool; the second input is a value of any type to be conditionally forwarded.
Inputs
true
1
Output
1
🧠 Example: Emitting Events Only on Alert
Imagine a monitoring pipeline where a component raises a Bool flag if a threshold is breached. You can use Filter to conditionally pass the raw sensor value only when an alert is active:
[ isAlert: true, value: 87.5 ] → 87.5
[ isAlert: false, value: 75.0 ] → (no output)
✅ Use Filter to build event-driven pipelines, where values are emitted based on dynamic conditions.
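In the iterator model, Filter pairs each Bool with the value arriving alongside it and forwards the value only on true:

def filter_gate(flags, values):
    # Forward a value only when its paired Bool flag is True.
    for flag, value in zip(flags, values):
        if flag:
            yield value

print(list(filter_gate([True, False], [87.5, 75.0])))  # [87.5]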
Flatten
Transforms a list (array) type into a stream of its individual elements. Each list entry becomes a separate message on the output stream.
Input
[0, 1, 2, 3, 4]
Output
0
1
2
3
4
🧠 Example: Processing Video Frame Detections
A component outputs a list of detected object IDs per video frame, e.g.:
[105, 204, 305]
To evaluate or score each detection individually, use Flatten to emit one ID at a time to the downstream processing block.
✅ Flatten is ideal when you need to handle items from a batch or collection independently.
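In the iterator model, Flatten is little more than a nested yield:

def flatten(lists):
    # Emit each element of every incoming list as its own message.
    for batch in lists:
        yield from batch

print(list(flatten([[105, 204, 305]])))  # [105, 204, 305]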
Join
Performs a keyed join between two input streams, similar to a SQL join. It emits a tuple of matching values when both streams produce the same key.
Input stream 1
("id-1", "Alice")
("id-2", "Bob")
Input stream 2
("id-1", 92)
("id-2", 85)
Output stream
("id-1", ("Alice", 92))
("id-2", ("Bob", 85))
🧠 Example: Joining Names and Scores
Imagine two components:
- One outputs (user_id, name)
- Another outputs (user_id, score)
Using Join, you can match users to their scores and send the result to a downstream report generator or leaderboard formatter.
✅ Perfect for merging related data sources before further analysis or formatting.
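A simplified sketch of the keyed join. Real streams interleave arrivals, so the one-shot buffering of the right side here is an assumption made to keep the example short:

def keyed_join(left, right):
    # Emit (key, (left_value, right_value)) whenever both sides
    # produced the same key.
    right_by_key = dict(right)
    for key, value in left:
        if key in right_by_key:
            yield key, (value, right_by_key.pop(key))

names = [("id-1", "Alice"), ("id-2", "Bob")]
scores = [("id-1", 92), ("id-2", 85)]
print(list(keyed_join(names, scores)))
# [('id-1', ('Alice', 92)), ('id-2', ('Bob', 85))]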
Length
Returns the number of elements in a list.
Input
[0, 1, 2, 3, 4]
Output
5
🧠 Example: Counting Detections per Frame
Imagine a component outputs a list of detected object IDs for each video frame:
[101, 102, 103]
Using Length, you can count how many objects were detected in that frame:
→ 3
✅ Length is useful for monitoring, threshold checks, or metadata generation based on list sizes.
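In the iterator model:

def length(lists):
    # Emit the element count of each incoming list.
    for batch in lists:
        yield len(batch)

print(list(length([[101, 102, 103]])))  # [3]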
Lift Reroll
Collects streamed elements into lists of the requested sizes.
Sizes
2
3
5
Elements
0
1
2
3
4
5
6
7
8
9
Output
[0, 1]
[2, 3, 4]
[5, 6, 7, 8, 9]
🧠 Example: Dynamic Batch Reconstruction
Imagine a sensor sends measurements one by one, but downstream logic expects grouped batches. The component before Lift Reroll specifies how many values to include in each batch, based on current load or frame metadata.
Sizes: [4, 2]
Values: 21, 22, 23, 24, 25, 26
→ Output:
[21, 22, 23, 24]
[25, 26]
✅ Useful when re-batching streams or aligning message grouping to dynamic criteria (e.g., event triggers or external pacing).
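In the iterator model, each requested size consumes that many elements and emits them as one list:

from itertools import islice

def lift_reroll(sizes, elements):
    # Consume `size` elements per request and emit them as a single list.
    it = iter(elements)
    for size in sizes:
        yield list(islice(it, size))

print(list(lift_reroll([4, 2], [21, 22, 23, 24, 25, 26])))
# [[21, 22, 23, 24], [25, 26]]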
Lift Unroll
Outputs the length of a list, followed by each of its elements as a separate message.
Input
[0, 1, 2, 3, 4]
Output size
5
Output items
0
1
2
3
4
🧠 Example: Processing Batch with Item Count
Suppose a component receives a batch of predictions and you want to:
- Know how many predictions are in the batch, and
- Process each one individually.
Input: [label_1, label_2, label_3]
→ Output size: 3
→ Output items: label_1, label_2, label_3
✅ Lift Unroll is ideal for batch-aware processing, where both item count and individual elements are relevant for downstream logic.
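A sketch of both outputs. In the tool the size and the items leave on two separate streams; here that pair of streams is modeled as one (size, items) tuple per incoming list:

def lift_unroll(lists):
    # Emit the length of each list along with its individual elements.
    for batch in lists:
        yield len(batch), list(batch)

for size, items in lift_unroll([["label_1", "label_2", "label_3"]]):
    print(size, items)  # 3 ['label_1', 'label_2', 'label_3']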
Pack Named
Wraps a structured value (such as a record) into its corresponding named type. This is commonly used when constructing or converting raw data into a reusable, strongly typed format.
Input
{ class: DetectedClass, rectangle: Rectangle<Double> }
Output
BoundingBox
🧠 Example: Wrapping a Detection Result
Let’s say you’ve built a pipeline that performs object classification and bounding box prediction in separate components:
- Classify Object returns a DetectedClass, e.g. { id: 3, confidence: 0.94 }
- Predict Box returns a Rectangle<Double>, e.g. { x: 42.1, y: 27.5, width: 110, height: 88 }
Now, you want to pass both outputs as a single object into a component like Track Objects, which expects a BoundingBox type:
BoundingBox := {
class: DetectedClass,
rectangle: Rectangle<Double>
}
You use Pack Named to combine the two inputs into the expected named type.
✅ Now your Track Objects component receives the full context, both classification and location, wrapped in a clean, reusable structure.
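A dataclass can stand in for the named type (class is a reserved word in Python, so that field is renamed here; the values come from the example above):

from dataclasses import dataclass

@dataclass
class BoundingBox:
    detected_class: dict  # stands in for DetectedClass
    rectangle: dict       # stands in for Rectangle<Double>

def pack_named(records):
    # Wrap each raw record into the named type.
    for record in records:
        yield BoundingBox(record["class"], record["rectangle"])

raw = {"class": {"id": 3, "confidence": 0.94},
       "rectangle": {"x": 42.1, "y": 27.5, "width": 110, "height": 88}}
print(next(pack_named(iter([raw]))))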
Pack Record
Combines multiple individual streams into a single record object. Each stream is assigned to a named field in the resulting structure. This is useful when a downstream component expects structured input with specific field names.
Input
DetectedClass
Rectangle<Double>
UInt64
Output
{ class: DetectedClass, rectangle: Rectangle<Double>, counter: UInt64 }
🧠 Example: Structuring Inputs for Unified Processing
You have separate outputs from three components:
- A classification result as DetectedClass
- A bounding box as Rectangle<Double>
- A stream of detection counts as UInt64
You want to send these into an imaginary Detect Store component that expects them together in a structured format. Using Pack Record, you can wrap all three into a single object, making the pipeline cleaner and easier to maintain.
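Modeled with Python dicts standing in for records (field names taken from the example above, field values made up):

def pack_record(classes, rectangles, counters):
    # Zip one message from each input stream into a single record.
    for cls, rect, count in zip(classes, rectangles, counters):
        yield {"class": cls, "rectangle": rect, "counter": count}

packed = pack_record(iter(["car"]), iter([(42.1, 27.5, 110.0, 88.0)]), iter([7]))
print(next(packed))
# {'class': 'car', 'rectangle': (42.1, 27.5, 110.0, 88.0), 'counter': 7}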
Pack Tuple
Combines multiple individual input streams into a single tuple, preserving the order of elements. Useful when a component expects grouped data as a single structured unit.
Input
Segmentation
BoundingBox
Output
(Segmentation, BoundingBox)
🧠 Example: Combining Model Outputs
You’ve built two components in your pipeline:
- Segment Objects performs image segmentation and outputs a Segmentation map
- Detect Objects detects objects and outputs a BoundingBox
Now, you want to combine both outputs into a single tuple and pass it to a downstream component, Annotate Image, which takes a (Segmentation, BoundingBox) as input to render annotated overlays.
That’s where Pack Tuple comes in.
✅ The downstream component now receives both inputs as a single, ordered unit — exactly as expected.
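In the iterator model, Pack Tuple is a zip (the string values stand in for Segmentation and BoundingBox messages):

def pack_tuple(*streams):
    # Combine one message from each stream into an ordered tuple.
    yield from zip(*streams)

print(list(pack_tuple(["seg_map"], ["bbox"])))  # [('seg_map', 'bbox')]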
Pack Union
Combines multiple input streams into a single union-type stream, where each value retains its original type. This is useful when different data variants need to be processed uniformly by a downstream component.
Input
Image.BGR
Image.GRAY
Image.RGB
Output
Image.BGR | Image.GRAY | Image.RGB
🧠 Example: Unifying Multiple Image Sources
Imagine you have three separate image sources:
- A camera providing Image.BGR
- A thermal sensor emitting Image.GRAY
- A pre-processed archive yielding Image.RGB
You want to send them all to a single Classify Image component that accepts any of the three formats. Using Pack Union, you merge the streams into one and let Classify Image handle each message based on its actual type at runtime.
✅ This simplifies pipeline design when a single component can handle multiple related formats.
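In a dynamically typed sketch the merge needs no explicit wrapping; the point is that every message keeps its runtime type for later dispatch. The placeholder classes and the sequential (rather than arrival-ordered) merge are simplifications:

class BGR: pass
class GRAY: pass
class RGB: pass

def pack_union(*streams):
    # Merge variant streams; each value keeps its original type.
    for stream in streams:
        yield from stream

for image in pack_union([BGR()], [GRAY()], [RGB()]):
    print(type(image).__name__)  # BGR, GRAY, RGB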
Repeat
Repeats the second input value n times, where n is the first input.
Input count
5
Input stream
"this is a message"
Output stream
"this is a message"
"this is a message"
"this is a message"
"this is a message"
"this is a message"
🧠 Example: Repeating Labels for a Batch
Suppose you have a label or instruction (e.g. "turn left") that needs to be attached to 5 consecutive frames in a video. Use Repeat to emit that label 5 times for downstream alignment:
[5, "turn left"] → "turn left" × 5
✅ Repeat is useful for broadcasting control messages, labels, or signals across multiple data units.
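In the iterator model:

def repeat(counts, values):
    # Emit each value n times, where n comes from the count stream.
    for n, value in zip(counts, values):
        for _ in range(n):
            yield value

print(list(repeat([5], ["turn left"])))
# ['turn left', 'turn left', 'turn left', 'turn left', 'turn left']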
Select Stream
Selects one stream from multiple input streams based on a numeric index.
Numeric index
1
Input streams
Stream 0: "apple"
Stream 1: "banana"
Stream 2: "cherry"
Output stream
"banana"
🧠 Example: Dynamic Source Switching
Imagine a component that can pull from multiple data sources — a live camera feed, a prerecorded video, or a synthetic generator. Based on a user’s selection, you can dynamically route only the desired stream for processing:
Selector: 2
Stream 0: camera
Stream 1: video file
Stream 2: synthetic feed
→ Output: synthetic feed
✅ Ideal for interactive applications, fallback systems, or dynamic pipeline routing.
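A sketch with the selector fixed up front; whether the index may change per message in the tool is not covered here:

def select_stream(index, streams):
    # Forward only the stream chosen by the numeric index.
    yield from streams[index]

print(list(select_stream(1, [["apple"], ["banana"], ["cherry"]])))  # ['banana']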
Shuffle
Groups incoming (key, value) pairs by key, emitting each key together with the list of values that share it.
Input
("a", 1)
("b", 2)
("a", 3)
("b", 4)
("a", 5)
Output
("a", [1, 3, 5])
("b", [2, 4])
🧠 Example: Aggregating Sensor Readings by Type
Suppose you’re collecting temperature, humidity, and pressure readings from a sensor network. Each reading is tagged with its type:
("temperature", 22.5)
("humidity", 50.1)
("temperature", 23.0)
("humidity", 49.8)
Using Shuffle, you group by reading type:
("temperature", [22.5, 23.0])
("humidity", [50.1, 49.8])
✅ Great for batching, reducing, or analyzing grouped data before further processing.
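A finite-stream sketch. When a real stream's groups are emitted (on window close, end of stream, or some other policy) is a detail this model glosses over:

def shuffle(pairs):
    # Collect values under their keys, then emit one group per key.
    groups = {}
    for key, value in pairs:
        groups.setdefault(key, []).append(value)
    yield from groups.items()

print(list(shuffle([("a", 1), ("b", 2), ("a", 3), ("b", 4), ("a", 5)])))
# [('a', [1, 3, 5]), ('b', [2, 4])]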
Unite Streams
Merges multiple input streams into a single stream by emitting values in the order they arrive.
Input streams
Stream A: 1 3 5
Stream B: 2 4 6
Output stream
1
2
3
4
5
6
🧠 Example: Merging Multiple Sensor Feeds
Suppose you are processing telemetry from two different sensors, one tracking temperature and the other humidity. Unite Streams allows you to combine both into a single unified stream for chronological analysis or downstream storage.
Temp: 22.5 22.6
Humidity: 41% 43%
→ Output:
22.5
41%
22.6
43%
✅ Great for multi-source event fusion, consolidating parallel streams, or logging unified timelines.
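Arrival order drives the real merge; this sketch substitutes a round-robin interleave, which reproduces the alternating output above:

def unite_streams(*streams):
    # Interleave messages from all streams (a stand-in for arrival order).
    iterators = [iter(s) for s in streams]
    while iterators:
        for it in list(iterators):
            try:
                yield next(it)
            except StopIteration:
                iterators.remove(it)

print(list(unite_streams([1, 3, 5], [2, 4, 6])))  # [1, 2, 3, 4, 5, 6]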
Unpack Named
Reveals the underlying structure of a named type by replacing it with its original definition. This is useful when you need to inspect, manipulate, or match fields inside a named type (like a struct) — especially when working with transformations, validation, or documentation tools.
Input
BoundingBox
Output
{ class: DetectedClass, rectangle: Rectangle<Double> }
🧠 Example: Accessing Inner Fields for Routing
Imagine you’re working with a component that receives a BoundingBox type:
BoundingBox := {
class: DetectedClass,
rectangle: Rectangle<Double>
}
You now need to route data based on the object class (class.id) or apply different logic to the rectangle values. However, downstream components don’t understand the BoundingBox named type; they expect plain records like DetectedClass or Rectangle.
To solve this, you use Unpack Named to expose the inner fields of the named type.
✅ Now your routing and processing logic can access and work with the underlying structure directly — without needing to manually redefine or replicate the type.
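Continuing the dataclass stand-in from the Pack Named sketch, unwrapping just reads the underlying fields back out:

from dataclasses import dataclass

@dataclass
class BoundingBox:
    detected_class: dict  # stands in for DetectedClass
    rectangle: dict       # stands in for Rectangle<Double>

def unpack_named(boxes):
    # Replace the named type with its underlying record structure.
    for box in boxes:
        yield {"class": box.detected_class, "rectangle": box.rectangle}

box = BoundingBox({"id": 3, "confidence": 0.94},
                  {"x": 42.1, "y": 27.5, "width": 110, "height": 88})
print(next(unpack_named(iter([box]))))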
Unpack Record
Extracts individual fields from a record and emits each one as a separate output, preserving the field order. Useful when downstream components operate on single values or require access to only a subset of a structured object.
Input
{ class: DetectedClass, rectangle: Rectangle<Double>, counter: UInt64 }
Output
DetectedClass
Rectangle<Double>
UInt64
🧠 Example: Splitting Inputs for Specialized Processing
A component outputs detection results as a record:
{
class: DetectedClass,
rectangle: Rectangle<Double>,
counter: UInt64
}
You want to:
- Feed class into a label tracker
- Send rectangle to an overlay renderer
- Use counter for logging
With Unpack Record, you can split the record cleanly into separate streams, each routed to the appropriate tool in your pipeline.
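With dicts standing in for records, Unpack Record is the inverse of the Pack Record sketch:

def unpack_record(records, fields):
    # Split each record into one value per field, preserving field order.
    for record in records:
        yield tuple(record[field] for field in fields)

record = {"class": "car", "rectangle": (42.1, 27.5, 110.0, 88.0), "counter": 7}
cls, rect, count = next(unpack_record(iter([record]),
                                      ["class", "rectangle", "counter"]))
print(cls, rect, count)  # car (42.1, 27.5, 110.0, 88.0) 7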
Unpack Tuple
Splits a tuple into its individual components, producing a separate output stream for each element in the tuple. Useful when downstream components need to handle each item independently.
Input
(Segmentation, BoundingBox)
Output
Segmentation
BoundingBox
🧠 Example: Processing Tuple Elements Separately
You have a component that outputs a tuple (Segmentation, BoundingBox), combining results from a model that performs both semantic segmentation and object detection in a single pass.
However, your pipeline now needs to:
- Feed the Segmentation into a component called Colorize Mask
- Send the BoundingBox to a Crop Objects component
Since these downstream components require separate inputs, you use Unpack Tuple to split the tuple into two distinct output streams.
✅ Each downstream component now receives exactly the type it expects — no manual extraction or indexing needed.
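In the iterator model, splitting a tuple stream is an unzip; the two lists returned here stand in for the two output streams:

def unpack_tuple(pairs):
    # Split each 2-tuple into its components, one list per output stream.
    firsts, seconds = [], []
    for first, second in pairs:
        firsts.append(first)
        seconds.append(second)
    return firsts, seconds

segmentations, boxes = unpack_tuple([("seg_map", "bbox")])
print(segmentations, boxes)  # ['seg_map'] ['bbox']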
Unpack Union
Splits a union-typed stream into multiple separate output streams — one for each variant in the union. Only one of the outputs is active at a time, depending on the incoming value.
Input
Image.BGR | Image.GRAY | Image.RGB
Output
Image.BGR
Image.GRAY
Image.RGB
🧠 Example: Handling Multiple Image Formats
You have a component that can emit different image formats depending on the source:
Image.BGR | Image.GRAY | Image.RGB
But your downstream processing tools are format-specific:
- One expects Image.BGR
- Another handles Image.GRAY
- A third is built for Image.RGB
Using Unpack Union, you route each format to the appropriate handler, without writing conditional logic manually.
✅ This enables clean separation of concerns and format-specific optimizations in your pipeline.
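A dispatch-table sketch, with placeholder classes standing in for the image variants:

class BGR: pass
class GRAY: pass
class RGB: pass

def unpack_union(stream, handlers):
    # Route each message to the handler matching its runtime type;
    # exactly one output fires per incoming value.
    for value in stream:
        handlers[type(value)](value)

unpack_union([BGR(), GRAY()], {
    BGR:  lambda v: print("BGR handler"),
    GRAY: lambda v: print("GRAY handler"),
    RGB:  lambda v: print("RGB handler"),
})
# prints: BGR handler, GRAY handler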