Pipelines Overview
Dataloop’s Pipeline
Create automated workflows that weave together humans and machines to process data in a pipeline architecture: a series of nodes in which each node's output becomes the input of the next.
A Dataloop pipeline can move data between:
- labeling tasks
- quality assurance tasks
- functions installed in the Dataloop system
- code snippets
- machine learning (ML) models
Along the way, your data can be filtered by any criteria, split, merged, and have its status changed.
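If you work with the Dataloop Python SDK (dtlpy), an existing pipeline can also be triggered programmatically. The following is a minimal sketch, assuming placeholder project, dataset, pipeline, and item names ('my-project', 'my-dataset', 'my-pipeline', '/videos/sample.mp4') that you would replace with your own:

```python
import dtlpy as dl

# Authenticate with the Dataloop platform (opens a browser on first run).
dl.login()

# Placeholder names; substitute your own project, dataset, and pipeline.
project = dl.projects.get(project_name='my-project')
dataset = project.datasets.get(dataset_name='my-dataset')
pipeline = project.pipelines.get(pipeline_name='my-pipeline')

# Pick a single item and trigger a pipeline execution with it as input.
item = dataset.items.get(filepath='/videos/sample.mp4')
execution = pipeline.execute(execution_input={'item': item.id})
```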
Altogether, a Dataloop pipeline can:
🗸 facilitate any production pipeline
🗸 preprocess data and label it
🗸 automate operations using applications and models
🗸 postprocess data and train models of any type or scale to high standards of performance and availability
The following example shows a pipeline in which data is preprocessed by code (e.g., a video is cut into frames) and then routed to three tasks that run in parallel. Items marked as completed are sent to a separate task (e.g., a QA task), while items with a discarded status are sent to a separate dataset.
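A preprocessing step like the one above is typically implemented as a code node that exposes a run function receiving the incoming item; its return value is what flows to the next node (in some setups it is instead a method on a dl.BaseServiceRunner class). The sketch below illustrates the shape of such a node; the frame-extraction logic is only a placeholder, not Dataloop's implementation:

```python
import dtlpy as dl

def run(item: dl.Item) -> dl.Item:
    # Entry point of a pipeline code node: receives the incoming item,
    # and whatever it returns is passed to the next node in the pipeline.
    if item.mimetype is not None and item.mimetype.startswith('video'):
        # Placeholder for the actual preprocessing, e.g. downloading the
        # video, splitting it into frames with OpenCV or ffmpeg, and
        # uploading the frames back to the dataset so the downstream
        # labeling tasks receive individual images.
        local_path = item.download()
        # ... frame extraction would go here ...
    return item
```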