Examples
Prioritizing Tasks Based on File Names¶
This example demonstrates how to set up an agent node early in a Workflow to automatically assign a priority to each task before advancing it to the annotation stage.
Example Workflow¶
The following workflow illustrates how tasks are prioritized:
STEP 1: Create the Agent file¶
Start by creating a new file named `agent.py` and include the following code:
```python
from encord.objects.ontology_labels_impl import LabelRowV2

from encord_agents.tasks import Runner

runner = Runner(project_hash="<your_project_hash>")


@runner.stage("<your_agent_stage_uuid>")
def by_file_name(lr: LabelRowV2) -> str | None:
    # Assuming the data_title is of the format "%d.jpg"
    # and in the range [0; 100]
    priority = int(lr.data_title.split(".")[0]) / 100
    lr.set_priority(priority=priority)
    return "<your_pathway_uuid>"


if __name__ == "__main__":
    runner.run()
```
- Task Runner: The code initializes a runner to process tasks.
- Priority Assignment: It defines a stage implementation that:
    - Extracts the data title of a task.
    - Parses the stem of the data title as an integer.
    - Assigns a priority as a number between 0 and 1.
- Task Routing: Passes the task to the annotation stage by returning the correct pathway UUID.
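The priority computation above can be checked in isolation. This small sketch assumes file names of the form `"<number>.jpg"` with numbers in the range [0; 100], as in the agent code:

```python
def priority_from_title(data_title: str) -> float:
    # Parse the numeric stem of the file name (e.g. "42.jpg" -> 42)
    # and scale it into the [0, 1] range expected by set_priority.
    return int(data_title.split(".")[0]) / 100


print(priority_from_title("42.jpg"))  # 0.42
```

Note that a file name with a non-numeric stem would raise a `ValueError` here; in production you may want to guard the `int(...)` call.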
STEP 2: Running the Agent¶
Follow these steps to make the agent functional:
- Authentication & Setup:
    - Export your private key as explained in the authentication guide.
    - Install the `encord_agents` package, as detailed in the installation guide.
- Workflow Creation:
    - Set up a workflow similar to the example shown above.
- Code Setup:
    - Copy the code into an `agent.py` file.
- Adjust IDs:
    - Update `<your_project_hash>`, `<your_agent_stage_uuid>`, and `<your_pathway_uuid>` in the code with the corresponding IDs from your workflow.
- Run the Agent:
    - Execute the script by running `python agent.py` in your terminal.
Your agent now assigns priorities to tasks based on their file names and routes them appropriately through the Workflow.
Transferring Labels to a Twin Project¶
This example demonstrates how to transfer checklist labels from "Project A" and convert them into yes/no radio labels in "Project B."
Assumptions¶
- Ontology in Project A:
The Ontology in Project A contains checklist classifications, as shown below:
- Ontology in Project B:
Every completed task in Project A is translated into a "model-friendly version" with radio classifications in Project B:
Notice that Project B has three classifications with identical names to those in Project A, but with two radio options each.
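The name-matching just described can be sketched without the SDK. In this hypothetical example (the classification titles are made up), Project A's ticked checklist options are modeled as a set of titles, and each Project B radio classification receives its "yes" option when its title appears in that set, otherwise its "no" option:

```python
# Hypothetical titles of the options that were ticked in Project A:
set_options = {"Poor lighting", "Occlusion"}

# Hypothetical titles of the radio classifications in Project B
# (one per checklist option in Project A):
project_b_classifications = ["Poor lighting", "Occlusion", "Blur"]

# Pick "yes" when the matching checklist option was ticked, else "no".
answers = {
    title: ("yes" if title in set_options else "no")
    for title in project_b_classifications
}
print(answers)  # {'Poor lighting': 'yes', 'Occlusion': 'yes', 'Blur': 'no'}
```

The agent below applies the same membership test, but with ontology objects: `attr.options[0]` plays the role of "yes" and `attr.options[1]` the role of "no".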
STEP 1: Create the Agent file¶
An agent can perform this translation using the `dep_twin_label_row` dependency. For every label row from Project A, the agent automatically fetches the corresponding label row (and optionally the Workflow task) from Project B.
Note: Both Project A and Project B must be linked to the same datasets.
Create an agent file named `twin_project.py` using the following code as a template.
```python
from encord.objects.ontology_labels_impl import LabelRowV2
from encord.objects.options import Option
from encord.workflow.stages.agent import AgentTask
from typing_extensions import Annotated

from encord_agents.tasks import Depends, Runner
from encord_agents.tasks.dependencies import Twin, dep_twin_label_row

# 1. Set up the runner
runner = Runner(project_hash="<project_hash_a>")

# 2. Get the classification attribute used to query answers
checklist_classification = (
    runner.project.ontology_structure.classifications[0]  # type: ignore
)
checklist_attribute = checklist_classification.attributes[0]


# 3. Define the agent
@runner.stage(stage="<transfer_agent_stage_uuid>")
def copy_labels(
    manually_annotated_lr: LabelRowV2,
    twin: Annotated[Twin, Depends(dep_twin_label_row(twin_project_hash="<project_hash_b>"))],
) -> str | None:
    # 4. Read the checkboxes that have been set
    instance = manually_annotated_lr.get_classification_instances()[0]
    answers = instance.get_answer(attribute=checklist_attribute)
    if answers is None or isinstance(answers, (str, Option)):
        return None
    set_options = {o.title for o in answers}  # Use title to match

    # 5. Set answers on the sink labels
    for radio_clf in twin.label_row.ontology_structure.classifications:
        ins = radio_clf.create_instance()
        attr = radio_clf.attributes[0]
        if radio_clf.title in set_options:
            ins.set_answer(attr.options[0])
        else:
            ins.set_answer(attr.options[1])
        ins.set_for_frames(frames=0)
        twin.label_row.add_classification_instance(ins)

    # 6. Save the labels and proceed with the task
    twin.label_row.save()
    if twin.task and isinstance(twin.task, AgentTask):
        twin.task.proceed(pathway_uuid="<twin_completion_pathway_uuid>")
    return "<labeling_completion_pathway_uuid>"


if __name__ == "__main__":
    runner.run()
```
STEP 2: Set up Workflow¶
The following are examples of Workflows to be used. Create and save a Workflow template for each workflow.
- Project A Workflow:
- Project B Workflow:
With this configuration, all manual work happens in Project A, while Project B mirrors the transformed labels.
STEP 3: Link Agent to Workflow¶
To link the agent to a workflow stage, use the `@runner.stage(stage="<transfer_agent_stage_uuid>")` decorator from the agent file above. The stage `uuid` in the decorator must match the "label transfer" agent stage `uuid` in Project A's Workflow.
STEP 4: Prepare your Projects¶
- Set Up Projects:
- Create two Projects with the Ontologies and workflows illustrated above.
- Ensure that the classification names match across both ontologies.
- Both projects must point to the same dataset(s).
STEP 5: Run the Agent¶
- Export your private key, as explained in the authentication guide.
- Install the `encord_agents` package, as detailed in the installation guide.
- Update the code to reflect:
    - `<project_hash_a>` and `<project_hash_b>` for your Projects.
    - The `stage` argument to match the agent stage `uuid` in Project A's workflow.
    - Completion pathway `uuids` for your Workflows.
- Execute the agent file by running `python twin_project.py` in your terminal.
Once the agent is running, tasks approved in Project A’s review stage move to the "Complete" stage in Project B, with the labels automatically translated and displayed.
Pre-label video with fake predictions¶
Step 1: Define a fake model for predictions¶
Suppose you have a fake model like this one, which predicts labels, bounding boxes, and confidences.
```python
import random
from dataclasses import dataclass

import numpy as np
from encord.objects.coordinates import BoundingBoxCoordinates
from numpy.typing import NDArray


@dataclass
class ModelPrediction:
    label: int
    coords: BoundingBoxCoordinates
    conf: float


def fake_predict(image: NDArray[np.uint8]) -> list[ModelPrediction]:
    return [
        ModelPrediction(
            label=random.choice(range(3)),
            coords=BoundingBoxCoordinates(
                top_left_x=random.random() * 0.5,
                top_left_y=random.random() * 0.5,
                width=random.random() * 0.5,
                height=random.random() * 0.5,
            ),
            conf=random.random() + 0.5,
        )
        for _ in range(10)
    ]


model = fake_predict
```
Step 2: Set up your Ontology¶
Create an Ontology that matches the expected output of your pre-labeling agent. For example:
Step 3: Create a Workflow with a pre-labeling agent node¶
Create a Workflow template that includes a pre-labeling agent node before the annotation stage to automatically pre-label tasks with model predictions.
Step 4: Create your pre-labeling agent¶
Create a pre-labeling agent using the following code as a template:
```python
import random
from dataclasses import dataclass
from typing import Iterable

import numpy as np
from encord.objects.coordinates import BoundingBoxCoordinates
from encord.objects.ontology_labels_impl import LabelRowV2
from encord.project import Project
from numpy.typing import NDArray
from typing_extensions import Annotated

from encord_agents.core.data_model import Frame
from encord_agents.tasks import Depends, Runner
from encord_agents.tasks.dependencies import dep_video_iterator

runner = Runner(project_hash="<project_hash>")


# === BEGIN FAKE MODEL === #
@dataclass
class ModelPrediction:
    label: int
    coords: BoundingBoxCoordinates
    conf: float


def fake_predict(image: NDArray[np.uint8]) -> list[ModelPrediction]:
    return [
        ModelPrediction(
            label=random.choice(range(3)),
            coords=BoundingBoxCoordinates(
                top_left_x=random.random() * 0.5,
                top_left_y=random.random() * 0.5,
                width=random.random() * 0.5,
                height=random.random() * 0.5,
            ),
            conf=random.random() + 0.5,
        )
        for _ in range(10)
    ]


model = fake_predict
# === END FAKE MODEL === #


@runner.stage(stage="pre-label")
def run_something(
    lr: LabelRowV2,
    project: Project,
    frames: Annotated[Iterable[Frame], Depends(dep_video_iterator)],
) -> str:
    ontology = project.ontology_structure
    for frame in frames:
        outputs = model(frame.content)
        for output in outputs:
            ins = ontology.objects[output.label].create_instance()
            ins.set_for_frames(frames=frame.frame, coordinates=output.coords, confidence=output.conf)
            lr.add_object_instance(ins)
    lr.save()
    return "annotate"  # Tell where the task should go


if __name__ == "__main__":
    runner.run()
```
This code uses the `dep_video_iterator` dependency to automatically load an iterator of frames as RGB numpy arrays from the video.
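Stripped of the SDK types, the agent's inner loop has a simple shape: one model call per frame, keyed by the frame index. This sketch uses hypothetical stand-ins for `Frame` and the model, purely to illustrate the loop structure:

```python
import random
from dataclasses import dataclass


@dataclass
class Frame:
    # Hypothetical stand-in for encord_agents' Frame type.
    frame: int       # frame index within the video
    content: object  # stand-in for the RGB numpy array


def model(image) -> list[float]:
    # Stand-in for fake_predict: a few dummy confidences per frame.
    return [random.random() for _ in range(3)]


frames = [Frame(frame=i, content=None) for i in range(4)]

# One list of predictions per frame index, mirroring how the agent calls
# set_for_frames(frames=frame.frame, ...) for each model output.
predictions = {f.frame: model(f.content) for f in frames}
```

In the real agent, each prediction becomes an object instance on the label row instead of a dictionary entry.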
Step 5: Run the agent¶
Follow these steps to execute the agent:
- Ensure that you have exported your private key, as described in the authentication section, and installed the `encord_agents` package, as explained in the installation guide.
- Confirm that your Project includes a stage named "pre-label" with a pathway named "annotate", and that its ontology resembles the example above.
- Replace `<project_hash>` in the script with your own Project hash.
- Execute the script from your terminal.
Step 6: Verify pre-labeled annotations¶
Once the agent completes, start annotating. You should see frames pre-populated with bounding boxes generated by the fake model predictions.
Further examples available soon¶
The following Agent examples will become available soon:
- Pre-labeling with YoloWorld
- Transcribing with Whisper
- Routing with Gemini
- Prioritizing with GPT-4o mini
- Evaluating Training projects
- HF Image segmentation API
- HF LLM API to classify frames