Use model from Hugging Face 🤗¶
This notebook demonstrates how to use a task agent to pre-label videos with predictions. Here we'll use a bounding box prediction model.
Before we start, let's get installations and authentication out of the way.
Step 1: Set up environment¶
Installation¶
Please ensure that you have the encord-agents library installed:
!python -m pip install encord-agents
Authentication¶
The library authenticates via SSH keys. Below is a code cell for setting the ENCORD_SSH_KEY environment variable. It should contain the raw content of your private SSH key file.
If you have not yet set up an SSH key, please follow the documentation.
💡 Colab users: In Colab, you can set the key once in the Secrets section of the left sidebar and load it in new notebooks:
from google.colab import userdata
key_content = userdata.get("ENCORD_SSH_KEY")
import os
os.environ["ENCORD_SSH_KEY"] = key_content
# or you can set a path to a file
# os.environ["ENCORD_SSH_KEY_FILE"] = "/path/to/your/private/key"
[Alternative] Temporary Key¶
There is also the option of generating a temporary (fresh) SSH key pair via the code cell below. Please follow the instructions printed when executing the code.
# ⚠️ Safe to skip if you have authenticated already
import os
from encord_agents.utils.colab import generate_public_private_key_pair_with_instructions
private_key_path, public_key_path = generate_public_private_key_pair_with_instructions()
os.environ["ENCORD_SSH_KEY_FILE"] = private_key_path.as_posix()
Step 2: Define a model for predictions¶
We will define a model that predicts labels, bounding boxes, and confidence scores. We use this model below to predict objects on frames from videos.
💡 Hint: If you wish to use an alternate model, or your own model from Hugging Face, this is the place to modify the code.
We'll use the DETR model from Hugging Face as described in this article: https://huggingface.co/docs/transformers/en/model_doc/detr
Other models are available from: https://huggingface.co/models
import requests
import torch
from PIL import Image
from transformers import DetrForObjectDetection, DetrImageProcessor
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
# you can specify the revision tag if you don't want the timm dependency
processor = DetrImageProcessor.from_pretrained("facebook/detr-resnet-50", revision="no_timm")
model = DetrForObjectDetection.from_pretrained("facebook/detr-resnet-50", revision="no_timm")
inputs = processor(images=image, return_tensors="pt")
outputs = model(**inputs)
# convert outputs (bounding boxes and class logits) to COCO API
# let's only keep detections with score > 0.9
target_sizes = torch.tensor([image.size[::-1]])
results = processor.post_process_object_detection(outputs, target_sizes=target_sizes, threshold=0.9)[0]
for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
    box = [round(i, 2) for i in box.tolist()]
    print(
        f"Detected {model.config.id2label[label.item()]} with confidence "
        f"{round(score.item(), 3)} at location {box}"
    )
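One detail worth noting before we wire the model into Encord: DETR's post-processing returns boxes as absolute (x_min, y_min, x_max, y_max) pixel coordinates, whereas Encord's BoundingBoxCoordinates expect a normalized top-left corner plus width and height in [0, 1]. A minimal sketch of that conversion (the helper name is ours, not part of either library):

```python
def to_normalized_xywh(
    box: list[float], img_w: int, img_h: int
) -> tuple[float, float, float, float]:
    """Convert an absolute (x_min, y_min, x_max, y_max) box to
    normalized (top_left_x, top_left_y, width, height) in [0, 1]."""
    x_min, y_min, x_max, y_max = box
    return (
        x_min / img_w,
        y_min / img_h,
        (x_max - x_min) / img_w,
        (y_max - y_min) / img_h,
    )


# e.g. a 100x50 pixel box with top-left corner (40, 30) in a 640x480 image
top_left_x, top_left_y, width, height = to_normalized_xywh([40, 30, 140, 80], 640, 480)
```

We use this conversion in the agent below when constructing BoundingBoxCoordinates.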
Define the agent¶
We've defined the model; now we define the agent. Think of the agent as the long-lived mechanism that applies the model for pre-labeling in this scenario.
from dataclasses import dataclass
import numpy as np
from encord.objects.coordinates import BoundingBoxCoordinates
from numpy.typing import NDArray
# Data class to hold predictions from our model
@dataclass
class ModelPrediction:
    feature_hash: str
    coords: BoundingBoxCoordinates
    conf: float


def HF_DETR_predict(image: NDArray[np.uint8]) -> list[ModelPrediction]:
    inputs = processor(images=image, return_tensors="pt")
    outputs = model(**inputs)
    target_sizes = torch.tensor([image.shape[:2]])
    results = processor.post_process_object_detection(outputs, target_sizes=target_sizes)[0]

    height, width = image.shape[:2]
    model_predictions = []
    for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
        # Skip predictions with confidence < 0.8, as this model makes a lot of predictions
        if score < 0.8:
            continue
        x_min, y_min, x_max, y_max = box.tolist()
        print(
            f"Detected {model.config.id2label[label.item()]} with confidence "
            f"{round(score.item(), 3)} at location {[round(v, 2) for v in (x_min, y_min, x_max, y_max)]}"
        )
        if ontology_equivalent := ontology_map.get(model.config.id2label[label.item()]):
            model_predictions.append(
                ModelPrediction(
                    feature_hash=ontology_equivalent,
                    # DETR returns absolute (x_min, y_min, x_max, y_max) boxes;
                    # Encord expects a normalized top-left corner plus width/height
                    coords=BoundingBoxCoordinates(
                        top_left_x=x_min / width,
                        top_left_y=y_min / height,
                        width=(x_max - x_min) / width,
                        height=(y_max - y_min) / height,
                    ),
                    conf=score.item(),
                )
            )
    return model_predictions


agent = HF_DETR_predict
Step 3: Set up your Ontology¶
Create an Ontology that matches the expected output of your pre-labeling agent.
For example, if your model predicts the classes surfboard, person, and car, then the Ontology should contain an object for each of them. Our DETR model predicts more classes, but we focus on the car predictions in this example.
📖 here is the documentation for creating ontologies.
3.1 Define an Ontology Map¶
We need to translate the model predictions so that they are paired with the respective Encord Ontology items. This is most easily done via the featureNodeHash of the target object, which you can find in the app via the Ontology preview JSON, or by using the SDK.
ontology_map = {"car": "80fUMkkZ"}
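If you prefer not to copy hashes by hand, you can also build the map from the Ontology preview JSON. A minimal sketch, assuming an abbreviated preview structure (the "person" hash below is hypothetical, for illustration only):

```python
# Abbreviated Ontology preview JSON; real ontologies contain more fields per object.
preview_json = {
    "objects": [
        {"name": "car", "featureNodeHash": "80fUMkkZ"},
        {"name": "person", "featureNodeHash": "aBcDeFgH"},  # hypothetical hash
    ]
}

# Map model class names to Encord featureNodeHashes
ontology_map = {obj["name"]: obj["featureNodeHash"] for obj in preview_json["objects"]}
```

Any class the model predicts that is missing from this map is simply skipped by the agent above.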
Step 4: Create a Workflow with a pre-labeling agent node¶
Create a project in the Encord platform that has a Workflow that includes a pre-labeling agent node before the annotation stage to automatically pre-label tasks with model predictions. This node is where we'll hook in our custom code to pre-label the data.
Notice how the workflow has a purple Agent node called "pre-label." This node will allow our custom code to run inference over the data before passing it on to the annotation stage.
📖 here is the documentation for creating a workflow with Encord.
Step 5: Define the pre-labeling agent¶
The following code provides a template for an agent that does pre-labeling. We assume that the project only contains videos and that we want to pre-label all frames in each video.
If your agent node is named "pre-label" and the pathway to the annotation stage is named "annotate," you only need to change the <project-hash> to your actual project hash to make it work.
If your naming is different, update the stage parameter of the decorator and the returned string, respectively, to match your setup.
Note that this code uses the dep_video_iterator dependency to automatically load an iterator of frames as RGB numpy arrays from the video.
from typing import Iterable
from encord.objects.ontology_labels_impl import LabelRowV2
from encord.project import Project
from typing_extensions import Annotated
from encord_agents.core.data_model import Frame
from encord_agents.tasks import Depends, Runner
from encord_agents.tasks.dependencies import dep_video_iterator
# a. Define a runner that will execute the agent on every task in the agent stage
runner = Runner(project_hash="<project-hash>")


# b. Specify the logic that goes into the "pre-label" agent node.
@runner.stage(stage="pre-label")
def pre_segment(
    lr: LabelRowV2,
    project: Project,
    frames: Annotated[Iterable[Frame], Depends(dep_video_iterator)],
) -> str:
    ontology = project.ontology_structure

    # c. Loop over the frames in the video
    for frame in frames:
        # d. Predict - we could do batching here to speed up the process
        outputs = agent(frame.content)

        # e. Store the results
        for output in outputs:
            ins = ontology.get_child_by_hash(output.feature_hash).create_instance()
            ins.set_for_frames(frames=frame.frame, coordinates=output.coords, confidence=output.conf)
            lr.add_object_instance(ins)

    lr.save()
    return "annotate"  # Tell the runner where the task should go next
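The comment in step d. notes that batching could speed things up, since the processor and model both accept lists of images. A minimal, framework-agnostic sketch of chunking the frame iterator into fixed-size batches (the helper name is ours):

```python
from itertools import islice
from typing import Iterable, Iterator, TypeVar

T = TypeVar("T")


def batched(items: Iterable[T], batch_size: int) -> Iterator[list[T]]:
    """Yield successive lists of up to `batch_size` items from an iterable."""
    it = iter(items)
    while batch := list(islice(it, batch_size)):
        yield batch


# e.g., inside the agent, run inference eight frames at a time:
# for batch in batched(frames, 8):
#     inputs = processor(images=[f.content for f in batch], return_tensors="pt")
```

This keeps the GPU busy without loading the whole video into memory at once.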
Running the agent¶
Now that we've defined the project, workflow, and the agent, it's time to try it out.
The runner object is callable, which means you can simply call it to execute the agent on your tasks.
# Run the agent
runner()
Your agent now assigns labels to the videos and routes them appropriately through the Workflow to the annotation stage. As a result, every annotation task should already have pre-existing labels (predictions) included.
💡 Hint: If you want to execute this as a Python script instead, you can turn it into a command-line interface by putting the above code in an agent.py file and replacing runner() with

if __name__ == "__main__":
    runner.run()

This allows you to set, e.g., the project hash via the command line:
python agent.py --project-hash "..."