Object Detection SDK Tutorial

In this tutorial, we will learn how to use object detection with Dataloop's Python SDK, from collecting data and reviewing it all the way to labeling it.

Let’s say you wish to detect a dog in an image and draw a box around it with an assigned label “Dog”.


You should:

1. Collect many pictures of dogs.
2. Review them, organize them into folders, and discard non-relevant photos, such as cat photos.
3. Mark each photo with an annotation according to a “dog” label.

Now let’s see how we can do that with Dataloop.

Before you begin
  • Python needs to be installed on your system; you can download it from the official Python website. See our Installation Instructions to verify that you have a Python version the SDK supports (the snippet below shows a quick way to check).
  • For a Python tutorial, click here.
  • To follow this tutorial you need a development environment that supports Python. In this tutorial, we will use PyCharm.
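
To confirm that your environment runs a Python version the SDK supports, you can print the interpreter version from Python itself:

import sys

# print the version of the Python interpreter currently in use
print(sys.version)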

First, we should install the “dtlpy” package with the following shell command:

pip install dtlpy


Then, import the package into your Python environment:

import dtlpy as dl


After that, log in to the platform:

dl.login()
Once you run the code, a browser window will open with the login screen.
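
If you are scripting this tutorial and want to avoid opening the browser on every run, you can log in only when your token has expired:

# log in only if the cached token has expired
if dl.token_expired():
    dl.login()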


Note that parts of the code require your own input, such as local paths and email addresses; replace these values with your own.

Now, to manage our dog photos, we need to set up storage for them.
We will start by creating a project and adding a dataset inside it that will hold all of our photos.
Create a project:

project = dl.projects.create(project_name='object-detection')


Then create a dataset inside it:

dataset = project.datasets.create(dataset_name='dogs')
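
Note that creating a project or dataset with a name that already exists raises an error. If you re-run the tutorial, retrieve the existing entities instead:

# retrieve the project and dataset created earlier instead of creating new ones
project = dl.projects.get(project_name='object-detection')
dataset = project.datasets.get(dataset_name='dogs')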


To add a “dog” definition to our data, set a “dog” label in the recipe, which is the dataset's set of labeling instructions.
Define the label with your choice of color in RGB form:

labels = [{'tag': 'dog', 'color': (1, 1, 1)}]

Add the label to the Recipe:

dataset.add_labels(label_list=labels)
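
One way to verify that the label was added to the recipe is to print the labels currently defined on the dataset:

# print the label tags defined in the dataset's recipe
print([label.tag for label in dataset.labels])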


Please note that directory paths look different on Windows and on Linux, and Linux paths do not require the "r" prefix at the beginning.

Upload all of your photos as items to your “dogs” dataset.
Define your local directory path (the local folder path) and upload the items to a folder in your “dogs” dataset:

dataset.items.upload(local_path=r'/home/project/dog-images', remote_path='/dog-folder')
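
To confirm the upload, you can list the items in the dataset and print how many were uploaded:

# list all items in the dataset (paged) and print the total count
pages = dataset.items.list()
print('Total items in dataset: {}'.format(pages.items_count))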


To manage and review your photos, use filters to prioritize items for annotation and make sure there aren't any cat photos slipping by.

Create a Filters instance and add a filter for filenames that include “dog”:

filters = dl.Filters(field='filename', values='/dog.jpg')
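
Before sending out the task, you can preview which items match the filter:

# list only the items that match the filter
pages = dataset.items.list(filters=filters)
print('Matching items: {}'.format(pages.items_count))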

Send out an annotation task, based on your filters, for your dog enthusiasts to label the items. (Alternatively, you can run an object detection model on the dataset using YOLO and OpenCV, as shown at the end of this tutorial.)

Before that, add them as contributors to your project:

project.add_member(email='puppy@dataloop.ai', role='annotator')


Create and send the task to your annotator:

import datetime

task = dataset.tasks.create(task_name='dog_task',
                            assignee_ids=['puppy@dataloop.ai'],
                            filters=filters,
                            due_date=datetime.datetime(day=1, month=1, year=2029).timestamp())
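
You can retrieve the task later, for example to check its status:

# get the task by name and print its current status
task = dataset.tasks.get(task_name='dog_task')
print(task.status)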

Now, log in to the platform as the user to whom you assigned the task.
The task appears as an assignment on your assignments page, where you can begin the annotation process.


Object Detection Model

In this section, you will run an object detection model using YOLO and OpenCV and upload its results to the Dataloop platform.
This model will automatically add annotations to your dog photos.

Try it out in a new project and dataset, as this model requires its own labels.

Before You Begin

This example code requires some additional files in order to run, so before you try out this example, please download the files from the following link and extract the ZIP file: https://storage.googleapis.com/dtlpy/model_assets/yolo-coco/yolo-coco.zip

Imports - Copy and paste the following code.

import numpy as np
import cv2
import os
import dtlpy as dl

Create the project and dataset to use for this part of the tutorial.

Name them as you like:

project = dl.projects.create(project_name='project_puppy')
dataset = project.datasets.create(dataset_name='dataset_puppy')

Define Variables

In the code below you will need to update two elements: the local path of the items you want to upload and annotate, which goes in images_dir, and the local directory of the yolo-coco folder you downloaded.

For the YOLO files, use the folder you unzipped at the beginning of this tutorial and make sure it contains the following three files: yolov3.weights, yolov3.cfg, and coco.names. Place its local path in yolo_dir.

threshold = 0.3
confidence_rate = 0.5
# local directory of images to detect
images_dir = r'.../dogimages'
# local directory of yolo-coco repository
yolo_dir = r'.../yolo-coco'
weightsPath = os.path.sep.join([yolo_dir, "yolov3.weights"])
configPath = os.path.sep.join([yolo_dir, "yolov3.cfg"])
labelsPath = os.path.sep.join([yolo_dir, "coco.names"])
images = os.listdir(images_dir)
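
Before continuing, it can help to verify that the three YOLO files were extracted to the directory you set in yolo_dir:

# sanity check: make sure the YOLO weights, config and labels files exist
for path in (weightsPath, configPath, labelsPath):
    assert os.path.isfile(path), 'Missing file: {}'.format(path)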

Copy and paste the following code to read the COCO label names and add them to your dataset:

LABELS = open(labelsPath).read().strip().split("\n")
# upload labels to dataset
platform_labels = [dl.Label(tag=label) for label in LABELS]
dataset.add_labels(platform_labels)

Copy and paste the following code to run YOLO on each image and upload the detections as box annotations:

net = cv2.dnn.readNetFromDarknet(configPath, weightsPath)
for image_name in images:
    image_path = os.path.join(images_dir, image_name)
    # upload the local image as an item in the dataset
    item = dataset.items.upload(local_path=image_path)
    assert isinstance(item, dl.Item)
    builder = item.annotations.builder()
    # load the image locally for YOLO inference
    image = cv2.imread(image_path)
    (H, W) = image.shape[:2]
    ln = net.getLayerNames()
    # indices of the unconnected output layers; flatten() handles both old and new OpenCV return shapes
    ln = [ln[i - 1] for i in net.getUnconnectedOutLayers().flatten()]
    blob = cv2.dnn.blobFromImage(image, 1 / 255.0, (416, 416),
                                 swapRB=True, crop=False)
    net.setInput(blob)
    layerOutputs = net.forward(ln)
    boxes = list()
    confidences = list()
    classIDs = list()
    for output in layerOutputs:
        for detection in output:
            scores = detection[5:]
            classID = np.argmax(scores)
            confidence = scores[classID]
            if confidence > confidence_rate:
                box = detection[0:4] * np.array([W, H, W, H])
                (centerX, centerY, width, height) = box.astype("int")
                top = int(centerY - (height / 2))
                bottom = top + height
                left = int(centerX - (width / 2))
                right = left + width
                # add a box annotation for this detection
                builder.add(annotation_definition=dl.Box(
                    top=top,
                    right=right,
                    left=left,
                    bottom=bottom,
                    label=LABELS[classID]))
    # upload annotations to item
    builder.upload()
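
Note that the snippet above defines threshold and the boxes, confidences, and classIDs lists but never uses them, so overlapping detections are uploaded as-is. If you want to suppress duplicate boxes, one option is to collect the detections in those lists and apply OpenCV's non-maximum suppression before building the annotations. A minimal sketch of the change, placed inside the per-image loop in place of the direct builder.add() call:

# inside the detection loop: collect candidates instead of adding them directly
boxes.append([left, top, int(width), int(height)])
confidences.append(float(confidence))
classIDs.append(classID)

# after processing all layer outputs: keep only the strongest non-overlapping boxes
idxs = cv2.dnn.NMSBoxes(boxes, confidences, confidence_rate, threshold)
for i in np.array(idxs).flatten():
    (x, y, w, h) = boxes[i]
    builder.add(annotation_definition=dl.Box(
        top=y,
        left=x,
        bottom=y + h,
        right=x + w,
        label=LABELS[classIDs[i]]))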

Now, enter the platform and view your annotated items.
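
You can also open the dataset in the web UI directly from the SDK:

# open the dataset in the Dataloop web UI to review the annotations
dataset.open_in_web()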



What's Next