Object detection example

This object_detection.py example performs object detection with the DetectionEngine API, using the detection model, labels file, and image you provide. If you omit the labels file, the script assumes you're detecting faces, so you must supply a face detection model.

The examples below use a MobileNet SSD that's trained to detect either 90 types of objects (the COCO dataset classes) or just human faces.

Before you begin, you must have already set up your Dev Board or USB Accelerator.

Get the files

Download the files needed for the following examples:

EXAMPLE_DIR=$HOME/coral-examples

mkdir -p $EXAMPLE_DIR && cd $EXAMPLE_DIR

curl -O https://dl.google.com/coral/canned_models/mobilenet_ssd_v2_face_quant_postprocess_edgetpu.tflite \
-O https://dl.google.com/coral/canned_models/mobilenet_ssd_v2_coco_quant_postprocess_edgetpu.tflite \
-O https://dl.google.com/coral/canned_models/coco_labels.txt \
-O https://coral.withgoogle.com/static/docs/images/cat.jpg \
-O https://coral.withgoogle.com/static/docs/images/face.jpg

Perform object detection

First, navigate to the directory with the demos:

# If using the Dev Board:
cd /usr/lib/python3/dist-packages/edgetpu/demo

# If using the USB Accelerator with Debian/Ubuntu:
cd /usr/local/lib/python3.6/dist-packages/edgetpu/demo

# If using the USB Accelerator with Raspberry Pi:
cd /usr/local/lib/python3.5/dist-packages/edgetpu/demo

Now execute object_detection.py using the cat photo:

python3 object_detection.py \
--model $EXAMPLE_DIR/mobilenet_ssd_v2_coco_quant_postprocess_edgetpu.tflite \
--label $EXAMPLE_DIR/coco_labels.txt \
--input $EXAMPLE_DIR/cat.jpg \
--output $EXAMPLE_DIR/object_detection_results.jpg
Figure 1. cat.jpg

You should see results like this:

-----------------------------------------
cat
score =  0.87890625
box =  [609.4696044921875, 260.6632471084595, 1023.9177703857422, 999.6587753295898]

You'll actually see more detections than shown above, some with very low confidence scores. That's because the object_detection.py script sets top_k to 10, so it reports up to 10 candidate objects. Lower this value to receive only one or a few of the top results.
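The combined effect of top_k and a score threshold can be sketched in plain Python. The dictionaries of candidates below are hypothetical stand-ins for what the engine returns (only the cat's score is taken from the sample output above); real results come from the DetectionEngine API:

```python
# Hypothetical candidate detections as (label, score) pairs.
# Only the cat score comes from the sample output; the rest are made up.
candidates = [
    ("cat", 0.87890625),
    ("couch", 0.12),
    ("bed", 0.05),
    ("chair", 0.02),
]

def top_detections(results, top_k=10, threshold=0.1):
    """Keep at most top_k results whose score meets the threshold."""
    kept = [r for r in results if r[1] >= threshold]
    kept.sort(key=lambda r: r[1], reverse=True)
    return kept[:top_k]

# With a generous top_k, low-confidence guesses survive the cut;
# raising the threshold to 0.5 leaves only the cat.
print(top_detections(candidates, top_k=10, threshold=0.5))
```

This is the same trade-off the script exposes: top_k caps how many results you get back, while the score threshold discards unconvincing guesses regardless of rank.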

The script also creates a copy of the image with box overlays for the detected objects (see figure 2) and saves it at the location specified with the --output parameter.
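The overlay step amounts to drawing each reported box onto a copy of the input image. A minimal sketch with Pillow (which object_detection.py also uses), using a blank stand-in image and the box coordinates from the sample output above:

```python
from PIL import Image, ImageDraw

# A blank stand-in for cat.jpg; in the real script this is the input photo.
img = Image.new("RGB", (1024, 1024), "white")
draw = ImageDraw.Draw(img)

# Box from the sample output: [x1, y1, x2, y2] in pixel coordinates.
box = [609.47, 260.66, 1023.92, 999.66]
draw.rectangle(box, outline="red", width=3)

# Save the annotated copy, as --output does in the real script.
img.save("object_detection_results_sketch.jpg")
```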

Figure 2. object_detection_results.jpg
Help! If you're on a Raspberry Pi and you see an error that says No such file or directory: 'feh', run sudo apt-get install feh and then try again.

Perform face detection

Make sure you're in the directory with the demos:

# If using the Dev Board:
cd /usr/lib/python3/dist-packages/edgetpu/demo

# If using the USB Accelerator with Debian/Ubuntu:
cd /usr/local/lib/python3.6/dist-packages/edgetpu/demo

# If using the USB Accelerator with Raspberry Pi:
cd /usr/local/lib/python3.5/dist-packages/edgetpu/demo

Now execute object_detection.py with a face detection model (and no labels file) using the photo with faces:

python3 object_detection.py \
--model $EXAMPLE_DIR/mobilenet_ssd_v2_face_quant_postprocess_edgetpu.tflite \
--input $EXAMPLE_DIR/face.jpg \
--output $EXAMPLE_DIR/face_detection_results.jpg
Figure 3. face.jpg

You should see results like this:

-----------------------------------------
score =  0.9921875
box =  [[420.00265979766846, 49.222673773765564], [798.6797246932983, 354.40516090393066]]
-----------------------------------------
score =  0.9609375
box =  [[137.51397091150284, 110.30885183811188], [541.597005367279, 435.9664545059204]]
-----------------------------------------
score =  0.91796875
box =  [[834.5663707256317, 215.26478624343872], [1027.9360916614532, 374.5929465293884]]
-----------------------------------------
score =  0.83203125
box =  [[1.7380172908306122, 186.02276635169983], [157.47189247608185, 325.94583439826965]]
Figure 4. face_detection_results.jpg
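Note that the two sample outputs print boxes in different shapes: the COCO model prints a flat [x1, y1, x2, y2] list, while the face model prints nested [[x1, y1], [x2, y2]] corner pairs. If you post-process results yourself, a small helper (hypothetical, not part of object_detection.py) can flatten either form into one tuple:

```python
def box_corners(box):
    """Return (x1, y1, x2, y2) from either a flat list or nested corner pairs."""
    if len(box) == 2:  # nested form: [[x1, y1], [x2, y2]]
        (x1, y1), (x2, y2) = box
    else:              # flat form: [x1, y1, x2, y2]
        x1, y1, x2, y2 = box
    return (x1, y1, x2, y2)

print(box_corners([[420.0, 49.2], [798.7, 354.4]]))  # face-model style
print(box_corners([609.5, 260.7, 1023.9, 999.7]))    # COCO-model style
```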

You can find the object_detection.py source code in the edgetpu/demo directory shown above.

To create your own object detection model, read the tutorial about how to Retrain an object detection model.

cat.jpg is licensed under Creative Commons by Scott Main.
face.jpg is licensed under Creative Commons by Humberto Moreno.