Python API overview & demos

The Edge TPU Python library (the edgetpu module) makes it easy to perform an inference with TensorFlow Lite models on an Edge TPU device. It provides simple APIs that perform image classification, object detection, and weight imprinting (transfer-learning) on your device.

If you're using the Dev Board or SoM, this library is included in the Mendel system image. If you're using an accessory device such as the USB Accelerator, you must install the library onto the host computer—only Debian-based operating systems are currently supported (see the setup instructions).

API overview

Key APIs that perform inferencing are the following:

  • ClassificationEngine: Performs image classification. Create an instance by specifying a model, and then pass an image (such as a JPEG) to ClassifyWithImage() and it returns a list of labels and scores.

  • DetectionEngine: Performs object detection. Create an instance by specifying a model, and then pass an image (such as a JPEG) to DetectWithImage() and it returns a list of DetectionCandidate objects, each of which contains a label, a score, and the coordinates of the object.

These inference engines accept any image format that Pillow can read (including JPEG, PNG, and BMP), but the image must be in the RGB color space (no transparency). A minimal usage sketch for both engines follows.
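For example, here's a rough sketch of both engines (not a complete program); it assumes the demo model, label, and image files downloaded later on this page are in the current directory, so adjust the paths as needed:

from edgetpu.classification.engine import ClassificationEngine
from edgetpu.detection.engine import DetectionEngine
from PIL import Image

# Classification: pass a Pillow image in RGB mode to ClassifyWithImage().
classifier = ClassificationEngine('mobilenet_v2_1.0_224_inat_bird_quant_edgetpu.tflite')
img = Image.open('parrot.jpg').convert('RGB')  # convert() drops any alpha channel
for label_id, score in classifier.ClassifyWithImage(img, top_k=3):
    print(label_id, score)

# Detection: DetectWithImage() returns DetectionCandidate objects.
detector = DetectionEngine('mobilenet_ssd_v2_face_quant_postprocess_edgetpu.tflite')
face_img = Image.open('face.jpg').convert('RGB')
for obj in detector.DetectWithImage(face_img, threshold=0.5, top_k=10):
    print(obj.label_id, obj.score, obj.bounding_box)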

Additionally, we've included an API that performs on-device transfer learning:

  • ImprintingEngine: This implements a transfer-learning technique called imprinting that does not require backward propagation, allowing you to perform model retraining that's accelerated on the Edge TPU (only for image classification models). For more information about this API, read Retrain an image classification model on-device.
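The following is only a rough sketch of the imprinting workflow, not the tutorial's exact code: the model file name and training arrays are placeholders, and the TrainAll()/SaveModel() calls should be checked against the retraining tutorial.

from edgetpu.learn.imprinting.engine import ImprintingEngine
import numpy as np

# Placeholder model name: imprinting requires a classification model built for
# imprinting (see the retraining tutorial for compatible models).
engine = ImprintingEngine('imprinting_compatible_model_edgetpu.tflite', keep_classes=False)

# Map each new label to a list of training images (RGB arrays matching the
# model's input size). The zero arrays below stand in for real photos.
train_set = {
    'class_a': [np.zeros((224, 224, 3), dtype=np.uint8)],
    'class_b': [np.zeros((224, 224, 3), dtype=np.uint8)],
}
engine.TrainAll(train_set)                              # learn the new classes on-device
engine.SaveModel('my_retrained_model_edgetpu.tflite')   # write the updated model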

The Python library takes care of all the low-level Edge TPU configuration for you. If you connect multiple Edge TPUs, the Python library automatically delegates separate models to execute on separate Edge TPUs. For more information about using multiple models, read Run multiple models with multiple Edge TPUs.
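As a minimal illustration (a sketch using the demo models downloaded below): with two Edge TPUs attached, simply creating two engines with different models is enough; no device-selection code is required.

from edgetpu.classification.engine import ClassificationEngine
from edgetpu.detection.engine import DetectionEngine

# With two Edge TPUs attached, the library assigns each model to its own
# device automatically.
classifier = ClassificationEngine('mobilenet_v2_1.0_224_inat_bird_quant_edgetpu.tflite')
face_detector = DetectionEngine('mobilenet_ssd_v2_face_quant_postprocess_edgetpu.tflite')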

API demos

The following scripts are included with the Edge TPU Python library. You just need to download a few files required to run them.

Note: If you're using the USB Accelerator, be sure you've first installed the Python library, as per the Get started guide.

Before running the scripts below, download the following compiled models and images (if you're using the Dev Board, run these commands from the board's shell):

cd ~/Downloads

# Download files for classification demo:
curl -O https://dl.google.com/coral/canned_models/mobilenet_v2_1.0_224_inat_bird_quant_edgetpu.tflite \
-O https://dl.google.com/coral/canned_models/inat_bird_labels.txt \
-O https://coral.withgoogle.com/static/docs/images/parrot.jpg

# Download files for object detection demo:
curl -O https://dl.google.com/coral/canned_models/mobilenet_ssd_v2_face_quant_postprocess_edgetpu.tflite \
-O https://coral.withgoogle.com/static/docs/images/face.jpg

Now, navigate to the directory with the demos:

# If using the Dev Board:
cd /usr/lib/python3/dist-packages/edgetpu/demo

# If using the USB Accelerator with Debian/Ubuntu:
cd /usr/local/lib/python3.6/dist-packages/edgetpu/demo

# If using the USB Accelerator with Raspberry Pi:
cd /usr/local/lib/python3.5/dist-packages/edgetpu/demo

classify_image.py

This script performs image classification with ClassificationEngine, using the classification model, labels file, and image that you give it.

For example, here's how to perform image classification with the parrot photo in figure 1:

python3 classify_image.py \
--model ~/Downloads/mobilenet_v2_1.0_224_inat_bird_quant_edgetpu.tflite \
--label ~/Downloads/inat_bird_labels.txt \
--image ~/Downloads/parrot.jpg
Figure 1. parrot.jpg

You should see results like this:

---------------------------
Ara macao (Scarlet Macaw)
Score :  0.761719

See the classify_image.py source here.

To create your own classification model, read the tutorial about how to Retrain an image classification model.

object_detection.py

This script performs object detection with DetectionEngine, using the detection model, labels file, and image that you give it. If no labels file is given, it defaults to detecting faces.

For example, here's how to perform face detection with the photo in figure 2:

python3 object_detection.py \
--model ~/Downloads/mobilenet_ssd_v2_face_quant_postprocess_edgetpu.tflite \
--input ~/Downloads/face.jpg \
--output ~/detection_results.jpg
Figure 2. face.jpg

You should see results like this:

-----------------------------------------
score =  0.9921875
box =  [[420.00265979766846, 49.222673773765564], [798.6797246932983, 354.40516090393066]]
-----------------------------------------
score =  0.9609375
box =  [[137.51397091150284, 110.30885183811188], [541.597005367279, 435.9664545059204]]
-----------------------------------------
score =  0.91796875
box =  [[834.5663707256317, 215.26478624343872], [1027.9360916614532, 374.5929465293884]]
-----------------------------------------
score =  0.83203125
box =  [[1.7380172908306122, 186.02276635169983], [157.47189247608185, 325.94583439826965]]

The script also creates a copy of the image with box overlays for the detected objects (see figure 3) and saves it at the location specified with the --output parameter.

Figure 3. detection_results.jpg
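If you want to produce a similar overlay in your own code, a rough sketch using Pillow's ImageDraw (not the demo script's exact code) looks like this:

from PIL import Image, ImageDraw
from edgetpu.detection.engine import DetectionEngine

engine = DetectionEngine('mobilenet_ssd_v2_face_quant_postprocess_edgetpu.tflite')
img = Image.open('face.jpg').convert('RGB')
draw = ImageDraw.Draw(img)

# relative_coord=False makes bounding_box use pixel coordinates:
# [[xmin, ymin], [xmax, ymax]], matching the printed results above.
for obj in engine.DetectWithImage(img, threshold=0.5, top_k=10, relative_coord=False):
    (xmin, ymin), (xmax, ymax) = obj.bounding_box
    draw.rectangle([xmin, ymin, xmax, ymax], outline='red')

img.save('detection_results.jpg')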
Help! If you're on a Raspberry Pi and you see an error that says No such file or directory: 'feh', run sudo apt-get install feh and then try again.

See the object_detection.py source here.

To create your own object detection model, read the tutorial about how to Retrain an object detection model.

classify_capture.py (USB Accelerator + Raspberry Pi only)

This script is designed to perform live image classification using the Raspberry Pi camera and the USB Accelerator. (If you have a Dev Board, instead see how to connect a Coral Camera or USB camera.)

Note: This script requires that you connect a Pi Camera and a monitor to your Raspberry Pi so you can see the live classification results.

For this demo, you need a model that can recognize objects you have on hand to put in front of the camera, so we suggest the following MobileNet model, which can recognize over 1,000 kinds of objects:

cd ~/Downloads

curl -O https://dl.google.com/coral/canned_models/mobilenet_v2_1.0_224_quant_edgetpu.tflite \
-O https://dl.google.com/coral/canned_models/imagenet_labels.txt

Now go back to the demos directory and run the demo:

cd /usr/local/lib/python3.5/dist-packages/edgetpu/demo

python3 classify_capture.py \
--model ~/Downloads/mobilenet_v2_1.0_224_quant_edgetpu.tflite \
--label ~/Downloads/imagenet_labels.txt

Now start holding some objects up to the camera and you'll see the live classification results on your monitor.

This classify_capture.py sample captures camera images using the picamera API (see the source here).
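In outline, that capture loop looks roughly like the following sketch (simplified, not the demo's exact code; it assumes the model downloaded above and a connected Pi Camera):

import io
import picamera
from PIL import Image
from edgetpu.classification.engine import ClassificationEngine

engine = ClassificationEngine('mobilenet_v2_1.0_224_quant_edgetpu.tflite')

with picamera.PiCamera(resolution=(640, 480)) as camera:
    camera.start_preview()
    stream = io.BytesIO()
    # use_video_port=True trades image quality for faster capture.
    for _ in camera.capture_continuous(stream, format='jpeg', use_video_port=True):
        stream.seek(0)
        img = Image.open(stream).convert('RGB')
        for label_id, score in engine.ClassifyWithImage(img, top_k=1):
            print(label_id, score)
        stream.seek(0)
        stream.truncate()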

We also have samples for classification and object detection that instead use the GStreamer API to get camera images, which is compatible with the Raspberry Pi (with Pi Camera) and other Linux systems (with USB camera). You can get the GStreamer samples from GitHub.

To create your own classification model, read the tutorial about how to Retrain an image classification model.