The Edge TPU Python library (the
edgetpu module) makes it easy to perform an inference with
TensorFlow Lite models on an Edge TPU device. It provides simple APIs that perform image
classification and object detection, plus on-device transfer-learning with either weight imprinting
or backpropagation. You can use all these features without using any TensorFlow APIs—all you need is
a compiled TensorFlow Lite model and the Edge TPU Python library.
However, if you prefer to use the TensorFlow Lite Python API to perform inference, you can still benefit from acceleration on the Edge TPU—to learn how, instead read Run inference with TensorFlow Lite in Python.
If you just want to see some code, check out the Examples page.
Install the library and examples
To get started, you need to install the latest Edge TPU Python library (the edgetpu module).
If you're using the Dev Board or SoM, this library is included in the Mendel system image (just be sure you've updated to the latest software).
If you're using an accessory device such as the USB Accelerator, you can install the Edge TPU Python library as follows:
echo "deb https://packages.cloud.google.com/apt coral-edgetpu-stable main" | sudo tee /etc/apt/sources.list.d/coral-edgetpu.list
curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
sudo apt-get update
sudo apt-get install python3-edgetpu
And download the example code:
sudo apt-get install edgetpu-examples
The examples are saved at
Edge TPU API overview
Key APIs in the
edgetpu module that perform inferencing are the following:
ClassificationEngine: Performs image classification. Create an instance by specifying a model, then pass an image (such as a JPEG) to ClassifyWithImage(), which returns a list of labels and scores.
DetectionEngine: Performs object detection. Create an instance by specifying a model, then pass an image (such as a JPEG) to DetectWithImage(), which returns a list of DetectionCandidate objects, each containing a label, a score, and the coordinates of the object.
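As a rough sketch of how both engines are used (the model, labels, and image filenames below are placeholders, not files shipped with the library, and the inference portion only runs where the edgetpu library and a device are available):

```python
# A minimal sketch of both inference engines. The model, labels, and image
# filenames are placeholders -- substitute your own compiled .tflite files.
def load_labels(path):
    """Parse a labels file of "<id> <name>" lines into an {id: name} dict."""
    labels = {}
    with open(path) as f:
        for line in f:
            class_id, name = line.strip().split(maxsplit=1)
            labels[int(class_id)] = name
    return labels

try:
    from PIL import Image
    from edgetpu.classification.engine import ClassificationEngine
    from edgetpu.detection.engine import DetectionEngine
    HAVE_EDGETPU = True
except ImportError:
    HAVE_EDGETPU = False  # Requires the edgetpu library and an Edge TPU device.

if HAVE_EDGETPU:
    labels = load_labels('imagenet_labels.txt')
    classifier = ClassificationEngine('mobilenet_v2_edgetpu.tflite')
    image = Image.open('photo.jpg')

    # ClassifyWithImage() returns (label_id, confidence) pairs, best match first.
    for label_id, score in classifier.ClassifyWithImage(image, top_k=3):
        print(labels[label_id], score)

    detector = DetectionEngine('ssd_mobilenet_coco_edgetpu.tflite')
    # DetectWithImage() returns DetectionCandidate objects, each with a
    # label_id, a score, and a bounding_box of corner coordinates.
    for candidate in detector.DetectWithImage(image, threshold=0.4, top_k=10):
        print(candidate.label_id, candidate.score, candidate.bounding_box)
```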
Both engines are subclasses of BasicEngine, which you can use to perform other types of inferencing. They both also support all image formats supported by Pillow (including JPEG, PNG, and BMP), except that files must be in RGB color space (no transparency).
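For example, you can strip transparency from an RGBA image with Pillow before handing it to an engine (a sketch using Pillow only; no Edge TPU required):

```python
from PIL import Image

# Create an image with an alpha channel (stands in for a transparent PNG).
rgba = Image.new('RGBA', (64, 64), (255, 0, 0, 128))

# The engines reject images with transparency, so drop the alpha channel
# by converting to RGB before calling ClassifyWithImage()/DetectWithImage().
rgb = rgba.convert('RGB')
print(rgb.mode)  # RGB
```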
Additionally, we've included two APIs that perform on-device transfer learning (for classification models only):
ImprintingEngine: This implements a transfer-learning technique called weight imprinting that does not require backward propagation. It allows you to teach the model new classifications with very small sample sizes (literally just a few images). For more information, read Retrain an image classification model on-device.
SoftmaxRegression: This offers an abbreviated version of traditional backpropagation—it updates only the fully-connected layer at the end of the graph with new weights. It requires large datasets for training, but is still very fast and may result in more accurate models when the dataset has high intra-class variance. For more information, read Retrain an image classification model on-device.
The Python library takes care of all the low-level Edge TPU configuration for you. Even if you connect multiple Edge TPUs, the library automatically delegates separate models to execute on separate Edge TPUs. For more information about using multiple models, read Run multiple models with multiple Edge TPUs.
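To sketch what that looks like (assuming two compiled models and two attached Edge TPUs; the model filenames are placeholders, and the engine construction only runs where the edgetpu library is installed):

```python
# Sketch: load two models into two engines. With two Edge TPUs attached,
# the library assigns each model to a separate device automatically.
try:
    from edgetpu.classification.engine import ClassificationEngine
    from edgetpu.detection.engine import DetectionEngine
    HAVE_EDGETPU = True
except ImportError:
    HAVE_EDGETPU = False  # Requires the edgetpu library and attached devices.

if HAVE_EDGETPU:
    # Placeholder model filenames -- substitute your own compiled models.
    classifier = ClassificationEngine('mobilenet_v2_edgetpu.tflite')
    detector = DetectionEngine('ssd_mobilenet_coco_edgetpu.tflite')
    # Each engine now runs its model on its own Edge TPU; no manual
    # device selection is needed.
```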
For example code using the Edge TPU Python API, see our Examples page.