Get started with the USB Accelerator

The Coral USB Accelerator is a USB device that provides an Edge TPU as a coprocessor for your computer. It accelerates inferencing for your machine learning models when attached to a Linux host computer. This page is your guide to getting started.

All you need to do is download and install the Edge TPU runtime and the TensorFlow Lite library on the computer where you'll connect the USB Accelerator. Then we'll show you how to perform image classification with an example app.

If you want to learn more about the hardware, see the USB Accelerator datasheet.

Requirements

To get started, you need a Linux computer with the following specs:

  • x86-64 or ARM64 system architecture
  • One available USB port
  • Debian 6.0 or higher, or any derivative thereof (such as Ubuntu 10.04 or higher)
  • Python 3.5 or higher

This means Raspberry Pi is supported; however, we have tested only the Raspberry Pi 3 Model B+ and the Raspberry Pi 4.
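
If you're not sure whether your system's Python meets the version requirement, here's a quick check you can run with python3 (a minimal sketch; running python3 --version from a terminal tells you the same thing):

    import sys

    # The steps below require Python 3.5 or higher
    assert sys.version_info >= (3, 5), 'Python 3.5 or higher is required'
    print(sys.version)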

1. Install the Edge TPU runtime

The Edge TPU runtime is required to communicate with the Edge TPU. You can install it on your host computer from a command line as follows.

First add our Debian package repository to your system:

echo "deb https://packages.cloud.google.com/apt coral-edgetpu-stable main" | sudo tee /etc/apt/sources.list.d/coral-edgetpu.list

curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -

sudo apt-get update

Then install the Edge TPU runtime:

sudo apt-get install libedgetpu1-std

Now connect the USB Accelerator to your computer using the provided USB 3.0 cable. If you already plugged it in, remove it and plug it back in so the newly installed udev rule can take effect.

Note: For the best performance, connect the USB Accelerator to a USB 3.0 port (if available).

Install with maximum operating frequency (optional)

The above command installs the standard Edge TPU runtime, which operates the device at the default clock frequency. You can instead install a runtime version that operates at the maximum clock frequency (2x the default). This increases the inferencing speed but also increases power consumption and causes the USB Accelerator to become very hot.

If you're not certain your application requires increased performance, you should use the default operating frequency. Otherwise, you can install the maximum frequency runtime as follows:

sudo apt-get install libedgetpu1-max

You cannot have both versions of the runtime installed at the same time, but you can switch by simply installing the alternate runtime as shown above.

Caution: When operating the device using the maximum clock frequency, the metal on the USB Accelerator can become very hot to the touch. This might cause burn injuries. To avoid injury, either keep the device out of reach when operating it at maximum frequency, or use the default clock frequency.

2. Install the TensorFlow Lite library

There are several ways you can install TensorFlow's APIs, but to get started with Python, the easiest option is to install the tflite_runtime package. This package provides the bare minimum code required to run an inference with Python (primarily, the Interpreter API), thus saving you a lot of disk space.

To install it, follow the TensorFlow Lite Python quickstart, and then return to this page after you run the pip3 install command.
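
Before moving on, you can confirm that Python can load the Edge TPU delegate, which exercises both the runtime you installed in step 1 and the tflite_runtime package. This is a minimal sanity check, assuming a Linux host (where the runtime library is named libedgetpu.so.1):

    from tflite_runtime.interpreter import load_delegate

    # Raises ValueError if the Edge TPU runtime can't be loaded or the
    # device isn't attached
    delegate = load_delegate('libedgetpu.so.1')
    print('Edge TPU delegate loaded successfully')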

3. Run a model using the TensorFlow Lite API

Now you're ready to run an inference on the Edge TPU. Follow these steps to perform image classification with our example code and model:

  1. Download the example code from GitHub:

    mkdir coral && cd coral
    
    git clone https://github.com/google-coral/tflite.git
  2. Download the bird classifier model, labels file, and a bird photo:

    cd tflite/python/examples/classification
    
    bash install_requirements.sh
  3. Run the image classifier with the bird photo (shown in figure 1):

    python3 classify_image.py \
      --model models/mobilenet_v2_1.0_224_inat_bird_quant_edgetpu.tflite \
      --labels models/inat_bird_labels.txt \
      --input images/parrot.jpg
    
Figure 1. parrot.jpg

You should see results like this:

INFO: Initialized TensorFlow Lite runtime.
----INFERENCE TIME----
Note: The first inference on Edge TPU is slow because it includes loading the model into Edge TPU memory.
11.8ms
3.0ms
2.8ms
2.9ms
2.9ms
-------RESULTS--------
Ara macao (Scarlet Macaw): 0.76562

Congrats! You just performed an inference on the Edge TPU using TensorFlow Lite.

To demonstrate varying inference speeds, the example repeats the same inference five times. It prints the time to perform each inference and the top classification (the label ID/name and the confidence score, from 0 to 1.0). Your inference speeds might differ based on your host system and whether you're using a USB 3.0 connection.

The classify_image.py example above uses the TensorFlow Lite Python API. To learn more about how it works, take a look at the classify_image.py source code and read about how to run inference with TensorFlow Lite.
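
If you want the gist without the example's argument parsing and helper functions, here's a minimal sketch of the same flow using the tflite_runtime API. It assumes you run it from the classification example directory (so the model and image paths below resolve) and that you have Pillow and NumPy installed:

    import time

    import numpy as np
    from PIL import Image
    from tflite_runtime.interpreter import Interpreter, load_delegate

    # Create an interpreter that dispatches the compiled ops to the Edge TPU
    interpreter = Interpreter(
        model_path='models/mobilenet_v2_1.0_224_inat_bird_quant_edgetpu.tflite',
        experimental_delegates=[load_delegate('libedgetpu.so.1')])
    interpreter.allocate_tensors()

    # Resize the photo to the model's expected input shape and load it
    # into the input tensor
    input_details = interpreter.get_input_details()[0]
    _, height, width, _ = input_details['shape']
    image = Image.open('images/parrot.jpg').convert('RGB').resize((width, height))
    interpreter.set_tensor(input_details['index'],
                           np.expand_dims(np.asarray(image, dtype=np.uint8), 0))

    # Repeat the inference; the first run is slower because it loads the
    # model into the Edge TPU's memory
    for _ in range(5):
        start = time.perf_counter()
        interpreter.invoke()
        print('%.1fms' % ((time.perf_counter() - start) * 1000))

    # The model is quantized, so dequantize the scores to the 0 to 1.0 range;
    # classify_image.py maps the label ID to a name via the labels file
    output_details = interpreter.get_output_details()[0]
    scores = interpreter.get_tensor(output_details['index'])[0]
    scale, zero_point = output_details['quantization']
    top = int(np.argmax(scores))
    print('label id %d: %.5f' % (top, scale * (scores[top] - zero_point)))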

As an alternative to the TensorFlow Lite API used above, you can use the Edge TPU Python API, which provides high-level APIs that perform inference with image classification and object detection models in just a few lines of code. For an example, try the other version of classify_image.py, which uses the Edge TPU API.
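
For a sense of how compact that is, here's a hedged sketch using the Edge TPU Python API (it assumes the edgetpu Python package is installed; check the Edge TPU API reference for the exact method signatures):

    from PIL import Image
    from edgetpu.classification.engine import ClassificationEngine

    # The engine handles model loading, input preprocessing, and Edge TPU
    # dispatch internally
    engine = ClassificationEngine(
        'models/mobilenet_v2_1.0_224_inat_bird_quant_edgetpu.tflite')
    for label_id, score in engine.classify_with_image(
            Image.open('images/parrot.jpg'), top_k=1):
        print(label_id, score)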

You can also run inference using C++ and TensorFlow Lite.

Next steps

To run some other types of neural networks, check out our example projects, including examples that perform real-time object detection, pose estimation, keyphrase detection, on-device transfer learning, and more.

To create your own model that's compatible with the Edge TPU, read TensorFlow Models on the Edge TPU.