Connect a camera to the Dev Board

To perform real-time inferencing with a vision model, you can connect either the Coral Camera or a USB camera to the Dev Board.

Once you connect a camera, try the demo scripts below.

Connect the Coral Camera

The Coral Camera is designed specifically for the Dev Board and connects to the CSI connector on the bottom of the board.

Connect the Coral Camera to the Dev Board as follows:

  1. Make sure the board is powered off and unplugged.
  2. On the bottom of the Dev Board, locate the CSI "Camera Connector" and flip the small black latch so it's facing upward, as shown in figure 1.

    Figure 1. The Dev Board's camera connector with the latch open
  3. Slide the flex cable into the connector with the contact pins facing toward the board (so the blue strip faces away from the board), as shown in figure 2.

  4. Close the black latch.

    Figure 2. The camera cable inserted and the latch closed
  5. Likewise, connect the other end of the flex cable to the matching connector on the camera module.

Power on the board and try the demo scripts below.
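
Once the board boots, you can optionally verify that the camera was detected by listing the video devices (the Coral Camera typically appears as /dev/video0, though device names can vary):

v4l2-ctl --list-devices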

Connect a USB camera

Any USB camera that's compatible with the USB Video Class (UVC) standard should be immediately detected by the Dev Board.

Just plug the camera into the board's USB-A port. (It's okay if the board is already powered on.)
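
If you want to confirm the board detected the camera, you can check the kernel log for the UVC driver (a quick sanity check; the exact output varies by camera, and dmesg may require sudo on some images):

# Look for uvcvideo messages confirming the camera was registered
dmesg | grep -i uvc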

Then enter the following command to list the device's supported video formats:

v4l2-ctl --list-formats-ext --device /dev/video1

You should see a long list of results that looks something like this:

ioctl: VIDIOC_ENUM_FMT
    Index       : 0
    Type        : Video Capture
    Pixel Format: 'YUYV'
    Name        : YUYV 4:2:2
        Size: Discrete 640x480
            Interval: Discrete 0.033s (30.000 fps)
            Interval: Discrete 0.042s (24.000 fps)
            Interval: Discrete 0.050s (20.000 fps)
            Interval: Discrete 0.067s (15.000 fps)
            Interval: Discrete 0.100s (10.000 fps)
            Interval: Discrete 0.133s (7.500 fps)
            Interval: Discrete 0.200s (5.000 fps)

Take note of the Pixel Format, Size, and FPS values (each interval is just the reciprocal of the frame rate, so 0.033s corresponds to 30 fps). You'll need to pass these values to the demo commands below, though the default values shown in those commands should work for most cameras.

Note: Be sure that your list includes Pixel Format: 'YUYV'. Currently, YUYV is the only supported format. The commands below refer to this format as YUY2, which is just a different name for the same thing.
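
For example, to quickly confirm the YUYV modes your camera supports, you can filter the full format listing (an optional convenience, not a required step):

v4l2-ctl --list-formats-ext --device /dev/video1 | grep -E "YUYV|Size|fps"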

Run a demo with the camera

The Mendel system image on the Dev Board includes two demos that perform real-time image classification and object detection with the Edge TPU API.

Before you run either of them, first set this environment variable:

export DEMO_FILES="$HOME/demo_files"

And download the following models (be sure you're connected to the internet):

# The object classification model and labels file
wget -P ${DEMO_FILES}/ https://dl.google.com/coral/canned_models/mobilenet_v2_1.0_224_quant_edgetpu.tflite

wget -P ${DEMO_FILES}/ https://dl.google.com/coral/canned_models/imagenet_labels.txt

# The face detection model (does not require a labels file)
wget -P ${DEMO_FILES}/ https://dl.google.com/coral/canned_models/mobilenet_ssd_v2_face_quant_postprocess_edgetpu.tflite
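
Before continuing, you can verify the downloads by listing the directory; you should see the two .tflite model files and the labels file:

ls -l ${DEMO_FILES}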

For each demo, we've provided two ways to see the video and inference results:

Using an HDMI monitor

The following demos require that you have a monitor connected to the HDMI port on the Dev Board so you can see the video.

Note: By default, the Dev Board's video output is locked to 1920x1080, so your monitor must support this resolution or nothing will appear. If your monitor does not support 1920x1080, you can change the default video output.

Run the object classification model with a monitor

This demo can recognize 1,000 different types of objects shown to the camera.

If you're using the Coral Camera:

edgetpu_classify \
--model ${DEMO_FILES}/mobilenet_v2_1.0_224_quant_edgetpu.tflite \
--labels ${DEMO_FILES}/imagenet_labels.txt

If you're using a USB camera:

edgetpu_classify \
--source /dev/video1:YUY2:800x600:24/1 \
--model ${DEMO_FILES}/mobilenet_v2_1.0_224_quant_edgetpu.tflite \
--labels ${DEMO_FILES}/imagenet_labels.txt

In the --source argument (required for the USB camera only), you must specify 4 parameters, using the values printed during the USB camera setup above (an example follows this list):

  • /dev/video1 is the device file. Yours should be the same if it's the only attached camera.
  • YUY2 is the only supported pixel format (same as YUYV).
  • 800x600 is the image resolution. This must match one of the resolutions listed for your camera.
  • 24/1 is the framerate. It must also match one of the listed FPS values for the given format.
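
For example, if your camera instead lists a 640x480 mode at 30 fps, you'd adjust the source string accordingly (hypothetical values; substitute whatever your own camera reported):

--source /dev/video1:YUY2:640x480:30/1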

Run the face detection model with a monitor

This demo identifies the location of faces.

If you're using the Coral Camera:

edgetpu_detect \
--model ${DEMO_FILES}/mobilenet_ssd_v2_face_quant_postprocess_edgetpu.tflite

If you're using a USB camera:

edgetpu_detect \
--source /dev/video1:YUY2:800x600:24/1  \
--model ${DEMO_FILES}/mobilenet_ssd_v2_face_quant_postprocess_edgetpu.tflite

See the previous section for details about the --source arguments.

Using a streaming server

These demos require that your Dev Board be network-accessible from another computer (such as when connected to the board shell via MDT) so you can see the camera output in a web browser.

Note: We recommend using Chrome to view the camera streams. Other browsers might not show the image overlays.

Run the object classification model with a streaming server

This demo can recognize 1,000 different types of objects shown to the camera.

If you're using the Coral Camera:

edgetpu_classify_server \
--model ${DEMO_FILES}/mobilenet_v2_1.0_224_quant_edgetpu.tflite \
--labels ${DEMO_FILES}/imagenet_labels.txt

If you're using a USB camera:

edgetpu_classify_server \
--source /dev/video1:YUY2:800x600:24/1  \
--model ${DEMO_FILES}/mobilenet_v2_1.0_224_quant_edgetpu.tflite \
--labels ${DEMO_FILES}/imagenet_labels.txt

The --source argument (for the USB camera only) takes the same 4 parameters described above for the object classification demo with a monitor; see that section for details.

With either camera type, you should see the following message:

INFO:edgetpuvision.streaming.server:Listening on ports tcp: 4665, web: 4664, annexb: 4666

This means your Dev Board is now hosting a streaming server. From any computer that can access the board, you can view the camera stream at http://<board_ip_address>:4664/. For example, if you're connected to the board shell over USB, then go to http://192.168.100.2:4664/.
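
If you don't know your board's IP address, and you've set up the Mendel Development Tool (MDT) on your host computer, you can list connected boards and their addresses from the host:

mdt devices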

Run the face detection model with a streaming server

This demo identifies the location of faces.

If you're using the Coral Camera:

edgetpu_detect_server \
--model ${DEMO_FILES}/mobilenet_ssd_v2_face_quant_postprocess_edgetpu.tflite

If you're using a USB camera:

edgetpu_detect_server \
--source /dev/video1:YUY2:800x600:24/1  \
--model ${DEMO_FILES}/mobilenet_ssd_v2_face_quant_postprocess_edgetpu.tflite

See the previous section for details about the --source arguments.

With either camera type, you should again see the streaming server startup message, and you can view the camera stream at http://<board_ip_address>:4664/, exactly as described for the classification demo above.

Note: We have not yet released an API that performs inferencing directly from a camera. So for now, we've shared code samples on GitHub that perform image classification and object detection with a camera using the GStreamer API.
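
For reference, here's a minimal sketch for fetching those samples; the repository location is our assumption at the time of writing, so check the google-coral GitHub organization if it has moved:

# Repository location assumed; verify under github.com/google-coral
git clone https://github.com/google-coral/examples-camera.git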