Mini PCIe / M.2 Accelerator
Integrate the Edge TPU into legacy and new systems using a Mini PCIe or M.2 interface.
Performs high-speed ML inferencing
The on-board Edge TPU coprocessor is capable of performing 4 trillion operations per second (4 TOPS), using 0.5 watts for each TOPS (2 TOPS per watt). For example, it can execute state-of-the-art mobile vision models such as MobileNet v2 at 100+ FPS, in a power-efficient manner. See more performance benchmarks.
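As a quick sanity check on the figures above (4 TOPS at 0.5 W per TOPS), the implied total power draw and efficiency work out as follows; this is simple arithmetic on the numbers quoted in the text, not an additional specification:

```python
def total_power_watts(tops: float, watts_per_tops: float) -> float:
    """Total power implied by a throughput and a per-TOPS power cost."""
    return tops * watts_per_tops

def tops_per_watt(watts_per_tops: float) -> float:
    """Efficiency is the reciprocal of the per-TOPS power cost."""
    return 1.0 / watts_per_tops

# Figures from the text: 4 TOPS, 0.5 W per TOPS.
print(total_power_watts(4.0, 0.5))  # 2.0 W total
print(tops_per_watt(0.5))           # 2.0 TOPS per watt
```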
Works with Debian Linux
Integrates with any Debian-based Linux system that offers either a half-size Mini PCIe slot or an M.2 slot (A/E key).
Supports TensorFlow Lite
No need to build models from the ground up. TensorFlow Lite models can be compiled to run on the Edge TPU.
Supports AutoML Vision Edge
Easily build and deploy fast, high-accuracy custom image classification models to your device with AutoML Vision Edge.
| ML accelerator | Google Edge TPU coprocessor |
|----------------|-----------------------------|
| Connector | Mini PCIe or M.2 (A/E key) |
| Dimensions | 30 mm × 27 mm (Mini PCIe); 22 mm × 30 mm (M.2) |