Edge TPU

Google’s purpose-built ASIC designed to run inference at the edge.

AI at the edge

AI is pervasive today, from consumer to enterprise applications. With the explosive growth of connected devices, combined with demands for privacy and confidentiality, low latency, and bandwidth efficiency, AI models trained in the cloud increasingly need to run at the edge. Edge TPU is Google's purpose-built ASIC designed to run AI at the edge. It delivers high performance in a small physical and power footprint, enabling the deployment of high-accuracy AI at the edge.

End-to-end AI infrastructure

Edge TPU complements Cloud TPU and Google Cloud services to provide an end-to-end, cloud-to-edge, hardware + software infrastructure for facilitating the deployment of customers' AI-based solutions.

High performance in a small physical and power footprint

Thanks to its performance, small footprint, and low power, Edge TPU enables the broad deployment of high-quality AI at the edge.

Co-design of AI hardware, software and algorithms

Edge TPU isn't just a hardware solution: it combines custom hardware, open software, and state-of-the-art AI algorithms to provide high-quality, easy-to-deploy AI solutions for the edge.

A broad range of applications

Edge TPU can be used for a growing number of industrial use cases, such as predictive maintenance, anomaly detection, machine vision, robotics, and voice recognition. It can be deployed in manufacturing, on-premises, healthcare, retail, smart spaces, transportation, and more.

An open, end-to-end infrastructure for deploying AI solutions

The Edge TPU allows you to deploy high-quality ML inferencing at the edge, using various prototyping and production products from Coral.

The Coral platform for ML at the edge augments Google's Cloud TPU and Cloud IoT to provide an end-to-end (cloud-to-edge, hardware + software) infrastructure to facilitate the deployment of customers' AI-based solutions. In addition to its open-source TensorFlow Lite programming environment, the Coral platform provides a complete developer toolkit so you can compile your own models or retrain several Google AI models for the Edge TPU, combining Google's expertise in both AI and hardware.

Edge TPU complements CPUs, GPUs, FPGAs, and other ASIC solutions for running AI at the edge.

                       Edge (devices/nodes, gateways, servers)  Google Cloud
Tasks                  ML inference                             ML training and inference
Software, services     Linux, Windows                           AI Platform, Kubernetes Engine, Compute Engine, Cloud IoT Core
ML frameworks          TensorFlow Lite, NN API                  TensorFlow, scikit-learn, XGBoost, Keras
Hardware accelerators  Edge TPU, GPU, CPU                       Cloud TPU, GPU, and CPU

Edge TPU features

This ASIC is the first step in a roadmap that leverages Google's AI expertise to track the rapid evolution of AI and reflect it in hardware.

Type                 Inference accelerator
Performance example  Edge TPU enables users to execute state-of-the-art mobile vision models such as MobileNet v2 at nearly 400 FPS, in a power-efficient manner. See model benchmarks.
Numerics             Int8
IO interface         PCIe, USB
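The Int8 row above means Edge TPU executes models whose weights and activations have been quantized to 8-bit integers. As an illustrative sketch (not Edge TPU tooling), the affine quantization scheme used by TensorFlow Lite maps a real value to an int8 code via real = scale × (q − zero_point); the scale and zero-point values below are made-up examples:

```python
# Sketch of affine int8 quantization (real_value = scale * (q - zero_point)).
# The scale and zero_point below are illustrative, not taken from a real model.

def quantize(real_value, scale, zero_point):
    """Map a float to an int8 code, clamping to the int8 range [-128, 127]."""
    q = round(real_value / scale) + zero_point
    return max(-128, min(127, q))

def dequantize(q, scale, zero_point):
    """Recover the approximate float represented by an int8 code."""
    return scale * (q - zero_point)

scale, zero_point = 0.5, 0
q = quantize(1.0, scale, zero_point)       # -> 2
approx = dequantize(q, scale, zero_point)  # -> 1.0
print(q, approx)
```

In practice, the TensorFlow Lite converter chooses scale and zero-point per tensor (or per channel) during quantization; the clamp to [-128, 127] is what bounds values to the 8-bit integer range the accelerator operates on.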

Get started

Build with Edge TPU using the development board, which includes an Edge TPU system-on-module (SoM) and a carrier board.

Learn about Edge TPU products from Coral
Cloud IoT Edge

Products listed on this page are in beta. For more information on our product launch stages, see the launch stages documentation.