TensorFlow
The TensorFlow image processing integration allows you to detect and recognize objects in a camera image using TensorFlow. The state of the entity is the number of objects detected, and recognized objects are listed in the summary
attribute along with quantity. The matches
attribute provides the confidence score
for recognition and the bounding box
of the object for each detection category.
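As an illustration, the attributes of a processed entity might look as follows (the values here are hypothetical, and the box coordinates are assumed to follow TensorFlow's normalized [y_min, x_min, y_max, x_max] convention):

```yaml
summary:
  person: 2
matches:
  person:
    - score: 97.5
      box: [0.21, 0.34, 0.81, 0.6]
    - score: 88.1
      box: [0.17, 0.72, 0.62, 0.98]
```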
This integration is only available on Home Assistant Core installation types. Unfortunately, it cannot be used with Home Assistant OS, Supervised or Container.
Prerequisites
The following packages must be installed on Debian before setting up the integration:
sudo apt-get install libatlas-base-dev libopenjp2-7 libtiff5
It is possible that Home Assistant is unable to install the Python TensorFlow bindings, as the Python wheel is not available for all platforms. If that is the case,
you’ll need to install them manually using: pip install tensorflow==2.2.0
See the official install guide.
Furthermore, the official Python TensorFlow wheels by Google require your CPU to support the avx
extension.
If your CPU lacks this capability, Home Assistant will crash when using TensorFlow, without any error message.
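On Linux you can check in advance whether your CPU advertises the avx flag, for example (a quick sketch; /proc/cpuinfo is Linux-specific):

```shell
# Print whether the CPU flag list in /proc/cpuinfo includes avx.
if grep -q '\bavx\b' /proc/cpuinfo; then
  echo "avx supported"
else
  echo "avx missing: the official TensorFlow wheels will not run"
fi
```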
Preparation
This integration requires files to be downloaded, compiled on your computer, and added to the Home Assistant configuration directory. These steps can be performed by cloning this repository
Create the following folder structure in your configuration directory.
|- {config_dir}
  |- tensorflow/
    |- models/
Follow these steps (Linux) to compile the object detection library.
# Clone tensorflow/models
git clone https://github.com/tensorflow/models.git
# Compile Protobuf (apt-get install protobuf-compiler)
cd models/research
protoc object_detection/protos/*.proto --python_out=.
# Copy object_detection to {config_dir}
cp -r object_detection {config_dir}/tensorflow
Your final folder structure should look as follows:
|- {config_dir}
  |- tensorflow/
    |- models/
    |- object_detection/
      |- ...
Model Selection
Lastly, it is time to pick a model. It is recommended to start with one of the COCO models available in the Detection Model Zoo.
The trade-off between the different models is accuracy versus speed. Users with a decent CPU should start with one of the EfficientDet
models. If you are running on an ARM device like a Raspberry Pi, start with the SSD MobileNet v2 320x320
model.
Whichever model you choose, download it and extract it into the tensorflow/models
folder in your configuration directory.
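For example, a model from the TF2 Detection Model Zoo can be fetched and unpacked from the command line. This is a sketch: the URL follows the zoo's naming scheme and its 20200711 release date, and the destination path is an assumption you should adjust to your own configuration directory.

```shell
MODEL=efficientdet_d0_coco17_tpu-32
URL="http://download.tensorflow.org/models/object_detection/tf2/20200711/${MODEL}.tar.gz"
DEST=/config/tensorflow/models   # assumption: adjust to your configuration directory
echo "Fetching ${URL} into ${DEST}"
# Uncomment to actually download and extract:
# wget -qO- "${URL}" | tar -xz -C "${DEST}"
```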
Configuration
To enable this integration in your installation, add the following to your configuration.yaml
file.
After changing the configuration.yaml
file, restart Home Assistant to apply the changes.
# Example configuration.yaml entry
image_processing:
  - platform: tensorflow
    source:
      - entity_id: camera.local_file
    model:
      graph: /config/tensorflow/models/efficientdet_d0_coco17_tpu-32/
Configuration Variables
source
  The list of image sources.

file_out
  A template for the integration to save processed images including bounding boxes. camera_entity
  is available as the entity_id
  string of the triggered source camera.

model
  Information about the TensorFlow model.

  labels
    Full path to a *label_map.pbtxt
    file. Default: tensorflow/object_detection/data/mscoco_label_map.pbtxt

  label_offset
    Offset for mapping label ID to a name (only use for custom models).

  model_dir
    Full path to the TensorFlow models directory. Default: /tensorflow
    inside your configuration directory.

  area
    Custom detection area. Only objects fully inside this box will be reported. Top of the image is 0, bottom is 1; the same applies left to right.

  categories
    A list of categories to detect. categories
    can also be defined as a dictionary providing an area
    for each category, as seen in the advanced configuration below:
# Example advanced configuration.yaml entry
image_processing:
  - platform: tensorflow
    source:
      - entity_id: camera.driveway
      - entity_id: camera.backyard
    file_out:
      - "/tmp/{{ camera_entity.split('.')[1] }}_latest.jpg"
      - "/tmp/{{ camera_entity.split('.')[1] }}_{{ now().strftime('%Y%m%d_%H%M%S') }}.jpg"
    model:
      graph: /config/tensorflow/models/efficientdet_d0_coco17_tpu-32/
      categories:
        - category: person
          area:
            # Exclude top 10% of image
            top: 0.1
            # Exclude right 15% of image
            right: 0.85
        - car
        - truck
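The "fully inside the area" rule can be sketched as follows. This is an illustrative shell function, not the integration's actual code; the arguments are the normalized top/left/bottom/right coordinates described above.

```shell
# Report a detection only if its bounding box lies fully inside the area.
# Arguments: box top/left/bottom/right, then area top/left/bottom/right.
box_in_area() {
  awk -v t="$1" -v l="$2" -v b="$3" -v r="$4" \
      -v at="$5" -v al="$6" -v ab="$7" -v ar="$8" \
      'BEGIN { print ((t >= at && l >= al && b <= ab && r <= ar) ? "reported" : "excluded") }'
}

# A person detected in the top-right corner is dropped by the
# example area above (top: 0.1, right: 0.85):
box_in_area 0.02 0.70 0.30 0.95  0.1 0 1 0.85   # prints "excluded"
```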
Optimizing resources
Image processing components process the image from a camera at a fixed interval given by the scan_interval
. This leads to excessive processing if the image on the camera hasn’t changed, as the default scan_interval
is 10 seconds. You can override this by adding scan_interval: 10000
to your configuration (setting the interval to 10,000 seconds), and then calling the image_processing.scan
action when you actually want to perform processing.
# Example advanced configuration.yaml entry
image_processing:
  - platform: tensorflow
    scan_interval: 10000
    source:
      - entity_id: camera.driveway
      - entity_id: camera.backyard
# Example advanced automations.yaml entry
- alias: "TensorFlow scanning"
  triggers:
    - trigger: state
      entity_id:
        - binary_sensor.driveway
  actions:
    - action: image_processing.scan
      target:
        entity_id: camera.driveway