If you know anything about me then you'll probably know of my love for home automation. I use Home Assistant to manage the various automations, devices and integrations. Home Assistant is, more or less, an aggregator of many different types of sensors, devices, APIs, etc. Therefore its functionality is directly enhanced and expanded by adding new integrations and devices to your home/network. One of the many integrations that is compatible with Home Assistant is Frigate NVR.
In this post we will explore how to break the YAML configuration out into multiple files for easier maintenance, as well as how to get the configuration into GitHub and use GitHub Actions to deploy it via Rancher (Kubernetes). This removes the need to manually upload edited configurations to Frigate's container volume, and it gives us a change history.
What is an NVR?
NVR stands for "Network Video Recorder". An NVR is simply a system that manages cameras and can usually control recording, snapshots, alerts, etc. The NVR has knowledge of the camera state and live feed.
Frigate is a self-hosted NVR designed for AI object detection. This is a really cool project and offers a very wide array of automation ideas within Home Assistant.
Configuring Frigate
If you've used Frigate before, you'll know that it is configured with a YAML file. This is awesome because YAML is well documented and easy to work with. This starts to become a headache (at least for me) when the configuration grows to be very large. Now you end up with this massive YAML file that is difficult to navigate, difficult to debug, and overall not very enjoyable to manage.
Unfortunately Frigate doesn't offer any other type of configuration (such as a database-backed configuration). Honestly though, this is okay. YAML is powerful and simple, and more often than not it "just works" (unless it's blatantly misconfigured). So the question becomes: "how do we simplify the configuration of this application?"
Breaking down the config
The first thing we need to do is "break down" the configuration into discrete parts. This informs how we build the pipeline (i.e. which pieces of the config need to be built) and simplifies editing.
I identified 4 main components within my configuration:
- MQTT - the configuration to integrate with Home Assistant
- Detectors - defines the detection hardware (CPU/GPU/TPU)
- Objects - defines objects and settings for detection
- Cameras - defines each camera's configuration
In my case, cameras is definitely the largest config. Objects could get to be larger as I add different detection settings, but for the most part the other three will remain unchanged.
I also want to omit sensitive information from my configuration since it will be pushed to GitHub. Although the repository for my specific configuration is not public, it is still a good idea to keep sensitive data out of action logs and commit history. For that reason, I came up with a very simple "variable replacement" syntax: "{{ ENV_VAR }}". I was then able to break out the MQTT config, for example:
mqtt:
  host: "{{ MQTT_HOST }}"
  user: "{{ MQTT_USER }}"
  password: "{{ MQTT_PASSWORD }}"
Here is an example of detectors.yml, which has no variables:
detectors:
  cpu1:
    type: cpu
Break out each piece of the configuration, except for cameras, into its own separate YAML file:
- detectors.yml
- mqtt.yml
- objects.yml
Configuring Cameras
The bulk of your configuration will be around cameras. Once your list of cameras gets to be sufficiently large, it can be a pain to add a new camera quickly. Additionally, many of my cameras are identical and therefore share the same configuration. Copying and pasting this for additional cameras just clutters up the configuration, in my opinion.
We will break out each camera configuration into its own YAML file. Additionally, we will add "camera templates" which will control the bulk of the configuration for a camera.
First, let's talk about the camera config syntax. In the previous config breakouts (objects, mqtt, and detectors), the syntax is identical to Frigate's documentation. For cameras, however, the syntax changes.
camera:
  name: amcrest1 # the camera name
  template: some_camera # the template name
  rtsp_url: "{{ RTSP_URL }}" # the camera RTSP url added via env var
  frigate:
    # additional frigate-syntax config
Every camera YAML must be keyed with camera and contain the following:
- name - the name of the camera that will be assigned via Frigate
- template (optional) - the template the camera uses. Leave blank to manually configure.
- rtsp_url (optional) - the RTSP URL that will be added to a camera template. Omit for manual configuration.
- frigate (optional) - additional configuration for the camera, specified in Frigate syntax.
For example if you are using a templated camera and want to add zones or motion masks:
camera:
  name: amcrest1 # the camera name
  template: some_camera # the template name
  rtsp_url: "{{ RTSP_URL }}" # the camera RTSP url added via env var
  frigate:
    zones:
      zone_1:
        coordinates: ...
    motion:
      mask:
        - ...
If the camera is being manually configured, then this YAML file should use Frigate syntax.
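For illustration, a manually configured camera file might look roughly like this. This is only a sketch: the options mirror the template example shown below, and the file/variable names are made up. The file is still keyed with camera and must include name, but the rest is plain Frigate syntax:

# cameras/manual_example.yml (hypothetical)
camera:
  name: manual_example
  ffmpeg:
    inputs:
      - path: "{{ MANUAL_EXAMPLE_RTSP }}"
        roles:
          - detect
          - record
  detect:
    width: 1920
    height: 1080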
Camera Templates
Camera templates are useful if you have many of the same model of camera, which I do. For example, I use several Amcrest IP5M-T1179EW cameras, so I will create a template called amcrestIP5M-T1179EW.yml:
camera:
  ffmpeg:
    inputs:
      - path:
        roles:
          - detect
          - record
  rtmp:
    enabled: False
  record:
    enabled: True
  snapshots:
    enabled: True
  detect:
    width: 1920
    height: 1080
You'll notice the path is empty. This is because it will be replaced with the rtsp_url config option from the camera config.
Building the Configuration
Building the configuration file is done using just two Python scripts. Using PyYAML we're able to quickly load in our templated configurations, combine them, interpolate environment variables, then output a built Frigate configuration.
Let's first look at a helper script called build_config.py. This script will interpolate environment variables for a given YAML dictionary:
import re
import os


def build(loaded_yaml):
    # interpolate env vars into every value of the loaded YAML dict
    built = {}
    for key in loaded_yaml:
        built[key] = __parse(loaded_yaml[key])
    return built


def __parse(value):
    # recurse into dicts and lists
    if isinstance(value, dict):
        rebuild = {}
        for key in value:
            rebuild[key] = __parse(value[key])
        return rebuild
    if isinstance(value, list):
        rebuild = []
        for element in value:
            rebuild.append(__parse(element))
        return rebuild
    if not isinstance(value, str):
        return value
    # replace each "{{ VAR }}" occurrence with the value from the host environment
    match = re.findall("{{ ([a-zA-Z0-9_]*) }}", value)
    if not match:
        return value
    for var in match:
        env = os.getenv(var)
        if not env:
            continue
        value = value.replace('{{ ' + var + ' }}', env)
    return value
This is a relatively simple algorithm: the YAML object is passed into build and each value is parsed. The __parse function checks whether the value is a dict or a list and recurses over __parse accordingly. Otherwise it attempts to replace each matched environment variable name with the value from the actual host environment. If no value exists in the host env, the original value is returned.
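As a quick illustration (a hypothetical standalone run, not part of the example repo), here is how build behaves when one variable is set and another is not:

import os
from lib import build_config

os.environ['MQTT_HOST'] = 'mqtt.local'  # pretend this was set by CI

config = {'mqtt': {'host': '{{ MQTT_HOST }}', 'user': '{{ MQTT_USER }}'}}
print(build_config.build(config))
# {'mqtt': {'host': 'mqtt.local', 'user': '{{ MQTT_USER }}'}} - unset vars pass through unchanged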
This allows us to use the variable syntax as specified above anywhere in YAML. The build_config script does most of the heavy lifting of creating a valid configuration. Next, we will use the compile.py script to perform the actual build process:
import yaml
import os
from lib import build_config
import copy
import json

MQTT_CONFIG = "./mqtt.yml"
DETECTORS_CONFIG = "./detectors.yml"
OBJECTS_CONFIG = "./objects.yml"
CAMERA_CONFIGS = "./cameras"
TEMPLATE_CONFIGS = "./templates"

output = {}

with open(MQTT_CONFIG, 'r') as mqtt_config_file:
    loaded = yaml.safe_load(mqtt_config_file)['mqtt']
    output['mqtt'] = build_config.build(loaded)
    mqtt_config_file.close()
print("✅ Loaded MQTT config")

with open(DETECTORS_CONFIG, 'r') as detectors_config_file:
    loaded = yaml.safe_load(detectors_config_file)['detectors']
    output['detectors'] = build_config.build(loaded)
    detectors_config_file.close()
print("✅ Loaded Detectors config")

with open(OBJECTS_CONFIG, 'r') as objects_config_file:
    loaded = yaml.safe_load(objects_config_file)['objects']
    output['objects'] = build_config.build(loaded)
    objects_config_file.close()
print("✅ Loaded Objects config")

# load camera templates keyed by filename (minus extension)
template_config_listing = os.listdir(TEMPLATE_CONFIGS)
templates = {}
print('Loading templates:')
for template_filename in template_config_listing:
    if not os.path.isfile(os.path.join(TEMPLATE_CONFIGS, template_filename)) or not (template_filename.endswith('.yml') or template_filename.endswith('.yaml')):
        continue
    with open(os.path.join(TEMPLATE_CONFIGS, template_filename)) as file:
        loaded = yaml.safe_load(file)['camera']
        file.close()
    template_name = template_filename.replace('.yml', '').replace('.yaml', '')
    print(f"✅ loaded camera template {template_name}")
    templates[template_name] = loaded

print('--- --- --- --- ---')
print('Loading cameras:')

# load cameras, applying a template when one is specified
camera_config_listing = os.listdir(CAMERA_CONFIGS)
cameras = {}
for camera_filename in camera_config_listing:
    if not os.path.isfile(os.path.join(CAMERA_CONFIGS, camera_filename)) or not (camera_filename.endswith('.yml') or camera_filename.endswith('.yaml')):
        continue
    with open(os.path.join(CAMERA_CONFIGS, camera_filename)) as file:
        loaded = yaml.safe_load(file)['camera']
        file.close()
    camera_name = loaded['name']
    if 'template' in loaded:
        print(f"Camera {camera_name} uses template -> {loaded['template']}")
        if not loaded['template'] in templates:
            print(f"⚠️ Template {loaded['template']} is not valid. Skipping camera.")
            continue
        template = copy.deepcopy(templates)[loaded['template']]
        template['ffmpeg']['inputs'][0]['path'] = loaded['rtsp_url']
        if 'frigate' in loaded:
            frigate_config = loaded['frigate']
            template.update(frigate_config)
        cameras[camera_name] = build_config.build(template)
    else:
        cameras[camera_name] = build_config.build(loaded)
    print(f"✅ Loaded camera {camera_name}")

output['cameras'] = cameras

with open('output', 'w+') as file:
    file.write(json.dumps(yaml.dump(output)))
    file.close()
This script builds the MQTT, detectors, and objects configs, then builds each camera configuration according to its template or manual configuration. The output dict holds the assembled configuration. You'll notice we're using json.dumps when dumping the output to a file. This is due to restrictions in Rancher's CLI - nothing more. We just need a valid JSON-formatted string in order to patch the config map resource on Rancher. More on this shortly.
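To illustrate (the values here are made up), the output file ends up containing the dumped YAML wrapped in a JSON string literal, with newlines escaped so it can be embedded directly in a JSON patch body:

import json
import yaml

# illustrative only - a tiny stand-in for the real `output` dict
print(json.dumps(yaml.dump({'mqtt': {'host': 'mqtt.local'}})))
# "mqtt:\n  host: mqtt.local\n"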
Deploying to Rancher
I use Rancher/Kubernetes for my in-home container management. Rancher offers a nice UI as well as a fully featured API for interacting with Rancher-specific resources, and its CLI exposes kubectl for interacting directly with Kubernetes resources.
I am storing my Frigate config.yaml as a Config Map resource. This config map is then mounted into the Frigate container at /config - where Frigate looks for its config YAML. Therefore the only thing that needs to be done to "deploy" the configuration is to patch the config map with the new YAML data.
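For reference, the mount looks roughly like this in the pod spec of the Frigate Deployment. This is only a sketch: the volume, config map, and image names are placeholders, not taken from my actual manifests:

containers:
  - name: frigate
    image: ghcr.io/blakeblackshear/frigate:stable # your Frigate image/tag
    volumeMounts:
      - name: frigate-config
        mountPath: /config
volumes:
  - name: frigate-config
    configMap:
      name: frigate-config # the ConfigMap we patch below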
Rancher CLI
These operations are all done with Rancher CLI. The complete example of this post is available on Github and you will notice there is a .github/scripts/rancher/install.sh
. This downloads and installs the CLI during Github actions. It will also login using the given env vars in the install.sh
script.
First, we need to generate the Rancher-compatible YAML output:
python3 compile.py # dumps data to `output` file
YAML=$(cat ./output) # reads the data into the $YAML var
Next, just patch the config map using kubectl:
./rancher kubectl --insecure-skip-tls-verify \
--namespace=<NAMESPACE> patch ConfigMap/<CONFIG MAP NAME> \
-o yaml --patch "{\"data\":{\"config.yaml\":$YAML}}"
Note: you may need --insecure-skip-tls-verify if you are using a self-signed SSL certificate.
Replace <NAMESPACE> with the namespace your config map is located in, and <CONFIG MAP NAME> with the name of your config map.
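Wired into a GitHub Actions workflow, the whole deploy might look something like this. This is only a sketch: the workflow file name, step names, and secret names are assumptions, not copied from the example repo:

# .github/workflows/deploy.yml (hypothetical)
on:
  push:
    branches: [main]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Install Rancher CLI
        run: ./.github/scripts/rancher/install.sh
        env:
          RANCHER_URL: ${{ secrets.RANCHER_URL }} # assumed secret names
          RANCHER_TOKEN: ${{ secrets.RANCHER_TOKEN }}
      - name: Build and deploy Frigate config
        env:
          MQTT_HOST: ${{ secrets.MQTT_HOST }}
          MQTT_USER: ${{ secrets.MQTT_USER }}
          MQTT_PASSWORD: ${{ secrets.MQTT_PASSWORD }}
          AMCREST1_RTSP: ${{ secrets.AMCREST1_RTSP }}
        run: |
          python3 -m pip install pyyaml # PyYAML is needed by compile.py
          python3 compile.py
          YAML=$(cat ./output)
          ./rancher kubectl --insecure-skip-tls-verify \
            --namespace=<NAMESPACE> patch ConfigMap/<CONFIG MAP NAME> \
            -o yaml --patch "{\"data\":{\"config.yaml\":$YAML}}"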
Bam, done! Just like that, your config.yaml is updated in the config map. You will need to restart the Frigate container to apply these changes.
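If Frigate runs as a Deployment, one way to do that is a rollout restart (this assumes a Deployment resource; adjust the name and namespace to match your setup):

./rancher kubectl --insecure-skip-tls-verify \
  --namespace=<NAMESPACE> rollout restart deployment/<FRIGATE DEPLOYMENT NAME>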
Practical look at configurations
Let's take a look at a more practical configuration setup. This is a sample of my own setup, obviously with several sensitive pieces of data changed.
# mqtt.yml
mqtt:
  host: "{{ MQTT_HOST }}"
  user: "{{ MQTT_USER }}"
  password: "{{ MQTT_PASSWORD }}"

# detectors.yml
detectors:
  cpu1:
    type: cpu

# objects.yml
objects:
  track:
    - person
    - dog
    - cat
    - car
  filters:
    person:
      min_area: 5000
      max_area: 100000
      min_score: 0.5
      threshold: 0.7

# templates/amcrestIP5M-T1179EW.yml
camera:
  ffmpeg:
    inputs:
      - path:
        roles:
          - detect
          - record
  rtmp:
    enabled: False
  record:
    enabled: True
  snapshots:
    enabled: True
  detect:
    width: 1920
    height: 1080

# cameras/amcrest1.yml
camera:
  name: amcrest1
  template: amcrestIP5M-T1179EW
  rtsp_url: "{{ AMCREST1_RTSP }}"
  frigate:
    zones:
      patio:
        coordinates: 997,596,1084,534,1181,523,1264,557,1346,566,1204,724,833,684
      back_door:
        coordinates: 1331,602,1503,491,1489,674,1405,673,1346,685,1231,675,1271,626
    motion:
      mask:
        - 587,202,732,206,802,245,942,231,915,428,482,445,488,241 # Tree line
In order to build this config, the following env vars must be present:
- MQTT_HOST
- MQTT_USER
- MQTT_PASSWORD
- AMCREST1_RTSP
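For a local test build, you could export placeholder values before running the compile script (the values shown here are made up):

export MQTT_HOST="mqtt.local"
export MQTT_USER="frigate"
export MQTT_PASSWORD="changeme"
export AMCREST1_RTSP="rtsp://user:pass@192.168.1.50:554/stream"
python3 compile.py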
To add a camera to this configuration, simply add a new file to the cameras/ directory. The file must have a .yml or .yaml extension.
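For example, a second camera of the same model could be added with a file like this (amcrest2 and its env var are hypothetical; remember to define AMCREST2_RTSP wherever the build runs):

# cameras/amcrest2.yml
camera:
  name: amcrest2
  template: amcrestIP5M-T1179EW
  rtsp_url: "{{ AMCREST2_RTSP }}"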
Conclusion
I have been using this setup for a couple of days now and have already added a camera with it. I really like being able to view the change history in GitHub, as well as the automatic deployment of the config map. The only thing this may be missing is the ability to automatically restart the Frigate container. That isn't difficult to add, so I may do it in the near future.
If you're interested in copying this project for your own Frigate setup, you can find an example project here: @nwilging/frigate-config-example.
Happy monitoring!