Compare commits

...

62 Commits

Author SHA1 Message Date
Sebastian Goscik
a8328fd09e Bump version: 0.10.7 → 0.11.0 2024-06-08 01:31:58 +01:00
Sebastian Goscik
28d241610b changelog 2024-06-08 01:31:21 +01:00
Sebastian Goscik
aa1335e73b Fix typos and add experimental downloader to README 2024-06-08 01:29:06 +01:00
Sebastian Goscik
9cb2ccf8b2 Update pyunifiprotect to point to my fork
This is done to pick up features that have not been merged into the upstream repo yet. It also allows for stability in the future.
2024-06-08 01:18:14 +01:00
Sebastian Goscik
30ea7de5c2 Add experimental downloader
This uses a new API to download events the way the web UI does: it first asks for a video to be prepared (on the Unifi Protect host) and then downloads it. This is potentially more stable than the existing downloader.
2024-06-06 00:41:42 +01:00
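For reference, the prepare-then-fetch flow this commit introduces looks roughly like the sketch below. The `prepare_camera_video`/`download_camera_video` calls and the `fileName` key are taken from the `downloader_experimental.py` file added later in this diff, and exist only in the pyunifiprotect fork this branch switches to.

```python
from datetime import datetime

from pyunifiprotect import ProtectApiClient


async def fetch_clip(protect: ProtectApiClient, camera_id: str,
                     start: datetime, end: datetime) -> bytes:
    # Step 1: ask the NVR to cut and stage the clip, the way the web UI does.
    prepared = await protect.prepare_camera_video(camera_id, start, end)
    # Step 2: download the staged file under the name the NVR assigned it.
    return await protect.download_camera_video(camera_id, prepared['fileName'])
```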
Sebastian Goscik
2dac2cee23 TEMP: Switch to fork of pyunifiprotect
In order to test new functionality from a PR, this commit temporarily changes the source of pyunifiprotect.
2024-06-06 00:41:42 +01:00
Sebastian Goscik
f4d992838a Fix permissions issue with ufp/sessions.json in docker container
The Python library `platformdirs` detects the user as root instead of the uid set to execute UPB. This workaround forces the session cache file to be placed in /config.
2024-06-06 00:41:20 +01:00
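A minimal sketch of why setting `XDG_CACHE_HOME=/config` (see the Dockerfile change below) achieves this, assuming `platformdirs` on Linux resolves the cache directory from that variable before falling back to `~/.cache`, and that the session cache lives under an `ufp` app directory as the commit message suggests:

```python
import os

from platformdirs import user_cache_dir

# With the override, the session cache lands on the writable /config volume
# regardless of which uid actually runs the process.
os.environ["XDG_CACHE_HOME"] = "/config"
print(user_cache_dir("ufp"))  # -> /config/ufp
```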
Sebastian Goscik
9fe4394ee4 bump pyunifiprotect to 6.0.1 2024-05-27 23:05:19 +01:00
Sebastian Goscik
e65d8dde6c Bump version: 0.10.6 → 0.10.7 2024-03-23 00:18:57 +00:00
Sebastian Goscik
90108edeb8 Force using pyunifiprotect >= 5.0.1 2024-03-23 00:18:49 +00:00
Sebastian Goscik
1194e957a5 Bump version: 0.10.5 → 0.10.6 2024-03-22 22:50:20 +00:00
Sebastian Goscik
65128b35dd changelog 2024-03-22 22:50:14 +00:00
mmolitor87
64bb353f67 Bump pyunifiprotect to support protect 3.0.22 (#133) 2024-03-22 22:47:54 +00:00
Adrian Keenan
558859dd72 Update docs for ignoring cameras (#134)
* update docs

* remove docker from log scanning notes
2024-03-21 23:09:09 +00:00
Sebastian Goscik
d3b40b443a Bump version: 0.10.4 → 0.10.5 2024-02-24 16:19:22 +00:00
Sebastian Goscik
4bfe9afc10 Bump pyunifiprotect 2024-02-24 16:19:11 +00:00
Sebastian Goscik
c69a3e365a Bump version: 0.10.3 → 0.10.4 2024-01-26 19:49:36 +00:00
Sebastian Goscik
ace6a09bba changelog 2024-01-26 19:49:32 +00:00
Sebastian Goscik
e3c00e3dfa Update pyunifiprotect version 2024-01-26 19:47:44 +00:00
Sebastian Goscik
5f7fad72d5 Bump version: 0.10.2 → 0.10.3 2023-12-07 19:59:13 +00:00
Sebastian Goscik
991998aa37 changelog 2023-12-07 19:59:10 +00:00
Sebastian Goscik
074f5b372c bump pyunifiprotect version 2023-12-07 19:57:21 +00:00
Sebastian Goscik
00aec23805 Bump version: 0.10.1 → 0.10.2 2023-11-21 00:20:46 +00:00
Sebastian Goscik
52e4ecd50d changelog 2023-11-21 00:20:35 +00:00
Sebastian Goscik
6b116ab93b Fixed issue where duplicate events were being downloaded
Previously unifi would only send one update which contained the end timestamp,
so it was sufficient to check if it existed in the new event data.
However, it is now possible to get update events after the end timestamp
has been set. With this change we now look for when the event change
data contains the end timestamp. So long as unifi does not change its
mind about when an event ends, this should solve the issue.
2023-11-21 00:18:36 +00:00
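The fix boils down to the one-line check shown in the event listener diff further down this page; in sketch form:

```python
def is_final_update(msg) -> bool:
    # Only treat the websocket update that actually carries the end timestamp
    # as the signal to back the event up; later updates that merely *have* an
    # end set no longer trigger a duplicate download.
    return 'end' in msg.changed_data
```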
Sebastian Goscik
70526b2f49 Make default file path format use event start time 2023-11-21 00:08:24 +00:00
Sebastian Goscik
5069d28f0d Bump version: 0.10.0 → 0.10.1 2023-11-01 21:34:01 +00:00
Sebastian Goscik
731ab1081d changelog 2023-11-01 21:33:55 +00:00
Sebastian Goscik
701fd9b0a8 Fix event enum string conversion to value 2023-11-01 21:32:19 +00:00
Sebastian Goscik
5fa202005b Bump version: 0.9.5 → 0.10.0 2023-11-01 00:16:17 +00:00
Sebastian Goscik
3644ad3754 changelog 2023-11-01 00:15:54 +00:00
Sebastian Goscik
9410051ab9 Add feature to skip events longer than a maximum length 2023-11-01 00:11:49 +00:00
Sebastian Goscik
d5a74f475a failed rcat no longer writes to database 2023-10-31 23:37:52 +00:00
Sebastian Goscik
dc8473cc3d Fix bug with event chunking during initial ignore of events 2023-10-31 17:47:59 +00:00
Sebastian Goscik
60901e9a84 Fix crash caused by no events occurring in retention interval 2023-10-31 17:35:30 +00:00
Sebastian Goscik
4a0bd87ef2 Move docker base image to alpine edge to get latest rclone release 2023-10-31 17:32:43 +00:00
Sebastian Goscik
8dc0f8a212 Bump version: 0.9.4 → 0.9.5 2023-10-07 22:52:45 +01:00
Sebastian Goscik
34252c461f changelog 2023-10-07 22:52:17 +01:00
Sebastian Goscik
acc405a1f8 Chunk event query to prevent crashing unifi protect 2023-10-07 22:50:04 +01:00
Sebastian Goscik
b66d40736c Bump dependency versions 2023-10-07 21:49:46 +01:00
cyberpower678
171796e5c3 Update unifi_protect_backup_core.py (#100)
Fix typo in connection attempts. The application only attempted to connect once instead of 10 times.
2023-09-08 16:27:09 +01:00
Sebastian Goscik
cbc497909d linting 2023-07-29 12:07:31 +01:00
Sebastian Goscik
66b3344e29 Add download rate limiter 2023-07-29 12:07:31 +01:00
Sebastian Goscik
89cab64679 Add validation of retention/purge interval 2023-07-29 12:06:54 +01:00
Sebastian Goscik
f2f1c49ae9 Bump version: 0.9.3 → 0.9.4 2023-07-29 11:32:32 +01:00
Sebastian Goscik
8786f2ceb0 Fixed time period parsing
Also updated link to rclone docs to be more direct to the format docs
2023-07-29 11:32:32 +01:00
Sebastian Goscik
1f2a48f95e Bump version: 0.9.2 → 0.9.3 2023-07-08 16:56:23 +01:00
Sebastian Goscik
5d2391e005 Remove Arm v7 docker builds
See: https://www.linuxserver.io/blog/a-farewell-to-arm-hf
2023-07-08 16:55:12 +01:00
Sebastian Goscik
c4e9a42c1a Block all calls to protect client when the connection is
dropped and we are awaiting a reconnect
2023-07-08 16:30:09 +01:00
Sebastian Goscik
6c719c0162 Cache camera names
so an active protect connection is not needed to perform actions like
uploads which don't rely on protect.
2023-07-08 15:32:47 +01:00
Sebastian Goscik
498f72a09b Bump version: 0.9.1 → 0.9.2 2023-05-24 00:45:00 +01:00
Sebastian Goscik
d0080a569b Changelog 2023-05-24 00:44:54 +01:00
Sebastian Goscik
f89388327f Fix missing event checker not ignoring unwanted cameras 2023-05-22 23:22:41 +01:00
Sebastian Goscik
0a7eb92a36 Bump version: 0.9.0 → 0.9.1 2023-04-29 09:51:33 +01:00
Sebastian Goscik
694e9c6fde updated changelog 2023-04-29 09:50:55 +01:00
Sebastian Goscik
63fdea402d Linting fixes 2023-04-29 09:49:27 +01:00
Sebastian Goscik
f4c3c68f0d Fixed download failure counting
Previously it would only count as a failure if the download "succeeded" but returned None
2023-04-29 09:48:46 +01:00
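For context, a condensed sketch of the failure counting this fix feeds into (the 12h `ExpiringDict` and the 10-strike limit appear in the downloader diff below); with this change, any failed download increments the count rather than only a `None` result:

```python
from expiring_dict import ExpiringDict

failures = ExpiringDict(60 * 60 * 12)  # counts expire after 12 hours

def record_failure(event_id: str) -> bool:
    """Return True once an event should be permanently ignored."""
    if event_id not in failures:
        failures[event_id] = 1
    else:
        failures[event_id] += 1
    return failures[event_id] >= 10
```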
Igor Wolbers
e5112de35c Add extra param to purge (#86)
* Added optional argument string to pass directly to the `rclone delete` command used to purge video files. This will allow for immediate deletion of files on destinations where the file might otherwise go to a recycle bin by default.

---------

Co-authored-by: Igor Wolbers <igor@sparcobv.onmicrosoft.com>
Co-authored-by: Sebastian Goscik <sebastian.goscik@live.co.uk>
2023-04-29 08:19:41 +00:00
Sebastian Goscik
1b38cb3db3 Fix typo in readme 2023-04-26 10:20:22 +01:00
Sebastian Goscik
237d7ceeb1 Merge pull request #83 from IgorWolbers/add-service-documentation 2023-04-03 11:14:23 +00:00
Sebastian Goscik
6b1066d31e Log when an error occurs trying to add a notifier 2023-04-02 23:15:47 +01:00
Igor Wolbers
8d3ee5bdfd Running Backup Tool as a Service (LINUX ONLY) 2023-03-24 08:56:01 -04:00
18 changed files with 1997 additions and 1301 deletions

View File

@@ -1,5 +1,5 @@
[bumpversion]
current_version = 0.9.0
current_version = 0.11.0
commit = True
tag = True

View File

@@ -71,7 +71,7 @@ jobs:
uses: docker/build-push-action@v2
with:
context: .
platforms: linux/amd64,linux/arm64,linux/arm/v7
platforms: linux/amd64,linux/arm64
push: true
tags: ghcr.io/${{ github.repository }}:${{ steps.tag_name.outputs.current_version }}, ghcr.io/${{ github.repository }}:latest

View File

@@ -4,6 +4,76 @@ All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
## [0.11.0] - 2024-06-08
### Added
- A new experimental downloader that uses the same mechanism the web ui does. Enable with
`--experimental-downloader`
### Fixed
- Support for UniFi OS 4.x.x
## [0.10.7] - 2024-03-22
### Fixed
- Set pyunifiprotect to a minimum version of 5.0.0
## [0.10.6] - 2024-03-22
### Fixed
- Bumped `pyunifiprotect` version to fix issues with versions of Unifi Protect after 3.0.10
## [0.10.5] - 2024-01-26
### Fixed
- Bumped `pyunifiprotect` version to fix issue with old version of yarl
## [0.10.4] - 2024-01-26
### Fixed
- Bumped `pyunifiprotect` version to fix issue caused by new video modes
## [0.10.3] - 2023-12-07
### Fixed
- Bumped `pyunifiprotect` version to fix issue caused by unifi protect returning invalid UUIDs
## [0.10.2] - 2023-11-21
### Fixed
- Issue where duplicate events were being downloaded causing database errors
- Default file path format now uses event start time instead of event end time which makes more logical sense
## [0.10.1] - 2023-11-01
### Fixed
- Event type enum string conversion was no longer producing the enum value; this is now done explicitly.
## [0.10.0] - 2023-11-01
### Added
- Command line option to skip events longer than a given length (default 2 hours)
- Docker image is now based on alpine edge giving access to the latest version of rclone
### Fixed
- Failed uploads no longer write to the database, meaning they will be retried
- Fixed issue with chunked event fetch during initial ignore of events
- Fixed error when no events were fetched for the retention period
## [0.9.5] - 2023-10-07
### Fixed
- Errors caused by latest unifi protect version by bumping the version of pyunifiprotect used
- Queries for events are now chunked into groups of 500, which should stop this tool from crashing large
unifi protect instances.
## [0.9.4] - 2023-07-29
### Fixed
- Time period parsing, 'Y' -> 'y'
## [0.9.3] - 2023-07-08
### Fixed
- Queued-up downloads etc. now wait for dropped connections to be re-established.
## [0.9.2] - 2023-04-21
### Fixed
- Missing event checker ignoring the "ignored cameras" list
## [0.9.1] - 2023-04-21
### Added
- Added optional argument string to pass directly to the `rclone delete` command used to purge video files
### Fixed
- Fixed download errors not counting as failures
## [0.9.0] - 2023-03-24
### Added
- The ability to send logging out via apprise notifications

View File

@@ -1,13 +1,13 @@
# To build run:
# make docker
FROM ghcr.io/linuxserver/baseimage-alpine:3.16
FROM ghcr.io/linuxserver/baseimage-alpine:edge
LABEL maintainer="ep1cman"
WORKDIR /app
COPY dist/unifi_protect_backup-0.9.0.tar.gz sdist.tar.gz
COPY dist/unifi_protect_backup-0.11.0.tar.gz sdist.tar.gz
# https://github.com/rust-lang/cargo/issues/2808
ENV CARGO_NET_GIT_FETCH_WITH_CLI=true
@@ -29,7 +29,7 @@ RUN \
py3-pip \
python3 && \
echo "**** install unifi-protect-backup ****" && \
pip install --no-cache-dir sdist.tar.gz && \
pip install --no-cache-dir --break-system-packages sdist.tar.gz && \
echo "**** cleanup ****" && \
apk del --purge \
build-dependencies && \
@@ -50,6 +50,9 @@ ENV TZ=UTC
ENV IGNORE_CAMERAS=""
ENV SQLITE_PATH=/config/database/events.sqlite
# Fixes issue where `platformdirs` is unable to properly detect the user directory
ENV XDG_CACHE_HOME=/config
COPY docker_root/ /
RUN mkdir -p /config/database /config/rclone

View File

@@ -48,7 +48,7 @@ In order to connect to your unifi protect instance, you will first need to setup
## Installation
*The prefered way to run this tool is using a container*
*The preferred way to run this tool is using a container*
### Docker Container
If you prefer, you can run this tool as a container with the following command.
@@ -79,7 +79,7 @@ If you do not already have a `rclone.conf` file you can create one as follows:
```
$ docker run -it --rm -v $PWD:/root/.config/rclone --entrypoint rclone ghcr.io/ep1cman/unifi-protect-backup config
```
Follow the interactive configuration proceed, this will create a `rclone.conf`
Follow the interactive configuration process, this will create a `rclone.conf`
file in your current directory.
Finally, start the container:
@@ -127,12 +127,18 @@ Options:
--rclone-args TEXT Optional extra arguments to pass to `rclone rcat` directly.
Common usage for this would be to set a bandwidth limit, for
example.
--rclone-purge-args TEXT Optional extra arguments to pass to `rclone delete` directly.
Common usage for this would be to execute a permanent delete
instead of using the recycle bin on a destination. Google Drive
example: `--drive-use-trash=false`
--detection-types TEXT A comma separated list of which types of detections to backup.
Valid options are: `motion`, `person`, `vehicle`, `ring`
[default: motion,person,vehicle,ring]
--ignore-camera TEXT IDs of cameras for which events should not be backed up. Use
multiple times to ignore multiple IDs. If being set as an
environment variable the IDs should be separated by whitespace.
Alternatively, use a Unifi user with a role which has access
restricted to the subset of cameras that you wish to backup.
--file-structure-format TEXT A Python format string used to generate the file structure/name
on the rclone remote. For details of the fields available, see
the project's `README.md` file. [default: {camera_name}/{event.s
@@ -183,8 +189,15 @@ Options:
More details about supported platforms can be found here:
https://github.com/caronc/apprise
--skip-missing If set, events which are 'missing' at the start will be ignored.
--skip-missing If set, events which are 'missing' at the start will be ignored.
Subsequent missing events will be downloaded (e.g. a missed event) [default: False]
--download-rate-limit FLOAT Limit how many events can be downloaded in one minute. Disabled by
default
--max-event-length INTEGER Only download events shorter than this maximum length, in
seconds [default: 7200]
--experimental-downloader If set, a new experimental download mechanism will be used to match
what the web UI does. This might be more stable if you are experiencing
a lot of failed downloads with the default downloader. [default: False]
--help Show this message and exit.
```
@@ -198,6 +211,7 @@ always take priority over environment variables):
- `RCLONE_RETENTION`
- `RCLONE_DESTINATION`
- `RCLONE_ARGS`
- `RCLONE_PURGE_ARGS`
- `IGNORE_CAMERAS`
- `DETECTION_TYPES`
- `FILE_STRUCTURE_FORMAT`
@@ -207,6 +221,9 @@ always take priority over environment variables):
- `PURGE_INTERVAL`
- `APPRISE_NOTIFIERS`
- `SKIP_MISSING`
- `DOWNLOAD_RATELIMIT`
- `MAX_EVENT_LENGTH`
- `EXPERIMENTAL_DOWNLOADER`
## File path formatting
@@ -231,6 +248,14 @@ now on, you can use the `--skip-missing` flag. This does not enable the periodic
If you use this feature it is advised that you run the tool once with this flag, then stop it once the database has been created and the events are ignored. Keeping this flag set permanently could cause events to be missed if the tool crashes and is restarted etc.
## Ignoring cameras
Cameras can be excluded from backups by either:
- Using `--ignore-camera`, see [usage](#usage)
- IDs can be obtained by scanning the logs, starting at `Found cameras:` up to the next log line (currently `NVR TZ`). You can find this section of the logs by piping them into this `sed` command
`sed -n '/Found cameras:/,/NVR TZ/p'`
- Using a Unifi user with a role which has access restricted to the subset of cameras that you wish to backup.
# A note about `rclone` backends and disk wear
This tool attempts not to write the downloaded files to disk to minimise disk wear, and instead streams them directly to
rclone. Sadly, not all storage backends supported by `rclone` allow "Stream Uploads". Please refer to the `StreamUpload` column of this table to see which ones do and don't: https://rclone.org/overview/#optional-features
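As an illustration of what a stream upload means here, a minimal sketch of how the uploader pipes a clip straight into `rclone rcat` without a temporary file (mirroring the `run_command` usage in the uploader diff further down; the destination string is made up):

```python
import asyncio


async def stream_upload(video: bytes, destination: str) -> None:
    # The clip bytes go to rclone's stdin; nothing is written to local disk.
    proc = await asyncio.create_subprocess_shell(
        f'rclone rcat -vv "{destination}"',
        stdin=asyncio.subprocess.PIPE,
        stdout=asyncio.subprocess.PIPE,
        stderr=asyncio.subprocess.PIPE,
    )
    _, stderr = await proc.communicate(input=video)
    if proc.returncode != 0:
        raise RuntimeError(stderr.decode())
```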
@@ -259,6 +284,45 @@ To make this persist reboots add the following to `/etc/fstab`:
tmpfs /mnt/tmpfs tmpfs nosuid,nodev,noatime 0 0
```
# Running Backup Tool as a Service (LINUX ONLY)
You can create a service that will run the docker or local version of this backup tool. The service can be configured to launch on boot. This is likely the preferred way to run the tool once you have it completely configured and tested, so that it is continuously running.
First create a service configuration file. You can replace `protectbackup` in the filename below with the name you wish to use for your service; if you change it, remember to change the other locations in the following scripts as well.
```
sudo nano /lib/systemd/system/protectbackup.service
```
Next edit the content and fill in the 4 placeholders indicated by {}; replace these placeholders (including the leading `{` and trailing `}` characters) with the values you are using.
```
[Unit]
Description=Unifi Protect Backup
[Service]
User={your machine username}
Group={your machine user group, could be the same as the username}
Restart=on-abort
WorkingDirectory=/home/{your machine username}
ExecStart={put your complete docker or local command here}
[Install]
WantedBy=multi-user.target
```
Now enable and then start the service.
```
sudo systemctl enable protectbackup.service
sudo systemctl start protectbackup.service
```
To check the status of the service use this command.
```
sudo systemctl status protectbackup.service --no-pager
```
# Debugging
If you need to debug your rclone setup, you can invoke rclone directly like so:

View File

@@ -1,6 +1,6 @@
sources = unifi_protect_backup
container_name ?= ghcr.io/ep1cman/unifi-protect-backup
container_arches ?= linux/amd64,linux/arm64,linux/arm/v7
container_arches ?= linux/amd64,linux/arm64
.PHONY: test format lint unittest coverage pre-commit clean
test: format lint unittest

poetry.lock generated

File diff suppressed because it is too large

View File

@@ -1,7 +1,7 @@
[tool]
[tool.poetry]
name = "unifi_protect_backup"
version = "0.9.0"
version = "0.11.0"
homepage = "https://github.com/ep1cman/unifi-protect-backup"
description = "Python tool to backup unifi event clips in realtime."
authors = ["sebastian.goscik <sebastian@goscik.com>"]
@@ -23,12 +23,14 @@ packages = [
[tool.poetry.dependencies]
python = ">=3.9.0,<4.0"
click = "8.0.1"
pyunifiprotect = "^4.0.11"
aiorun = "^2022.11.1"
aiorun = "^2023.7.2"
aiosqlite = "^0.17.0"
python-dateutil = "^2.8.2"
apprise = "^1.3.0"
apprise = "^1.5.0"
expiring-dict = "^1.1.0"
async-lru = "^2.0.4"
aiolimiter = "^1.1.0"
pyunifiprotect = {git = "https://github.com/ep1cman/pyunifiprotect.git", rev = "experimental"}
[tool.poetry.group.dev]
optional = true

View File

@@ -2,9 +2,10 @@
__author__ = """sebastian.goscik"""
__email__ = 'sebastian@goscik.com'
__version__ = '0.9.0'
__version__ = '0.11.0'
from .downloader import VideoDownloader
from .downloader_experimental import VideoDownloaderExperimental
from .event_listener import EventListener
from .purge import Purge
from .uploader import VideoUploader

View File

@@ -1,7 +1,10 @@
"""Console script for unifi_protect_backup."""
import re
import click
from aiorun import run # type: ignore
from dateutil.relativedelta import relativedelta
from unifi_protect_backup import __version__
from unifi_protect_backup.unifi_protect_backup_core import UnifiProtectBackup
@@ -22,6 +25,26 @@ def _parse_detection_types(ctx, param, value):
return types
def parse_rclone_retention(ctx, param, retention) -> relativedelta:
"""Parses the rclone `retention` parameter into a relativedelta which can then be used to calculate datetimes."""
matches = {k: int(v) for v, k in re.findall(r"([\d]+)(ms|s|m|h|d|w|M|y)", retention)}
# Check that we matched the whole string
if len(retention) != len(''.join([f"{v}{k}" for k, v in matches.items()])):
raise click.BadParameter("See here for expected format: https://rclone.org/docs/#time-option")
return relativedelta(
microseconds=matches.get("ms", 0) * 1000,
seconds=matches.get("s", 0),
minutes=matches.get("m", 0),
hours=matches.get("h", 0),
days=matches.get("d", 0),
weeks=matches.get("w", 0),
months=matches.get("M", 0),
years=matches.get("y", 0),
)
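A hypothetical call showing what the parser above produces when invoked directly, outside of click:

```python
# "1y3M2d" parses to relativedelta(years=+1, months=+3, days=+2); anything the
# regex cannot fully consume (e.g. "7 days") raises click.BadParameter.
print(parse_rclone_retention(None, None, "1y3M2d"))
```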
@click.command(context_settings=dict(max_content_width=100))
@click.version_option(__version__)
@click.option('--address', required=True, envvar='UFP_ADDRESS', help='Address of Unifi Protect instance')
@@ -47,8 +70,9 @@ def _parse_detection_types(ctx, param, value):
default='7d',
show_default=True,
envvar='RCLONE_RETENTION',
help="How long should event clips be backed up for. Format as per the `--max-age` argument of "
"`rclone` (https://rclone.org/filtering/#max-age-don-t-transfer-any-file-older-than-this)",
help="How long should event clips be backed up for. Format as per the `--max-age` argument of `rclone` "
"(https://rclone.org/filtering/#max-age-don-t-transfer-any-file-older-than-this)",
callback=parse_rclone_retention,
)
@click.option(
'--rclone-args',
@@ -57,6 +81,14 @@ def _parse_detection_types(ctx, param, value):
help="Optional extra arguments to pass to `rclone rcat` directly. Common usage for this would "
"be to set a bandwidth limit, for example.",
)
@click.option(
'--rclone-purge-args',
default='',
envvar='RCLONE_PURGE_ARGS',
help="Optional extra arguments to pass to `rclone delete` directly. Common usage for this would "
"be to execute a permanent delete instead of using the recycle bin on a destination. "
"Google Drive example: `--drive-use-trash=false`",
)
@click.option(
'--detection-types',
envvar='DETECTION_TYPES',
@@ -72,7 +104,9 @@ def _parse_detection_types(ctx, param, value):
multiple=True,
envvar="IGNORE_CAMERAS",
help="IDs of cameras for which events should not be backed up. Use multiple times to ignore "
"multiple IDs. If being set as an environment variable the IDs should be separated by whitespace.",
"multiple IDs. If being set as an environment variable the IDs should be separated by whitespace. "
"Alternatively, use a Unifi user with a role which has access restricted to the subset of cameras "
"that you wish to backup.",
)
@click.option(
'--file-structure-format',
@@ -131,6 +165,7 @@ all warnings, and websocket data
envvar='PURGE_INTERVAL',
help="How frequently to check for file to purge.\n\nNOTE: Can create a lot of API calls, so be careful if "
"your cloud provider charges you per api call",
callback=parse_rclone_retention,
)
@click.option(
'--apprise-notifier',
@@ -162,6 +197,35 @@ If set, events which are 'missing' at the start will be ignored.
Subsequent missing events will be downloaded (e.g. a missed event)
""",
)
@click.option(
'--download-rate-limit',
default=None,
show_default=True,
envvar='DOWNLOAD_RATELIMIT',
type=float,
help="Limit how events can be downloaded in one minute. Disabled by default",
)
@click.option(
'--max-event-length',
default=2 * 60 * 60,
show_default=True,
envvar='MAX_EVENT_LENGTH',
type=int,
help="Only download events shorter than this maximum length, in seconds",
)
@click.option(
'--experimental-downloader',
'use_experimental_downloader',
default=False,
show_default=True,
is_flag=True,
envvar='EXPERIMENTAL_DOWNLOADER',
help="""\b
If set, a new experimental download mechanism will be used to match
what the web UI does. This might be more stable if you are experiencing
a lot of failed downloads with the default downloader.
""",
)
def main(**kwargs):
"""A Python based tool for backing up Unifi Protect event clips as they occur."""
event_listener = UnifiProtectBackup(**kwargs)

View File

@@ -11,6 +11,7 @@ import aiosqlite
import pytz
from aiohttp.client_exceptions import ClientPayloadError
from expiring_dict import ExpiringDict # type: ignore
from aiolimiter import AsyncLimiter
from pyunifiprotect import ProtectApiClient
from pyunifiprotect.data.nvr import Event
from pyunifiprotect.data.types import EventType
@@ -48,6 +49,8 @@ class VideoDownloader:
download_queue: asyncio.Queue,
upload_queue: VideoQueue,
color_logging: bool,
download_rate_limit: float,
max_event_length: timedelta,
):
"""Init.
@@ -57,6 +60,8 @@ class VideoDownloader:
download_queue (asyncio.Queue): Queue to get event details from
upload_queue (VideoQueue): Queue to place downloaded videos on
color_logging (bool): Whether or not to add color to logging output
download_rate_limit (float): Limit how many events can be downloaded in one minute
max_event_length (timedelta): Maximum length in seconds for an event to be considered valid and downloaded
"""
self._protect: ProtectApiClient = protect
self._db: aiosqlite.Connection = db
@@ -64,6 +69,9 @@ class VideoDownloader:
self.upload_queue: VideoQueue = upload_queue
self.current_event = None
self._failures = ExpiringDict(60 * 60 * 12) # Time to live = 12h
self._download_rate_limit = download_rate_limit
self._max_event_length = max_event_length
self._limiter = AsyncLimiter(self._download_rate_limit) if self._download_rate_limit is not None else None
self.base_logger = logging.getLogger(__name__)
setup_event_logger(self.base_logger, color_logging)
@@ -81,8 +89,16 @@ class VideoDownloader:
"""Main loop."""
self.logger.info("Starting Downloader")
while True:
if self._limiter:
self.logger.debug("Waiting for rate limit")
await self._limiter.acquire()
try:
# Wait for unifi protect to be connected
await self._protect.connect_event.wait()
event = await self.download_queue.get()
self.current_event = event
self.logger = logging.LoggerAdapter(self.base_logger, {'event': f' [{event.id}]'})
@@ -98,14 +114,19 @@ class VideoDownloader:
self.logger.debug(f"Video Download Buffer: {output_queue_current_size}/{output_queue_max_size}")
self.logger.debug(f" Camera: {await get_camera_name(self._protect, event.camera_id)}")
if event.type == EventType.SMART_DETECT:
self.logger.debug(f" Type: {event.type} ({', '.join(event.smart_detect_types)})")
self.logger.debug(f" Type: {event.type.value} ({', '.join(event.smart_detect_types)})")
else:
self.logger.debug(f" Type: {event.type}")
self.logger.debug(f" Type: {event.type.value}")
self.logger.debug(f" Start: {event.start.strftime('%Y-%m-%dT%H-%M-%S')} ({event.start.timestamp()})")
self.logger.debug(f" End: {event.end.strftime('%Y-%m-%dT%H-%M-%S')} ({event.end.timestamp()})")
duration = (event.end - event.start).total_seconds()
self.logger.debug(f" Duration: {duration}s")
# Skip invalid events
if not self._valid_event(event):
await self._ignore_event(event)
continue
# Unifi protect does not return full video clips if the clip is requested too soon.
# There are two issues at play here:
# - Protect will only cut a clip on a keyframe, which happens every 5s
@@ -119,33 +140,26 @@ class VideoDownloader:
self.logger.debug(f" Sleeping ({sleep_time}s) to ensure clip is ready to download...")
await asyncio.sleep(sleep_time)
video = await self._download(event)
if video is None:
try:
video = await self._download(event)
assert video is not None
except Exception as e:
# Increment failure count
if event.id not in self._failures:
self._failures[event.id] = 1
else:
self._failures[event.id] += 1
self.logger.warning(f"Event failed download attempt {self._failures[event.id]}")
self.logger.warning(f"Event failed download attempt {self._failures[event.id]}", exc_info=e)
if self._failures[event.id] >= 10:
self.logger.error(
"Event has failed to download 10 times in a row. Permanently ignoring this event"
)
# ignore event
self.logger.extra_debug(f"Ignoring event '{event.id}'")
await self._db.execute(
"INSERT INTO events VALUES "
f"('{event.id}', '{event.type}', '{event.camera_id}',"
f"'{event.start.timestamp()}', '{event.end.timestamp()}')"
)
await self._db.commit()
await self._ignore_event(event)
continue
# Remove successfully downloaded event from failures list
elif event.id in self._failures:
if event.id in self._failures:
del self._failures[event.id]
# Get the actual length of the downloaded video using ffprobe
@@ -180,6 +194,15 @@ class VideoDownloader:
self.logger.debug(f" Downloaded video size: {human_readable_size(len(video))}s")
return video
async def _ignore_event(self, event):
self.logger.warning("Ignoring event")
await self._db.execute(
"INSERT INTO events VALUES "
f"('{event.id}', '{event.type.value}', '{event.camera_id}',"
f"'{event.start.timestamp()}', '{event.end.timestamp()}')"
)
await self._db.commit()
async def _check_video_length(self, video, duration):
"""Check if the downloaded event is at least the length of the event, warn otherwise.
@@ -194,3 +217,11 @@ class VideoDownloader:
self.logger.debug(msg)
except SubprocessException as e:
self.logger.warning(" `ffprobe` failed", exc_info=e)
def _valid_event(self, event):
duration = event.end - event.start
if duration > self._max_event_length:
self.logger.warning(f"Event longer ({duration}) than max allowed length {self._max_event_length}")
return False
return True
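The rate limiter wired in above relies on `aiolimiter`'s default 60-second window, so `AsyncLimiter(5)` allows roughly five downloads per minute; a standalone sketch:

```python
import asyncio

from aiolimiter import AsyncLimiter

limiter = AsyncLimiter(5)  # 5 acquisitions per 60s window by default


async def download(n: int) -> None:
    await limiter.acquire()  # blocks once the per-minute budget is spent
    print(f"downloading event {n}")


async def main() -> None:
    await asyncio.gather(*(download(n) for n in range(10)))


asyncio.run(main())
```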

View File

@@ -0,0 +1,228 @@
# noqa: D100
import asyncio
import json
import logging
import shutil
from datetime import datetime, timedelta, timezone
from typing import Optional
import aiosqlite
import pytz
from aiohttp.client_exceptions import ClientPayloadError
from expiring_dict import ExpiringDict # type: ignore
from aiolimiter import AsyncLimiter
from pyunifiprotect import ProtectApiClient
from pyunifiprotect.data.nvr import Event
from pyunifiprotect.data.types import EventType
from unifi_protect_backup.utils import (
SubprocessException,
VideoQueue,
get_camera_name,
human_readable_size,
run_command,
setup_event_logger,
)
async def get_video_length(video: bytes) -> float:
"""Uses ffprobe to get the length of the video file passed in as a byte stream."""
returncode, stdout, stderr = await run_command(
'ffprobe -v quiet -show_streams -select_streams v:0 -of json -', video
)
if returncode != 0:
raise SubprocessException(stdout, stderr, returncode)
json_data = json.loads(stdout)
return float(json_data['streams'][0]['duration'])
class VideoDownloaderExperimental:
"""Downloads event video clips from Unifi Protect."""
def __init__(
self,
protect: ProtectApiClient,
db: aiosqlite.Connection,
download_queue: asyncio.Queue,
upload_queue: VideoQueue,
color_logging: bool,
download_rate_limit: float,
max_event_length: timedelta,
):
"""Init.
Args:
protect (ProtectApiClient): UniFi Protect API client to use
db (aiosqlite.Connection): Async SQLite database to check for missing events
download_queue (asyncio.Queue): Queue to get event details from
upload_queue (VideoQueue): Queue to place downloaded videos on
color_logging (bool): Whether or not to add color to logging output
download_rate_limit (float): Limit how many events can be downloaded in one minute
max_event_length (timedelta): Maximum length in seconds for an event to be considered valid and downloaded
"""
self._protect: ProtectApiClient = protect
self._db: aiosqlite.Connection = db
self.download_queue: asyncio.Queue = download_queue
self.upload_queue: VideoQueue = upload_queue
self.current_event = None
self._failures = ExpiringDict(60 * 60 * 12) # Time to live = 12h
self._download_rate_limit = download_rate_limit
self._max_event_length = max_event_length
self._limiter = AsyncLimiter(self._download_rate_limit) if self._download_rate_limit is not None else None
self.base_logger = logging.getLogger(__name__)
setup_event_logger(self.base_logger, color_logging)
self.logger = logging.LoggerAdapter(self.base_logger, {'event': ''})
# Check if `ffprobe` is available
ffprobe = shutil.which('ffprobe')
if ffprobe is not None:
self.logger.debug(f"ffprobe found: {ffprobe}")
self._has_ffprobe = True
else:
self._has_ffprobe = False
async def start(self):
"""Main loop."""
self.logger.info("Starting Downloader")
while True:
if self._limiter:
self.logger.debug("Waiting for rate limit")
await self._limiter.acquire()
try:
# Wait for unifi protect to be connected
await self._protect.connect_event.wait()
event = await self.download_queue.get()
self.current_event = event
self.logger = logging.LoggerAdapter(self.base_logger, {'event': f' [{event.id}]'})
# Fix timezones since pyunifiprotect sets all timestamps to UTC. Instead localize them to
# the timezone of the unifi protect NVR.
event.start = event.start.replace(tzinfo=pytz.utc).astimezone(self._protect.bootstrap.nvr.timezone)
event.end = event.end.replace(tzinfo=pytz.utc).astimezone(self._protect.bootstrap.nvr.timezone)
self.logger.info(f"Downloading event: {event.id}")
self.logger.debug(f"Remaining Download Queue: {self.download_queue.qsize()}")
output_queue_current_size = human_readable_size(self.upload_queue.qsize())
output_queue_max_size = human_readable_size(self.upload_queue.maxsize)
self.logger.debug(f"Video Download Buffer: {output_queue_current_size}/{output_queue_max_size}")
self.logger.debug(f" Camera: {await get_camera_name(self._protect, event.camera_id)}")
if event.type == EventType.SMART_DETECT:
self.logger.debug(f" Type: {event.type.value} ({', '.join(event.smart_detect_types)})")
else:
self.logger.debug(f" Type: {event.type.value}")
self.logger.debug(f" Start: {event.start.strftime('%Y-%m-%dT%H-%M-%S')} ({event.start.timestamp()})")
self.logger.debug(f" End: {event.end.strftime('%Y-%m-%dT%H-%M-%S')} ({event.end.timestamp()})")
duration = (event.end - event.start).total_seconds()
self.logger.debug(f" Duration: {duration}s")
# Skip invalid events
if not self._valid_event(event):
await self._ignore_event(event)
continue
# Unifi protect does not return full video clips if the clip is requested too soon.
# There are two issues at play here:
# - Protect will only cut a clip on a keyframe, which happens every 5s
# - Protect's pipeline needs a finite amount of time to make a clip available
# So we will wait 1.5x the keyframe interval to ensure that there is always ample video
# stored and Protect can return a full clip (which should be at least the length requested,
# but often longer)
time_since_event_ended = datetime.utcnow().replace(tzinfo=timezone.utc) - event.end
sleep_time = (timedelta(seconds=5 * 1.5) - time_since_event_ended).total_seconds()
if sleep_time > 0:
self.logger.debug(f" Sleeping ({sleep_time}s) to ensure clip is ready to download...")
await asyncio.sleep(sleep_time)
try:
video = await self._download(event)
assert video is not None
except Exception as e:
# Increment failure count
if event.id not in self._failures:
self._failures[event.id] = 1
else:
self._failures[event.id] += 1
self.logger.warning(f"Event failed download attempt {self._failures[event.id]}", exc_info=e)
if self._failures[event.id] >= 10:
self.logger.error(
"Event has failed to download 10 times in a row. Permanently ignoring this event"
)
await self._ignore_event(event)
continue
# Remove successfully downloaded event from failures list
if event.id in self._failures:
del self._failures[event.id]
# Get the actual length of the downloaded video using ffprobe
if self._has_ffprobe:
await self._check_video_length(video, duration)
await self.upload_queue.put((event, video))
self.logger.debug("Added to upload queue")
self.current_event = None
except Exception as e:
self.logger.error(f"Unexpected exception occurred, abandoning event {event.id}:", exc_info=e)
async def _download(self, event: Event) -> Optional[bytes]:
"""Downloads the video clip for the given event."""
self.logger.debug(" Downloading video...")
for x in range(5):
assert isinstance(event.camera_id, str)
assert isinstance(event.start, datetime)
assert isinstance(event.end, datetime)
try:
prepared_video_file = await self._protect.prepare_camera_video(event.camera_id, event.start, event.end)
video = await self._protect.download_camera_video(event.camera_id, prepared_video_file['fileName'])
assert isinstance(video, bytes)
break
except (AssertionError, ClientPayloadError, TimeoutError) as e:
self.logger.warning(f" Failed download attempt {x+1}, retying in 1s", exc_info=e)
await asyncio.sleep(1)
else:
self.logger.error(f"Download failed after 5 attempts, abandoning event {event.id}:")
return None
self.logger.debug(f" Downloaded video size: {human_readable_size(len(video))}s")
return video
async def _ignore_event(self, event):
self.logger.warning("Ignoring event")
await self._db.execute(
"INSERT INTO events VALUES "
f"('{event.id}', '{event.type.value}', '{event.camera_id}',"
f"'{event.start.timestamp()}', '{event.end.timestamp()}')"
)
await self._db.commit()
async def _check_video_length(self, video, duration):
"""Check if the downloaded event is at least the length of the event, warn otherwise.
It is expected that downloaded clips will regularly be slightly longer than the event specified
"""
try:
downloaded_duration = await get_video_length(video)
msg = f" Downloaded video length: {downloaded_duration:.3f}s" f"({downloaded_duration - duration:+.3f}s)"
if downloaded_duration < duration:
self.logger.warning(msg)
else:
self.logger.debug(msg)
except SubprocessException as e:
self.logger.warning(" `ffprobe` failed", exc_info=e)
def _valid_event(self, event):
duration = event.end - event.start
if duration > self._max_event_length:
self.logger.warning(f"Event longer ({duration}) than max allowed length {self._max_event_length}")
return False
return True

View File

@@ -61,7 +61,7 @@ class EventListener:
return
if msg.new_obj.camera_id in self.ignore_cameras:
return
if msg.new_obj.end is None:
if 'end' not in msg.changed_data:
return
if msg.new_obj.type not in [EventType.MOTION, EventType.SMART_DETECT, EventType.RING]:
return
@@ -100,6 +100,7 @@ class EventListener:
if self._protect.check_ws():
logger.extra_debug("Websocket is connected.")
else:
self._protect.connect_event.clear()
logger.warning("Lost connection to Unifi Protect.")
# Unsubscribe, close the session.
@@ -125,4 +126,5 @@ class EventListener:
# Back off for a little while
await asyncio.sleep(10)
self._protect.connect_event.set()
logger.info("Re-established connection to Unifi Protect and to the websocket.")
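The reconnect gating added in this file is a plain `asyncio.Event` attached to the protect client; in miniature:

```python
import asyncio

connect_event = asyncio.Event()
connect_event.set()  # start in the connected state


async def guarded_api_call() -> None:
    await connect_event.wait()  # parks here while the connection is down
    # ...safe to talk to Unifi Protect from this point...


def on_connection_lost() -> None:
    connect_event.clear()  # gate closes: new API calls block


def on_reconnected() -> None:
    connect_event.set()  # gate opens: blocked calls resume
```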

View File

@@ -55,63 +55,83 @@ class MissingEventChecker:
self.interval: int = interval
async def _get_missing_events(self) -> List[Event]:
# Get list of events that need to be backed up from unifi protect
unifi_events = await self._protect.get_events(
start=datetime.now() - self.retention,
end=datetime.now(),
types=[EventType.MOTION, EventType.SMART_DETECT, EventType.RING],
)
unifi_events = {event.id: event for event in unifi_events}
start_time = datetime.now() - self.retention
end_time = datetime.now()
chunk_size = 500
# Get list of events that have been backed up from the database
while True:
# Get list of events that need to be backed up from unifi protect
logger.extra_debug(f"Fetching events for interval: {start_time} - {end_time}")
events_chunk = await self._protect.get_events(
start=start_time,
end=end_time,
types=[EventType.MOTION, EventType.SMART_DETECT, EventType.RING],
limit=chunk_size,
)
# events(id, type, camera_id, start, end)
async with self._db.execute("SELECT * FROM events") as cursor:
rows = await cursor.fetchall()
db_event_ids = {row[0] for row in rows}
if not events_chunk:
break # There were no events to backup
# Prevent re-adding events currently in the download/upload queue
downloading_event_ids = {event.id for event in self._downloader.download_queue._queue} # type: ignore
current_download = self._downloader.current_event
if current_download is not None:
downloading_event_ids.add(current_download.id)
start_time = events_chunk[-1].end
unifi_events = {event.id: event for event in events_chunk}
uploading_event_ids = {event.id for event, video in self._uploader.upload_queue._queue} # type: ignore
current_upload = self._uploader.current_event
if current_upload is not None:
uploading_event_ids.add(current_upload.id)
# Get list of events that have been backed up from the database
missing_event_ids = set(unifi_events.keys()) - (db_event_ids | downloading_event_ids | uploading_event_ids)
# events(id, type, camera_id, start, end)
async with self._db.execute("SELECT * FROM events") as cursor:
rows = await cursor.fetchall()
db_event_ids = {row[0] for row in rows}
def wanted_event_type(event_id):
event = unifi_events[event_id]
if event.start is None or event.end is None:
return False # This event is still on-going
if event.type is EventType.MOTION and "motion" not in self.detection_types:
return False
if event.type is EventType.RING and "ring" not in self.detection_types:
return False
elif event.type is EventType.SMART_DETECT:
for event_smart_detection_type in event.smart_detect_types:
if event_smart_detection_type not in self.detection_types:
return False
return True
# Prevent re-adding events currently in the download/upload queue
downloading_event_ids = {event.id for event in self._downloader.download_queue._queue} # type: ignore
current_download = self._downloader.current_event
if current_download is not None:
downloading_event_ids.add(current_download.id)
wanted_event_ids = set(filter(wanted_event_type, missing_event_ids))
uploading_event_ids = {event.id for event, video in self._uploader.upload_queue._queue} # type: ignore
current_upload = self._uploader.current_event
if current_upload is not None:
uploading_event_ids.add(current_upload.id)
return [unifi_events[id] for id in wanted_event_ids]
missing_event_ids = set(unifi_events.keys()) - (db_event_ids | downloading_event_ids | uploading_event_ids)
# Exclude events of unwanted types
def wanted_event_type(event_id):
event = unifi_events[event_id]
if event.start is None or event.end is None:
return False # This event is still on-going
if event.camera_id in self.ignore_cameras:
return False
if event.type is EventType.MOTION and "motion" not in self.detection_types:
return False
if event.type is EventType.RING and "ring" not in self.detection_types:
return False
elif event.type is EventType.SMART_DETECT:
for event_smart_detection_type in event.smart_detect_types:
if event_smart_detection_type not in self.detection_types:
return False
return True
wanted_event_ids = set(filter(wanted_event_type, missing_event_ids))
# Yield events one by one to allow the async loop to start other tasks while
# waiting on the full list of events
for id in wanted_event_ids:
yield unifi_events[id]
# Last chunk was incomplete, we can stop now
if len(events_chunk) < chunk_size:
break
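Stripped of the queue and database bookkeeping, the chunked fetch above follows this pagination skeleton (a sketch; only the `start`, `end` and `limit` arguments used in the diff are shown):

```python
async def iter_events(protect, start, end, chunk_size=500):
    # Page through events chunk_size at a time so large Unifi Protect
    # instances are never asked for their whole history in one query.
    while True:
        chunk = await protect.get_events(start=start, end=end, limit=chunk_size)
        if not chunk:
            break
        for event in chunk:
            yield event
        if len(chunk) < chunk_size:
            break  # short page: nothing left
        start = chunk[-1].end  # advance the window past this page
```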
async def ignore_missing(self):
"""Ignore missing events by adding them to the event table."""
wanted_events = await self._get_missing_events()
logger.info(f" Ignoring missing events")
logger.info(f" Ignoring {len(wanted_events)} missing events")
for event in wanted_events:
async for event in self._get_missing_events():
logger.extra_debug(f"Ignoring event '{event.id}'")
await self._db.execute(
"INSERT INTO events VALUES "
f"('{event.id}', '{event.type}', '{event.camera_id}',"
f"('{event.id}', '{event.type.value}', '{event.camera_id}',"
f"'{event.start.timestamp()}', '{event.end.timestamp()}')"
)
await self._db.commit()
@@ -121,25 +141,24 @@ class MissingEventChecker:
logger.info("Starting Missing Event Checker")
while True:
try:
logger.extra_debug("Running check for missing events...")
shown_warning = False
wanted_events = await self._get_missing_events()
# Wait for unifi protect to be connected
await self._protect.connect_event.wait()
logger.debug(f" Undownloaded events of wanted types: {len(wanted_events)}")
logger.debug("Running check for missing events...")
if len(wanted_events) > 20:
logger.warning(f" Adding {len(wanted_events)} missing events to backup queue")
missing_logger = logger.extra_debug
else:
missing_logger = logger.warning
async for event in self._get_missing_events():
if not shown_warning:
logger.warning(f" Found missing events, adding to backup queue")
shown_warning = True
for event in wanted_events:
if event.type != EventType.SMART_DETECT:
event_name = f"{event.id} ({event.type})"
event_name = f"{event.id} ({event.type.value})"
else:
event_name = f"{event.id} ({', '.join(event.smart_detect_types)})"
missing_logger(
logger.extra_debug(
f" Adding missing event to backup queue: {event_name}"
f" ({event.start.strftime('%Y-%m-%dT%H-%M-%S')} -"
f" {event.end.strftime('%Y-%m-%dT%H-%M-%S')})"

View File

@@ -12,9 +12,9 @@ from unifi_protect_backup.utils import run_command, wait_until
logger = logging.getLogger(__name__)
async def delete_file(file_path):
async def delete_file(file_path, rclone_purge_args):
"""Deletes `file_path` via rclone."""
returncode, stdout, stderr = await run_command(f'rclone delete -vv "{file_path}"')
returncode, stdout, stderr = await run_command(f'rclone delete -vv "{file_path}" {rclone_purge_args}')
if returncode != 0:
logger.error(f" Failed to delete file: '{file_path}'")
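A hypothetical invocation of the extended `delete_file`, passing the README's Google Drive example through so purged clips bypass the recycle bin (the module path and remote path are assumptions for illustration):

```python
import asyncio

from unifi_protect_backup.purge import delete_file


async def main() -> None:
    # The second argument is appended verbatim to `rclone delete`.
    await delete_file("gdrive:unifi/old_event.mp4", "--drive-use-trash=false")


asyncio.run(main())
```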
@@ -35,6 +35,7 @@ class Purge:
retention: relativedelta,
rclone_destination: str,
interval: relativedelta = relativedelta(days=1),
rclone_purge_args: str = "",
):
"""Init.
@@ -43,11 +44,13 @@ class Purge:
retention (relativedelta): How long clips should be kept
rclone_destination (str): What rclone destination the clips are stored in
interval (relativedelta): How often to purge old clips
rclone_purge_args (str): Optional extra arguments to pass to `rclone delete` directly.
"""
self._db: aiosqlite.Connection = db
self.retention: relativedelta = retention
self.rclone_destination: str = rclone_destination
self.interval: relativedelta = interval
self.rclone_purge_args: str = rclone_purge_args
async def start(self):
"""Main loop - runs forever."""
@@ -68,7 +71,7 @@ class Purge:
async with self._db.execute(f"SELECT * FROM backups WHERE id = '{event_id}'") as backup_cursor:
async for _, remote, file_path in backup_cursor:
logger.debug(f" Deleted: {remote}:{file_path}")
await delete_file(f"{remote}:{file_path}")
await delete_file(f"{remote}:{file_path}", self.rclone_purge_args)
deleted_a_file = True
# delete event from database

View File

@@ -1,12 +1,14 @@
"""Main module."""
import asyncio
import logging
import os
import shutil
from datetime import datetime, timezone
from datetime import datetime, timezone, timedelta
from typing import Callable, List
import aiosqlite
from dateutil.relativedelta import relativedelta
from pyunifiprotect import ProtectApiClient
from pyunifiprotect.data.types import ModelType
@@ -15,17 +17,11 @@ from unifi_protect_backup import (
MissingEventChecker,
Purge,
VideoDownloader,
VideoDownloaderExperimental,
VideoUploader,
notifications,
)
from unifi_protect_backup.utils import (
SubprocessException,
VideoQueue,
human_readable_size,
parse_rclone_retention,
run_command,
setup_logging,
)
from unifi_protect_backup.utils import SubprocessException, VideoQueue, human_readable_size, run_command, setup_logging
logger = logging.getLogger(__name__)
@@ -57,19 +53,23 @@ class UnifiProtectBackup:
password: str,
verify_ssl: bool,
rclone_destination: str,
retention: str,
retention: relativedelta,
rclone_args: str,
rclone_purge_args: str,
detection_types: List[str],
ignore_cameras: List[str],
file_structure_format: str,
verbose: int,
download_buffer_size: int,
purge_interval: str,
purge_interval: relativedelta,
apprise_notifiers: str,
skip_missing: bool,
max_event_length: int,
sqlite_path: str = "events.sqlite",
color_logging=False,
color_logging: bool = False,
download_rate_limit: float = None,
port: int = 443,
use_experimental_downloader: bool = False,
):
"""Will configure logging settings and the Unifi Protect API (but not actually connect).
@@ -87,6 +87,7 @@ class UnifiProtectBackup:
(https://rclone.org/filtering/#max-age-don-t-transfer-any-file-older-than-this)
rclone_args (str): A bandwidth limit which is passed to the `--bwlimit` argument of
`rclone` (https://rclone.org/docs/#bwlimit-bandwidth-spec)
rclone_purge_args (str): Optional extra arguments to pass to `rclone delete` directly.
detection_types (List[str]): List of which detection types to backup.
ignore_cameras (List[str]): List of camera IDs for which to not backup events.
file_structure_format (str): A Python format string for output file path.
@@ -97,13 +98,20 @@ class UnifiProtectBackup:
skip_missing (bool): If initial missing events should be ignored
sqlite_path (str): Path where to find/create sqlite database
color_logging (bool): Whether to add color to logging output or not
download_rate_limit (float): Limit how many events can be downloaded in one minute. Disabled by default
max_event_length (int): Maximum length in seconds for an event to be considered valid and downloaded
use_experimental_downloader (bool): Use the new experimental downloader (the same method as used by the webUI)
"""
for notifier in apprise_notifiers:
notifications.add_notification_service(notifier)
self.color_logging = color_logging
setup_logging(verbose, self.color_logging)
for notifier in apprise_notifiers:
try:
notifications.add_notification_service(notifier)
except Exception as e:
logger.error(f"Error occurred when setting up logger `{notifier}`", exc_info=e)
raise
logger.debug("Config:")
logger.debug(f" {address=}")
logger.debug(f" {port=}")
@@ -117,6 +125,7 @@ class UnifiProtectBackup:
logger.debug(f" {rclone_destination=}")
logger.debug(f" {retention=}")
logger.debug(f" {rclone_args=}")
logger.debug(f" {rclone_purge_args=}")
logger.debug(f" {ignore_cameras=}")
logger.debug(f" {verbose=}")
logger.debug(f" {detection_types=}")
@@ -126,10 +135,14 @@ class UnifiProtectBackup:
logger.debug(f" {purge_interval=}")
logger.debug(f" {apprise_notifiers=}")
logger.debug(f" {skip_missing=}")
logger.debug(f" {download_rate_limit=} events per minute")
logger.debug(f" {max_event_length=}s")
logger.debug(f" {use_experimental_downloader=}")
self.rclone_destination = rclone_destination
self.retention = parse_rclone_retention(retention)
self.retention = retention
self.rclone_args = rclone_args
self.rclone_purge_args = rclone_purge_args
self.file_structure_format = file_structure_format
self.address = address
@@ -154,8 +167,11 @@ class UnifiProtectBackup:
self._sqlite_path = sqlite_path
self._db = None
self._download_buffer_size = download_buffer_size
self._purge_interval = parse_rclone_retention(purge_interval)
self._purge_interval = purge_interval
self._skip_missing = skip_missing
self._download_rate_limit = download_rate_limit
self._max_event_length = timedelta(seconds=max_event_length)
self._use_experimental_downloader = use_experimental_downloader
async def start(self):
"""Bootstrap the backup process and kick off the main loop.
@@ -175,7 +191,7 @@ class UnifiProtectBackup:
# Start the pyunifiprotect connection by calling `update`
logger.info("Connecting to Unifi Protect...")
for attempts in range(1):
for attempts in range(10):
try:
await self._protect.update()
break
@@ -185,6 +201,11 @@ class UnifiProtectBackup:
else:
raise ConnectionError("Failed to connect to UniFi Protect after 10 attempts")
# Add an event to the protect client that can be used to prevent code accessing the client when it has
# lost connection
self._protect.connect_event = asyncio.Event()
self._protect.connect_event.set()
# Get a mapping of camera ids -> names
logger.info("Found cameras:")
for camera in self._protect.bootstrap.cameras.values():
@@ -210,7 +231,20 @@ class UnifiProtectBackup:
# Create downloader task
# This will download video files to its buffer
downloader = VideoDownloader(self._protect, self._db, download_queue, upload_queue, self.color_logging)
if self._use_experimental_downloader:
downloader_cls = VideoDownloaderExperimental
else:
downloader_cls = VideoDownloader
downloader = downloader_cls(
self._protect,
self._db,
download_queue,
upload_queue,
self.color_logging,
self._download_rate_limit,
self._max_event_length,
)
tasks.append(downloader.start())
# Create upload task
@@ -234,7 +268,9 @@ class UnifiProtectBackup:
# Create purge task
# This will, every midnight, purge old backups from the rclone remotes and database
purge = Purge(self._db, self.retention, self.rclone_destination, self._purge_interval)
purge = Purge(
self._db, self.retention, self.rclone_destination, self._purge_interval, self.rclone_purge_args
)
tasks.append(purge.start())
# Create missing event task

View File

@@ -9,7 +9,14 @@ import aiosqlite
from pyunifiprotect import ProtectApiClient
from pyunifiprotect.data.nvr import Event
from unifi_protect_backup.utils import VideoQueue, get_camera_name, human_readable_size, run_command, setup_event_logger
from unifi_protect_backup.utils import (
VideoQueue,
get_camera_name,
human_readable_size,
run_command,
setup_event_logger,
SubprocessException,
)
class VideoUploader:
@@ -74,10 +81,13 @@ class VideoUploader:
destination = await self._generate_file_path(event)
self.logger.debug(f" Destination: {destination}")
await self._upload_video(video, destination, self._rclone_args)
await self._update_database(event, destination)
try:
await self._upload_video(video, destination, self._rclone_args)
await self._update_database(event, destination)
self.logger.debug("Uploaded")
except SubprocessException:
self.logger.error(f" Failed to upload file: '{destination}'")
self.logger.debug("Uploaded")
self.current_event = None
except Exception as e:
@@ -99,7 +109,7 @@ class VideoUploader:
"""
returncode, stdout, stderr = await run_command(f'rclone rcat -vv {rclone_args} "{destination}"', video)
if returncode != 0:
self.logger.error(f" Failed to upload file: '{destination}'")
raise SubprocessException(stdout, stderr, returncode)
async def _update_database(self, event: Event, destination: str):
"""Add the backed up event to the database along with where it was backed up to."""
@@ -107,7 +117,7 @@ class VideoUploader:
assert isinstance(event.end, datetime)
await self._db.execute(
"INSERT INTO events VALUES "
f"('{event.id}', '{event.type}', '{event.camera_id}',"
f"('{event.id}', '{event.type.value}', '{event.camera_id}',"
f"'{event.start.timestamp()}', '{event.end.timestamp()}')"
)
@@ -147,9 +157,9 @@ class VideoUploader:
format_context = {
"event": event,
"duration_seconds": (event.end - event.start).total_seconds(),
"detection_type": f"{event.type} ({' '.join(event.smart_detect_types)})"
"detection_type": f"{event.type.value} ({' '.join(event.smart_detect_types)})"
if event.smart_detect_types
else f"{event.type}",
else f"{event.type.value}",
"camera_name": await get_camera_name(self._protect, event.camera_id),
}

View File

@@ -7,7 +7,7 @@ from datetime import datetime
from typing import List, Optional
from apprise import NotifyType
from dateutil.relativedelta import relativedelta
from async_lru import alru_cache
from pyunifiprotect import ProtectApiClient
from pyunifiprotect.data.nvr import Event
@@ -277,11 +277,17 @@ def human_readable_to_float(num: str):
return value * multiplier
# Cached so that actions like uploads can continue when the connection to the api is lost
# No max size, and a 6 hour ttl
@alru_cache(None, ttl=60 * 60 * 6)
async def get_camera_name(protect: ProtectApiClient, id: str):
"""Returns the name for the camera with the given ID.
If the camera ID is not known, it tries refreshing the cached data
"""
# Wait for unifi protect to be connected
await protect.connect_event.wait() # type: ignore
try:
return protect.bootstrap.cameras[id].name
except KeyError:
@@ -321,21 +327,6 @@ class SubprocessException(Exception):
return f"Return Code: {self.returncode}\nStdout:\n{self.stdout}\nStderr:\n{self.stderr}"
def parse_rclone_retention(retention: str) -> relativedelta:
"""Parses the rclone `retention` parameter into a relativedelta which can then be used to calculate datetimes."""
matches = {k: int(v) for v, k in re.findall(r"([\d]+)(ms|s|m|h|d|w|M|y)", retention)}
return relativedelta(
microseconds=matches.get("ms", 0) * 1000,
seconds=matches.get("s", 0),
minutes=matches.get("m", 0),
hours=matches.get("h", 0),
days=matches.get("d", 0),
weeks=matches.get("w", 0),
months=matches.get("M", 0),
years=matches.get("Y", 0),
)
async def run_command(cmd: str, data=None):
"""Runs the given command returning the exit code, stdout and stderr."""
proc = await asyncio.create_subprocess_shell(