Compare commits


254 Commits
v0.2.1 ... dev

Author SHA1 Message Date
Sebastian Goscik
c4c5468816 Add support for Finger Print, NFC Card Scan, and Audio Detections
Also refactored code that checks if an event should be backed up into one common shared function.
2025-07-07 01:17:31 +01:00
Sebastian Goscik
be2a1ee921 Add a storage quota purger 2025-07-07 01:17:31 +01:00
Sebastian Goscik
ef06d2a4d4 Bump version: 0.13.0 → 0.13.1 2025-06-26 02:21:41 +01:00
Sebastian Goscik
12c8539977 Bump uiprotect version 2025-06-26 02:21:41 +01:00
Sebastian Goscik
474d3c32fa Linting 2025-06-26 02:21:41 +01:00
Sebastian Goscik
3750847055 Update bump2version to update uv.lock 2025-06-26 02:20:11 +01:00
Sebastian Goscik
c16a380918 Round download buffer size down to int 2025-06-26 02:20:11 +01:00
Sebastian Goscik
df466b5d0b Correct uv.lock UPB version 2025-06-26 02:20:11 +01:00
Sebastian Goscik
18a78863a7 Update issue templates 2025-06-09 23:15:48 +01:00
Sebastian Goscik
4d2002b98d Remove data volume from remote backup example 2025-06-07 10:26:33 +01:00
Sebastian Goscik
4b4cb86749 Bump version: 0.12.0 → 0.13.0 2025-04-09 13:01:45 +01:00
Sebastian Goscik
c091fa4f92 changelog 2025-04-09 13:01:45 +01:00
Sebastian Goscik
2bf90b6763 Update readme with parallel downloads 2025-04-09 11:27:45 +01:00
Sebastian Goscik
f275443a7a Fix issue with duplicated logging with parallel loggers 2025-04-09 11:25:34 +01:00
Sebastian Goscik
3a43c1b670 Enable multiple parallel uploaders 2025-04-09 11:25:34 +01:00
Sebastian Goscik
e0421c1dd1 Add all smart detection types 2025-04-09 02:37:10 +01:00
Sebastian Goscik
4ee70e6d4b Updating dev dependencies 2025-04-09 02:25:10 +01:00
Sebastian Goscik
ce2993624f Correct CAMERAS envvar 2025-04-09 02:12:52 +01:00
Sebastian Goscik
cec1f69d8d Bump uiprotect 2025-04-09 02:06:38 +01:00
Sebastian Goscik
c07fb30fff update pre-commit 2025-04-09 01:54:57 +01:00
Sebastian Goscik
1de9b9a757 [actions] Fix CRLF issue on windows 2025-04-09 01:51:29 +01:00
Sebastian Goscik
3ec69a7a97 [actions] Fix uv install on windows 2025-04-09 01:47:06 +01:00
Sebastian Goscik
855607fa29 Migrate project to use uv 2025-04-09 01:40:24 +01:00
Sebastian Goscik
e11828bd59 Update makefile to use ruff 2025-04-08 23:54:24 +01:00
Sebastian Goscik
7439ac9bda Bump version: 0.11.0 → 0.12.0 2025-01-18 18:23:33 +00:00
Sebastian Goscik
e3cbcc819e Fix GitHub action Python version parsing 2025-01-18 18:23:33 +00:00
Sebastian Goscik
ccb816ddbc fix bump2version config 2025-01-18 17:19:47 +00:00
Sebastian Goscik
9d2d6558a6 Changelog 2025-01-18 17:18:05 +00:00
Sebastian Goscik
3c5056614c Monkey patch in experimental downloader 2025-01-18 17:07:44 +00:00
Sebastian Goscik
1f18c06e17 Bump dependency versions 2025-01-18 17:07:44 +00:00
Sebastian Goscik
3181080bca Fix issue when --camera isn't specified
Click defaults options with `multiple=true` to an empty list, not None, when they are not provided
2025-01-18 16:43:02 +00:00
Wietse Wind
6e5d90a9f5 Add ability to INCLUDE specific cameras instead of EXCLUDE (#179)
Co-authored-by: Sebastian Goscik <sebastian.goscik@live.co.uk>
2025-01-18 15:12:55 +00:00
dependabot[bot]
475beaee3d Bump aiohttp from 3.10.10 to 3.10.11 in the pip group across 1 directory
Bumps the pip group with 1 update in the / directory: [aiohttp](https://github.com/aio-libs/aiohttp).


Updates `aiohttp` from 3.10.10 to 3.10.11
- [Release notes](https://github.com/aio-libs/aiohttp/releases)
- [Changelog](https://github.com/aio-libs/aiohttp/blob/master/CHANGES.rst)
- [Commits](https://github.com/aio-libs/aiohttp/compare/v3.10.10...v3.10.11)

---
updated-dependencies:
- dependency-name: aiohttp
  dependency-type: indirect
  dependency-group: pip
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-01-06 21:02:05 +00:00
Wietse Wind
75cd1207b4 Fix iterating over empty events 2025-01-06 20:41:11 +00:00
Sebastian Goscik
c067dbd9f7 Filter out on-going events
Unifi Protect has started to return events that have not ended. These are now explicitly filtered out
2024-10-26 22:12:50 +01:00
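As a rough illustration of the filter this commit describes (the `end` attribute is an assumed field name, not necessarily the real uiprotect API):

```
def finished_events(events):
    """Drop events Unifi Protect returned before they have ended.

    Assumes each event carries an `end` attribute that is None while
    the event is still on-going (hypothetical field name).
    """
    return [event for event in events if event.end is not None]
```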
Sebastian Goscik
2c43149c99 ruff formatting 2024-10-26 22:12:50 +01:00
Sebastian Goscik
78a2c3034d Bump uiprotect 2024-10-26 22:12:50 +01:00
jimmydoh
1bb8496b30 Adding support for SMART_DETECT_LINE events 2024-10-26 22:12:50 +01:00
Sebastian Goscik
80ad55d0d0 Simplified websocket reconnection logic
This is now handled automatically by uiprotect internally, so we no longer need to manage it ourselves; the logic here is reduced to just logging messages.
2024-10-26 21:27:19 +01:00
jimmydoh
0b2c46888c Replace check_ws with subscription to websocket state 2024-10-26 21:27:19 +01:00
Jonathan Laliberte
0026eaa2ca #171 - Use exponential backoff when logging into Unifi API (#172) 2024-10-10 20:55:00 +00:00
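The commit title suggests a standard exponential backoff around the login call; a minimal sketch of that pattern, where `client.authenticate()` and the delay schedule are illustrative assumptions rather than the tool's exact implementation:

```
import asyncio
import random

async def login_with_backoff(client, max_attempts=10):
    """Retry login with exponentially increasing delays."""
    for attempt in range(max_attempts):
        try:
            return await client.authenticate()  # hypothetical login call
        except Exception:
            if attempt == max_attempts - 1:
                raise
            # 1s, 2s, 4s, ... plus jitter, to back off from rate limiting
            await asyncio.sleep(2**attempt + random.uniform(0, 1))
```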
Sebastian Goscik
c3290a223a Update 30-config
Fixed path in error message
2024-09-10 12:32:50 +01:00
Sebastian Goscik
4265643806 Update contribution guide setup steps 2024-08-10 00:38:42 +01:00
Sebastian Goscik
78be4808d9 mypy fixes 2024-08-10 00:17:55 +01:00
Sebastian Goscik
0a6a259120 remove twine dev dependency 2024-08-09 23:53:18 +01:00
Sebastian Goscik
de4f69dcb5 switch pre-commit to ruff 2024-08-09 23:49:11 +01:00
Sebastian Goscik
a7c4eb8dae remove editor config 2024-08-09 23:46:50 +01:00
Sebastian Goscik
129d89480e update git ignore 2024-08-09 23:46:09 +01:00
Sebastian Goscik
a7ccef7f1d ruff check 2024-08-09 23:45:21 +01:00
Sebastian Goscik
bbd70f49bf ruff format 2024-08-09 23:43:03 +01:00
Sebastian Goscik
f9d74c27f9 change linter to ruff 2024-08-09 23:39:54 +01:00
Sebastian Goscik
9d79890eff Update poetry lock 2024-08-09 23:38:46 +01:00
Lloyd Pickering
ccf2cde272 Switch to using UIProtect library (#160)
* Updated poetry dependencies to remove optional flags on dev/test

* file fixups from running poetry run tox

* Updated to Python 3.10

* Switched to UI Protect library

* Updated changelog

* Fix docker permissions

- Make scripts executable by everyone
- Correct XDG variable name to fix incorrect config path being used

* Revert "Updated poetry dependencies to remove optional flags on dev/test" and regenerated lock file
This reverts commit 432d0d3df7.

---------

Co-authored-by: Sebastian Goscik <sebastian.goscik@live.co.uk>
2024-08-09 22:16:19 +00:00
Sebastian Goscik
a8328fd09e Bump version: 0.10.7 → 0.11.0 2024-06-08 01:31:58 +01:00
Sebastian Goscik
28d241610b changelog 2024-06-08 01:31:21 +01:00
Sebastian Goscik
aa1335e73b Fix typos and add experimental downloader to README 2024-06-08 01:29:06 +01:00
Sebastian Goscik
9cb2ccf8b2 Update pyunifiprotect to point to my fork
This is done to pull in features that have not yet been merged into the upstream repo. It also allows for more stability in the future.
2024-06-08 01:18:14 +01:00
Sebastian Goscik
30ea7de5c2 Add experimental downloader
This uses a new API to download events the way the web UI does: it first asks for a video to be prepared (on the Unifi Protect host) and then downloads it. This is potentially more stable than the existing downloader.
2024-06-06 00:41:42 +01:00
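A heavily hedged sketch of the prepare-then-fetch flow described above; `request_export`, `export_ready`, and `fetch_export` are hypothetical method names standing in for the private API the web UI uses:

```
import asyncio

async def experimental_download(client, event_id):
    """Ask the NVR to prepare the clip first, then download it."""
    export = await client.request_export(event_id)  # hypothetical call
    while not await client.export_ready(export):    # hypothetical call
        await asyncio.sleep(1)                      # poll until prepared
    return await client.fetch_export(export)        # hypothetical call
```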
Sebastian Goscik
2dac2cee23 TEMP: Switch to fork of pyunifiprotect
In order to test new functionality of a PR this commit temporarily changes the source of pyunifiprotect
2024-06-06 00:41:42 +01:00
Sebastian Goscik
f4d992838a Fix permissions issue with ufp/sessions.json in docker container
The Python library `platformdirs` detects the user as root instead of the UID set to execute UPB. This workaround forces the session cache file to be placed in /config.
2024-06-06 00:41:20 +01:00
Sebastian Goscik
9fe4394ee4 bump pyunifiprotect to 6.0.1 2024-05-27 23:05:19 +01:00
Sebastian Goscik
e65d8dde6c Bump version: 0.10.6 → 0.10.7 2024-03-23 00:18:57 +00:00
Sebastian Goscik
90108edeb8 Force using pyunifiprotect >= 5.0.1 2024-03-23 00:18:49 +00:00
Sebastian Goscik
1194e957a5 Bump version: 0.10.5 → 0.10.6 2024-03-22 22:50:20 +00:00
Sebastian Goscik
65128b35dd changelog 2024-03-22 22:50:14 +00:00
mmolitor87
64bb353f67 Bump pyunifiprotect to support protect 3.0.22 (#133) 2024-03-22 22:47:54 +00:00
Adrian Keenan
558859dd72 Update docs for ignoring cameras (#134)
* update docs

* remove docker from log scanning notes
2024-03-21 23:09:09 +00:00
Sebastian Goscik
d3b40b443a Bump version: 0.10.4 → 0.10.5 2024-02-24 16:19:22 +00:00
Sebastian Goscik
4bfe9afc10 Bump pyunifiprotect 2024-02-24 16:19:11 +00:00
Sebastian Goscik
c69a3e365a Bump version: 0.10.3 → 0.10.4 2024-01-26 19:49:36 +00:00
Sebastian Goscik
ace6a09bba changelog 2024-01-26 19:49:32 +00:00
Sebastian Goscik
e3c00e3dfa Update pyunifiprotect version 2024-01-26 19:47:44 +00:00
Sebastian Goscik
5f7fad72d5 Bump version: 0.10.2 → 0.10.3 2023-12-07 19:59:13 +00:00
Sebastian Goscik
991998aa37 changelog 2023-12-07 19:59:10 +00:00
Sebastian Goscik
074f5b372c bump pyunifiprotect version 2023-12-07 19:57:21 +00:00
Sebastian Goscik
00aec23805 Bump version: 0.10.1 → 0.10.2 2023-11-21 00:20:46 +00:00
Sebastian Goscik
52e4ecd50d changelog 2023-11-21 00:20:35 +00:00
Sebastian Goscik
6b116ab93b Fixed issue where duplicate events were being downloaded
Previously, Unifi would send only one update containing the end timestamp,
so it was sufficient to check whether it existed in the new event data.
However, it is now possible to get update events after the end timestamp
has been set. With this change, we now look for when the event change
data contains the end timestamp. As long as Unifi does not change its
mind about when an event ends, this should solve the issue.
2023-11-21 00:18:36 +00:00
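In code terms, the change described above amounts to inspecting the update payload itself rather than the merged event state; a sketch, where the `"end"` key is an assumption for illustration:

```
def update_finishes_event(change_data: dict) -> bool:
    """Return True only when this particular update carries the end
    timestamp, so an event is treated as finished exactly once even
    if further updates arrive afterwards."""
    return change_data.get("end") is not None
```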
Sebastian Goscik
70526b2f49 Make default file path format use event start time 2023-11-21 00:08:24 +00:00
Sebastian Goscik
5069d28f0d Bump version: 0.10.0 → 0.10.1 2023-11-01 21:34:01 +00:00
Sebastian Goscik
731ab1081d changelog 2023-11-01 21:33:55 +00:00
Sebastian Goscik
701fd9b0a8 Fix event enum string conversion to value 2023-11-01 21:32:19 +00:00
Sebastian Goscik
5fa202005b Bump version: 0.9.5 → 0.10.0 2023-11-01 00:16:17 +00:00
Sebastian Goscik
3644ad3754 changelog 2023-11-01 00:15:54 +00:00
Sebastian Goscik
9410051ab9 Add feature to skip events longer than a maximum length 2023-11-01 00:11:49 +00:00
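A minimal sketch of such a length check (the `start`/`end` attribute names are assumptions):

```
from datetime import timedelta

def exceeds_max_length(event, max_length: timedelta) -> bool:
    """True if the event ran longer than the configured maximum."""
    return (event.end - event.start) > max_length
```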
Sebastian Goscik
d5a74f475a failed rcat no longer writes to database 2023-10-31 23:37:52 +00:00
Sebastian Goscik
dc8473cc3d Fix bug with event chunking during initial ignore of events 2023-10-31 17:47:59 +00:00
Sebastian Goscik
60901e9a84 Fix crash caused by no events occurring in retention interval 2023-10-31 17:35:30 +00:00
Sebastian Goscik
4a0bd87ef2 Move docker base image to alpine edge to get latest rclone release 2023-10-31 17:32:43 +00:00
Sebastian Goscik
8dc0f8a212 Bump version: 0.9.4 → 0.9.5 2023-10-07 22:52:45 +01:00
Sebastian Goscik
34252c461f changelog 2023-10-07 22:52:17 +01:00
Sebastian Goscik
acc405a1f8 Chunk event query to prevent crashing unifi protect 2023-10-07 22:50:04 +01:00
Sebastian Goscik
b66d40736c Bump dependency versions 2023-10-07 21:49:46 +01:00
cyberpower678
171796e5c3 Update unifi_protect_backup_core.py (#100)
Fix typo in connection attempts. The application only attempted to connect once instead of 10 times.
2023-09-08 16:27:09 +01:00
Sebastian Goscik
cbc497909d linting 2023-07-29 12:07:31 +01:00
Sebastian Goscik
66b3344e29 Add download rate limiter 2023-07-29 12:07:31 +01:00
Sebastian Goscik
89cab64679 Add validation of retention/purge interval 2023-07-29 12:06:54 +01:00
Sebastian Goscik
f2f1c49ae9 Bump version: 0.9.3 → 0.9.4 2023-07-29 11:32:32 +01:00
Sebastian Goscik
8786f2ceb0 Fixed time period parsing
Also updated link to rclone docs to be more direct to the format docs
2023-07-29 11:32:32 +01:00
Sebastian Goscik
1f2a48f95e Bump version: 0.9.2 → 0.9.3 2023-07-08 16:56:23 +01:00
Sebastian Goscik
5d2391e005 Remove Arm v7 docker builds
See: https://www.linuxserver.io/blog/a-farewell-to-arm-hf
2023-07-08 16:55:12 +01:00
Sebastian Goscik
c4e9a42c1a Block all calls to the protect client when the connection is dropped and we are awaiting a reconnect
2023-07-08 16:30:09 +01:00
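One common way to implement this kind of gating is an `asyncio.Event` that callers must wait on before touching the client; a sketch of that pattern, not the commit's actual code:

```
import asyncio

class ConnectionGate:
    """Stall Protect API callers while the connection is down."""

    def __init__(self):
        self._connected = asyncio.Event()
        self._connected.set()

    def drop(self):
        self._connected.clear()  # websocket lost: block callers

    def restore(self):
        self._connected.set()    # reconnected: release callers

    async def wait(self):
        await self._connected.wait()  # call before any Protect request
```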
Sebastian Goscik
6c719c0162 Cache camera names
so an active Protect connection is not needed to perform actions like
uploads which don't rely on Protect.
2023-07-08 15:32:47 +01:00
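A sketch of the cache this commit describes; `protect.bootstrap.cameras` mirrors the pyunifiprotect structure of this era, but treat the attribute names as assumptions:

```
class CameraNameCache:
    """Map camera IDs to names so uploads can label files without a
    live Protect connection."""

    def __init__(self):
        self._names = {}

    def refresh(self, protect):
        # Call while the connection is up, e.g. whenever bootstrap updates.
        self._names = {c.id: c.name for c in protect.bootstrap.cameras.values()}

    def name(self, camera_id):
        return self._names.get(camera_id, "Unknown Camera")
```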
Sebastian Goscik
498f72a09b Bump version: 0.9.1 → 0.9.2 2023-05-24 00:45:00 +01:00
Sebastian Goscik
d0080a569b Changelog 2023-05-24 00:44:54 +01:00
Sebastian Goscik
f89388327f Fix missing event checker not ignoring unwanted cameras 2023-05-22 23:22:41 +01:00
Sebastian Goscik
0a7eb92a36 Bump version: 0.9.0 → 0.9.1 2023-04-29 09:51:33 +01:00
Sebastian Goscik
694e9c6fde updated changelog 2023-04-29 09:50:55 +01:00
Sebastian Goscik
63fdea402d Linting fixes 2023-04-29 09:49:27 +01:00
Sebastian Goscik
f4c3c68f0d Fixed download failure counting
Previously it would only count as a failure if the download "succeeded" but returned None
2023-04-29 09:48:46 +01:00
Igor Wolbers
e5112de35c Add extra param to purge (#86)
* Added optional argument string to pass directly to the `rclone delete` command used to purge video files. This will allow for immediate deletion of files on destinations where the file might otherwise go to a recycle bin by default.

---------

Co-authored-by: Igor Wolbers <igor@sparcobv.onmicrosoft.com>
Co-authored-by: Sebastian Goscik <sebastian.goscik@live.co.uk>
2023-04-29 08:19:41 +00:00
Sebastian Goscik
1b38cb3db3 Fix typo in readme 2023-04-26 10:20:22 +01:00
Sebastian Goscik
237d7ceeb1 Merge pull request #83 from IgorWolbers/add-service-documentation 2023-04-03 11:14:23 +00:00
Sebastian Goscik
6b1066d31e Log when an error occurs trying to add a notifier 2023-04-02 23:15:47 +01:00
Sebastian Goscik
798139a182 Fix arm v7 build 2023-03-24 15:22:20 +00:00
Sebastian Goscik
9def99ff97 linter fixes 2023-03-24 15:06:10 +00:00
Igor Wolbers
8d3ee5bdfd Running Backup Tool as a Service (LINUX ONLY) 2023-03-24 08:56:01 -04:00
Igor Wolbers
c6584759d9 Fixed the docker run command example, which had a ` instead
of a '. This caused the command to never terminate when executing.
2023-03-24 12:23:03 +00:00
Sebastian Goscik
b46c9485c8 Bump version: 0.8.8 → 0.9.0 2023-03-24 12:22:32 +00:00
Sebastian Goscik
561ce181ea changelog 2023-03-24 12:22:32 +00:00
Sebastian Goscik
cec323f803 Make download failure assertion more specific 2023-03-24 12:18:09 +00:00
Sebastian Goscik
89fe672693 Add ability to ignore events that keep failing 2023-03-24 12:17:44 +00:00
Sebastian Goscik
c55f50153f isort 2023-03-24 11:17:17 +00:00
Sebastian Goscik
144938f7e5 Fix error log when no notifiers are setup 2023-03-24 11:16:37 +00:00
Sebastian Goscik
782d126ae5 Add ability to skip missing events at launch 2023-03-24 01:02:58 +00:00
Sebastian Goscik
0d3395b74a Fix tasks being started prematurely 2023-03-24 00:50:42 +00:00
Sebastian Goscik
d9af6a03a5 fix isort induced circular import 2023-03-08 00:35:37 +00:00
Sebastian Goscik
48f743bc8e flake8 & mypy fixes 2023-03-08 00:03:26 +00:00
Sebastian Goscik
6121f74a80 remove pylint dependency 2023-03-07 00:53:07 +00:00
Sebastian Goscik
07c2278428 isort 2023-03-07 00:42:49 +00:00
Sebastian Goscik
1ff59773f1 Tidy poetry files 2023-03-07 00:41:49 +00:00
Sebastian Goscik
08f2674497 Stop apprise errors from preventing regular logging 2023-03-07 00:17:18 +00:00
Sebastian Goscik
818f2eb5b3 Reclassify log messages 2023-03-07 00:16:19 +00:00
Sebastian Goscik
dfdc85001c color logging no longer uses global variable 2023-03-07 00:16:19 +00:00
Sebastian Goscik
22d20c9905 Add star graph 2023-02-26 00:09:48 +00:00
Sebastian Goscik
86963fb0ff Add apprise env var to readme 2023-02-26 00:05:42 +00:00
Sebastian Goscik
93e8e1a812 Update poetry.lock 2023-02-26 00:00:36 +00:00
Sebastian Goscik
fb1f266eae Refactor logging customisations into custom handler 2023-02-26 00:00:25 +00:00
Sebastian Goscik
ce34afaf06 Add the ability to send logging output to apprise 2023-02-25 20:51:35 +00:00
Sebastian Goscik
6b60fac3c1 Log main loop exception
and allow time for other tasks to finish before closing the program
2023-02-25 20:51:18 +00:00
Sebastian Goscik
73022fddf1 Simplify exception logging 2023-02-25 20:51:18 +00:00
Sebastian Goscik
900d0d2881 Re-try connecting to unifi protect if initial connection fails 2023-02-25 20:51:18 +00:00
Sebastian Goscik
f7e43b8e95 Add notes about reducing disk wear 2023-02-25 12:26:15 +00:00
Sebastian Goscik
cf7229e05f Restructure readme 2023-02-25 12:18:22 +00:00
Sebastian Goscik
4798b3d269 fix module and package name clash 2023-01-16 13:20:29 +00:00
Sebastian Goscik
5b50b8144b remove stray print 2023-01-16 12:41:57 +00:00
Sebastian Goscik
965dde53f6 Merge pull request #70 from darron/main
Build for arm/v7, add makefile target, adjust Github Action.
2023-01-11 11:53:20 +00:00
Darron Froese
3677e4a86f Build for arm/v7, add makefile target, adjust Github Action. 2023-01-01 14:32:58 -07:00
Sebastian Goscik
3540ec1d04 Bump version: 0.8.7 → 0.8.8 2022-12-30 13:16:45 +00:00
Sebastian Goscik
8ed60aa925 Made purge interval configurable and default back to once a day 2022-12-30 13:14:31 +00:00
Sebastian Goscik
ca455ebcd0 Bump version: 0.8.6 → 0.8.7 2022-12-11 13:46:52 +00:00
Sebastian Goscik
16315ca23c Fix improper unpacking of upload events 2022-12-11 13:36:40 +00:00
Sebastian Goscik
ac0f6f5fcb Bump version: 0.8.5 → 0.8.6 2022-12-10 06:59:45 +00:00
Sebastian Goscik
0c34294b7e clear current event after upload/download 2022-12-10 06:44:56 +00:00
Sebastian Goscik
f195b8a4a4 Fix ignoring missing event before one has started downloading/uploading 2022-12-10 06:35:38 +00:00
Sebastian Goscik
645e339314 Bump version: 0.8.4 → 0.8.5 2022-12-09 23:20:05 +00:00
Sebastian Goscik
13c5b630d4 fix using event instead of event id in set to exclude missing events 2022-12-09 23:19:38 +00:00
Sebastian Goscik
44867e7427 Bump version: 0.8.3 → 0.8.4 2022-12-09 11:15:07 +00:00
Sebastian Goscik
0978798078 Fix uploading files not being accounted for when checking for missing events 2022-12-09 11:12:08 +00:00
Sebastian Goscik
8e3ea2b13f Log buffer size in human readable format 2022-12-09 11:12:08 +00:00
Sebastian Goscik
8a67311fda show default buffer size in command help 2022-12-08 12:40:32 +00:00
Sebastian Goscik
8aedb35c45 Update readme 2022-12-08 12:40:19 +00:00
Sebastian Goscik
4eed1c01c4 Bump version: 0.8.2 → 0.8.3 2022-12-08 12:22:47 +00:00
Sebastian Goscik
a4091699a1 Fix setting no verbosity for the docker container 2022-12-08 12:12:54 +00:00
Sebastian Goscik
58eb1fd8a7 Added event ID to uploader/downloader logging
Also fixed issue where logging outside of unifi_protect_backup was not adding colors
2022-12-08 12:04:36 +00:00
Sebastian Goscik
bba96e9d86 Make video download buffer size configurable 2022-12-08 00:15:11 +00:00
Sebastian Goscik
dd69a18dbf Raise an error when trying to add a video larger than the buffer 2022-12-08 00:14:08 +00:00
Sebastian Goscik
3510a50d0f remove unused asyncio loop 2022-12-07 23:25:41 +00:00
Sebastian Goscik
3e0044cd80 Make color logging optional
Returns to the previous default mode of plain logging but allows color logging to be enabled
2022-12-06 00:57:05 +00:00
Sebastian Goscik
1b3d196672 Add timezone info to debug log 2022-12-06 00:57:05 +00:00
Sebastian Goscik
c22819c04d Correct missing event logging for smart detections 2022-12-06 00:57:05 +00:00
Sebastian Goscik
ac66f4eaab Reduce log spam from missing events unless using extra_debug 2022-12-06 00:57:05 +00:00
Sebastian Goscik
34bc37bd0b Bump version: 0.8.1 → 0.8.2 2022-12-05 14:27:11 +00:00
Sebastian Goscik
f15cdf9a9b updated changelog 2022-12-05 14:27:06 +00:00
Sebastian Goscik
63d368f14c Added note to readme about 0.8 docker changes 2022-12-05 14:24:27 +00:00
Sebastian Goscik
ee01edf55c Make sure config directories exist in the container 2022-12-05 14:04:43 +00:00
Sebastian Goscik
4e10e0f10e Use run_command in downloader and uploader 2022-12-05 14:03:59 +00:00
Sebastian Goscik
385f115eab Add ability for run_command to pass data to stdin 2022-12-05 14:03:23 +00:00
Sebastian Goscik
b4062d3b53 Fix issue where indented stdout/stderr was being returned
The indentation was supposed to be only for the logging to make it easier to read but was also being returned, thus breaking parsing of the command output

Fixes #60
2022-12-05 13:40:32 +00:00
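The fix separates the indentation (logging only) from the returned data; a simplified sketch of a `run_command` that does this, under the assumption the real helper behaves similarly:

```
import asyncio
import logging

logger = logging.getLogger(__name__)

async def run_command(cmd: str):
    """Run a shell command, log indented output for readability, but
    return the raw stdout/stderr so callers can still parse it."""
    proc = await asyncio.create_subprocess_shell(
        cmd,
        stdout=asyncio.subprocess.PIPE,
        stderr=asyncio.subprocess.PIPE,
    )
    stdout, stderr = await proc.communicate()
    indented = "\n".join("    " + line for line in stdout.decode().splitlines())
    logger.debug("stdout:\n%s", indented)  # indentation stays in the log only
    return proc.returncode, stdout, stderr
```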
Sebastian Goscik
7bfcb548e2 Bump version: 0.8.0 → 0.8.1 2022-12-04 12:04:15 +00:00
Sebastian Goscik
a74e4b042d changelog 2022-12-04 12:03:57 +00:00
Sebastian Goscik
2c5308aa20 updated name in pyproject.toml 2022-12-04 12:03:54 +00:00
Sebastian Goscik
9d375d4e7b update bumpversion cfg to use new tar.gz name 2022-12-04 11:59:36 +00:00
Sebastian Goscik
df4390688b Update docs and dockerfile to save events database 2022-12-03 22:40:40 +00:00
Sebastian Goscik
3acfd1f543 Fix dockerfile - to _
I have no idea how this worked before, but it doesn't now.
2022-12-03 22:04:50 +00:00
Sebastian Goscik
49c11c1872 Make ci show all temp files 2022-12-03 22:00:22 +00:00
Sebastian Goscik
93cf297371 Bump version: 0.7.4 → 0.8.0 2022-12-03 21:54:45 +00:00
Sebastian Goscik
8baa413a23 Merge pull request #57 from ep1cman/restructure
Major Restructure
2022-12-03 21:51:20 +00:00
Sebastian Goscik
471ecb0662 Major Restructure
- Each task is now its own class
- Added a database to track backed up events and their destinations
- Added task to check for and backup missed events
2022-12-03 21:48:44 +00:00
Sebastian Goscik
031d4e4862 Update dev.yml
Do not trigger dev pipeline on pull requests
2022-08-24 15:28:48 +01:00
Sebastian Goscik
f109ec2a48 Bump version: 0.7.3 → 0.7.4 2022-08-21 20:51:08 +01:00
Sebastian Goscik
6a8bb39b63 Change rclone config command to use this container instead of a separate rclone container 2022-08-21 20:51:08 +01:00
Sebastian Goscik
49ddb081a8 Added rclone debugging instructions 2022-08-21 20:51:08 +01:00
Sebastian Goscik
941c92142f Fixed rclone.conf path in back to cloud example 2022-08-21 20:51:08 +01:00
Sebastian Goscik
150d8e6f49 Update CI flows to build arm64 containers 2022-08-21 20:51:08 +01:00
Sebastian Goscik
5ae43f08af Bump version: 0.7.2 → 0.7.3 2022-07-31 11:35:25 +01:00
Sebastian Goscik
0a36102eed Fixed dockerfile for pyunifiprotect 4.0.0
As of pyunifiprotect 4.0.0, a rust-based library is needed.
In order for this to install correctly, cargo is needed, and alpine
needed to be bumped to 3.16.
2022-07-31 11:24:30 +01:00
Sebastian Goscik
92be1cea5d Bump pyunifiprotect 2022-07-31 01:48:04 +01:00
Sebastian Goscik
1813bc0176 Bump version: 0.7.1 → 0.7.2 2022-07-17 20:04:03 +01:00
Sebastian Goscik
9451fb4235 Bump pyunifiprotect -> v3.9.2 2022-07-16 23:37:36 +01:00
Sebastian Goscik
6fe18a193b Bump version: 0.7.0 → 0.7.1 2022-06-08 02:43:54 +01:00
Sebastian Goscik
f3a8bf6957 Updated issue template to have more questions 2022-06-07 22:52:19 +01:00
Sebastian Goscik
cb93ec7c6e Updated account setup instructions 2022-06-07 22:50:29 +01:00
Sebastian Goscik
f82e6064e7 Bump dependency versions 2022-06-07 22:13:17 +01:00
Sebastian Goscik
6aac1aadab Added instructions on local user account creation 2022-04-17 11:31:30 +01:00
Sebastian Goscik
13b11359fa Bump version: 0.6.0 → 0.7.0 2022-03-26 16:28:58 +00:00
Sebastian Goscik
540ad6e9f6 Updated changelog 2022-03-26 16:28:51 +00:00
Sebastian Goscik
912433e640 Merge pull request #33 from ep1cman/dev
Add ability to change clip file structure via template
2022-03-26 16:25:07 +00:00
Sebastian Goscik
f4a0c2bdcd Merge pull request #32 from ircmaxell/fix_disconnect_handling
Skip DISCONNECT events
2022-03-26 16:20:51 +00:00
Anthony Ferrara
f2c9ee5c76 Skip unwanted event types
e.g. DISCONNECT events never have video associated with them; skip
processing if we encounter event types we are not interested in.

Co-authored-by: Sebastian Goscik <sebastian.goscik@live.co.uk>
2022-03-26 16:15:36 +00:00
Sebastian Goscik
53ab3dc432 Fix typos 2022-03-26 00:17:34 +00:00
Sebastian Goscik
381f90f497 Add the ability to change the way the clip files are structured 2022-03-26 00:05:23 +00:00
Sebastian Goscik
af8ca90356 Adjusting typing dependencies to fix CI 2022-03-18 22:51:48 +00:00
Sebastian Goscik
189450e590 Make dev dependencies optional 2022-03-18 22:40:23 +00:00
Sebastian Goscik
3f55fa5fdb Reduced size of docker container 2022-03-18 22:39:59 +00:00
Sebastian Goscik
52e72a7425 Bump version: 0.5.3 → 0.6.0 2022-03-18 21:44:32 +00:00
Sebastian Goscik
003e6eb990 Updated changelog 2022-03-18 21:44:32 +00:00
Sebastian Goscik
8bebeceaa6 Linter fixes 2022-03-18 21:44:32 +00:00
Sebastian Goscik
e2eb7858da Added support for doorbell ring events 2022-03-18 21:44:32 +00:00
Sebastian Goscik
453fed6c57 Added ability to choose which event types to backup
Co-authored-by: J3n50m4t <j3n50m4t@j3n50m4t.com>
2022-03-18 21:42:32 +00:00
Sebastian Goscik
ae323e68aa Actually assign new timestamps with proper timezones 2022-03-18 18:32:30 +00:00
Sebastian Goscik
4eec2fdde0 Bump version: 0.5.2 → 0.5.3 2022-03-11 23:10:53 +00:00
Sebastian Goscik
d31b9bffc6 Updated changelog 2022-03-11 23:10:34 +00:00
Sebastian Goscik
0a4a2401be Now uses timezone of the NVR for all timestamps 2022-03-10 22:35:14 +00:00
Sebastian Goscik
3c3c47b3b4 Update instructions for using the container to match new config 2022-03-10 21:07:02 +00:00
Sebastian Goscik
51e2446e44 Bump version: 0.5.1 → 0.5.2 2022-03-10 19:33:47 +00:00
Sebastian Goscik
5f8ae03d7a Updated changelog 2022-03-10 19:33:40 +00:00
Sebastian Goscik
92bb362f2b Changed quotes in delete command to " 2022-03-10 19:32:30 +00:00
Sebastian Goscik
401031dc2f Fixed dockerfile tar.gz version 2022-03-08 21:48:45 +00:00
Noel Madali
24e508bf69 * Adopted linuxserver container pattern
* Clean up Dockerfile
* Use default rclone.conf and add check if doesn't exist (from docker_user)
2022-03-08 21:19:06 +00:00
Sebastian Goscik
71c86714c1 Bump version: 0.5.0 → 0.5.1 2022-03-07 22:39:20 +00:00
Sebastian Goscik
7ee34c1c6a Update changelog 2022-03-07 22:39:14 +00:00
Sebastian Goscik
5bd4a35d5d Change ' quotes to " in rclone command
' does not work as expected on Windows
2022-03-07 22:38:12 +00:00
Sebastian Goscik
298f500811 Bump version: 0.4.0 → 0.5.0 2022-03-06 18:18:59 +00:00
Sebastian Goscik
0125b6d21a Updated changelog 2022-03-06 18:18:59 +00:00
Sebastian Goscik
04694712d8 Added feature to check duration of downloaded clips if ffprobe is present 2022-03-06 18:18:59 +00:00
Sebastian Goscik
e3ed8ef303 Added delay before downloading clips
Unifi protect does not return full video clips if the clip is requested too soon.
There are two issues at play here:
  - Protect will only cut a clip on a keyframe, which happens every 5s
  - Protect's pipeline needs a finite amount of time to make a clip available

Known Issues: It still seems to sometimes miss a single frame
2022-03-06 18:03:27 +00:00
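The delay itself is simple; a sketch assuming a helper like pyunifiprotect's `get_camera_video`, with an illustrative delay value rather than the tool's exact setting:

```
import asyncio

DOWNLOAD_DELAY_SECONDS = 10  # assumed value for illustration

async def download_after_delay(client, event):
    """Wait out Protect's clip pipeline before requesting the video,
    so the returned clip is not cut short."""
    await asyncio.sleep(DOWNLOAD_DELAY_SECONDS)
    return await client.get_camera_video(event.camera_id, event.start, event.end)
```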
Sebastian Goscik
43dd561d81 Rename RCloneException to more general SubprocessException 2022-03-06 17:59:00 +00:00
Sebastian Goscik
ad6b4dc632 Bump version: 0.3.1 → 0.4.0 2022-03-05 15:00:11 +00:00
Sebastian Goscik
a268ad652a updated changelog 2022-03-05 14:59:55 +00:00
Sebastian Goscik
2b46b5bd4a Added --version
Implements #15
2022-03-05 14:50:54 +00:00
Sebastian Goscik
9e164de686 Demote websocket retry logging
Previously `-v` showed a lot of spam messages each time the check
was done, which is not particularly useful.
2022-02-24 23:54:29 +00:00
Sebastian Goscik
78e7b8fbb0 Bump version: 0.3.0 → 0.3.1 2022-02-24 21:24:16 +00:00
Sebastian Goscik
76a0591beb changelog 2022-02-24 21:24:06 +00:00
Sebastian Goscik
15e0ae5f4d Merge pull request #13 from Sticklyman1936/check_ws_and_reconnect
Periodically check for websocket disconnect and re-init
2022-02-24 21:16:01 +00:00
Sascha Bischoff
c9634ba10a Periodically check for websocket disconnect and re-init
Both network issues and restarts of Unifi Protect can cause the
websocket to disconnect. Once this happens, no more events are
received, and hence no events are stored via rclone.

We add a task which checks that the websocket is connected every
minute. If the websocket is not connected, the connection is totally
reset. For a simple network issue, it should be sufficient to just
call pyunifiprotect's update(), but this doesn't work when protect has
been restarted. Given that this is a tool that should always be
running, we opt for the most extreme option of totally resetting the
connection, and re-establishing it from scratch.
2022-02-24 18:54:24 +00:00
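A sketch of such a watchdog task; `check_ws` is the pyunifiprotect-era call referenced in this commit, and the reset sequence below is a simplified stand-in for the full tear-down and re-login described above:

```
import asyncio

async def watch_websocket(protect, interval=60):
    """Every `interval` seconds, verify the websocket is alive and
    totally reset the connection if it is not."""
    while True:
        await asyncio.sleep(interval)
        if not protect.check_ws():
            await protect.close_session()     # drop the dead connection...
            await protect.update(force=True)  # ...and rebuild from scratch
```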
Sebastian Goscik
e3fbb1be10 Bump version: 0.2.1 → 0.3.0 2022-02-22 23:40:36 +00:00
Sebastian Goscik
47c9338fe5 Changelog 2022-02-22 23:40:24 +00:00
Sebastian Goscik
48042aee04 Added clarifications to contribution guide
- Remove mention of docs since those were removed
- Clarified how to run the application via poetry
2022-02-22 23:37:00 +00:00
Sebastian Goscik
e56a38b73f CI: Prevent building dev docker on pull requests 2022-02-22 23:37:00 +00:00
Sebastian Goscik
3e53d43f95 Add timeout to known download exceptions 2022-02-22 23:36:57 +00:00
Sebastian Goscik
90e50fd982 Fix: Properly handle unknown IDs
Today after adding a new camera for testing, it became
clear that the previous assumption that pyunifiprotect
would update its bootstrap when new cameras were
added was incorrect.
2022-02-22 23:36:30 +00:00
Sebastian Goscik
0a2c0aa326 Merge pull request #11 from Sticklyman1936/rclone_bw_limit
Add option to supply extra args to rclone
2022-02-22 16:15:32 +00:00
Sascha Bischoff
9f6ec7628c Add option to supply extra arguments to rclone
Add in the capability to pass extra arguments through to rclone. These
are passed verbatim, and are set to '' by default. They can be passed
either with --rclone-args or by setting the environment variable
RCLONE_ARGS.

For example, the expectation is that the end user can use these for
setting a bandwidth limit so that rclone uploading doesn't saturate
their internet bandwidth.
2022-02-22 15:25:27 +00:00
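For instance, a backup tool can splice such verbatim arguments into its rclone invocation like this (a sketch, not the tool's actual code):

```
import shlex

def build_rclone_command(source, destination, rclone_args=""):
    """Compose an rclone call with user-supplied extra arguments,
    e.g. rclone_args="--bwlimit 1M" to cap upload bandwidth."""
    return ["rclone", "copyto", *shlex.split(rclone_args), source, destination]
```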
35 changed files with 5086 additions and 2825 deletions


@@ -1,5 +1,5 @@
[bumpversion]
current_version = 0.2.1
current_version = 0.13.1
commit = True
tag = True
@@ -7,10 +7,14 @@ tag = True
search = version = "{current_version}"
replace = version = "{new_version}"
[bumpversion:file:uv.lock]
search = version = "{current_version}"
replace = version = "{new_version}"
[bumpversion:file:unifi_protect_backup/__init__.py]
search = __version__ = '{current_version}'
replace = __version__ = '{new_version}'
search = __version__ = "{current_version}"
replace = __version__ = "{new_version}"
[bumpversion:file:Dockerfile]
search = COPY dist/unifi-protect-backup-{current_version}.tar.gz sdist.tar.gz
replace = COPY dist/unifi-protect-backup-{new_version}.tar.gz sdist.tar.gz
search = COPY dist/unifi_protect_backup-{current_version}.tar.gz sdist.tar.gz
replace = COPY dist/unifi_protect_backup-{new_version}.tar.gz sdist.tar.gz


@@ -1,24 +0,0 @@
# http://editorconfig.org
root = true
[*]
indent_style = space
indent_size = 4
trim_trailing_whitespace = true
insert_final_newline = true
charset = utf-8
end_of_line = lf
[*.bat]
indent_style = tab
end_of_line = crlf
[LICENSE]
insert_final_newline = false
[Makefile]
indent_style = tab
[*.{yml, yaml}]
indent_size = 2


@@ -1,6 +1,8 @@
* Unifi Protect Backup version:
* Unifi Protect version:
* Python version:
* Operating System:
* Are you using a docker container or native?:
### Description

.github/ISSUE_TEMPLATE/bug_report.md

@@ -0,0 +1,26 @@
---
name: Bug report
about: Create a report to help UPB improve
title: ''
labels: ''
assignees: ''
---
* Unifi Protect Backup version:
* Unifi Protect version:
* Python version:
* Operating System:
* Are you using a docker container or native?:
### Description
Describe what you were trying to get done.
Tell us what happened, what went wrong, and what you expected to happen.
### What I Did
```
Paste the command(s) you ran and the output.
If there was a crash, please include the traceback here.
```


@@ -1,96 +1,110 @@
# This is a basic workflow to help you get started with Actions
name: Test and Build
name: dev workflow
env:
IMAGE_NAME: ${{ github.repository }}
# Controls when the action will run.
on:
# Triggers the workflow on push or pull request events but only for the master branch
push:
branches: [ master, main, dev ]
branches-ignore:
- main
pull_request:
branches: [ master, main, dev ]
# Allows you to run this workflow manually from the Actions tab
workflow_dispatch:
# A workflow run is made up of one or more jobs that can run sequentially or in parallel
jobs:
# This workflow contains a single job called "test"
test:
# The type of runner that the job will run on
strategy:
matrix:
python-versions: [3.9]
os: [ubuntu-18.04, macos-latest, windows-latest]
python-versions: ["3.10", "3.11", "3.12", "3.13"]
os: [ubuntu-latest, macos-latest, windows-latest]
runs-on: ${{ matrix.os }}
# Steps represent a sequence of tasks that will be executed as part of the job
steps:
# Checks-out your repository under $GITHUB_WORKSPACE, so your job can access it
- uses: actions/checkout@v2
- uses: actions/setup-python@v2
- name: Configure Git to maintain line endings
run: |
git config --global core.autocrlf false
git config --global core.eol lf
- name: Checkout code
uses: actions/checkout@v4
- name: Set up Python
uses: actions/setup-python@v5
with:
python-version: ${{ matrix.python-versions }}
- name: Install dependencies
- name: Install uv (Unix)
if: runner.os != 'Windows'
run: |
python -m pip install --upgrade pip
pip install poetry tox tox-gh-actions
curl -LsSf https://astral.sh/uv/install.sh | sh
echo "$HOME/.cargo/bin" >> $GITHUB_PATH
- name: test with tox
run:
tox
- name: Install uv (Windows)
if: runner.os == 'Windows'
run: |
iwr -useb https://astral.sh/uv/install.ps1 | iex
echo "$HOME\.cargo\bin" | Out-File -FilePath $env:GITHUB_PATH -Encoding utf8 -Append
- name: list files
run: ls -l .
- uses: codecov/codecov-action@v1
with:
fail_ci_if_error: true
files: coverage.xml
- name: Install dev dependencies
run: |
uv sync --dev
- name: Run pre-commit
run: uv run pre-commit run --all-files
- name: Run pytest
run: uv run pytest
- name: Build
run: uv build
dev_container:
name: Create dev container
runs-on: ubuntu-20.04
strategy:
matrix:
python-versions: [3.9]
# Steps represent a sequence of tasks that will be executed as part of the job
name: Create dev container
needs: test
if: github.ref == 'refs/heads/dev'
runs-on: ubuntu-latest
permissions:
contents: read
packages: write
steps:
# Checks-out your repository under $GITHUB_WORKSPACE, so your job can access it
- uses: actions/checkout@v2
- uses: actions/setup-python@v2
- name: Checkout code
uses: actions/checkout@v4
- name: Set up Python
uses: actions/setup-python@v5
with:
python-version: ${{ matrix.python-versions }}
python-version: '3.12'
- name: Install dependencies
- name: Install uv (Unix)
if: runner.os != 'Windows'
run: |
python -m pip install --upgrade pip
pip install poetry tox tox-gh-actions
curl -LsSf https://astral.sh/uv/install.sh | sh
echo "$HOME/.cargo/bin" >> $GITHUB_PATH
- name: Build wheels and source tarball
run: >-
poetry build
- name: build container
id: docker_build
run: docker build . --file Dockerfile --tag $IMAGE_NAME --label "runnumber=${GITHUB_RUN_ID}"
- name: log in to container registry
run: echo "${{ secrets.GITHUB_TOKEN }}" | docker login ghcr.io -u ${{ github.actor }} --password-stdin
- name: push container image
- name: Install uv (Windows)
if: runner.os == 'Windows'
run: |
IMAGE_ID=ghcr.io/$IMAGE_NAME
iwr -useb https://astral.sh/uv/install.ps1 | iex
echo "$HOME\.cargo\bin" | Out-File -FilePath $env:GITHUB_PATH -Encoding utf8 -Append
# Change all uppercase to lowercase
IMAGE_ID=$(echo $IMAGE_ID | tr '[A-Z]' '[a-z]')
echo IMAGE_ID=$IMAGE_ID
echo VERSION=$VERSION
docker tag $IMAGE_NAME $IMAGE_ID:dev
docker push $IMAGE_ID:dev
- name: Build
run: uv build
- name: Set up QEMU
uses: docker/setup-qemu-action@v3
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Login to GitHub Container Registry
uses: docker/login-action@v3
with:
registry: ghcr.io
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Build and push Docker image
uses: docker/build-push-action@v5
with:
context: .
platforms: linux/amd64,linux/arm64
push: true
tags: ghcr.io/${{ github.repository }}:dev


@@ -1,50 +0,0 @@
# This is a basic workflow to help you get started with Actions
name: stage & preview workflow
# Controls when the action will run.
on:
# Triggers the workflow on push or pull request events but only for the master branch
push:
branches: [ master, main ]
# Allows you to run this workflow manually from the Actions tab
workflow_dispatch:
# A workflow run is made up of one or more jobs that can run sequentially or in parallel
jobs:
publish_dev_build:
runs-on: ubuntu-latest
strategy:
matrix:
python-versions: [ 3.9 ]
steps:
- uses: actions/checkout@v2
- uses: actions/setup-python@v2
with:
python-version: ${{ matrix.python-versions }}
- name: Install dependencies
run: |
python -m pip install --upgrade pip
pip install poetry tox tox-gh-actions
- name: test with tox
run:
tox
- name: Build wheels and source tarball
run: |
poetry version $(poetry version --short)-dev.$GITHUB_RUN_NUMBER
poetry version --short
poetry build
- name: publish to Test PyPI
uses: pypa/gh-action-pypi-publish@master
with:
user: __token__
password: ${{ secrets.TEST_PYPI_API_TOKEN}}
repository_url: https://test.pypi.org/legacy/
skip_existing: true


@@ -1,41 +1,27 @@
# Publish package on main branch if it's tagged with 'v*'
name: release & publish workflow
name: Release & Publish Workflow
# Controls when the action will run.
on:
# Triggers the workflow on push events but only for the master branch
push:
tags:
- 'v*'
# Allows you to run this workflow manually from the Actions tab
workflow_dispatch:
env:
IMAGE_NAME: ${{ github.repository }}
# A workflow run is made up of one or more jobs that can run sequentially or in parallel
jobs:
# This workflow contains a single job called "release"
release:
name: Create Release
runs-on: ubuntu-20.04
runs-on: ubuntu-latest
strategy:
matrix:
python-versions: [3.9]
# Steps represent a sequence of tasks that will be executed as part of the job
steps:
- name: Get version from tag
id: tag_name
run: |
echo ::set-output name=current_version::${GITHUB_REF#refs/tags/v}
echo "current_version=${GITHUB_REF#refs/tags/v}" >> $GITHUB_OUTPUT
shell: bash
# Checks-out your repository under $GITHUB_WORKSPACE, so your job can access it
- uses: actions/checkout@v2
- name: Checkout code
uses: actions/checkout@v4
- name: Get Changelog Entry
id: changelog_reader
@@ -44,62 +30,57 @@ jobs:
version: ${{ steps.tag_name.outputs.current_version }}
path: ./CHANGELOG.md
- uses: actions/setup-python@v2
- name: Set up Python
uses: actions/setup-python@v5
with:
python-version: ${{ matrix.python-versions }}
python-version: "3.10"
- name: Install dependencies
- name: Install uv
run: |
python -m pip install --upgrade pip
pip install poetry
curl -LsSf https://astral.sh/uv/install.sh | sh
echo "$HOME/.cargo/bin" >> $GITHUB_PATH
- name: Build wheels and source tarball
run: >-
poetry build
run: uv build
- name: show temporary files
run: >-
ls -l
- name: Show build artifacts
run: ls -lR dist/
- name: build container
id: docker_build
run: docker build . --file Dockerfile --tag $IMAGE_NAME --label "runnumber=${GITHUB_RUN_ID}"
- name: Set up QEMU
uses: docker/setup-qemu-action@v3
- name: log in to container registry
run: echo "${{ secrets.GITHUB_TOKEN }}" | docker login ghcr.io -u ${{ github.actor }} --password-stdin
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
- name: create github release
- name: Log in to container registry
uses: docker/login-action@v3
with:
registry: ghcr.io
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Build and push container
uses: docker/build-push-action@v5
with:
context: .
platforms: linux/amd64,linux/arm64
push: true
tags: |
ghcr.io/${{ github.repository }}:${{ steps.tag_name.outputs.current_version }}
ghcr.io/${{ github.repository }}:latest
- name: Create GitHub release
id: create_release
uses: softprops/action-gh-release@v1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
body: ${{ steps.changelog_reader.outputs.changes }}
files: dist/*.whl
files: dist/*
draft: false
prerelease: false
- name: push container image
run: |
IMAGE_ID=ghcr.io/$IMAGE_NAME
# Change all uppercase to lowercase
IMAGE_ID=$(echo $IMAGE_ID | tr '[A-Z]' '[a-z]')
# Strip git ref prefix from version
VERSION=$(echo "${{ github.ref }}" | sed -e 's,.*/\(.*\),\1,')
# Strip "v" prefix from tag name
[[ "${{ github.ref }}" == "refs/tags/"* ]] && VERSION=$(echo $VERSION | sed -e 's/^v//')
# Use Docker `latest` tag convention
[ "$VERSION" == "master" ] && VERSION=latest
echo IMAGE_ID=$IMAGE_ID
echo VERSION=$VERSION
docker tag $IMAGE_NAME $IMAGE_ID:$VERSION
docker tag $IMAGE_NAME $IMAGE_ID:latest
docker push $IMAGE_ID:$VERSION
docker push $IMAGE_ID:latest
- name: publish to PyPI
- name: Publish to PyPI
uses: pypa/gh-action-pypi-publish@release/v1
with:
user: __token__

.gitignore

@@ -113,5 +113,12 @@ ENV/
# mkdocs build dir
site/
# Docker mounted volumes
config/
data/
.envrc
clips/
*.sqlite
.tool-versions
docker-compose.yml


@@ -5,32 +5,26 @@ repos:
- id: forbid-crlf
- id: remove-crlf
- repo: https://github.com/pre-commit/pre-commit-hooks
rev: v3.4.0
rev: v5.0.0
hooks:
- id: trailing-whitespace
- id: end-of-file-fixer
- id: check-merge-conflict
- id: check-yaml
args: [ --unsafe ]
- repo: https://github.com/pre-commit/mirrors-isort
rev: v5.8.0
- repo: https://github.com/astral-sh/ruff-pre-commit
# Ruff version.
rev: v0.11.4
hooks:
- id: isort
args: [ "--filter-files" ]
- repo: https://github.com/ambv/black
rev: 21.5b1
hooks:
- id: black
language_version: python3.9
- repo: https://github.com/pycqa/flake8
rev: 3.9.2
hooks:
- id: flake8
additional_dependencies: [ flake8-typing-imports==1.10.0 ]
# Run the linter.
- id: ruff
# Run the formatter.
- id: ruff-format
- repo: https://github.com/pre-commit/mirrors-mypy
rev: v0.901
rev: v1.14.1
hooks:
- id: mypy
exclude: tests/
additional_dependencies:
- types-click
- types-pytz
- types-cryptography
- types-python-dateutil
- types-aiofiles


@@ -4,6 +4,250 @@ All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
## [0.13.1] - 2025-06-26
### Fixed
- Bumped uiprotect version to support unifi protect 6
## [0.13.0] - 2025-04-09
### Added
- Parallel uploaders are now supported
- All smart detection types are now supported
- Migrated the project from poetry to uv
### Fixed
- Corrected the envvar for setting cameras to back up: ONLY_CAMERAS -> CAMERAS
- Bumped to the latest uiprotect library to fix issue when unifi access devices are present
## [0.12.0] - 2025-01-18
### Added
- Tool now targets UIProtect instead of pyunifiprotect, which should help with any lingering auth issues with Unifi OS 4.X
- Python Version bumped to 3.10 (based on UIProtect need)
- The ability to specify only specific cameras to backup
- Re-enabled the experimental downloader after adding a monkey patch for UIProtect to include the unmerged code
- Switched linter to `ruff`
- Added support for SMART_DETECT_LINE events
### Fixed
- Unifi now returns unfinished events, this is now handled correctly
- Login attempts now use an exponentially increasing delay to try to work around aggressive rate limiting on logins
## [0.11.0] - 2024-06-08
### Added
- A new experimental downloader that uses the same mechanism the web ui does. Enable with
`--experimental-downloader`
### Fixed
- Support for UniFi OS 4.x.x
## [0.10.7] - 2024-03-22
### Fixed
- Set pyunifiprotect to a minimum version of 5.0.0
## [0.10.6] - 2024-03-22
### Fixed
- Bumped `pyunifiprotect` version to fix issues with versions of Unifi Protect after 3.0.10
## [0.10.5] - 2024-01-26
### Fixed
- Bumped `pyunifiprotect` version to fix issue with old version of yarl
## [0.10.4] - 2024-01-26
### Fixed
- Bumped `pyunifiprotect` version to fix issue caused by new video modes
## [0.10.3] - 2023-12-07
### Fixed
- Bumped `pyunifiprotect` version to fix issue caused by unifi protect returning invalid UUIDs
## [0.10.2] - 2023-11-21
### Fixed
- Issue where duplicate events were being downloaded causing database errors
- Default file path format now uses event start time instead of event end time which makes more logical sense
## [0.10.1] - 2023-11-01
### Fixed
- Event type enum string conversion was no longer converting to the enum value; this is now done explicitly.
## [0.10.0] - 2023-11-01
### Added
- Command line option to skip events longer than a given length (default 2 hours)
- Docker image is now based on alpine edge giving access to the latest version of rclone
### Fixed
- Failed uploads no longer write to the database, meaning they will be retried
- Fixed issue with chunked event fetch during initial ignore of events
- Fixed error when no events were fetched for the retention period
## [0.9.5] - 2023-10-07
### Fixed
- Errors caused by latest unifi protect version by bumping the version of pyunifiprotect used
- Queries for events are now chunked into groups of 500 which should help stop this tool crashing large
unifi protect instances.
## [0.9.4] - 2023-07-29
### Fixed
- Time period parsing, 'Y' -> 'y'
## [0.9.3] - 2023-07-08
### Fixed
- Queued-up downloads etc. now wait for dropped connections to be re-established.
## [0.9.2] - 2023-04-21
### Fixed
- Missing event checker ignoring the "ignored cameras" list
## [0.9.1] - 2023-04-21
### Added
- Added optional argument string to pass directly to the `rclone delete` command used to purge video files
### Fixed
- Fixed download errors not counting as failures
## [0.9.0] - 2023-03-24
### Added
- The ability to send logging out via apprise notifications
- Color logging is now optional
- Events are now permanently ignored if they fail to download 10 times
## [0.8.8] - 2022-12-30
### Added
- Added ability to configure purge interval
### Fixed
- Purge interval returned to previous default of once a day
## [0.8.7] - 2022-12-11
### Fixed
- Fix improper unpacking of upload events
## [0.8.6] - 2022-12-10
### Fixed
- Check that the current event is not None before trying to get its ID
- Downloader/uploader clear their current event once it has been processed
## [0.8.5] - 2022-12-09
### Fixed
- use event ID of currently up/downloading event, not whole event object when checking missing events
## [0.8.4] - 2022-12-09
### Added
- Logging of remaining upload queue size
### Fixed
- Uploading files were not accounted for when checking for missing events
- Buffer size parameter is logged in human-readable format
## [0.8.3] - 2022-12-08
### Added
- Now logs time zone settings for both the host and NVR
- Color logging is now optional and defaults to disabled (to match previous behavior before v0.8.0)
- Ability to configure download buffer size (bumped default up to 512MiB)
- Event IDs to upload/download logging
### Fixed
- Log spam when lots of events are missing, this will now only occur if the logging level is set to `EXTRA_DEBUG` (-vv)
- corrected logging not showing smart detection types
- The application no longer stalls when a video is downloaded larger than the available buffer size
- Ability to set the least verbose logging for the docker container
## [0.8.2] - 2022-12-05
### Fixed
- Fixed issue where command output was being returned with added indentation intended for logging only
- Fixed issue where some command logging was not indented
- Fixed issue where the tool could crash when run in a container if /config/database didn't exist
## [0.8.1] - 2022-12-04
Version 0.8.0 was used by accident previously and PyPI would not accept it, so the patch version was bumped by one.
## [0.8.0] - 2022-12-03
Major internal refactoring. Each task is now its own class and asyncio task.
### Added
- A database of backed up events and where they are stored
- A periodic check for missed events
- This will also ensure past events before the tool was used are backed up, up until the retention period
### Fixed
- Pruning is no longer done based on file timestamps, the database is used instead. The tool will no longer delete files it didn't create.
- Pruning now runs much more frequently (every minute) so retention periods of less than a day are now possible.
## [0.7.4] - 2022-08-21
No functional changes in this version. This is just to trigger the release CI.
### Added
- Arm docker container
- rclone debugging instructions when using docker
### Fixed
- Documentation error in rclone config path of docker container.
## [0.7.3] - 2022-07-31
### Fixed
- Updated to the 4.0.0 version of pyunifiprotect
- Added rust to the container, and bumped it to alpine 3.16
## [0.7.2] - 2022-07-17
### Fixed
- Updated to the latest version of pyunifiprotect to fix issues introduced in unifi protect 2.1.1
## [0.7.1] - 2022-06-08
### Fixed
- Updated to the latest version of pyunifiprotect to fix issues introduced in unifi protect 2.0.1
- Updated documentation to include how to set up local user accounts on unifi protect
## [0.7.0] - 2022-03-26
### Added
- Added the ability to change the way the clip files are structured via a template string.
### Fixed
- Fixed issue where event types without clips would attempt (and fail) to download clips
- Drastically reduced the size of the docker container
- Fixed typos in the documentation
- Some dev dependencies are now not installed as default
## [0.6.0] - 2022-03-18
### Added
- Support for doorbell ring events
- `detection_types` parameter to limit which kinds of events are backed up
### Fixed
- Actually fixed timestamps this time.
## [0.5.3] - 2022-03-11
### Fixed
- Timestamps in filenames and logging now show time in the timezone of the NVR not UTC
## [0.5.2] - 2022-03-10
### Fixed
- rclone delete command now works as expected on windows when spaces are in the file path
- Dockerfile now allows setting of user and group to run as, as well as a default config
## [0.5.1] - 2022-03-07
### Fixed
- rclone command now works as expected on windows when spaces are in the file path
## [0.5.0] - 2022-03-06
### Added
- If `ffprobe` is available, the downloaded clips length is checked and logged
### Fixed
- A time delay has been added before downloading clips to try to resolve an issue where
downloaded clips were too short
## [0.4.0] - 2022-03-05
### Added
- A `--version` command line option to show the tools version
### Fixed
- Websocket checks are no longer logged in verbosity level 1 to reduce log spam
## [0.3.1] - 2022-02-24
### Fixed
- Now checks if the websocket connection is alive, and attempts to reconnect if it isn't.
## [0.3.0] - 2022-02-22
### Added
- New CLI argument for passing CLI arguments directly to `rclone`.
### Fixed
- A new camera getting added while running no longer crashes the application.
- A timeout during download now correctly retries the download instead of
abandoning the event.
## [0.2.1] - 2022-02-21
### Fixed
- Retry logging formatting


@@ -55,11 +55,11 @@ Ready to contribute? Here's how to set up `unifi-protect-backup` for local devel
$ git clone git@github.com:your_name_here/unifi-protect-backup.git
```
3. Ensure [poetry](https://python-poetry.org/docs/) is installed.
4. Install dependencies and start your virtualenv:
3. Ensure [uv](https://docs.astral.sh/uv/) is installed.
4. Create virtual environment and install dependencies:
```
$ poetry install -E test -E doc -E dev
$ uv install --dev
```
5. Create a branch for local development:
@@ -70,14 +70,28 @@ Ready to contribute? Here's how to set up `unifi-protect-backup` for local devel
Now you can make your changes locally.
6. When you're done making changes, check that your changes pass the
tests, including testing other Python versions, with tox:
6. To run `unifi-protect-backup` while developing you will need to either
be inside the `poetry shell` virtualenv or run it via poetry:
```
$ uv run unifi-protect-backup {args}
```
7. Install pre-commit git hooks to ensure all code commit to the repository
is formatted correctly and meets coding standards:
```
$ uv run pre-commit install
```
8. When you're done making changes, check that your changes pass the
tests:
```
$ poetry run tox
$ uv run pytest
```
7. Commit your changes and push your branch to GitHub:
8. Commit your changes and push your branch to GitHub:
```
$ git add .
@@ -85,7 +99,7 @@ Ready to contribute? Here's how to set up `unifi-protect-backup` for local devel
$ git push origin name-of-your-bugfix-or-feature
```
8. Submit a pull request through the GitHub website.
9. Submit a pull request through the GitHub website.
## Pull Request Guidelines
@@ -93,16 +107,16 @@ Before you submit a pull request, check that it meets these guidelines:
1. The pull request should include tests.
2. If the pull request adds functionality, the docs should be updated. Put
your new functionality into a function with a docstring, and add the
feature to the list in README.md.
3. The pull request should work for Python 3.9. Check
your new functionality into a function with a docstring. If adding a CLI
option, you should update the "usage" in README.md.
3. The pull request should work for Python 3.10. Check
https://github.com/ep1cman/unifi-protect-backup/actions
and make sure that the tests pass for all supported Python versions.
## Tips
```
$ poetry run pytest tests/test_unifi_protect_backup.py
$ uv run pytest tests/test_unifi_protect_backup.py
```
To run a subset of tests.
@@ -115,9 +129,10 @@ Make sure all your changes are committed (including an entry in CHANGELOG.md).
Then run:
```
$ poetry run bump2version patch # possible: major / minor / patch
$ uv run bump2version patch # possible: major / minor / patch
$ git push
$ git push --tags
```
GitHub Actions will then deploy to PyPI if tests pass.
GitHub Actions will then deploy to PyPI, produce a GitHub release, and a container
build if tests pass.


@@ -1,24 +1,61 @@
# To build run:
# $ poetry build
# $ docker build -t ghcr.io/ep1cman/unifi-protect-backup .
FROM python:3.9-alpine
# make docker
FROM ghcr.io/linuxserver/baseimage-alpine:edge
LABEL maintainer="ep1cman"
WORKDIR /app
RUN apk add gcc musl-dev zlib-dev jpeg-dev rclone
COPY dist/unifi-protect-backup-0.2.1.tar.gz sdist.tar.gz
RUN pip install sdist.tar.gz
COPY dist/unifi_protect_backup-0.13.1.tar.gz sdist.tar.gz
# https://github.com/rust-lang/cargo/issues/2808
ENV CARGO_NET_GIT_FETCH_WITH_CLI=true
RUN \
echo "**** install build packages ****" && \
apk add --no-cache --virtual=build-dependencies \
gcc \
musl-dev \
jpeg-dev \
zlib-dev \
python3-dev \
cargo \
git && \
echo "**** install packages ****" && \
apk add --no-cache \
rclone \
ffmpeg \
py3-pip \
python3 && \
echo "**** install unifi-protect-backup ****" && \
pip install --no-cache-dir --break-system-packages sdist.tar.gz && \
echo "**** cleanup ****" && \
apk del --purge \
build-dependencies && \
rm -rf \
/tmp/* \
/app/sdist.tar.gz
# Settings
ENV UFP_USERNAME=unifi_protect_user
ENV UFP_PASSWORD=unifi_protect_password
ENV UFP_ADDRESS=127.0.0.1
ENV UFP_PORT=443
ENV UFP_SSL_VERIFY=true
ENV RCLONE_RETENTION=7d
ENV RCLONE_DESTINATION=my_remote:/unifi_protect_backup
ENV RCLONE_DESTINATION=local:/data
ENV VERBOSITY="v"
ENV TZ=UTC
ENV IGNORE_CAMERAS=""
ENV SQLITE_PATH=/config/database/events.sqlite
VOLUME [ "/root/.config/rclone/" ]
# Fixes issue where `platformdirs` is unable to properly detect the user directory
ENV XDG_CONFIG_HOME=/config
CMD ["sh", "-c", "unifi-protect-backup -${VERBOSITY}"]
COPY docker_root/ /
RUN mkdir -p /config/database /config/rclone
VOLUME [ "/config" ]
VOLUME [ "/data" ]

README.md

@@ -23,24 +23,86 @@ retention period.
## Features
- Listens to events in real-time via the Unifi Protect websocket API
- Ensures any previous and/or missed events within the retention period are also backed up
- Supports uploading to a [wide range of storage systems using `rclone`](https://rclone.org/overview/)
- Performs nightly pruning of old clips
- Automatic pruning of old clips
## Requirements
- Python 3.9+
- Unifi Protect version 1.20 or higher (as per [`pyunifiprotect`](https://github.com/briis/pyunifiprotect))
- Python 3.10+
- Unifi Protect version 1.20 or higher (as per [`uiprotect`](https://github.com/uilibs/uiprotect))
- `rclone` installed with at least one remote configured.
# Setup
## Unifi Protect Account Setup
In order to connect to your unifi protect instance, you will first need to setup a local admin account:
* Login to your *Local Portal* on your UniFiOS device, and click on *Users*
* Open the `Roles` tab and click `Add Role` in the top right.
* Give the role a name like `unifi protect backup` and give it `Full Management` permissions for the unifi protect app.
* Now switch to the `User` tab and click `Add User` in the top right, and fill out the form. Specific Fields to pay attention to:
* Role: Must be the role created in the last step
* Account Type: *Local Access Only*
* Click *Add* at the bottom right.
* Select the newly created user in the list, and navigate to the `Assignments` tab in the left-hand pane, and ensure all cameras are ticked.
## Installation
*The preferred way to run this tool is using a container*
### Docker Container
You can run this tool as a container, if you prefer, with the following command.
Remember to change the variables to match your setup.
> **Note**
> As of version 0.8.0, the event database needs to be persisted for the tool to function properly;
> please see the updated commands below.
#### Backing up locally:
By default, if no rclone config is provided, clips will be backed up to `/data`.
```
docker run \
-e UFP_USERNAME='USERNAME' \
-e UFP_PASSWORD='PASSWORD' \
-e UFP_ADDRESS='UNIFI_PROTECT_IP' \
-e UFP_SSL_VERIFY='false' \
-v '/path/to/save/clips':'/data' \
-v '/path/to/save/database':/config/database/ \
ghcr.io/ep1cman/unifi-protect-backup
```
#### Backing up to cloud storage:
In order to backup to cloud storage you need to provide a `rclone.conf` file.
If you do not already have a `rclone.conf` file you can create one as follows:
```
$ docker run -it --rm -v $PWD:/root/.config/rclone --entrypoint rclone ghcr.io/ep1cman/unifi-protect-backup config
```
Follow the interactive configuration process; this will create a `rclone.conf`
file in your current directory.
Finally, start the container:
```
docker run \
-e UFP_USERNAME='USERNAME' \
-e UFP_PASSWORD='PASSWORD' \
-e UFP_ADDRESS='UNIFI_PROTECT_IP' \
-e UFP_SSL_VERIFY='false' \
-e RCLONE_DESTINATION='my_remote:/unifi_protect_backup' \
-v '/path/to/rclone.conf':'/config/rclone/rclone.conf' \
-v '/path/to/save/database':/config/database/ \
ghcr.io/ep1cman/unifi-protect-backup
```
### Installing on host:
1. Install `rclone`. Instructions for your platform can be found here: https://rclone.org/install/#quickstart
2. Configure the `rclone` remote you want to backup to. Instructions can be found here: https://rclone.org/docs/#configure
3. `pip install unifi-protect-backup`
4. Optional: Install `ffprobe` so that `unifi-protect-backup` can check the length of the clips it downloads
## Usage
:warning: **Potential Data Loss**: Be very careful when setting the `rclone-destination`: at midnight every day it will
delete any files older than `retention`. It is best to give `unifi-protect-backup` its own directory.
# Usage
```
Usage: unifi-protect-backup [OPTIONS]
@@ -48,52 +110,101 @@ Usage: unifi-protect-backup [OPTIONS]
A Python based tool for backing up Unifi Protect event clips as they occur.
Options:
--address TEXT Address of Unifi Protect instance
[required]
--port INTEGER Port of Unifi Protect instance
--username TEXT Username to login to Unifi Protect instance
[required]
--version Show the version and exit.
--address TEXT Address of Unifi Protect instance [required]
--port INTEGER Port of Unifi Protect instance [default: 443]
--username TEXT Username to login to Unifi Protect instance [required]
--password TEXT Password for Unifi Protect user [required]
--verify-ssl / --no-verify-ssl Set if you do not have a valid HTTPS
Certificate for your instance
--rclone-destination TEXT `rclone` destination path in the format
{rclone remote}:{path on remote}. E.g.
`gdrive:/backups/unifi_protect` [required]
--retention TEXT How long should event clips be backed up
for. Format as per the `--max-age` argument
of `rclone`
(https://rclone.org/filtering/#max-age-don-
t-transfer-any-file-older-than-this)
--ignore-camera TEXT IDs of cameras for which events should not
be backed up. Use multiple times to ignore
multiple IDs. If being set as an environment
variable the IDs should be separated by
whitespace.
--verify-ssl / --no-verify-ssl Set if you do not have a valid HTTPS Certificate for your
instance [default: verify-ssl]
--rclone-destination TEXT `rclone` destination path in the format {rclone remote}:{path on
remote}. E.g. `gdrive:/backups/unifi_protect` [required]
--retention TEXT How long should event clips be backed up for. Format as per the
`--max-age` argument of `rclone`
(https://rclone.org/filtering/#max-age-don-t-transfer-any-file-
older-than-this) [default: 7d]
--rclone-args TEXT Optional extra arguments to pass to `rclone rcat` directly.
Common usage for this would be to set a bandwidth limit, for
example.
--rclone-purge-args TEXT Optional extra arguments to pass to `rclone delete` directly.
Common usage for this would be to execute a permanent delete
instead of using the recycle bin on a destination. Google Drive
example: `--drive-use-trash=false`
--detection-types TEXT A comma separated list of which types of detections to backup.
Valid options are: `motion`, `person`, `vehicle`, `ring`
[default: motion,person,vehicle,ring]
--ignore-camera TEXT IDs of cameras for which events should not be backed up. Use
multiple times to ignore multiple IDs. If being set as an
environment variable the IDs should be separated by whitespace.
Alternatively, use a Unifi user with a role which has access
restricted to the subset of cameras that you wish to backup.
--camera TEXT IDs of *ONLY* cameras for which events should be backed up. Use
multiple times to include multiple IDs. If being set as an
environment variable the IDs should be separated by whitespace.
Alternatively, use a Unifi user with a role which has access
restricted to the subset of cameras that you wish to backup.
--file-structure-format TEXT A Python format string used to generate the file structure/name
on the rclone remote. For details of the fields available, see
the project's `README.md` file. [default:
{camera_name}/{event.start:%Y-%m-%d}/{event.end:%Y-%m-%dT%H-%M-%S}
{detection_type}.mp4]
-v, --verbose How verbose the logging output should be.
None: Only log info messages created by
`unifi-protect-backup`, and all warnings
-v: Only log info & debug messages
created by `unifi-protect-backup`, and
all warnings
-vv: Log info & debug messages created
by `unifi-protect-backup`, command
output, and all warnings
-vvv Log debug messages created by
`unifi-protect-backup`, command output,
all info messages, and all warnings
-vvvv: Log debug messages created by
`unifi-protect-backup` command output,
all info messages, all warnings, and
None: Only log info messages created by `unifi-protect-
backup`, and all warnings
-v: Only log info & debug messages created by `unifi-
protect-backup`, and all warnings
-vv: Log info & debug messages created by `unifi-protect-
backup`, command output, and all warnings
-vvv Log debug messages created by `unifi-protect-backup`,
command output, all info messages, and all warnings
-vvvv: Log debug messages created by `unifi-protect-backup`
command output, all info messages, all warnings, and
websocket data
-vvvvv: Log websocket data, command
output, all debug messages, all info
messages and all warnings [x>=0]
-vvvvv: Log websocket data, command output, all debug
messages, all info messages and all warnings [x>=0]
--sqlite_path TEXT Path to the SQLite database to use/create
--color-logging / --plain-logging
Set if you want to use color in logging output [default: plain-
logging]
--download-buffer-size TEXT How big the download buffer should be (you can use suffixes like
"B", "KiB", "MiB", "GiB") [default: 512MiB]
--purge_interval TEXT How frequently to check for files to purge.
NOTE: Can create a lot of API calls, so be careful if your cloud
provider charges you per api call [default: 1d]
--apprise-notifier TEXT Apprise URL for sending notifications.
E.g: ERROR,WARNING=tgram://[BOT KEY]/[CHAT ID]
You can use this parameter multiple times to use more than one
notification platform.
The following notification tags are available (corresponding to
the respective logging levels):
ERROR, WARNING, INFO, DEBUG, EXTRA_DEBUG, WEBSOCKET_DATA
If no tags are specified, it defaults to ERROR
More details about supported platforms can be found here:
https://github.com/caronc/apprise
--skip-missing If set, events which are 'missing' at the start will be ignored.
Subsequent missing events will be downloaded (e.g. a missed event) [default: False]
--download-rate-limit FLOAT Limit how many events can be downloaded in one minute. Disabled by
default
--max-event-length INTEGER Only download events shorter than this maximum length, in
seconds [default: 7200]
--experimental-downloader If set, a new experimental download mechanism will be used to match
what the web UI does. This might be more stable if you are experiencing
a lot of failed downloads with the default downloader. [default: False]
--parallel-uploads INTEGER Max number of parallel uploads to allow [default: 1]
--storage-quota TEXT The maximum amount of storage to use for storing clips (you can
use suffixes like "B", "KiB", "MiB", "GiB")
--help Show this message and exit.
```
@@ -106,30 +217,190 @@ always take priority over environment variables):
- `UFP_SSL_VERIFY`
- `RCLONE_RETENTION`
- `RCLONE_DESTINATION`
- `RCLONE_ARGS`
- `RCLONE_PURGE_ARGS`
- `IGNORE_CAMERAS`
- `CAMERAS`
- `DETECTION_TYPES`
- `FILE_STRUCTURE_FORMAT`
- `SQLITE_PATH`
- `DOWNLOAD_BUFFER_SIZE`
- `COLOR_LOGGING`
- `PURGE_INTERVAL`
- `APPRISE_NOTIFIERS`
- `SKIP_MISSING`
- `DOWNLOAD_RATELIMIT`
- `MAX_EVENT_LENGTH`
- `EXPERIMENTAL_DOWNLOADER`
- `PARALLEL_UPLOADS`
- `STORAGE_QUOTA`
## Docker Container
You can run this tool as a container, if you prefer, with the following command.
Remember to change the variables to match your setup.
## File path formatting
By default, the application will save clips in the following structure on the provided rclone remote:
```
{camera_name}/{event.start:%Y-%m-%d}/{event.end:%Y-%m-%dT%H-%M-%S} {detection_type}.mp4
```
If you wish for the clips to be structured differently you can do this using the `--file-structure-format`
option. It uses standard [python format string syntax](https://docs.python.org/3/library/string.html#formatstrings).
The following fields are provided to the format string:
- *event:* The `Event` object as per https://github.com/uilibs/uiprotect/blob/main/src/uiprotect/data/nvr.py
- *duration_seconds:* The duration of the event in seconds
- *detection_type:* A nicely formatted list of the event detection type and the smart detection types (if any)
- *camera_name:* The name of the camera that generated this event
You can optionally format the `event.start`/`event.end` timestamps as per the [`strftime` format](https://docs.python.org/3/library/datetime.html#strftime-strptime-behavior) by appending it after a `:` e.g to get just the date without the time: `{event.start:%Y-%m-%d}`
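For illustration, here is a minimal sketch of how such a format string expands. It uses a stand-in object with hypothetical values rather than a real uiprotect `Event`:
```
from datetime import datetime

# Stand-in for the real uiprotect `Event`; only the fields used by the
# default format string are provided here.
class FakeEvent:
    start = datetime(2025, 1, 18, 14, 30, 0)
    end = datetime(2025, 1, 18, 14, 31, 5)

fmt = "{camera_name}/{event.start:%Y-%m-%d}/{event.end:%Y-%m-%dT%H-%M-%S} {detection_type}.mp4"
print(fmt.format(
    event=FakeEvent(),
    camera_name="Front Door",
    detection_type="person",
    duration_seconds=65,  # also available to custom format strings
))
# -> Front Door/2025-01-18/2025-01-18T14-31-05 person.mp4
```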
## Skipping initially missing events
If you prefer to avoid backing up the entire backlog of events, and would instead prefer to back up events that occur from
now on, you can use the `--skip-missing` flag. This does not disable the periodic check for missing events (e.g. ones missed due to a disconnection); instead it marks all events missing at start-up as backed up.
If you use this feature it is advised that you run the tool once with this flag, then stop it once the database has been created and the events are ignored. Keeping this flag set permanently could cause events to be missed if the tool crashes and is restarted, etc.
## Selecting cameras
By default unifi-protect-backup backs up clips from all cameras.
If you want to limit the backups to certain cameras you can do that in one of two ways.
Note: Camera IDs can be obtained by scanning the logs for `Found cameras:`. You can find this section of the logs by piping them into this `sed` command:
`sed -n '/Found cameras:/,/NVR TZ/p'`
### Back-up only specific cameras
By using the `--camera` argument, you can specify the ID of the cameras you want to backup. If you want to backup more than one camera you can specify this argument more than once. If this argument is specified all other cameras will be ignored.
#### Example:
If you have three cameras:
- `CAMERA_ID_1`
- `CAMERA_ID_2`
- `CAMERA_ID_3`
and run the following command:
```
$ unifi-protect-backup [...] --camera CAMERA_ID_1 --camera CAMERA_ID_2
```
Only `CAMERA_ID_1` and `CAMERA_ID_2` will be backed up.
### Ignoring cameras
By using the `--ignore-camera` argument, you can specify the ID of the cameras you *do not* want to backup. If you want to ignore more than one camera you can specify this argument more than once. If this argument is specified, all cameras will be backed up except the ones specified.
#### Example:
If you have three cameras:
- `CAMERA_ID_1`
- `CAMERA_ID_2`
- `CAMERA_ID_3`
and run the following command:
```
$ unifi-protect-backup [...] --ignore-camera CAMERA_ID_1 --ignore-camera CAMERA_ID_2
```
Only `CAMERA_ID_3` will be backed up.
### Note about unifi protect accounts
It is possible to limit what cameras a unifi protect account can see. If an account does not have access to a camera, this tool will never see it as available, so it will not be impacted by the above arguments.
# A note about `rclone` backends and disk wear
This tool attempts to not write the downloaded files to disk to minimise disk wear, and instead streams them directly to
rclone. Sadly, not all storage backends supported by `rclone` allow "Stream Uploads". Please refer to the `StreamUpload` column of this table to see which ones do and don't: https://rclone.org/overview/#optional-features
If you are using a storage medium with poor write durability e.g. an SD card on a Raspberry Pi, it is advised to avoid
such backends.
If you are running on a Linux host you can set up `rclone` to use `tmpfs` (which is in RAM) to store its temp files, but this will significantly increase the memory usage of the tool.
### Running Docker Container (LINUX ONLY)
Add the following arguments to your docker run command:
```
-e RCLONE_ARGS='--temp-dir=/rclone_tmp'
--tmpfs /rclone_tmp
```
### Running Directly (LINUX ONLY)
```
sudo mkdir /mnt/tmpfs
sudo mount -o size=1G -t tmpfs none /mnt/tmpfs
$ unifi-protect-backup --rclone-args "--temp-dir=/mnt/tmpfs"
```
To make this persist reboots add the following to `/etc/fstab`:
```
tmpfs /mnt/tmpfs tmpfs nosuid,nodev,noatime 0 0
```
# Running Backup Tool as a Service (LINUX ONLY)
You can create a service that runs the docker or local version of this backup tool. The service can be configured to launch on boot. This is likely the preferred way to run the tool once you have it completely configured and tested, since it will then run continuously.
First create a service configuration file. You can replace `protectbackup` in the filename below with the name you wish to use for your service; if you change it, remember to change it in the other locations in the following scripts as well.
```
sudo nano /lib/systemd/system/protectbackup.service
```
Next edit the content and fill in the 4 placeholders indicated by `{}`; replace these placeholders (including the leading `{` and trailing `}` characters) with the values you are using.
```
[Unit]
Description=Unifi Protect Backup
[Service]
User={your machine username}
Group={your machine user group, could be the same as the username}
Restart=on-abort
WorkingDirectory=/home/{your machine username}
ExecStart={put your complete docker or local command here}
[Install]
WantedBy=multi-user.target
```
Now enable the service and then start it.
```
sudo systemctl enable protectbackup.service
sudo systemctl start protectbackup.service
```
To check the status of the service, use this command.
```
sudo systemctl status protectbackup.service --no-pager
```
# Debugging
If you need to debug your rclone setup, you can invoke rclone directly like so:
```
docker run \
-e UFP_USERNAME='USERNAME' \
-e UFP_PASSWORD='PASSWORD' \
-e UFP_ADDRESS='UNIFI_PROTECT_IP' \
-e UFP_SSL_VERIFY='false' \
-e RCLONE_DESTINATION='my_remote:/unifi_protect_backup' \
-v '/path/to/rclone.conf':'/root/.config/rclone/rclone.conf' \
ghcr.io/ep1cman/unifi-protect-backup
--rm \
-v /path/to/rclone.conf:/config/rclone/rclone.conf \
-e RCLONE_CONFIG='/config/rclone/rclone.conf' \
--entrypoint rclone \
ghcr.io/ep1cman/unifi-protect-backup \
{rclone subcommand as per: https://rclone.org/docs/#subcommands}
```
If you do not already have a `rclone.conf` file you can create one as follows:
```
$ docker run -it --rm -v $PWD:/root/.config/rclone/ ghcr.io/ep1cman/unifi-protect-backup rclone config
```
This will create a `rclone.conf` file in your current directory.
## Credits
For example to check that your config file is being read properly and list the configured remotes:
```
docker run \
--rm \
-v /path/to/rclone.conf:/config/rclone/rclone.conf \
-e RCLONE_CONFIG='/config/rclone/rclone.conf' \
--entrypoint rclone \
ghcr.io/ep1cman/unifi-protect-backup \
listremotes
```
- Heavily utilises [`pyunifiprotect`](https://github.com/briis/pyunifiprotect) by [@briis](https://github.com/briis/)
# Credits
- All the contributors who have helped make this project:
<a href="https://github.com/ep1cman/unifi-protect-backup/graphs/contributors">
<img src="https://contrib.rocks/image?repo=ep1cman/unifi-protect-backup" />
</a>
- Heavily utilises [`uiprotect`](https://github.com/uilibs/uiprotect)
- All the cloud functionality is provided by [`rclone`](https://rclone.org/)
- This package was created with [Cookiecutter](https://github.com/audreyr/cookiecutter) and the [waynerv/cookiecutter-pypackage](https://github.com/waynerv/cookiecutter-pypackage) project template.
# Star History
[![Star History Chart](https://api.star-history.com/svg?repos=ep1cman/unifi-protect-backup&type=Date)](https://star-history.com/#ep1cman/unifi-protect-backup&Date)


@@ -0,0 +1,2 @@
[local]
type = local


@@ -0,0 +1,23 @@
#!/usr/bin/with-contenv bash
mkdir -p /config/rclone
# For backwards compatibility
[[ -f "/root/.config/rclone/rclone.conf" ]] && \
echo "DEPRECATED: Copying rclone conf from /root/.config/rclone/rclone.conf, please change your mount to /config/rclone/rclone.conf" && \
cp \
/root/.config/rclone/rclone.conf \
/config/rclone/rclone.conf
# default config file
[[ ! -f "/config/rclone/rclone.conf" ]] && \
mkdir -p /config/rclone && \
cp \
/defaults/rclone.conf \
/config/rclone/rclone.conf
chown -R abc:abc \
/config
chown -R abc:abc \
/data


@@ -0,0 +1,21 @@
#!/usr/bin/with-contenv bash
export RCLONE_CONFIG=/config/rclone/rclone.conf
export XDG_CACHE_HOME=/config
echo $VERBOSITY
[[ -n "$VERBOSITY" ]] && export VERBOSITY_ARG=-$VERBOSITY || export VERBOSITY_ARG=""
# Run without exec to catch the exit code
s6-setuidgid abc unifi-protect-backup ${VERBOSITY_ARG}
exit_code=$?
# If exit code is 200 (arg error), exit the container
if [ $exit_code -eq 200 ]; then
# Send shutdown signal to s6
/run/s6/basedir/bin/halt
exit $exit_code
fi
# Otherwise, let s6 handle potential restart
exit $exit_code


@@ -1,14 +1,15 @@
sources = unifi_protect_backup
container_name ?= ghcr.io/ep1cman/unifi-protect-backup
container_arches ?= linux/amd64,linux/arm64
.PHONY: test format lint unittest coverage pre-commit clean
test: format lint unittest
format:
isort $(sources) tests
black $(sources) tests
ruff format $(sources) tests
lint:
flake8 $(sources) tests
ruff check $(sources) tests
mypy $(sources) tests
unittest:
@@ -25,3 +26,7 @@ clean:
rm -rf *.egg-info
rm -rf .tox dist site
rm -rf coverage.xml .coverage
docker:
uv build
docker buildx build . --platform $(container_arches) -t $(container_name) --push

poetry.lock generated
File diff suppressed because it is too large


@@ -1,94 +1,82 @@
[tool]
[tool.poetry]
name = "unifi-protect-backup"
version = "0.2.1"
homepage = "https://github.com/ep1cman/unifi-protect-backup"
description = "Python tool to backup unifi event clips in realtime."
authors = ["sebastian.goscik <sebastian@goscik.com>"]
readme = "README.md"
license = "MIT"
classifiers=[
'Development Status :: 2 - Pre-Alpha',
'Intended Audience :: Developers',
'License :: OSI Approved :: MIT License',
'Natural Language :: English',
'Programming Language :: Python :: 3',
'Programming Language :: Python :: 3.9',
]
packages = [
{ include = "unifi_protect_backup" },
{ include = "tests", format = "sdist" },
]
[tool.poetry.dependencies]
python = ">=3.9.0,<4.0"
click = "8.0.1"
black = { version = "^21.5b2", optional = true}
isort = { version = "^5.8.0", optional = true}
flake8 = { version = "^3.9.2", optional = true}
flake8-docstrings = { version = "^1.6.0", optional = true }
mypy = {version = "^0.900", optional = true}
pytest = { version = "^6.2.4", optional = true}
pytest-cov = { version = "^2.12.0", optional = true}
tox = { version = "^3.20.1", optional = true}
virtualenv = { version = "^20.2.2", optional = true}
pip = { version = "^20.3.1", optional = true}
twine = { version = "^3.3.0", optional = true}
pre-commit = {version = "^2.12.0", optional = true}
toml = {version = "^0.10.2", optional = true}
bump2version = {version = "^1.0.1", optional = true}
tox-asdf = {version = "^0.1.0", optional = true}
pyunifiprotect = "^3.2.1"
aiocron = "^1.8"
[tool.poetry.extras]
test = [
"pytest",
"black",
"isort",
"mypy",
"flake8",
"flake8-docstrings",
"pytest-cov"
]
dev = ["tox", "pre-commit", "virtualenv", "pip", "twine", "toml", "bump2version", "tox-asdf"]
[tool.poetry.scripts]
unifi-protect-backup = 'unifi_protect_backup.cli:main'
[tool.black]
line-length = 120
skip-string-normalization = true
target-version = ['py39']
include = '\.pyi?$'
exclude = '''
/(
\.eggs
| \.git
| \.hg
| \.mypy_cache
| \.tox
| \.venv
| _build
| buck-out
| build
| dist
)/
'''
[tool.isort]
multi_line_output = 3
include_trailing_comma = true
force_grid_wrap = 0
use_parentheses = true
ensure_newline_before_comments = true
line_length = 120
skip_gitignore = true
# you can skip files as below
#skip_glob = docs/conf.py
[build-system]
requires = ["poetry-core>=1.0.0"]
build-backend = "poetry.core.masonry.api"
requires = ["hatchling"]
build-backend = "hatchling.build"
[project]
name = "unifi_protect_backup"
version = "0.13.1"
description = "Python tool to backup unifi event clips in realtime."
readme = "README.md"
license = {text = "MIT"}
authors = [
{name = "sebastian.goscik", email = "sebastian@goscik.com"}
]
classifiers = [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Information Technology",
"License :: OSI Approved :: MIT License",
"Natural Language :: English",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
]
requires-python = ">=3.10.0,<4.0"
dependencies = [
"click==8.0.1",
"aiorun>=2023.7.2",
"aiosqlite>=0.17.0",
"python-dateutil>=2.8.2",
"apprise>=1.5.0",
"expiring-dict>=1.1.0",
"async-lru>=2.0.4",
"aiolimiter>=1.1.0",
"uiprotect==7.14.1",
"aiohttp==3.11.16",
]
[project.urls]
Homepage = "https://github.com/ep1cman/unifi-protect-backup"
[project.scripts]
unifi-protect-backup = "unifi_protect_backup.cli:main"
[dependency-groups]
dev = [
"mypy>=1.15.0",
"types-pytz>=2021.3.5",
"types-cryptography>=3.3.18",
"types-python-dateutil>=2.8.19.10",
"types-aiofiles>=24.1.0.20241221",
"bump2version>=1.0.1",
"pre-commit>=4.2.0",
"ruff>=0.11.4",
"pytest>=8.3.5",
]
[tool.hatch.build.targets.wheel]
packages = ["unifi_protect_backup"]
[tool.hatch.build.targets.sdist]
include = ["unifi_protect_backup", "tests"]
[tool.ruff]
line-length = 120
target-version = "py310"
[tool.ruff.lint]
select = ["E","F","D","B","W"]
ignore = ["D203", "D213"]
[tool.ruff.format]
quote-style = "double"
indent-style = "space"
line-ending = "lf"
docstring-code-format = true
[tool.mypy]
allow_redefinition = true
exclude = [
'unifi_protect_backup/uiprotect_patch.py'
]
[tool.uv]
default-groups = []


@@ -1,88 +0,0 @@
[flake8]
max-line-length = 120
max-complexity = 18
ignore = E203, E266, W503
docstring-convention = google
per-file-ignores = __init__.py:F401
exclude = .git,
__pycache__,
setup.py,
build,
dist,
docs,
releases,
.venv,
.tox,
.mypy_cache,
.pytest_cache,
.vscode,
.github,
# By default test codes will be linted.
# tests
[mypy]
ignore_missing_imports = True
[coverage:run]
# uncomment the following to omit files during running
#omit =
[coverage:report]
exclude_lines =
pragma: no cover
def __repr__
if self.debug:
if settings.DEBUG
raise AssertionError
raise NotImplementedError
if 0:
if __name__ == .__main__.:
def main
[tox:tox]
isolated_build = true
envlist = py39, format, lint, build
[gh-actions]
python =
3.9: py39, format, lint, build
[testenv]
allowlist_externals = pytest
extras =
test
passenv = *
setenv =
PYTHONPATH = {toxinidir}
PYTHONWARNINGS = ignore
commands =
pytest --cov=unifi_protect_backup --cov-branch --cov-report=xml --cov-report=term-missing tests
[testenv:format]
allowlist_externals =
isort
black
extras =
test
commands =
isort unifi_protect_backup
black unifi_protect_backup tests
[testenv:lint]
allowlist_externals =
flake8
mypy
extras =
test
commands =
flake8 unifi_protect_backup tests
mypy unifi_protect_backup tests
[testenv:build]
allowlist_externals =
poetry
twine
extras =
dev
commands =
poetry build
twine check dist/*


@@ -1,7 +1,7 @@
#!/usr/bin/env python
"""Tests for `unifi_protect_backup` package."""
import pytest
import pytest # type: ignore
# from click.testing import CliRunner


@@ -1,7 +1,22 @@
"""Top-level package for Unifi Protect Backup."""
__author__ = """sebastian.goscik"""
__email__ = 'sebastian@goscik.com'
__version__ = '0.2.1'
__email__ = "sebastian@goscik.com"
__version__ = "0.13.1"
from .unifi_protect_backup import UnifiProtectBackup
from .downloader import VideoDownloader
from .downloader_experimental import VideoDownloaderExperimental
from .event_listener import EventListener
from .purge import Purge, StorageQuotaPurge
from .uploader import VideoUploader
from .missing_event_checker import MissingEventChecker
__all__ = [
"VideoDownloader",
"VideoDownloaderExperimental",
"EventListener",
"Purge",
"StorageQuotaPurge",
"VideoUploader",
"MissingEventChecker",
]


@@ -1,48 +1,139 @@
"""Console script for unifi_protect_backup."""
import asyncio
import sys
import re
import click
from aiorun import run # type: ignore
from dateutil.relativedelta import relativedelta
from unifi_protect_backup import UnifiProtectBackup
from uiprotect.data.types import SmartDetectObjectType, SmartDetectAudioType
from unifi_protect_backup import __version__
from unifi_protect_backup.unifi_protect_backup_core import UnifiProtectBackup
from unifi_protect_backup.utils import human_readable_to_float
DETECTION_TYPES = ["motion", "ring", "line", "fingerprint", "nfc"]
DETECTION_TYPES += [t for t in SmartDetectObjectType.values() if t not in SmartDetectAudioType.values()]
DETECTION_TYPES += [f"{t}" for t in SmartDetectAudioType.values()]
@click.command()
@click.option('--address', required=True, envvar='UFP_ADDRESS', help='Address of Unifi Protect instance')
@click.option('--port', default=443, envvar='UFP_PORT', help='Port of Unifi Protect instance')
@click.option('--username', required=True, envvar='UFP_USERNAME', help='Username to login to Unifi Protect instance')
@click.option('--password', required=True, envvar='UFP_PASSWORD', help='Password for Unifi Protect user')
def _parse_detection_types(ctx, param, value):
# split the comma-separated value on ',' and remove whitespace
types = [t.strip() for t in value.split(",")]
# validate the passed detection types
for t in types:
if t not in DETECTION_TYPES:
raise click.BadOptionUsage("detection-types", f"`{t}` is not an available detection type.", ctx)
return types
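# e.g. --detection-types "person, vehicle" parses to ["person", "vehicle"];
# an unrecognised value raises click.BadOptionUsage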
def parse_rclone_retention(ctx, param, retention) -> relativedelta:
"""Parse the rclone `retention` parameter into a relativedelta which can then be used to calculate datetimes."""
matches = {k: int(v) for v, k in re.findall(r"([\d]+)(ms|s|m|h|d|w|M|y)", retention)}
# Check that we matched the whole string
if len(retention) != len("".join([f"{v}{k}" for k, v in matches.items()])):
raise click.BadParameter("See here for expected format: https://rclone.org/docs/#time-option")
return relativedelta(
microseconds=matches.get("ms", 0) * 1000,
seconds=matches.get("s", 0),
minutes=matches.get("m", 0),
hours=matches.get("h", 0),
days=matches.get("d", 0),
weeks=matches.get("w", 0),
months=matches.get("M", 0),
years=matches.get("y", 0),
)
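# Illustrative examples: "1d12h" parses to relativedelta(days=1, hours=12),
# and the default "7d" to relativedelta(days=7)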
@click.command(context_settings=dict(max_content_width=100))
@click.version_option(__version__)
@click.option("--address", required=True, envvar="UFP_ADDRESS", help="Address of Unifi Protect instance")
@click.option("--port", default=443, envvar="UFP_PORT", show_default=True, help="Port of Unifi Protect instance")
@click.option("--username", required=True, envvar="UFP_USERNAME", help="Username to login to Unifi Protect instance")
@click.option("--password", required=True, envvar="UFP_PASSWORD", help="Password for Unifi Protect user")
@click.option(
'--verify-ssl/--no-verify-ssl',
"--verify-ssl/--no-verify-ssl",
default=True,
envvar='UFP_SSL_VERIFY',
show_default=True,
envvar="UFP_SSL_VERIFY",
help="Set if you do not have a valid HTTPS Certificate for your instance",
)
@click.option(
'--rclone-destination',
"--rclone-destination",
required=True,
envvar='RCLONE_DESTINATION',
envvar="RCLONE_DESTINATION",
help="`rclone` destination path in the format {rclone remote}:{path on remote}."
" E.g. `gdrive:/backups/unifi_protect`",
)
@click.option(
'--retention',
default='7d',
envvar='RCLONE_RETENTION',
help="How long should event clips be backed up for. Format as per the `--max-age` argument of "
"`rclone` (https://rclone.org/filtering/#max-age-don-t-transfer-any-file-older-than-this)",
"--retention",
default="7d",
show_default=True,
envvar="RCLONE_RETENTION",
help="How long should event clips be backed up for. Format as per the `--max-age` argument of `rclone` "
"(https://rclone.org/filtering/#max-age-don-t-transfer-any-file-older-than-this)",
callback=parse_rclone_retention,
)
@click.option(
'--ignore-camera',
'ignore_cameras',
"--rclone-args",
default="",
envvar="RCLONE_ARGS",
help="Optional extra arguments to pass to `rclone rcat` directly. Common usage for this would "
"be to set a bandwidth limit, for example.",
)
@click.option(
"--rclone-purge-args",
default="",
envvar="RCLONE_PURGE_ARGS",
help="Optional extra arguments to pass to `rclone delete` directly. Common usage for this would "
"be to execute a permanent delete instead of using the recycle bin on a destination. "
"Google Drive example: `--drive-use-trash=false`",
)
@click.option(
"--detection-types",
envvar="DETECTION_TYPES",
default=",".join(DETECTION_TYPES),
show_default=True,
help="A comma separated list of which types of detections to backup. "
f"Valid options are: {', '.join([f'`{t}`' for t in DETECTION_TYPES])}",
callback=_parse_detection_types,
)
@click.option(
"--ignore-camera",
"ignore_cameras",
multiple=True,
envvar="IGNORE_CAMERAS",
help="IDs of cameras for which events should not be backed up. Use multiple times to ignore "
"multiple IDs. If being set as an environment variable the IDs should be separated by whitespace.",
"multiple IDs. If being set as an environment variable the IDs should be separated by whitespace. "
"Alternatively, use a Unifi user with a role which has access restricted to the subset of cameras "
"that you wish to backup.",
)
@click.option(
'-v',
'--verbose',
"--camera",
"cameras",
multiple=True,
envvar="CAMERAS",
help="IDs of *ONLY* cameras for which events should be backed up. Use multiple times to include "
"multiple IDs. If being set as an environment variable the IDs should be separated by whitespace. "
"Alternatively, use a Unifi user with a role which has access restricted to the subset of cameras "
"that you wish to backup.",
)
@click.option(
"--file-structure-format",
envvar="FILE_STRUCTURE_FORMAT",
default="{camera_name}/{event.start:%Y-%m-%d}/{event.end:%Y-%m-%dT%H-%M-%S} {detection_type}.mp4",
show_default=True,
help="A Python format string used to generate the file structure/name on the rclone remote."
"For details of the fields available, see the projects `README.md` file.",
)
@click.option(
"-v",
"--verbose",
count=True,
help="How verbose the logging output should be."
"""
@@ -61,11 +152,129 @@ all warnings, and websocket data
-vvvvv: Log websocket data, command output, all debug messages, all info messages and all warnings
""",
)
@click.option(
"--sqlite_path",
default="events.sqlite",
envvar="SQLITE_PATH",
help="Path to the SQLite database to use/create",
)
@click.option(
"--color-logging/--plain-logging",
default=False,
show_default=True,
envvar="COLOR_LOGGING",
help="Set if you want to use color in logging output",
)
@click.option(
"--download-buffer-size",
default="512MiB",
show_default=True,
envvar="DOWNLOAD_BUFFER_SIZE",
help='How big the download buffer should be (you can use suffixes like "B", "KiB", "MiB", "GiB")',
callback=lambda ctx, param, value: int(human_readable_to_float(value)),
)
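# e.g. the default "512MiB" would become 536870912 bytes here, assuming
# human_readable_to_float treats the "iB" suffixes as 1024-based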
@click.option(
"--purge_interval",
default="1d",
show_default=True,
envvar="PURGE_INTERVAL",
help="How frequently to check for file to purge.\n\nNOTE: Can create a lot of API calls, so be careful if "
"your cloud provider charges you per api call",
callback=parse_rclone_retention,
)
@click.option(
"--apprise-notifier",
"apprise_notifiers",
multiple=True,
envvar="APPRISE_NOTIFIERS",
help="""\b
Apprise URL for sending notifications.
E.g: ERROR,WARNING=tgram://[BOT KEY]/[CHAT ID]
You can use this parameter multiple times to use more than one notification platform.
The following notification tags are available (corresponding to the respective logging levels):
ERROR, WARNING, INFO, DEBUG, EXTRA_DEBUG, WEBSOCKET_DATA
If no tags are specified, it defaults to ERROR
More details about supported platforms can be found here: https://github.com/caronc/apprise""",
)
@click.option(
"--skip-missing",
default=False,
show_default=True,
is_flag=True,
envvar="SKIP_MISSING",
help="""\b
If set, events which are 'missing' at the start will be ignored.
Subsequent missing events will be downloaded (e.g. a missed event)
""",
)
@click.option(
"--download-rate-limit",
default=None,
show_default=True,
envvar="DOWNLOAD_RATELIMIT",
type=float,
help="Limit how events can be downloaded in one minute. Disabled by default",
)
@click.option(
"--max-event-length",
default=2 * 60 * 60,
show_default=True,
envvar="MAX_EVENT_LENGTH",
type=int,
help="Only download events shorter than this maximum length, in seconds",
)
@click.option(
"--experimental-downloader",
"use_experimental_downloader",
default=False,
show_default=True,
is_flag=True,
envvar="EXPERIMENTAL_DOWNLOADER",
help="""\b
If set, a new experimental download mechanism will be used to match
what the web UI does. This might be more stable if you are experiencing
a lot of failed downloads with the default downloader.
""",
)
@click.option(
"--parallel-uploads",
default=1,
show_default=True,
envvar="PARALLEL_UPLOADS",
type=int,
help="Max number of parallel uploads to allow",
)
@click.option(
"--storage-quota",
envvar="STORAGE_QUOTA",
help='The maximum amount of storage to use for storing clips (you can use suffixes like "B", "KiB", "MiB", "GiB")',
callback=lambda ctx, param, value: int(human_readable_to_float(value)) if value is not None else None,
)
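# e.g. a quota of "10GiB" would become 10737418240 bytes, again assuming
# 1024-based suffixes; when the option is unset the value stays None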
def main(**kwargs):
"""A Python based tool for backing up Unifi Protect event clips as they occur."""
loop = asyncio.get_event_loop()
event_listener = UnifiProtectBackup(**kwargs)
loop.run_until_complete(event_listener.start())
"""Python based tool for backing up Unifi Protect event clips as they occur."""
try:
# Validate only one of the camera select arguments was given
if kwargs.get("cameras") and kwargs.get("ignore_cameras"):
click.echo(
"Error: --camera and --ignore-camera options are mutually exclusive. "
"Please use only one of these options.",
err=True,
)
raise SystemExit(200) # exit code 200 = arg error, service will not be restarted (docker)
# Only create the event listener and run if validation passes
event_listener = UnifiProtectBackup(**kwargs)
run(event_listener.start(), stop_on_unhandled_errors=True)
except SystemExit as e:
sys.exit(e.code)
except Exception as e:
click.echo(f"Error: {str(e)}", err=True)
sys.exit(1)
if __name__ == "__main__":


@@ -0,0 +1,228 @@
# noqa: D100
import asyncio
import json
import logging
import shutil
from datetime import datetime, timedelta, timezone
from typing import Optional
import aiosqlite
import pytz
from aiohttp.client_exceptions import ClientPayloadError
from aiolimiter import AsyncLimiter
from expiring_dict import ExpiringDict # type: ignore
from uiprotect import ProtectApiClient
from uiprotect.data.nvr import Event
from uiprotect.data.types import EventType
from unifi_protect_backup.utils import (
SubprocessException,
VideoQueue,
get_camera_name,
human_readable_size,
run_command,
setup_event_logger,
)
async def get_video_length(video: bytes) -> float:
"""Use ffprobe to get the length of the video file passed in as a byte stream."""
returncode, stdout, stderr = await run_command(
"ffprobe -v quiet -show_streams -select_streams v:0 -of json -", video
)
if returncode != 0:
raise SubprocessException(stdout, stderr, returncode)
json_data = json.loads(stdout)
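# ffprobe's JSON output looks roughly like
# {"streams": [{"duration": "12.480000", ...}]}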
return float(json_data["streams"][0]["duration"])
class VideoDownloader:
"""Downloads event video clips from Unifi Protect."""
def __init__(
self,
protect: ProtectApiClient,
db: aiosqlite.Connection,
download_queue: asyncio.Queue,
upload_queue: VideoQueue,
color_logging: bool,
download_rate_limit: float,
max_event_length: timedelta,
):
"""Init.
Args:
protect (ProtectApiClient): UniFi Protect API client to use
db (aiosqlite.Connection): Async SQLite database to check for missing events
download_queue (asyncio.Queue): Queue to get event details from
upload_queue (VideoQueue): Queue to place downloaded videos on
color_logging (bool): Whether or not to add color to logging output
download_rate_limit (float): Limit how many events can be downloaded in one minute
max_event_length (timedelta): Maximum length in seconds for an event to be considered valid and downloaded
"""
self._protect: ProtectApiClient = protect
self._db: aiosqlite.Connection = db
self.download_queue: asyncio.Queue = download_queue
self.upload_queue: VideoQueue = upload_queue
self.current_event = None
self._failures = ExpiringDict(60 * 60 * 12) # Time to live = 12h
self._download_rate_limit = download_rate_limit
self._max_event_length = max_event_length
self._limiter = AsyncLimiter(self._download_rate_limit) if self._download_rate_limit is not None else None
self.base_logger = logging.getLogger(__name__)
setup_event_logger(self.base_logger, color_logging)
self.logger = logging.LoggerAdapter(self.base_logger, {"event": ""})
# Check if `ffprobe` is available
ffprobe = shutil.which("ffprobe")
if ffprobe is not None:
self.logger.debug(f"ffprobe found: {ffprobe}")
self._has_ffprobe = True
else:
self._has_ffprobe = False
async def start(self):
"""Run main loop."""
self.logger.info("Starting Downloader")
while True:
if self._limiter:
self.logger.debug("Waiting for rate limit")
await self._limiter.acquire()
try:
# Wait for unifi protect to be connected
await self._protect.connect_event.wait()
event = await self.download_queue.get()
self.current_event = event
self.logger = logging.LoggerAdapter(self.base_logger, {"event": f" [{event.id}]"})
# Fix timezones since uiprotect sets all timestamps to UTC. Instead localize them to
# the timezone of the unifi protect NVR.
event.start = event.start.replace(tzinfo=pytz.utc).astimezone(self._protect.bootstrap.nvr.timezone)
event.end = event.end.replace(tzinfo=pytz.utc).astimezone(self._protect.bootstrap.nvr.timezone)
self.logger.info(f"Downloading event: {event.id}")
self.logger.debug(f"Remaining Download Queue: {self.download_queue.qsize()}")
output_queue_current_size = human_readable_size(self.upload_queue.qsize())
output_queue_max_size = human_readable_size(self.upload_queue.maxsize)
self.logger.debug(f"Video Download Buffer: {output_queue_current_size}/{output_queue_max_size}")
self.logger.debug(f" Camera: {await get_camera_name(self._protect, event.camera_id)}")
if event.type in [EventType.SMART_DETECT, EventType.SMART_AUDIO_DETECT]:
self.logger.debug(f" Type: {event.type.value} ({', '.join(event.smart_detect_types)})")
else:
self.logger.debug(f" Type: {event.type.value}")
self.logger.debug(f" Start: {event.start.strftime('%Y-%m-%dT%H-%M-%S')} ({event.start.timestamp()})")
self.logger.debug(f" End: {event.end.strftime('%Y-%m-%dT%H-%M-%S')} ({event.end.timestamp()})")
duration = (event.end - event.start).total_seconds()
self.logger.debug(f" Duration: {duration}s")
# Skip invalid events
if not self._valid_event(event):
await self._ignore_event(event)
continue
# Unifi protect does not return full video clips if the clip is requested too soon.
# There are two issues at play here:
# - Protect will only cut a clip on a keyframe, which happens every 5s
# - Protect's pipeline needs a finite amount of time to make a clip available
# So we will wait 1.5x the keyframe interval to ensure that there is always ample video
# stored and Protect can return a full clip (which should be at least the length requested,
# but often longer)
time_since_event_ended = datetime.utcnow().replace(tzinfo=timezone.utc) - event.end
sleep_time = (timedelta(seconds=5 * 1.5) - time_since_event_ended).total_seconds()
if sleep_time > 0:
self.logger.debug(f" Sleeping ({sleep_time}s) to ensure clip is ready to download...")
await asyncio.sleep(sleep_time)
try:
video = await self._download(event)
assert video is not None
except Exception as e:
# Increment failure count
if event.id not in self._failures:
self._failures[event.id] = 1
else:
self._failures[event.id] += 1
self.logger.warning(f"Event failed download attempt {self._failures[event.id]}", exc_info=e)
if self._failures[event.id] >= 10:
self.logger.error(
"Event has failed to download 10 times in a row. Permanently ignoring this event"
)
await self._ignore_event(event)
continue
# Remove successfully downloaded event from failures list
if event.id in self._failures:
del self._failures[event.id]
# Get the actual length of the downloaded video using ffprobe
if self._has_ffprobe:
await self._check_video_length(video, duration)
await self.upload_queue.put((event, video))
self.logger.debug("Added to upload queue")
self.current_event = None
except Exception as e:
self.logger.error(f"Unexpected exception occurred, abandoning event {event.id}:", exc_info=e)
async def _download(self, event: Event) -> Optional[bytes]:
"""Download the video clip for the given event."""
self.logger.debug(" Downloading video...")
for x in range(5):
assert isinstance(event.camera_id, str)
assert isinstance(event.start, datetime)
assert isinstance(event.end, datetime)
try:
video = await self._protect.get_camera_video(event.camera_id, event.start, event.end)
assert isinstance(video, bytes)
break
except (AssertionError, ClientPayloadError, TimeoutError) as e:
self.logger.warning(f" Failed download attempt {x + 1}, retying in 1s", exc_info=e)
await asyncio.sleep(1)
else:
self.logger.error(f"Download failed after 5 attempts, abandoning event {event.id}:")
return None
self.logger.debug(f" Downloaded video size: {human_readable_size(len(video))}s")
return video
async def _ignore_event(self, event):
self.logger.warning("Ignoring event")
await self._db.execute(
"INSERT INTO events VALUES "
f"('{event.id}', '{event.type.value}', '{event.camera_id}',"
f"'{event.start.timestamp()}', '{event.end.timestamp()}')"
)
await self._db.commit()
async def _check_video_length(self, video, duration):
"""Check if the downloaded event is at least the length of the event, warn otherwise.
It is expected for events to regularly be slightly longer than the event specified
"""
try:
downloaded_duration = await get_video_length(video)
msg = f" Downloaded video length: {downloaded_duration:.3f}s ({downloaded_duration - duration:+.3f}s)"
if downloaded_duration < duration:
self.logger.warning(msg)
else:
self.logger.debug(msg)
except SubprocessException as e:
self.logger.warning(" `ffprobe` failed", exc_info=e)
def _valid_event(self, event):
duration = event.end - event.start
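# e.g. with the default --max-event-length of 7200s, a 3-hour event is
# rejected here and subsequently ignored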
if duration > self._max_event_length:
self.logger.warning(f"Event longer ({duration}) than max allowed length {self._max_event_length}")
return False
return True


@@ -0,0 +1,239 @@
# noqa: D100
import asyncio
import json
import logging
import shutil
from datetime import datetime, timedelta, timezone
from typing import Optional
import aiosqlite
import pytz
from aiohttp.client_exceptions import ClientPayloadError
from aiolimiter import AsyncLimiter
from expiring_dict import ExpiringDict # type: ignore
from uiprotect import ProtectApiClient
from uiprotect.data.nvr import Event
from uiprotect.data.types import EventType
from unifi_protect_backup.utils import (
SubprocessException,
VideoQueue,
get_camera_name,
human_readable_size,
run_command,
setup_event_logger,
)
async def get_video_length(video: bytes) -> float:
"""Use ffprobe to get the length of the video file passed in as a byte stream."""
returncode, stdout, stderr = await run_command(
"ffprobe -v quiet -show_streams -select_streams v:0 -of json -", video
)
if returncode != 0:
raise SubprocessException(stdout, stderr, returncode)
json_data = json.loads(stdout)
return float(json_data["streams"][0]["duration"])
class VideoDownloaderExperimental:
"""Downloads event video clips from Unifi Protect."""
def __init__(
self,
protect: ProtectApiClient,
db: aiosqlite.Connection,
download_queue: asyncio.Queue,
upload_queue: VideoQueue,
color_logging: bool,
download_rate_limit: float,
max_event_length: timedelta,
):
"""Init.
Args:
protect (ProtectApiClient): UniFi Protect API client to use
db (aiosqlite.Connection): Async SQLite database to check for missing events
download_queue (asyncio.Queue): Queue to get event details from
upload_queue (VideoQueue): Queue to place downloaded videos on
color_logging (bool): Whether or not to add color to logging output
download_rate_limit (float): Limit how many events can be downloaded in one minute
max_event_length (timedelta): Maximum length in seconds for an event to be considered valid and downloaded
"""
self._protect: ProtectApiClient = protect
self._db: aiosqlite.Connection = db
self.download_queue: asyncio.Queue = download_queue
self.upload_queue: VideoQueue = upload_queue
self.current_event = None
self._failures = ExpiringDict(60 * 60 * 12) # Time to live = 12h
self._download_rate_limit = download_rate_limit
self._max_event_length = max_event_length
self._limiter = AsyncLimiter(self._download_rate_limit) if self._download_rate_limit is not None else None
self.base_logger = logging.getLogger(__name__)
setup_event_logger(self.base_logger, color_logging)
self.logger = logging.LoggerAdapter(self.base_logger, {"event": ""})
# Check if `ffprobe` is available
ffprobe = shutil.which("ffprobe")
if ffprobe is not None:
self.logger.debug(f"ffprobe found: {ffprobe}")
self._has_ffprobe = True
else:
self._has_ffprobe = False
async def start(self):
"""Run main loop."""
self.logger.info("Starting Downloader")
while True:
if self._limiter:
self.logger.debug("Waiting for rate limit")
await self._limiter.acquire()
try:
# Wait for unifi protect to be connected
await self._protect.connect_event.wait()
event = await self.download_queue.get()
self.current_event = event
self.logger = logging.LoggerAdapter(self.base_logger, {"event": f" [{event.id}]"})
# Fix timezones since uiprotect sets all timestamps to UTC. Instead localize them to
# the timezone of the unifi protect NVR.
event.start = event.start.replace(tzinfo=pytz.utc).astimezone(self._protect.bootstrap.nvr.timezone)
event.end = event.end.replace(tzinfo=pytz.utc).astimezone(self._protect.bootstrap.nvr.timezone)
self.logger.info(f"Downloading event: {event.id}")
self.logger.debug(f"Remaining Download Queue: {self.download_queue.qsize()}")
output_queue_current_size = human_readable_size(self.upload_queue.qsize())
output_queue_max_size = human_readable_size(self.upload_queue.maxsize)
self.logger.debug(f"Video Download Buffer: {output_queue_current_size}/{output_queue_max_size}")
self.logger.debug(f" Camera: {await get_camera_name(self._protect, event.camera_id)}")
if event.type in [EventType.SMART_DETECT, EventType.SMART_AUDIO_DETECT]:
self.logger.debug(f" Type: {event.type.value} ({', '.join(event.smart_detect_types)})")
else:
self.logger.debug(f" Type: {event.type.value}")
self.logger.debug(f" Start: {event.start.strftime('%Y-%m-%dT%H-%M-%S')} ({event.start.timestamp()})")
self.logger.debug(f" End: {event.end.strftime('%Y-%m-%dT%H-%M-%S')} ({event.end.timestamp()})")
duration = (event.end - event.start).total_seconds()
self.logger.debug(f" Duration: {duration}s")
# Skip invalid events
if not self._valid_event(event):
await self._ignore_event(event)
continue
# Unifi protect does not return full video clips if the clip is requested too soon.
# There are two issues at play here:
# - Protect will only cut a clip on a keyframe, which happens every 5s
# - Protect's pipeline needs a finite amount of time to make a clip available
# So we will wait 1.5x the keyframe interval to ensure that there is always ample video
# stored and Protect can return a full clip (which should be at least the length requested,
# but often longer)
time_since_event_ended = datetime.utcnow().replace(tzinfo=timezone.utc) - event.end
sleep_time = (timedelta(seconds=5 * 1.5) - time_since_event_ended).total_seconds()
if sleep_time > 0:
self.logger.debug(f" Sleeping ({sleep_time}s) to ensure clip is ready to download...")
await asyncio.sleep(sleep_time)
try:
video = await self._download(event)
assert video is not None
except Exception as e:
# Increment failure count
if event.id not in self._failures:
self._failures[event.id] = 1
else:
self._failures[event.id] += 1
self.logger.warning(
f"Event failed download attempt {self._failures[event.id]}",
exc_info=e,
)
if self._failures[event.id] >= 10:
self.logger.error(
"Event has failed to download 10 times in a row. Permanently ignoring this event"
)
await self._ignore_event(event)
continue
# Remove successfully downloaded event from failures list
if event.id in self._failures:
del self._failures[event.id]
# Get the actual length of the downloaded video using ffprobe
if self._has_ffprobe:
await self._check_video_length(video, duration)
await self.upload_queue.put((event, video))
self.logger.debug("Added to upload queue")
self.current_event = None
except Exception as e:
self.logger.error(
f"Unexpected exception occurred, abandoning event {event.id}:",
exc_info=e,
)
async def _download(self, event: Event) -> Optional[bytes]:
"""Download the video clip for the given event."""
self.logger.debug(" Downloading video...")
for x in range(5):
assert isinstance(event.camera_id, str)
assert isinstance(event.start, datetime)
assert isinstance(event.end, datetime)
try:
prepared_video_file = await self._protect.prepare_camera_video( # type: ignore
event.camera_id, event.start, event.end
)
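# The prepared clip is then fetched by the file name the NVR returned,
# mirroring the two-step flow used by the Protect web UI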
video = await self._protect.download_camera_video( # type: ignore
event.camera_id, prepared_video_file["fileName"]
)
assert isinstance(video, bytes)
break
except (AssertionError, ClientPayloadError, TimeoutError) as e:
self.logger.warning(f" Failed download attempt {x + 1}, retying in 1s", exc_info=e)
await asyncio.sleep(1)
else:
self.logger.error(f"Download failed after 5 attempts, abandoning event {event.id}:")
return None
self.logger.debug(f" Downloaded video size: {human_readable_size(len(video))}s")
return video
async def _ignore_event(self, event):
self.logger.warning("Ignoring event")
await self._db.execute(
"INSERT INTO events VALUES "
f"('{event.id}', '{event.type.value}', '{event.camera_id}',"
f"'{event.start.timestamp()}', '{event.end.timestamp()}')"
)
await self._db.commit()
async def _check_video_length(self, video, duration):
"""Check if the downloaded event is at least the length of the event, warn otherwise.
It is expected for events to regularly be slightly longer than the event specified
"""
try:
downloaded_duration = await get_video_length(video)
msg = f" Downloaded video length: {downloaded_duration:.3f}s ({downloaded_duration - duration:+.3f}s)"
if downloaded_duration < duration:
self.logger.warning(msg)
else:
self.logger.debug(msg)
except SubprocessException as e:
self.logger.warning(" `ffprobe` failed", exc_info=e)
def _valid_event(self, event):
duration = event.end - event.start
if duration > self._max_event_length:
self.logger.warning(f"Event longer ({duration}) than max allowed length {self._max_event_length}")
return False
return True


@@ -0,0 +1,99 @@
# noqa: D100
import asyncio
import logging
from time import sleep
from typing import Set
from uiprotect.api import ProtectApiClient
from uiprotect.websocket import WebsocketState
from uiprotect.data.nvr import Event
from uiprotect.data.websocket import WSAction, WSSubscriptionMessage
from unifi_protect_backup.utils import wanted_event_type
logger = logging.getLogger(__name__)
class EventListener:
"""Listens to the unifi protect websocket for new events to backup."""
def __init__(
self,
event_queue: asyncio.Queue,
protect: ProtectApiClient,
detection_types: Set[str],
ignore_cameras: Set[str],
cameras: Set[str],
):
"""Init.
Args:
event_queue (asyncio.Queue): Queue to place events to backup on
protect (ProtectApiClient): UniFI Protect API client to use
detection_types (Set[str]): Desired Event detection types to look for
ignore_cameras (Set[str]): Cameras IDs to ignore events from
cameras (Set[str]): Cameras IDs to ONLY include events from
"""
self._event_queue: asyncio.Queue = event_queue
self._protect: ProtectApiClient = protect
self._unsub = None
self._unsub_websocket_state = None
self.detection_types: Set[str] = detection_types
self.ignore_cameras: Set[str] = ignore_cameras
self.cameras: Set[str] = cameras
async def start(self):
"""Run main Loop."""
logger.debug("Subscribed to websocket")
self._unsub_websocket_state = self._protect.subscribe_websocket_state(self._websocket_state_callback)
self._unsub = self._protect.subscribe_websocket(self._websocket_callback)
def _websocket_callback(self, msg: WSSubscriptionMessage) -> None:
"""'EVENT' websocket message callback.
Filters the incoming events, and puts completed events onto the download queue
Args:
msg (Event): Incoming event data
"""
logger.websocket_data(msg) # type: ignore
assert isinstance(msg.new_obj, Event)
if msg.action != WSAction.UPDATE:
return
if "end" not in msg.changed_data:
return
if not wanted_event_type(msg.new_obj, self.detection_types, self.cameras, self.ignore_cameras):
return
# TODO: Will this even work? I think it will block the async loop
while self._event_queue.full():
logger.extra_debug("Event queue full, waiting 1s...") # type: ignore
sleep(1)
self._event_queue.put_nowait(msg.new_obj)
# Unifi protect has started sending the event id in the websocket as a {event_id}-{camera_id} but when the
# API is queried they only have {event_id}. Keeping track of both of these would be complicated so
# instead we fudge the ID here to match what the API returns
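# e.g. a websocket id of "EVENTID-CAMERAID" becomes just "EVENTID"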
if "-" in msg.new_obj.id:
msg.new_obj.id = msg.new_obj.id.split("-")[0]
logger.debug(f"Adding event {msg.new_obj.id} to queue (Current download queue={self._event_queue.qsize()})")
def _websocket_state_callback(self, state: WebsocketState) -> None:
"""Websocket state message callback.
Flags the websocket for reconnection
Args:
state (WebsocketState): new state of the websocket
"""
if state == WebsocketState.DISCONNECTED:
logger.error("Unifi Protect Websocket lost connection. Reconnecting...")
elif state == WebsocketState.CONNECTED:
logger.info("Unifi Protect Websocket connection restored")


@@ -0,0 +1,178 @@
# noqa: D100
import asyncio
import logging
from datetime import datetime
from typing import AsyncIterator, List, Set
import aiosqlite
from dateutil.relativedelta import relativedelta
from uiprotect import ProtectApiClient
from uiprotect.data.nvr import Event
from uiprotect.data.types import EventType
from unifi_protect_backup import VideoDownloader, VideoUploader
from unifi_protect_backup.utils import EVENT_TYPES_MAP, wanted_event_type
logger = logging.getLogger(__name__)
class MissingEventChecker:
"""Periodically checks if any unifi protect events exist within the retention period that are not backed up."""
def __init__(
self,
protect: ProtectApiClient,
db: aiosqlite.Connection,
download_queue: asyncio.Queue,
downloader: VideoDownloader,
uploaders: List[VideoUploader],
retention: relativedelta,
detection_types: Set[str],
ignore_cameras: Set[str],
cameras: Set[str],
interval: int = 60 * 5,
) -> None:
"""Init.
Args:
protect (ProtectApiClient): UniFi Protect API client to use
db (aiosqlite.Connection): Async SQLite database to check for missing events
download_queue (asyncio.Queue): Download queue to check for on-going downloads
downloader (VideoDownloader): Downloader to check for on-going downloads
uploaders (List[VideoUploader]): Uploaders to check for on-going uploads
retention (relativedelta): Retention period to limit search window
detection_types (Set[str]): Detection types wanted to limit search
ignore_cameras (Set[str]): Ignored camera IDs to limit search
cameras (Set[str]): Included (ONLY) camera IDs to limit search
interval (int): How frequently, in seconds, to check for missing events
"""
self._protect: ProtectApiClient = protect
self._db: aiosqlite.Connection = db
self._download_queue: asyncio.Queue = download_queue
self._downloader: VideoDownloader = downloader
self._uploaders: List[VideoUploader] = uploaders
self.retention: relativedelta = retention
self.detection_types: Set[str] = detection_types
self.ignore_cameras: Set[str] = ignore_cameras
self.cameras: Set[str] = cameras
self.interval: int = interval
async def _get_missing_events(self) -> AsyncIterator[Event]:
start_time = datetime.now() - self.retention
end_time = datetime.now()
chunk_size = 500
while True:
# Get list of events that need to be backed up from unifi protect
logger.extra_debug(f"Fetching events for interval: {start_time} - {end_time}") # type: ignore
events_chunk = await self._protect.get_events(
start=start_time,
end=end_time,
types=list(EVENT_TYPES_MAP.keys()),
limit=chunk_size,
)
if not events_chunk:
break # There were no events to backup
# Filter out on-going events
unifi_events = {event.id: event for event in events_chunk if event.end is not None}
if not unifi_events:
break # No completed events to process
# The next chunk's start time is the latest end time among the completed events in the current chunk
start_time = max([event.end for event in unifi_events.values() if event.end is not None])
# Get list of events that have been backed up from the database
# events(id, type, camera_id, start, end)
async with self._db.execute("SELECT * FROM events") as cursor:
rows = await cursor.fetchall()
db_event_ids = {row[0] for row in rows}
# Prevent re-adding events currently in the download/upload queue
downloading_event_ids = {event.id for event in self._downloader.download_queue._queue} # type: ignore
current_download = self._downloader.current_event
if current_download is not None:
downloading_event_ids.add(current_download.id)
uploading_event_ids = {event.id for event, video in self._downloader.upload_queue._queue} # type: ignore
for uploader in self._uploaders:
current_upload = uploader.current_event
if current_upload is not None:
uploading_event_ids.add(current_upload.id)
missing_events = {
event_id: event
for event_id, event in unifi_events.items()
if event_id not in (db_event_ids | downloading_event_ids | uploading_event_ids)
}
# Exclude events of unwanted types
wanted_events = {
event_id: event
for event_id, event in missing_events.items()
if wanted_event_type(event, self.detection_types, self.cameras, self.ignore_cameras)
}
# Yield events one by one to allow the async loop to run other tasks while
# waiting on the full list of events
for event in wanted_events.values():
yield event
# The last chunk was incomplete, so we can stop now
if len(events_chunk) < chunk_size:
break
async def ignore_missing(self):
"""Ignore missing events by adding them to the event table."""
logger.info(" Ignoring missing events")
async for event in self._get_missing_events():
logger.extra_debug(f"Ignoring event '{event.id}'")
await self._db.execute(
"INSERT INTO events VALUES "
f"('{event.id}', '{event.type.value}', '{event.camera_id}',"
f"'{event.start.timestamp()}', '{event.end.timestamp()}')"
)
await self._db.commit()
async def start(self):
"""Run main loop."""
logger.info("Starting Missing Event Checker")
while True:
try:
shown_warning = False
# Wait for unifi protect to be connected
await self._protect.connect_event.wait()
logger.debug("Running check for missing events...")
async for event in self._get_missing_events():
if not shown_warning:
logger.warning(" Found missing events, adding to backup queue")
shown_warning = True
if event.type != EventType.SMART_DETECT:
event_name = f"{event.id} ({event.type.value})"
else:
event_name = f"{event.id} ({', '.join(event.smart_detect_types)})"
logger.extra_debug(
f" Adding missing event to backup queue: {event_name}"
f" ({event.start.strftime('%Y-%m-%dT%H-%M-%S')} -"
f" {event.end.strftime('%Y-%m-%dT%H-%M-%S')})"
)
await self._download_queue.put(event)
except Exception as e:
logger.error(
"Unexpected exception occurred during missing event check:",
exc_info=e,
)
await asyncio.sleep(self.interval)
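The exclusion rule in `_get_missing_events`, distilled into a standalone sketch (function name hypothetical): an event needs backing up only if it appears in none of the database, the download queue, or the upload queue.

def events_needing_backup(unifi_events, db_ids, downloading_ids, uploading_ids):
    """Missing events = everything Protect reports minus everything already handled."""
    handled = db_ids | downloading_ids | uploading_ids
    return {eid: ev for eid, ev in unifi_events.items() if eid not in handled}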


@@ -0,0 +1,18 @@
"""A 'singleton' module for registering apprise notifiers."""
import apprise
notifier = apprise.Apprise()
def add_notification_service(url):
"""Add apprise URI with support for tags e.g. TAG1,TAG2=PROTOCOL://settings."""
config = apprise.AppriseConfig()
config.add_config(url, format="text")
# If no tags are specified, default to errors, otherwise ALL logging will
# be spammed to the notification service
if not config.servers()[0].tags:
config.servers()[0].tags = {"ERROR"}
notifier.add(config)
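For example (a hypothetical Discord webhook URI, using the tag syntax described in the docstring), this forwards only ERROR and WARNING records to the service:

from unifi_protect_backup import notifications

# Tags before '=' select which log levels are sent; the credentials are placeholders.
notifications.add_notification_service("ERROR,WARNING=discord://webhook_id/webhook_token")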


@@ -0,0 +1,185 @@
# noqa: D100
import logging
import time
from datetime import datetime
import json
import asyncio
import aiosqlite
from dateutil.relativedelta import relativedelta
from unifi_protect_backup.utils import SubprocessException, run_command, wait_until, human_readable_size
logger = logging.getLogger(__name__)
async def delete_file(file_path, rclone_purge_args):
"""Delete `file_path` via rclone."""
returncode, stdout, stderr = await run_command(f'rclone delete -vv "{file_path}" {rclone_purge_args}')
if returncode != 0:
logger.error(f" Failed to delete file: '{file_path}'")
async def tidy_empty_dirs(base_dir_path):
"""Delete any empty directories in `base_dir_path` via rclone."""
returncode, stdout, stderr = await run_command(f'rclone rmdirs -vv --ignore-errors --leave-root "{base_dir_path}"')
if returncode != 0:
logger.error(" Failed to tidy empty dirs")
class Purge:
"""Deletes old files from rclone remotes."""
def __init__(
self,
db: aiosqlite.Connection,
retention: relativedelta,
rclone_destination: str,
interval: relativedelta | None,
rclone_purge_args: str = "",
):
"""Init.
Args:
db (aiosqlite.Connection): Async SQlite database connection to purge clips from
retention (relativedelta): How long clips should be kept
rclone_destination (str): What rclone destination the clips are stored in
interval (relativedelta): How often to purge old clips
rclone_purge_args (str): Optional extra arguments to pass to `rclone delete` directly.
"""
self._db: aiosqlite.Connection = db
self.retention: relativedelta = retention
self.rclone_destination: str = rclone_destination
self.interval: relativedelta = interval if interval is not None else relativedelta(days=1)
self.rclone_purge_args: str = rclone_purge_args
async def start(self):
"""Run main loop."""
while True:
try:
deleted_a_file = False
# For every event older than the retention time
retention_oldest_time = time.mktime((datetime.now() - self.retention).timetuple())
async with self._db.execute(
f"SELECT * FROM events WHERE end < {retention_oldest_time}"
) as event_cursor:
async for event_id, event_type, camera_id, event_start, event_end in event_cursor: # noqa: B007
logger.info(f"Purging event: {event_id}.")
# For every backup for this event
async with self._db.execute(f"SELECT * FROM backups WHERE id = '{event_id}'") as backup_cursor:
async for _, remote, file_path in backup_cursor:
logger.debug(f" Deleted: {remote}:{file_path}")
await delete_file(f"{remote}:{file_path}", self.rclone_purge_args)
deleted_a_file = True
# delete event from database
# entries in the `backups` table are automatically deleted by sqlite triggers
await self._db.execute(f"DELETE FROM events WHERE id = '{event_id}'")
await self._db.commit()
if deleted_a_file:
await tidy_empty_dirs(self.rclone_destination)
except Exception as e:
logger.error("Unexpected exception occurred during purge:", exc_info=e)
next_purge_time = datetime.now() + self.interval
logger.extra_debug(f"sleeping until {next_purge_time}")
await wait_until(next_purge_time)
async def get_utilisation(rclone_destination):
"""Get storage utilisation of rclone destination.
Args:
rclone_destination (str): What rclone destination the clips are stored in
"""
returncode, stdout, stderr = await run_command(f"rclone size {rclone_destination} --json")
if returncode != 0:
logger.error(f" Failed to get size of: '{rclone_destination}'")
raise SubprocessException(stdout, stderr, returncode)
return json.loads(stdout)["bytes"]
class StorageQuotaPurge:
"""Enforces maximum storage ultisation qutoa."""
def __init__(
self,
db: aiosqlite.Connection,
quota: int,
upload_event: asyncio.Event,
rclone_destination: str,
rclone_purge_args: str = "",
):
"""Init."""
self._db = db
self.quota = quota
self._upload_event = upload_event
self.rclone_destination = rclone_destination
self.rclone_purge_args = rclone_purge_args
async def start(self):
"""Run main loop."""
while True:
try:
# Wait for the uploaders to tell us there has been an upload
await self._upload_event.wait()
deleted_a_file = False
# While we exceed the storage quota
utilisation = await get_utilisation(self.rclone_destination)
while utilisation > self.quota:
# Get the oldest event
async with self._db.execute("SELECT id FROM events ORDER BY end ASC LIMIT 1") as event_cursor:
row = await event_cursor.fetchone()
if row is None:
logger.warning(
"Storage quota exceeded, but there are no events in the database"
" - Do you have stray files?"
)
break
event_id = row[0]
if (
not deleted_a_file
): # Only show this message once when the quota is exceeded till we drop below it again
logger.info(
f"Storage quota {human_readable_size(utilisation)}/{human_readable_size(self.quota)} "
"exceeded, purging oldest events"
)
# Get all the backups for this event
async with self._db.execute(f"SELECT * FROM backups WHERE id = '{event_id}'") as backup_cursor:
# Delete them
async for _, remote, file_path in backup_cursor:
logger.debug(f" Deleted: {remote}:{file_path}")
await delete_file(f"{remote}:{file_path}", self.rclone_purge_args)
deleted_a_file = True
# delete event from database
# entries in the `backups` table are automatically deleted by sqlite triggers
await self._db.execute(f"DELETE FROM events WHERE id = '{event_id}'")
await self._db.commit()
utilisation = await get_utilisation(self.rclone_destination)
logger.debug(
f"Storage utlisation: {human_readable_size(utilisation)}/{human_readable_size(self.quota)}"
)
if deleted_a_file:
await tidy_empty_dirs(self.rclone_destination)
logger.info(
"Storage utlisation back below quota limit: "
f"{human_readable_size(utilisation)}/{human_readable_size(self.quota)}"
)
self._upload_event.clear()
except Exception as e:
logger.error("Unexpected exception occurred during purge:", exc_info=e)


@@ -0,0 +1,139 @@
"""Monkey patch new download method into uiprotect till PR is merged."""
import enum
from datetime import datetime
from pathlib import Path
from typing import Any, Optional
import aiofiles
from uiprotect.data import Version
from uiprotect.exceptions import BadRequest
from uiprotect.utils import to_js_time
class VideoExportType(str, enum.Enum):
"""Unifi Protect video export types."""
TIMELAPSE = "timelapse"
ROTATING = "rotating"
def monkey_patch_experimental_downloader():
"""Apply patches to uiprotect to add new download method."""
from uiprotect.api import ProtectApiClient
# Add the version constant
ProtectApiClient.NEW_DOWNLOAD_VERSION = Version("4.0.0")  # `Version` is imported from uiprotect.data above
async def _validate_channel_id(self, camera_id: str, channel_index: int) -> None:
if self._bootstrap is None:
await self.update()
try:
camera = self._bootstrap.cameras[camera_id]
camera.channels[channel_index]
except (IndexError, AttributeError, KeyError) as e:
raise BadRequest(f"Invalid input: {e}") from e
async def prepare_camera_video(
self,
camera_id: str,
start: datetime,
end: datetime,
channel_index: int = 0,
validate_channel_id: bool = True,
fps: Optional[int] = None,
filename: Optional[str] = None,
) -> Optional[dict[str, Any]]:
if self.bootstrap.nvr.version < self.NEW_DOWNLOAD_VERSION:
raise ValueError("This method is only support from Unifi Protect version >= 4.0.0.")
if validate_channel_id:
await self._validate_channel_id(camera_id, channel_index)
params = {
"camera": camera_id,
"start": to_js_time(start),
"end": to_js_time(end),
}
if channel_index == 3:
params.update({"lens": 2})
else:
params.update({"channel": channel_index})
if fps is not None and fps > 0:
params["fps"] = fps
params["type"] = VideoExportType.TIMELAPSE.value
else:
params["type"] = VideoExportType.ROTATING.value
if not filename:
start_str = start.strftime("%m-%d-%Y, %H.%M.%S %Z")
end_str = end.strftime("%m-%d-%Y, %H.%M.%S %Z")
filename = f"{camera_id} {start_str} - {end_str}.mp4"
params["filename"] = filename
return await self.api_request(
"video/prepare",
params=params,
raise_exception=True,
)
async def download_camera_video(
self,
camera_id: str,
filename: str,
output_file: Optional[Path] = None,
iterator_callback: Optional[callable] = None,
progress_callback: Optional[callable] = None,
chunk_size: int = 65536,
) -> Optional[bytes]:
if self.bootstrap.nvr.version < self.NEW_DOWNLOAD_VERSION:
raise ValueError("This method is only support from Unifi Protect version >= 4.0.0.")
params = {
"camera": camera_id,
"filename": filename,
}
if iterator_callback is None and progress_callback is None and output_file is None:
return await self.api_request_raw(
"video/download",
params=params,
raise_exception=False,
)
r = await self.request(
"get",
f"{self.api_path}video/download",
auto_close=False,
timeout=0,
params=params,
)
if output_file is not None:
async with aiofiles.open(output_file, "wb") as output:
async def callback(total: int, chunk: Optional[bytes]) -> None:
if iterator_callback is not None:
await iterator_callback(total, chunk)
if chunk is not None:
await output.write(chunk)
await self._stream_response(r, chunk_size, callback, progress_callback)
else:
await self._stream_response(
r,
chunk_size,
iterator_callback,
progress_callback,
)
r.close()
return None
# Patch the methods into the class
ProtectApiClient._validate_channel_id = _validate_channel_id
ProtectApiClient.prepare_camera_video = prepare_camera_video
ProtectApiClient.download_camera_video = download_camera_video
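Once patched in, the intended call sequence (a sketch based on the signatures above; the client, camera ID, and timestamps are placeholders, and the awaits belong inside an async function) is to ask the NVR to prepare the export, then stream it down by filename:

from pathlib import Path

monkey_patch_experimental_downloader()
# protect = ProtectApiClient(...)  # an already-connected client
await protect.prepare_camera_video(camera_id, start, end, filename="clip.mp4")
await protect.download_camera_video(camera_id, "clip.mp4", output_file=Path("clip.mp4"))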


@@ -1,463 +0,0 @@
"""Main module."""
import asyncio
import logging
import pathlib
import shutil
from typing import Callable, List, Optional
import aiocron
import aiohttp
from pyunifiprotect import ProtectApiClient
from pyunifiprotect.data.nvr import Event
from pyunifiprotect.data.types import EventType, ModelType
from pyunifiprotect.data.websocket import WSAction, WSSubscriptionMessage
logger = logging.getLogger(__name__)
class RcloneException(Exception):
"""Exception class for when rclone does not exit with `0`."""
def __init__(self, stdout, stderr, returncode):
"""Exception class for when rclone does not exit with `0`.
Args:
stdout (str): What rclone output to stdout
stderr (str): What rclone output to stderr
returncode (str): The return code of the rclone process
"""
super().__init__()
self.stdout: str = stdout
self.stderr: str = stderr
self.returncode: int = returncode
def __str__(self):
"""Turns excpetion into a human readable form."""
return f"Return Code: {self.returncode}\nStdout:\n{self.stdout}\nStderr:\n{self.stderr}"
def add_logging_level(levelName: str, levelNum: int, methodName: Optional[str] = None) -> None:
"""Comprehensively adds a new logging level to the `logging` module and the currently configured logging class.
`levelName` becomes an attribute of the `logging` module with the value
`levelNum`. `methodName` becomes a convenience method for both `logging`
itself and the class returned by `logging.getLoggerClass()` (usually just
`logging.Logger`).
To avoid accidental clobbering of existing attributes, this method will
raise an `AttributeError` if the level name is already an attribute of the
`logging` module or if the method name is already present
Credit: https://stackoverflow.com/a/35804945
Args:
levelName (str): The name of the new logging level (in all caps).
levelNum (int): The priority value of the logging level, lower=more verbose.
methodName (str): The name of the method used to log using this.
If `methodName` is not specified, `levelName.lower()` is used.
Example:
::
>>> add_logging_level('TRACE', logging.DEBUG - 5)
>>> logging.getLogger(__name__).setLevel("TRACE")
>>> logging.getLogger(__name__).trace('that worked')
>>> logging.trace('so did this')
>>> logging.TRACE
5
"""
if not methodName:
methodName = levelName.lower()
if hasattr(logging, levelName):
raise AttributeError('{} already defined in logging module'.format(levelName))
if hasattr(logging, methodName):
raise AttributeError('{} already defined in logging module'.format(methodName))
if hasattr(logging.getLoggerClass(), methodName):
raise AttributeError('{} already defined in logger class'.format(methodName))
# This method was inspired by the answers to Stack Overflow post
# http://stackoverflow.com/q/2183233/2988730, especially
# http://stackoverflow.com/a/13638084/2988730
def logForLevel(self, message, *args, **kwargs):
if self.isEnabledFor(levelNum):
self._log(levelNum, message, args, **kwargs)
def logToRoot(message, *args, **kwargs):
logging.log(levelNum, message, *args, **kwargs)
logging.addLevelName(levelNum, levelName)
setattr(logging, levelName, levelNum)
setattr(logging.getLoggerClass(), methodName, logForLevel)
setattr(logging, methodName, logToRoot)
def setup_logging(verbosity: int) -> None:
"""Configures loggers to provided the desired level of verbosity.
Verbosity 0: Only log info messages created by `unifi-protect-backup`, and all warnings
verbosity 1: Only log info & debug messages created by `unifi-protect-backup`, and all warnings
verbosity 2: Log info & debug messages created by `unifi-protect-backup`, command output, and
all warnings
Verbosity 3: Log debug messages created by `unifi-protect-backup`, command output, all info
messages, and all warnings
Verbosity 4: Log debug messages created by `unifi-protect-backup` command output, all info
messages, all warnings, and websocket data
Verbosity 5: Log websocket data, command output, all debug messages, all info messages and all
warnings
Args:
verbosity (int): The desired level of verbosity
"""
add_logging_level(
'EXTRA_DEBUG',
logging.DEBUG - 1,
)
add_logging_level(
'WEBSOCKET_DATA',
logging.DEBUG - 2,
)
format = "{asctime} [{levelname}]:{name: <20}:\t{message}"
date_format = "%Y-%m-%d %H:%M:%S"
style = '{'
if verbosity == 0:
logging.basicConfig(level=logging.WARN, format=format, style=style, datefmt=date_format)
logger.setLevel(logging.INFO)
elif verbosity == 1:
logging.basicConfig(level=logging.WARN, format=format, style=style, datefmt=date_format)
logger.setLevel(logging.DEBUG)
elif verbosity == 2:
logging.basicConfig(level=logging.WARN, format=format, style=style, datefmt=date_format)
logger.setLevel(logging.EXTRA_DEBUG) # type: ignore
elif verbosity == 3:
logging.basicConfig(level=logging.INFO, format=format, style=style, datefmt=date_format)
logger.setLevel(logging.EXTRA_DEBUG) # type: ignore
elif verbosity == 4:
logging.basicConfig(level=logging.INFO, format=format, style=style, datefmt=date_format)
logger.setLevel(logging.WEBSOCKET_DATA) # type: ignore
elif verbosity == 5:
logging.basicConfig(level=logging.DEBUG, format=format, style=style, datefmt=date_format)
logger.setLevel(logging.WEBSOCKET_DATA) # type: ignore
def human_readable_size(num):
"""Turns a number into a human readable number with ISO/IEC 80000 binary prefixes.
Based on: https://stackoverflow.com/a/1094933
Args:
num (int): The number to be converted into human readable format
"""
for unit in ["B", "KiB", "MiB", "GiB", "TiB", "PiB", "EiB", "ZiB", "YiB"]:
if abs(num) < 1024.0:
return f"{num:3.1f}{unit}"
num /= 1024.0
raise ValueError("`num` too large, ran out of prefixes")
class UnifiProtectBackup:
"""Backup Unifi protect event clips using rclone.
Listens to the Unifi Protect websocket for events. When a completed motion or smart detection
event is detected, it will download the clip and back it up using rclone
Attributes:
retention (str): How long should event clips be backed up for. Format as per the
`--max-age` argument of `rclone`
(https://rclone.org/filtering/#max-age-don-t-transfer-any-file-older-than-this)
ignore_cameras (List[str]): List of camera IDs for which to not backup events
verbose (int): How verbose to setup logging, see :func:`setup_logging` for details.
_download_queue (asyncio.Queue): Queue of events that need to be backed up
_unsub (Callable): Unsubscribe from the websocket callback
"""
def __init__(
self,
address: str,
username: str,
password: str,
verify_ssl: bool,
rclone_destination: str,
retention: str,
ignore_cameras: List[str],
verbose: int,
port: int = 443,
):
"""Will configure logging settings and the Unifi Protect API (but not actually connect).
Args:
address (str): Base address of the Unifi Protect instance
port (int): Port of the Unifi Protect instance, usually 443
username (str): Username to log into Unifi Protect instance
password (str): Password for Unifi Protect user
verify_ssl (bool): Flag for if SSL certificates should be validated
rclone_destination (str): `rclone` destination path in the format
{rclone remote}:{path on remote}. E.g.
`gdrive:/backups/unifi_protect`
retention (str): How long should event clips be backed up for. Format as per the
`--max-age` argument of `rclone`
(https://rclone.org/filtering/#max-age-don-t-transfer-any-file-older-than-this)
ignore_cameras (List[str]): List of camera IDs for which to not backup events
verbose (int): How verbose to setup logging, see :func:`setup_logging` for details.
"""
setup_logging(verbose)
logger.debug("Config:")
logger.debug(f" {address=}")
logger.debug(f" {port=}")
logger.debug(f" {username=}")
if verbose < 5:
logger.debug(" password=REDACTED")
else:
logger.debug(f" {password=}")
logger.debug(f" {verify_ssl=}")
logger.debug(f" {rclone_destination=}")
logger.debug(f" {retention=}")
logger.debug(f" {ignore_cameras=}")
logger.debug(f" {verbose=}")
self.rclone_destination = rclone_destination
self.retention = retention
self._protect = ProtectApiClient(
address,
port,
username,
password,
verify_ssl=verify_ssl,
subscribed_models={ModelType.EVENT},
)
self.ignore_cameras = ignore_cameras
self._download_queue: asyncio.Queue = asyncio.Queue()
self._unsub: Callable[[], None]
async def start(self):
"""Bootstrap the backup process and kick off the main loop.
You should run this to start the realtime backup of Unifi Protect clips as they are created
"""
logger.info("Starting...")
# Ensure rclone is installed and properly configured
logger.info("Checking rclone configuration...")
await self._check_rclone()
# Start the pyunifiprotect connection by calling `update`
logger.info("Connecting to Unifi Protect...")
await self._protect.update()
logger.info("Found cameras:")
for camera in self._protect.bootstrap.cameras.values():
logger.info(f" - {camera.id}: {camera.name}")
# Subscribe to the websocket
self._unsub = self._protect.subscribe_websocket(self._websocket_callback)
# Set up a "purge" task to run at midnight each day to delete old recordings and empty directories
logger.info("Setting up purge task...")
@aiocron.crontab("0 0 * * *")
async def rclone_purge_old():
logger.info("Deleting old files...")
cmd = f"rclone delete -vv --min-age {self.retention} '{self.rclone_destination}'"
cmd += f" && rclone rmdirs -vv --leave-root '{self.rclone_destination}'"
proc = await asyncio.create_subprocess_shell(
cmd,
stdout=asyncio.subprocess.PIPE,
stderr=asyncio.subprocess.PIPE,
)
stdout, stderr = await proc.communicate()
if proc.returncode == 0:
logger.extra_debug(f"stdout:\n{stdout.decode()}") # type: ignore
logger.extra_debug(f"stderr:\n{stderr.decode()}") # type: ignore
logger.info("Successfully deleted old files")
else:
logger.warn("Failed to purge old files")
logger.warn(f"stdout:\n{stdout.decode()}")
logger.warn(f"stderr:\n{stderr.decode()}")
# Launches the main loop
logger.info("Listening for events...")
await self._backup_events()
logger.info("Stopping...")
# Unsubscribes from the websocket
self._unsub()
async def _check_rclone(self) -> None:
"""Check if rclone is installed and the specified remote is configured.
Raises:
RcloneException: If rclone is not installed or it failed to list remotes
ValueError: The given rclone destination is for a remote that is not configured
"""
rclone = shutil.which('rclone')
logger.debug(f"rclone found: {rclone}")
if not rclone:
raise RuntimeError("`rclone` is not installed on this system")
cmd = "rclone listremotes -vv"
proc = await asyncio.create_subprocess_shell(
cmd,
stdout=asyncio.subprocess.PIPE,
stderr=asyncio.subprocess.PIPE,
)
stdout, stderr = await proc.communicate()
logger.extra_debug(f"stdout:\n{stdout.decode()}") # type: ignore
logger.extra_debug(f"stderr:\n{stderr.decode()}") # type: ignore
if proc.returncode != 0:
raise RcloneException(stdout.decode(), stderr.decode(), proc.returncode)
# Check if the destination is for a configured remote
for line in stdout.splitlines():
if self.rclone_destination.startswith(line.decode()):
break
else:
remote = self.rclone_destination.split(":")[0]
raise ValueError(f"rclone does not have a remote called `{remote}`")
def _websocket_callback(self, msg: WSSubscriptionMessage) -> None:
"""Callback for "EVENT" websocket messages.
Filters the incoming events, and puts completed events onto the download queue
Args:
msg (Event): Incoming event data
"""
logger.websocket_data(msg) # type: ignore
# We are only interested in updates that end motion/smartdetection event
assert isinstance(msg.new_obj, Event)
if msg.action != WSAction.UPDATE:
return
if msg.new_obj.camera_id in self.ignore_cameras:
return
if msg.new_obj.end is None:
return
if msg.new_obj.type not in {EventType.MOTION, EventType.SMART_DETECT}:
return
self._download_queue.put_nowait(msg.new_obj)
logger.debug(f"Adding event {msg.new_obj.id} to queue (Current queue={self._download_queue.qsize()})")
async def _backup_events(self) -> None:
"""Main loop for backing up events.
Waits for an event in the queue, then downloads the corresponding clip and uploads it using rclone.
If errors occur it will simply log the errors and wait for the next event. In a future release,
retries will be added.
"""
while True:
event = await self._download_queue.get()
destination = self.generate_file_path(event)
logger.info(f"Backing up event: {event.id}")
logger.debug(f"Remaining Queue: {self._download_queue.qsize()}")
logger.debug(f" Camera: {self._protect.bootstrap.cameras[event.camera_id].name}")
logger.debug(f" Type: {event.type}")
logger.debug(f" Start: {event.start.strftime('%Y-%m-%dT%H-%M-%S')}")
logger.debug(f" End: {event.end.strftime('%Y-%m-%dT%H-%M-%S')}")
logger.debug(f" Duration: {event.end-event.start}")
try:
# Download video
logger.debug(" Downloading video...")
for x in range(5):
try:
video = await self._protect.get_camera_video(event.camera_id, event.start, event.end)
assert isinstance(video, bytes)
break
except (AssertionError, aiohttp.client_exceptions.ClientPayloadError) as e:
logger.warn(f" Failed download attempt {x+1}, retying in 1s")
logger.exception(e)
await asyncio.sleep(1)
else:
logger.warn(f"Download failed after 5 attempts, abandoning event {event.id}:")
continue
logger.debug(" Uploading video via rclone...")
logger.debug(f" To: {destination}")
logger.debug(f" Size: {human_readable_size(len(video))}")
for x in range(5):
try:
await self._upload_video(video, destination)
break
except RcloneException as e:
logger.warn(f" Failed upload attempt {x+1}, retying in 1s")
logger.exception(e)
await asyncio.sleep(1)
else:
logger.warn(f"Upload failed after 5 attempts, abandoning event {event.id}:")
continue
logger.info("Backed up successfully!")
except Exception as e:
logger.warn(f"Unexpected exception occurred, abandoning event {event.id}:")
logger.exception(e)
async def _upload_video(self, video: bytes, destination: pathlib.Path):
"""Upload video using rclone.
In order to avoid writing to disk, the video file data is piped directly
to the rclone process and uploaded using the `rcat` function of rclone.
Args:
video (bytes): The data to be written to the file
destination (pathlib.Path): Where rclone should write the file
Raises:
RcloneException: If rclone returns a non-zero exit code
"""
cmd = f"rclone rcat -vv '{destination}'"
proc = await asyncio.create_subprocess_shell(
cmd,
stdin=asyncio.subprocess.PIPE,
stdout=asyncio.subprocess.PIPE,
stderr=asyncio.subprocess.PIPE,
)
stdout, stderr = await proc.communicate(video)
if proc.returncode == 0:
logger.extra_debug(f"stdout:\n{stdout.decode()}") # type: ignore
logger.extra_debug(f"stderr:\n{stderr.decode()}") # type: ignore
else:
raise RcloneException(stdout.decode(), stderr.decode(), proc.returncode)
def generate_file_path(self, event: Event) -> pathlib.Path:
"""Generates the rclone destination path for the provided event.
Generates paths in the following structure:
::
rclone_destination
|- Camera Name
|- {Date}
|- {start timestamp} {event type} ({detections}).mp4
Args:
event: The event for which to create an output path
Returns:
pathlib.Path: The rclone path the event should be backed up to
"""
path = pathlib.Path(self.rclone_destination)
assert isinstance(event.camera_id, str)
path /= self._protect.bootstrap.cameras[event.camera_id].name # directory per camera
path /= event.start.strftime("%Y-%m-%d") # Directory per day
file_name = f"{event.start.strftime('%Y-%m-%dT%H-%M-%S')} {event.type}"
if event.smart_detect_types:
detections = " ".join(event.smart_detect_types)
file_name += f" ({detections})"
file_name += ".mp4"
path /= file_name
return path


@@ -0,0 +1,390 @@
"""Main module."""
import asyncio
import logging
import os
import shutil
from datetime import datetime, timedelta, timezone
from typing import Callable, List
import aiosqlite
from dateutil.relativedelta import relativedelta
from uiprotect import ProtectApiClient
from uiprotect.data.types import ModelType
from unifi_protect_backup import (
EventListener,
MissingEventChecker,
Purge,
StorageQuotaPurge,
VideoDownloader,
VideoDownloaderExperimental,
VideoUploader,
notifications,
)
from unifi_protect_backup.utils import (
SubprocessException,
VideoQueue,
human_readable_size,
run_command,
setup_logging,
)
from unifi_protect_backup.uiprotect_patch import monkey_patch_experimental_downloader
logger = logging.getLogger(__name__)
# TODO: https://github.com/cjrh/aiorun#id6 (smart shield)
# We have been waiting for a long time for this PR to get merged
# https://github.com/uilibs/uiprotect/pull/249
# Since it has not progressed, we will for now patch in the functionality ourselves
monkey_patch_experimental_downloader()
async def create_database(path: str):
"""Create sqlite database and creates the events abd backups tables."""
db = await aiosqlite.connect(path)
await db.execute("CREATE TABLE events(id PRIMARY KEY, type, camera_id, start REAL, end REAL)")
await db.execute(
"CREATE TABLE backups(id REFERENCES events(id) ON DELETE CASCADE, remote, path, PRIMARY KEY (id, remote))"
)
await db.commit()
return db
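Because `backups.id` references `events.id` with `ON DELETE CASCADE` (and foreign keys are enabled later with `PRAGMA foreign_keys = ON`), deleting an event row is enough to clear its backup rows. A sketch (function name hypothetical):

async def forget_event(db, event_id):
    """Removing an event also removes its `backups` rows via the cascade."""
    await db.execute("PRAGMA foreign_keys = ON;")
    await db.execute("DELETE FROM events WHERE id = ?", (event_id,))
    await db.commit()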
class UnifiProtectBackup:
"""Backup Unifi protect event clips using rclone.
Listens to the Unifi Protect websocket for events. When a completed motion or smart detection
event is detected, it will download the clip and back it up using rclone
"""
def __init__(
self,
address: str,
username: str,
password: str,
verify_ssl: bool,
rclone_destination: str,
retention: relativedelta,
rclone_args: str,
rclone_purge_args: str,
detection_types: List[str],
ignore_cameras: List[str],
cameras: List[str],
file_structure_format: str,
verbose: int,
download_buffer_size: int,
purge_interval: relativedelta,
apprise_notifiers: str,
skip_missing: bool,
max_event_length: int,
sqlite_path: str = "events.sqlite",
color_logging: bool = False,
download_rate_limit: float | None = None,
port: int = 443,
use_experimental_downloader: bool = False,
parallel_uploads: int = 1,
storage_quota: int | None = None,
):
"""Will configure logging settings and the Unifi Protect API (but not actually connect).
Args:
address (str): Base address of the Unifi Protect instance
port (int): Port of the Unifi Protect instance, usually 443
username (str): Username to log into Unifi Protect instance
password (str): Password for Unifi Protect user
verify_ssl (bool): Flag for if SSL certificates should be validated
rclone_destination (str): `rclone` destination path in the format
{rclone remote}:{path on remote}. E.g.
`gdrive:/backups/unifi_protect`
retention (relativedelta): How long event clips should be backed up for
rclone_args (str): A bandwidth limit which is passed to the `--bwlimit` argument of
`rclone` (https://rclone.org/docs/#bwlimit-bandwidth-spec)
rclone_purge_args (str): Optional extra arguments to pass to `rclone delete` directly.
detection_types (List[str]): List of which detection types to backup.
ignore_cameras (List[str]): List of camera IDs for which to not backup events.
cameras (List[str]): List of ONLY camera IDs for which to backup events.
file_structure_format (str): A Python format string for output file path.
verbose (int): How verbose to setup logging, see :func:`setup_logging` for details.
download_buffer_size (int): How many bytes big the download buffer should be
purge_interval (relativedelta): How often to check for files to delete
apprise_notifiers (str): Apprise URIs for notifications
skip_missing (bool): If initial missing events should be ignored
sqlite_path (str): Path where to find/create sqlite database
color_logging (bool): Whether to add color to logging output or not
download_rate_limit (float): Limit how many events can be downloaded per minute. Disabled by default
max_event_length (int): Maximum length in seconds for an event to be considered valid and downloaded
use_experimental_downloader (bool): Use the new experimental downloader (the same method as used by the
webUI)
parallel_uploads (int): Max number of parallel uploads to allow
storage_quota (int): Maximum storage utilisation in bytes
"""
self.color_logging = color_logging
setup_logging(verbose, self.color_logging)
for notifier in apprise_notifiers:
try:
notifications.add_notification_service(notifier)
except Exception as e:
logger.error(f"Error occurred when setting up logger `{notifier}`", exc_info=e)
raise
logger.debug("Config:")
logger.debug(f" {address=}")
logger.debug(f" {port=}")
logger.debug(f" {username=}")
if verbose < 5:
logger.debug(" password=REDACTED")
else:
logger.debug(f" {password=}")
logger.debug(f" {verify_ssl=}")
logger.debug(f" {rclone_destination=}")
logger.debug(f" {retention=}")
logger.debug(f" {rclone_args=}")
logger.debug(f" {rclone_purge_args=}")
logger.debug(f" {ignore_cameras=}")
logger.debug(f" {cameras=}")
logger.debug(f" {verbose=}")
logger.debug(f" {detection_types=}")
logger.debug(f" {file_structure_format=}")
logger.debug(f" {sqlite_path=}")
logger.debug(f" download_buffer_size={human_readable_size(download_buffer_size)}")
logger.debug(f" {purge_interval=}")
logger.debug(f" {apprise_notifiers=}")
logger.debug(f" {skip_missing=}")
logger.debug(f" {download_rate_limit=} events per minute")
logger.debug(f" {max_event_length=}s")
logger.debug(f" {use_experimental_downloader=}")
logger.debug(f" {parallel_uploads=}")
logger.debug(f" {storage_quota=}")
self.rclone_destination = rclone_destination
self.retention = retention
self.rclone_args = rclone_args
self.rclone_purge_args = rclone_purge_args
self.file_structure_format = file_structure_format
self.address = address
self.port = port
self.username = username
self.password = password
self.verify_ssl = verify_ssl
self._protect = ProtectApiClient(
self.address,
self.port,
self.username,
self.password,
verify_ssl=self.verify_ssl,
subscribed_models={ModelType.EVENT},
)
self.ignore_cameras = set(ignore_cameras)
self.cameras = set(cameras)
self._download_queue: asyncio.Queue = asyncio.Queue()
self._unsub: Callable[[], None]
self.detection_types = set(detection_types)
self._has_ffprobe = False
self._sqlite_path = sqlite_path
self._db = None
self._download_buffer_size = download_buffer_size
self._purge_interval = purge_interval
self._skip_missing = skip_missing
self._download_rate_limit = download_rate_limit
self._max_event_length = timedelta(seconds=max_event_length)
self._use_experimental_downloader = use_experimental_downloader
self._parallel_uploads = parallel_uploads
self._storage_quota = storage_quota
async def start(self):
"""Bootstrap the backup process and kick off the main loop.
You should run this to start the realtime backup of Unifi Protect clips as they are created
"""
try:
logger.info("Starting...")
if notifications.notifier.servers:
await notifications.notifier.async_notify("Starting UniFi Protect Backup")
# Ensure `rclone` is installed and properly configured
logger.info("Checking rclone configuration...")
await self._check_rclone()
# Start the uiprotect connection by calling `update`
logger.info("Connecting to Unifi Protect...")
delay = 5 # Start with a 5 second delay
max_delay = 3600 # 1 hour in seconds
for _ in range(20):
try:
await self._protect.update()
break
except Exception as e:
logger.warning(
f"Failed to connect to UniFi Protect, retrying in {delay}s...",
exc_info=e,
)
await asyncio.sleep(delay)
delay = min(max_delay, delay * 2) # Double the delay but do not exceed max_delay
else:
raise ConnectionError("Failed to connect to UniFi Protect after 20 attempts")
# Add a connection event flag to the protect client that other code can wait on, so it does not
# access the client while the connection is down
self._protect.connect_event = asyncio.Event()
self._protect.connect_event.set()
# Get a mapping of camera ids -> names
logger.info("Found cameras:")
for camera in self._protect.bootstrap.cameras.values():
logger.info(f" - {camera.id}: {camera.name}")
# Print timezone info for debugging
logger.debug(f"NVR TZ: {self._protect.bootstrap.nvr.timezone}")
logger.debug(f"Local TZ: {datetime.now(timezone.utc).astimezone().tzinfo}")
tasks = []
if not os.path.exists(self._sqlite_path):
logger.info("Database doesn't exist, creating a new one")
self._db = await create_database(self._sqlite_path)
else:
self._db = await aiosqlite.connect(self._sqlite_path)
download_queue = asyncio.Queue()
upload_queue = VideoQueue(self._download_buffer_size)
# Enable foreign keys in the database
await self._db.execute("PRAGMA foreign_keys = ON;")
# Create downloader task
# This will download video files to its buffer
if self._use_experimental_downloader:
downloader_cls = VideoDownloaderExperimental
else:
downloader_cls = VideoDownloader
downloader = downloader_cls(
self._protect,
self._db,
download_queue,
upload_queue,
self.color_logging,
self._download_rate_limit,
self._max_event_length,
)
tasks.append(downloader.start())
# Create upload tasks
# This will upload the videos in the downloader's buffer to the rclone remotes and log it in the database
uploaders = []
for _ in range(self._parallel_uploads):
uploader = VideoUploader(
self._protect,
upload_queue,
self.rclone_destination,
self.rclone_args,
self.file_structure_format,
self._db,
self.color_logging,
)
uploaders.append(uploader)
tasks.append(uploader.start())
# Create event listener task
# This will connect to the unifi protect websocket and listen for events. When one is detected it will
# be added to the queue of events to download
event_listener = EventListener(
download_queue, self._protect, self.detection_types, self.ignore_cameras, self.cameras
)
tasks.append(event_listener.start())
# Create purge task
# This will, every _purge_interval, purge old backups from the rclone remotes and database
purge = Purge(
self._db,
self.retention,
self.rclone_destination,
self._purge_interval,
self.rclone_purge_args,
)
tasks.append(purge.start())
if self._storage_quota is not None:
storage_quota_purger = StorageQuotaPurge(
self._db,
self._storage_quota,
uploader.upload_signal,
self.rclone_destination,
self.rclone_purge_args,
)
tasks.append(storage_quota_purger.start())
# Create missing event task
# This will check all the events within the retention period, if any have been missed and not backed up
# they will be added to the event queue
missing = MissingEventChecker(
self._protect,
self._db,
download_queue,
downloader,
uploaders,
self.retention,
self.detection_types,
self.ignore_cameras,
self.cameras,
)
if self._skip_missing:
logger.info("Ignoring missing events")
await missing.ignore_missing()
tasks.append(missing.start())
logger.info("Starting Tasks...")
await asyncio.gather(*[asyncio.create_task(task) for task in tasks])
except asyncio.CancelledError:
if self._protect is not None:
await self._protect.close_session()
if self._db is not None:
await self._db.close()
except Exception as e:
logger.error("Unexpected exception occurred in main loop:", exc_info=e)
await asyncio.sleep(10)  # Give remaining tasks a chance to complete, e.g. sending notifications
raise
async def _check_rclone(self) -> None:
"""Check if rclone is installed and the specified remote is configured.
Raises:
SubprocessException: If rclone is not installed or it failed to list remotes
ValueError: The given rclone destination is for a remote that is not configured
"""
rclone = shutil.which("rclone")
if not rclone:
raise RuntimeError("`rclone` is not installed on this system")
logger.debug(f"rclone found: {rclone}")
returncode, stdout, stderr = await run_command("rclone listremotes -vv")
if returncode != 0:
raise SubprocessException(stdout, stderr, returncode)
# Check if the destination is for a configured remote
for line in stdout.splitlines():
if self.rclone_destination.startswith(line):
break
else:
remote = self.rclone_destination.split(":")[0]
raise ValueError(f"rclone does not have a remote called `{remote}`")
# Ensure the base directory exists
await run_command(f"rclone mkdir -vv {self.rclone_destination}")


@@ -0,0 +1,174 @@
# noqa: D100
import logging
import pathlib
import re
from datetime import datetime
import asyncio
import aiosqlite
from uiprotect import ProtectApiClient
from uiprotect.data.nvr import Event
from unifi_protect_backup.utils import (
SubprocessException,
VideoQueue,
get_camera_name,
human_readable_size,
run_command,
setup_event_logger,
)
class VideoUploader:
"""Uploads videos from the video_queue to the provided rclone destination.
Keeps a log of what its uploaded in `db`
"""
def __init__(
self,
protect: ProtectApiClient,
upload_queue: VideoQueue,
rclone_destination: str,
rclone_args: str,
file_structure_format: str,
db: aiosqlite.Connection,
color_logging: bool,
):
"""Init.
Args:
protect (ProtectApiClient): UniFi Protect API client to use
upload_queue (VideoQueue): Queue to get video files from
rclone_destination (str): rclone file destination URI
rclone_args (str): arguments to pass to the rclone command
file_structure_format (str): format string for how to structure the uploaded files
db (aiosqlite.Connection): Async SQlite database connection
color_logging (bool): Whether or not to add color to logging output
"""
self._protect: ProtectApiClient = protect
self.upload_queue: VideoQueue = upload_queue
self._rclone_destination: str = rclone_destination
self._rclone_args: str = rclone_args
self._file_structure_format: str = file_structure_format
self._db: aiosqlite.Connection = db
self.current_event = None
self.upload_signal = asyncio.Event()  # public: StorageQuotaPurge waits on this after each upload
self.base_logger = logging.getLogger(__name__)
setup_event_logger(self.base_logger, color_logging)
self.logger = logging.LoggerAdapter(self.base_logger, {"event": ""})
async def start(self):
"""Run main loop.
Runs forever looking for video data in the video queue and then uploads it
using rclone, finally it updates the database
"""
self.logger.info("Starting Uploader")
while True:
try:
event, video = await self.upload_queue.get()
self.current_event = event
self.logger = logging.LoggerAdapter(self.base_logger, {"event": f" [{event.id}]"})
self.logger.info(f"Uploading event: {event.id}")
self.logger.debug(
f" Remaining Upload Queue: {self.upload_queue.qsize_files()}"
f" ({human_readable_size(self.upload_queue.qsize())})"
)
destination = await self._generate_file_path(event)
self.logger.debug(f" Destination: {destination}")
try:
await self._upload_video(video, destination, self._rclone_args)
await self._update_database(event, destination)
self.upload_signal.set()
self.logger.debug("Uploaded")
except SubprocessException:
self.logger.error(f" Failed to upload file: '{destination}'")
self.current_event = None
except Exception as e:
self.logger.error(f"Unexpected exception occurred, abandoning event {event.id}:", exc_info=e)
async def _upload_video(self, video: bytes, destination: pathlib.Path, rclone_args: str):
"""Upload video using rclone.
In order to avoid writing to disk, the video file data is piped directly
to the rclone process and uploaded using the `rcat` function of rclone.
Args:
video (bytes): The data to be written to the file
destination (pathlib.Path): Where rclone should write the file
rclone_args (str): Optional extra arguments to pass to `rclone`
Raises:
SubprocessException: If rclone returns a non-zero exit code
"""
returncode, stdout, stderr = await run_command(f'rclone rcat -vv {rclone_args} "{destination}"', video)
if returncode != 0:
raise SubprocessException(stdout, stderr, returncode)
async def _update_database(self, event: Event, destination: str):
"""Add the backed up event to the database along with where it was backed up to."""
assert isinstance(event.start, datetime)
assert isinstance(event.end, datetime)
await self._db.execute(
"INSERT INTO events VALUES "
f"('{event.id}', '{event.type.value}', '{event.camera_id}',"
f"'{event.start.timestamp()}', '{event.end.timestamp()}')"
)
remote, file_path = str(destination).split(":")
await self._db.execute(
f"""INSERT INTO backups VALUES
('{event.id}', '{remote}', '{file_path}')
"""
)
await self._db.commit()
async def _generate_file_path(self, event: Event) -> pathlib.Path:
"""Generate the rclone destination path for the provided event.
Generates the rclone destination path for the given event based upon the format string
in `self.file_structure_format`.
Provides the following fields to the format string:
event: The `Event` object as per
https://github.com/briis/uiprotect/blob/master/uiprotect/data/nvr.py
duration_seconds: The duration of the event in seconds
detection_type: A nicely formatted list of the event detection type and the smart detection types (if any)
camera_name: The name of the camera that generated this event
Args:
event: The event for which to create an output path
Returns:
pathlib.Path: The rclone path the event should be backed up to
"""
assert isinstance(event.camera_id, str)
assert isinstance(event.start, datetime)
assert isinstance(event.end, datetime)
format_context = {
"event": event,
"duration_seconds": (event.end - event.start).total_seconds(),
"detection_type": f"{event.type.value} ({' '.join(event.smart_detect_types)})"
if event.smart_detect_types
else f"{event.type.value}",
"camera_name": await get_camera_name(self._protect, event.camera_id),
}
file_path = self._file_structure_format.format(**format_context)
file_path = re.sub(r"[^\w\-_\.\(\)/ ]", "", file_path) # Sanitize any invalid chars
return pathlib.Path(f"{self._rclone_destination}/{file_path}")
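For instance, a format string like the following (hypothetical, but using only the fields listed above) produces per-camera, per-day folders:

file_structure_format = "{camera_name}/{event.start:%Y-%m-%d}/{event.start:%Y-%m-%dT%H-%M-%S} {detection_type}.mp4"
# e.g. "Front Door/2025-07-07/2025-07-07T01-17-31 smartDetectZone (person).mp4"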


@@ -0,0 +1,492 @@
"""Utility functions used throughout the code, kept here to allow re use and/or minimize clutter elsewhere."""
import asyncio
import logging
import re
from datetime import datetime
from typing import Optional, Set
from apprise import NotifyType
from async_lru import alru_cache
from uiprotect import ProtectApiClient
from uiprotect.data.nvr import Event
from uiprotect.data.types import EventType, SmartDetectObjectType, SmartDetectAudioType
from unifi_protect_backup import notifications
logger = logging.getLogger(__name__)
def add_logging_level(levelName: str, levelNum: int, methodName: Optional[str] = None) -> None:
"""Comprehensively adds a new logging level to the `logging` module and the currently configured logging class.
`levelName` becomes an attribute of the `logging` module with the value
`levelNum`. `methodName` becomes a convenience method for both `logging`
itself and the class returned by `logging.getLoggerClass()` (usually just
`logging.Logger`).
To avoid accidental clobbering of existing attributes, this method will
raise an `AttributeError` if the level name is already an attribute of the
`logging` module or if the method name is already present
Credit: https://stackoverflow.com/a/35804945
Args:
levelName (str): The name of the new logging level (in all caps).
levelNum (int): The priority value of the logging level, lower=more verbose.
methodName (str): The name of the method used to log using this.
If `methodName` is not specified, `levelName.lower()` is used.
Example:
::
>>> add_logging_level('TRACE', logging.DEBUG - 5)
>>> logging.getLogger(__name__).setLevel("TRACE")
>>> logging.getLogger(__name__).trace('that worked')
>>> logging.trace('so did this')
>>> logging.TRACE
5
"""
if not methodName:
methodName = levelName.lower()
if hasattr(logging, levelName):
raise AttributeError("{} already defined in logging module".format(levelName))
if hasattr(logging, methodName):
raise AttributeError("{} already defined in logging module".format(methodName))
if hasattr(logging.getLoggerClass(), methodName):
raise AttributeError("{} already defined in logger class".format(methodName))
# This method was inspired by the answers to Stack Overflow post
# http://stackoverflow.com/q/2183233/2988730, especially
# http://stackoverflow.com/a/13638084/2988730
def logForLevel(self, message, *args, **kwargs):
if self.isEnabledFor(levelNum):
self._log(levelNum, message, args, **kwargs)
def logToRoot(message, *args, **kwargs):
logging.log(levelNum, message, *args, **kwargs)
def adapterLog(self, msg, *args, **kwargs):
"""Delegate an error call to the underlying logger."""
self.log(levelNum, msg, *args, **kwargs)
logging.addLevelName(levelNum, levelName)
setattr(logging, levelName, levelNum)
setattr(logging.getLoggerClass(), methodName, logForLevel)
setattr(logging, methodName, logToRoot)
setattr(logging.LoggerAdapter, methodName, adapterLog)
color_logging = False
def add_color_to_record_levelname(record):
"""Colorizes logging level names."""
levelno = record.levelno
if levelno >= logging.CRITICAL:
color = "\x1b[31;1m" # RED
elif levelno >= logging.ERROR:
color = "\x1b[31;1m" # RED
elif levelno >= logging.WARNING:
color = "\x1b[33;1m" # YELLOW
elif levelno >= logging.INFO:
color = "\x1b[32;1m" # GREEN
elif levelno >= logging.DEBUG:
color = "\x1b[36;1m" # CYAN
elif levelno >= logging.EXTRA_DEBUG:
color = "\x1b[35;1m" # MAGENTA
else:
color = "\x1b[0m"
return f"{color}{record.levelname}\x1b[0m"
class AppriseStreamHandler(logging.StreamHandler):
"""Logging handler that also sends logging output to configured Apprise notifiers."""
def __init__(self, color_logging: bool, *args, **kwargs):
"""Init.
Args:
color_logging (bool): If true logging levels will be colorized
*args (): Positional arguments to pass to StreamHandler
**kwargs: Keyword arguments to pass to StreamHandler
"""
super().__init__(*args, **kwargs)
self.color_logging = color_logging
def _emit_apprise(self, record):
try:
loop = asyncio.get_event_loop()
except RuntimeError:
return # There is no running loop
msg = self.format(record)
logging_map = {
logging.ERROR: NotifyType.FAILURE,
logging.WARNING: NotifyType.WARNING,
logging.INFO: NotifyType.INFO,
logging.DEBUG: NotifyType.INFO,
logging.EXTRA_DEBUG: NotifyType.INFO,
logging.WEBSOCKET_DATA: NotifyType.INFO,
}
# Only try notifying if there are notification servers configured
# and the asyncio loop isn't closed (aka we are quitting)
if notifications.notifier.servers and not loop.is_closed():
notify = notifications.notifier.async_notify(
body=msg,
title=record.levelname,
notify_type=logging_map[record.levelno],
tag=[record.levelname],
)
if loop.is_running():
asyncio.create_task(notify)
else:
loop.run_until_complete(notify)
def _emit_stream(self, record):
record.levelname = f"{record.levelname:^11s}" # Pad level name to max width
if self.color_logging:
record.levelname = add_color_to_record_levelname(record)
msg = self.format(record)
stream = self.stream
# issue 35046: merged two stream.writes into one.
stream.write(msg + self.terminator)
self.flush()
def emit(self, record):
"""Emit log to stdout and apprise."""
try:
self._emit_apprise(record)
except RecursionError: # See issue 36272
raise
except Exception:
self.handleError(record)
try:
self._emit_stream(record)
except RecursionError: # See issue 36272
raise
except Exception:
self.handleError(record)
def create_logging_handler(format, color_logging):
"""Construct apprise logging handler for the given format."""
date_format = "%Y-%m-%d %H:%M:%S"
style = "{"
sh = AppriseStreamHandler(color_logging)
formatter = logging.Formatter(format, date_format, style)
sh.setFormatter(formatter)
return sh
def setup_logging(verbosity: int, color_logging: bool = False) -> None:
"""Configure loggers to provided the desired level of verbosity.
Verbosity 0: Only log info messages created by `unifi-protect-backup`, and all warnings
verbosity 1: Only log info & debug messages created by `unifi-protect-backup`, and all warnings
verbosity 2: Log info & debug messages created by `unifi-protect-backup`, command output, and
all warnings
Verbosity 3: Log debug messages created by `unifi-protect-backup`, command output, all info
messages, and all warnings
Verbosity 4: Log debug messages created by `unifi-protect-backup` command output, all info
messages, all warnings, and websocket data
Verbosity 5: Log websocket data, command output, all debug messages, all info messages and all
warnings
Args:
verbosity (int): The desired level of verbosity
color_logging (bool): If colors should be used in the log (default=False)
"""
add_logging_level(
"EXTRA_DEBUG",
logging.DEBUG - 1,
)
add_logging_level(
"WEBSOCKET_DATA",
logging.DEBUG - 2,
)
format = "{asctime} [{levelname:^11s}] {name:<42} : {message}"
sh = create_logging_handler(format, color_logging)
logger = logging.getLogger("unifi_protect_backup")
logger.addHandler(sh)
logger.propagate = False
if verbosity == 0:
logging.basicConfig(level=logging.WARN, handlers=[sh])
logger.setLevel(logging.INFO)
elif verbosity == 1:
logging.basicConfig(level=logging.WARN, handlers=[sh])
logger.setLevel(logging.DEBUG)
elif verbosity == 2:
logging.basicConfig(level=logging.WARN, handlers=[sh])
logger.setLevel(logging.EXTRA_DEBUG) # type: ignore
elif verbosity == 3:
logging.basicConfig(level=logging.INFO, handlers=[sh])
logger.setLevel(logging.EXTRA_DEBUG) # type: ignore
elif verbosity == 4:
logging.basicConfig(level=logging.INFO, handlers=[sh])
logger.setLevel(logging.WEBSOCKET_DATA) # type: ignore
elif verbosity >= 5:
logging.basicConfig(level=logging.DEBUG, handlers=[sh])
logger.setLevel(logging.WEBSOCKET_DATA) # type: ignore
_initialized_loggers = []
def setup_event_logger(logger, color_logging):
"""Set up a logger that also displays the event ID currently being processed."""
global _initialized_loggers
if logger not in _initialized_loggers:
format = "{asctime} [{levelname:^11s}] {name:<42} :{event} {message}"
sh = create_logging_handler(format, color_logging)
logger.addHandler(sh)
logger.propagate = False
_initialized_loggers.append(logger)
_suffixes = ["B", "KiB", "MiB", "GiB", "TiB", "PiB", "EiB", "ZiB", "YiB"]
def human_readable_size(num: float):
"""Turn a number into a human readable number with ISO/IEC 80000 binary prefixes.
Based on: https://stackoverflow.com/a/1094933
Args:
num (int): The number to be converted into human readable format
"""
for unit in _suffixes:
if abs(num) < 1024.0:
return f"{num:3.1f}{unit}"
num /= 1024.0
raise ValueError("`num` too large, ran out of prefixes")
def human_readable_to_float(num: str):
"""Turn a human readable ISO/IEC 80000 suffix value to its full float value."""
pattern = r"([\d.]+)(" + "|".join(_suffixes) + ")"
result = re.match(pattern, num)
if result is None:
raise ValueError(f"Value '{num}' is not a valid ISO/IEC 80000 binary value")
value = float(result[1])
suffix = result[2]
multiplier = 1024 ** _suffixes.index(suffix)
return value * multiplier
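For example (doctest-style, matching `human_readable_size` above):

>>> human_readable_to_float("50GiB")
53687091200.0
>>> human_readable_size(53687091200)
'50.0GiB'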
# Cached so that actions like uploads can continue when the connection to the api is lost
# No max size, and a 6 hour ttl
@alru_cache(None, ttl=60 * 60 * 6)
async def get_camera_name(protect: ProtectApiClient, id: str):
"""Return the name for the camera with the given ID.
If the camera ID is not known, it tries refreshing the cached data
"""
# Wait for unifi protect to be connected
await protect.connect_event.wait() # type: ignore
try:
return protect.bootstrap.cameras[id].name
except KeyError:
# Refresh cameras
logger.debug(f"Unknown camera id: '{id}', checking API")
await protect.update()
try:
name = protect.bootstrap.cameras[id].name
except KeyError:
logger.debug(f"Unknown camera id: '{id}'")
raise
logger.debug(f"Found camera - {id}: {name}")
return name
class SubprocessException(Exception):
"""Class to capture: stdout, stderr, and return code of Subprocess errors."""
def __init__(self, stdout, stderr, returncode):
"""Exception class for when rclone does not exit with `0`.
Args:
stdout (str): What rclone output to stdout
stderr (str): What rclone output to stderr
returncode (str): The return code of the rclone process
"""
super().__init__()
self.stdout: str = stdout
self.stderr: str = stderr
self.returncode: int = returncode
    def __str__(self):
        """Turn the exception into a human-readable form."""
return f"Return Code: {self.returncode}\nStdout:\n{self.stdout}\nStderr:\n{self.stderr}"
async def run_command(cmd: str, data=None):
"""Run the given command returning the exit code, stdout and stderr."""
proc = await asyncio.create_subprocess_shell(
cmd,
stdin=asyncio.subprocess.PIPE,
stdout=asyncio.subprocess.PIPE,
stderr=asyncio.subprocess.PIPE,
)
stdout, stderr = await proc.communicate(data)
stdout = stdout.decode()
stdout_indented = "\t" + stdout.replace("\n", "\n\t").strip()
stderr = stderr.decode()
stderr_indented = "\t" + stderr.replace("\n", "\n\t").strip()
if proc.returncode != 0:
logger.error(f"Failed to run: '{cmd}")
logger.error(f"stdout:\n{stdout_indented}")
logger.error(f"stderr:\n{stderr_indented}")
else:
logger.extra_debug(f"stdout:\n{stdout_indented}") # type: ignore
logger.extra_debug(f"stderr:\n{stderr_indented}") # type: ignore
return proc.returncode, stdout, stderr
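# Usage sketch (hedged: the rclone invocation is illustrative and assumes a
# configured remote named `my-remote`):
#     returncode, stdout, stderr = await run_command("rclone lsf my-remote:")
#     if returncode != 0:
#         raise SubprocessException(stdout, stderr, returncode)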
class VideoQueue(asyncio.Queue):
"""A queue that limits the number of bytes it can store rather than discrete entries."""
def __init__(self, *args, **kwargs):
"""Init."""
super().__init__(*args, **kwargs)
self._bytes_sum = 0
    def qsize(self):
        """Get the total number of bytes currently in the queue."""
        return self._bytes_sum
    def qsize_files(self):
        """Get the number of discrete items (files) currently in the queue."""
        return super().qsize()
def _get(self):
data = self._queue.popleft()
self._bytes_sum -= len(data[1])
return data
def _put(self, item: tuple[Event, bytes]):
self._queue.append(item) # type: ignore
self._bytes_sum += len(item[1])
def full(self, item: tuple[Event, bytes] | None = None):
"""Return True if there are maxsize bytes in the queue.
optionally if `item` is provided, it will return False if there is enough space to
fit it, otherwise it will return True
Note: if the Queue was initialized with maxsize=0 (the default),
then full() is never True.
"""
if self._maxsize <= 0: # type: ignore
return False
else:
if item is None:
return self.qsize() >= self._maxsize # type: ignore
else:
return self.qsize() + len(item[1]) >= self._maxsize # type: ignore
async def put(self, item: tuple[Event, bytes]):
"""Put an item into the queue.
Put an item into the queue. If the queue is full, wait until a free
slot is available before adding item.
"""
        # An item larger than the whole buffer can never fit (only applies to bounded queues)
        if 0 < self._maxsize < len(item[1]):  # type: ignore
raise ValueError(
f"Item is larger ({human_readable_size(len(item[1]))}) "
f"than the size of the buffer ({human_readable_size(self._maxsize)})" # type: ignore
)
while self.full(item):
putter = self._get_loop().create_future() # type: ignore
self._putters.append(putter) # type: ignore
try:
await putter
except: # noqa: E722
putter.cancel() # Just in case putter is not done yet.
try:
# Clean self._putters from canceled putters.
self._putters.remove(putter) # type: ignore
except ValueError:
# The putter could be removed from self._putters by a
# previous get_nowait call.
pass
if not self.full(item) and not putter.cancelled():
# We were woken up by get_nowait(), but can't take
# the call. Wake up the next in line.
self._wakeup_next(self._putters) # type: ignore
raise
return self.put_nowait(item)
def put_nowait(self, item: tuple[Event, bytes]):
"""Put an item into the queue without blocking.
If no free slot is immediately available, raise QueueFull.
"""
if self.full(item):
raise asyncio.QueueFull
self._put(item)
self._unfinished_tasks += 1 # type: ignore
self._finished.clear() # type: ignore
self._wakeup_next(self._getters) # type: ignore
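# Usage sketch (hedged): the queue is bounded by total *bytes*, not item count;
# `event` and `video_bytes` below are hypothetical placeholders:
#
#     queue = VideoQueue(maxsize=int(human_readable_to_float("512MiB")))
#     await queue.put((event, video_bytes))  # blocks while the buffer is full
#     event, video_bytes = await queue.get()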
async def wait_until(dt):
"""Sleep until the specified datetime."""
now = datetime.now()
await asyncio.sleep((dt - now).total_seconds())
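# Usage sketch (assumes `datetime` and `timedelta` are imported at module top):
#     await wait_until(datetime.now() + timedelta(minutes=5))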
EVENT_TYPES_MAP = {
EventType.MOTION: {"motion"},
EventType.RING: {"ring"},
EventType.SMART_DETECT_LINE: {"line"},
EventType.FINGERPRINT_IDENTIFIED: {"fingerprint"},
EventType.NFC_CARD_SCANNED: {"nfc"},
EventType.SMART_DETECT: {t for t in SmartDetectObjectType.values() if t not in SmartDetectAudioType.values()},
EventType.SMART_AUDIO_DETECT: {f"{t}" for t in SmartDetectAudioType.values()},
}
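# For reference (a sketch; the exact labels come from uiprotect's enums):
#     EVENT_TYPES_MAP[EventType.RING]         -> {"ring"}
#     EVENT_TYPES_MAP[EventType.SMART_DETECT] -> e.g. {"person", "vehicle", ...}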
def wanted_event_type(event, wanted_detection_types: Set[str], cameras: Set[str], ignore_cameras: Set[str]):
"""Return True if this event is one we want."""
    if event.start is None or event.end is None:
        return False  # This event is still ongoing
if event.camera_id in ignore_cameras:
return False
if cameras and event.camera_id not in cameras:
return False
if event.type not in EVENT_TYPES_MAP:
return False
if event.type in [EventType.SMART_DETECT, EventType.SMART_AUDIO_DETECT]:
detection_types = set(event.smart_detect_types)
else:
detection_types = EVENT_TYPES_MAP[event.type]
if not detection_types & wanted_detection_types: # No intersection
return False
return True
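# Filtering sketch (hedged: all IDs and detection types are illustrative):
#     keep = wanted_event_type(
#         event,
#         wanted_detection_types={"motion", "person", "ring"},
#         cameras=set(),                       # empty set = include all cameras
#         ignore_cameras={"<camera-id-to-skip>"},
#     )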

uv.lock (generated file, 1723 lines): diff suppressed because it is too large