Compare commits

...

50 Commits

Author SHA1 Message Date
Sebastian Goscik
ca455ebcd0 Bump version: 0.8.6 → 0.8.7 2022-12-11 13:46:52 +00:00
Sebastian Goscik
16315ca23c Fix improper unpacking of upload events 2022-12-11 13:36:40 +00:00
Sebastian Goscik
ac0f6f5fcb Bump version: 0.8.5 → 0.8.6 2022-12-10 06:59:45 +00:00
Sebastian Goscik
0c34294b7e clear current event after upload/download 2022-12-10 06:44:56 +00:00
Sebastian Goscik
f195b8a4a4 Fix ignoring missing event before one has started downloading/uploading 2022-12-10 06:35:38 +00:00
Sebastian Goscik
645e339314 Bump version: 0.8.4 → 0.8.5 2022-12-09 23:20:05 +00:00
Sebastian Goscik
13c5b630d4 fix using event instead of event id in set to exclude missing events 2022-12-09 23:19:38 +00:00
Sebastian Goscik
44867e7427 Bump version: 0.8.3 → 0.8.4 2022-12-09 11:15:07 +00:00
Sebastian Goscik
0978798078 Fix uploading files not being accounted for when checking for missing events 2022-12-09 11:12:08 +00:00
Sebastian Goscik
8e3ea2b13f Log buffer size in human readable format 2022-12-09 11:12:08 +00:00
Sebastian Goscik
8a67311fda show default buffer size in command help 2022-12-08 12:40:32 +00:00
Sebastian Goscik
8aedb35c45 Update readme 2022-12-08 12:40:19 +00:00
Sebastian Goscik
4eed1c01c4 Bump version: 0.8.2 → 0.8.3 2022-12-08 12:22:47 +00:00
Sebastian Goscik
a4091699a1 Fix setting no verbosity for the docker container 2022-12-08 12:12:54 +00:00
Sebastian Goscik
58eb1fd8a7 Added event ID to uploader/downloader logging
Also fixed issue where logging outside of unifi_protect_backup was not adding colors
2022-12-08 12:04:36 +00:00
Sebastian Goscik
bba96e9d86 Make video download buffer size configurable 2022-12-08 00:15:11 +00:00
Sebastian Goscik
dd69a18dbf Raise an error when trying to add a video larger than the buffer 2022-12-08 00:14:08 +00:00
Sebastian Goscik
3510a50d0f remove unused asyncio loop 2022-12-07 23:25:41 +00:00
Sebastian Goscik
3e0044cd80 Make color logging optional
Returns to the previous default mode of plain logging but allows color logging to be enabled
2022-12-06 00:57:05 +00:00
Sebastian Goscik
1b3d196672 Add timezone info to debug log 2022-12-06 00:57:05 +00:00
Sebastian Goscik
c22819c04d Correct missing event logging for smart detections 2022-12-06 00:57:05 +00:00
Sebastian Goscik
ac66f4eaab Reduce log spam from missing events unless using extra_debug 2022-12-06 00:57:05 +00:00
Sebastian Goscik
34bc37bd0b Bump version: 0.8.1 → 0.8.2 2022-12-05 14:27:11 +00:00
Sebastian Goscik
f15cdf9a9b updated changelog 2022-12-05 14:27:06 +00:00
Sebastian Goscik
63d368f14c Added note to readme about 0.8 docker changes 2022-12-05 14:24:27 +00:00
Sebastian Goscik
ee01edf55c Make sure config directories exist in the container 2022-12-05 14:04:43 +00:00
Sebastian Goscik
4e10e0f10e Use run_command in downloader and uploader 2022-12-05 14:03:59 +00:00
Sebastian Goscik
385f115eab Add ability for run_command to pass data to stdin 2022-12-05 14:03:23 +00:00
Sebastian Goscik
b4062d3b53 Fix issue where indented stdout/stderr was being returned
The indentation was supposed to be only for the logging, to make it easier to read, but it was also being returned, breaking parsing of the command output

Fixes #60
2022-12-05 13:40:32 +00:00
Sebastian Goscik
7bfcb548e2 Bump version: 0.8.0 → 0.8.1 2022-12-04 12:04:15 +00:00
Sebastian Goscik
a74e4b042d changelog 2022-12-04 12:03:57 +00:00
Sebastian Goscik
2c5308aa20 updated name in pyproject.toml 2022-12-04 12:03:54 +00:00
Sebastian Goscik
9d375d4e7b update bumpversion cfg to use new tar.gz name 2022-12-04 11:59:36 +00:00
Sebastian Goscik
df4390688b Update docs and dockerfile to save events database 2022-12-03 22:40:40 +00:00
Sebastian Goscik
3acfd1f543 Fix dockerfile - to _
I have no idea how this worked before but not now
2022-12-03 22:04:50 +00:00
Sebastian Goscik
49c11c1872 Make ci show all temp files 2022-12-03 22:00:22 +00:00
Sebastian Goscik
93cf297371 Bump version: 0.7.4 → 0.8.0 2022-12-03 21:54:45 +00:00
Sebastian Goscik
8baa413a23 Merge pull request #57 from ep1cman/restructure
Major Restructure
2022-12-03 21:51:20 +00:00
Sebastian Goscik
471ecb0662 Major Restructure
- Each task is now its own class
- Added a database to track backed up events and their destinations
- Added task to check for and backup missed events
2022-12-03 21:48:44 +00:00
Sebastian Goscik
031d4e4862 Update dev.yml
Do not trigger dev pipeline on pull requests
2022-08-24 15:28:48 +01:00
Sebastian Goscik
f109ec2a48 Bump version: 0.7.3 → 0.7.4 2022-08-21 20:51:08 +01:00
Sebastian Goscik
6a8bb39b63 Change rclone config command to use this container instead of a separate rclone container 2022-08-21 20:51:08 +01:00
Sebastian Goscik
49ddb081a8 Added rclone debugging instructions 2022-08-21 20:51:08 +01:00
Sebastian Goscik
941c92142f Fixed rclone.conf path in back to cloud example 2022-08-21 20:51:08 +01:00
Sebastian Goscik
150d8e6f49 Update CI flows to build arm64 containers 2022-08-21 20:51:08 +01:00
Sebastian Goscik
5ae43f08af Bump version: 0.7.2 → 0.7.3 2022-07-31 11:35:25 +01:00
Sebastian Goscik
0a36102eed Fixed dockerfile for pyunifiprotect 4.0.0
As of pyunifiprotect 4.0.0, a rust based library is needed.
In order for this to install correctly, cargo is needed, and alpine
needed to be bumped to 3.16.
2022-07-31 11:24:30 +01:00
Sebastian Goscik
92be1cea5d Bump pyunifiprotect 2022-07-31 01:48:04 +01:00
Sebastian Goscik
1813bc0176 Bump version: 0.7.1 → 0.7.2 2022-07-17 20:04:03 +01:00
Sebastian Goscik
9451fb4235 Bump pyunifiprotect -> v3.9.2 2022-07-16 23:37:36 +01:00
20 changed files with 1466 additions and 764 deletions

File: .bumpversion.cfg

@@ -1,5 +1,5 @@
[bumpversion]
current_version = 0.8.0
current_version = 0.8.7
commit = True
tag = True
@@ -12,5 +12,5 @@ search = __version__ = '{current_version}'
replace = __version__ = '{new_version}'
[bumpversion:file:Dockerfile]
search = COPY dist/unifi-protect-backup-{current_version}.tar.gz sdist.tar.gz
replace = COPY dist/unifi-protect-backup-{new_version}.tar.gz sdist.tar.gz
search = COPY dist/unifi_protect_backup-{current_version}.tar.gz sdist.tar.gz
replace = COPY dist/unifi_protect_backup-{new_version}.tar.gz sdist.tar.gz
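(Context for the `-` → `_` change: later in this diff the `pyproject.toml` package name changes from `unifi-protect-backup` to `unifi_protect_backup`, which changes the filename `poetry build` gives the sdist, so the Dockerfile `COPY` pattern and this search/replace have to follow suit.)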

File: .github/workflows/dev.yml

@@ -2,16 +2,11 @@
name: dev workflow
env:
IMAGE_NAME: ${{ github.repository }}
# Controls when the action will run.
on:
# Triggers the workflow on push or pull request events but only for the master branch
# Triggers the workflow on push events but only for the dev branch
push:
branches: [ master, main, dev ]
pull_request:
branches: [ master, main, dev ]
branches: [ dev ]
# Allows you to run this workflow manually from the Actions tab
workflow_dispatch:
@@ -56,9 +51,6 @@ jobs:
name: Create dev container
runs-on: ubuntu-20.04
if: github.event_name != 'pull_request'
strategy:
matrix:
python-versions: [3.9]
# Steps represent a sequence of tasks that will be executed as part of the job
steps:
@@ -67,7 +59,7 @@ jobs:
- uses: actions/setup-python@v2
with:
python-version: ${{ matrix.python-versions }}
python-version: 3.9
- name: Install dependencies
run: |
@@ -78,20 +70,23 @@ jobs:
run: >-
poetry build
- name: build container
id: docker_build
run: docker build . --file Dockerfile --tag $IMAGE_NAME --label "runnumber=${GITHUB_RUN_ID}"
- name: Set up QEMU
uses: docker/setup-qemu-action@v1
- name: log in to container registry
run: echo "${{ secrets.GITHUB_TOKEN }}" | docker login ghcr.io -u ${{ github.actor }} --password-stdin
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v1
- name: Log in to container registry
uses: docker/login-action@v2
with:
registry: ghcr.io
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: push container image
run: |
IMAGE_ID=ghcr.io/$IMAGE_NAME
# Change all uppercase to lowercase
IMAGE_ID=$(echo $IMAGE_ID | tr '[A-Z]' '[a-z]')
echo IMAGE_ID=$IMAGE_ID
echo VERSION=$VERSION
docker tag $IMAGE_NAME $IMAGE_ID:dev
docker push $IMAGE_ID:dev
- name: Build and push dev
uses: docker/build-push-action@v2
with:
context: .
platforms: linux/amd64,linux/arm64
push: true
tags: ghcr.io/${{ github.repository }}:dev

File: stage & preview workflow (deleted)

@@ -1,50 +0,0 @@
# This is a basic workflow to help you get started with Actions
name: stage & preview workflow
# Controls when the action will run.
on:
# Triggers the workflow on push or pull request events but only for the master branch
push:
branches: [ master, main ]
# Allows you to run this workflow manually from the Actions tab
workflow_dispatch:
# A workflow run is made up of one or more jobs that can run sequentially or in parallel
jobs:
publish_dev_build:
runs-on: ubuntu-latest
strategy:
matrix:
python-versions: [ 3.9 ]
steps:
- uses: actions/checkout@v2
- uses: actions/setup-python@v2
with:
python-version: ${{ matrix.python-versions }}
- name: Install dependencies
run: |
python -m pip install --upgrade pip
pip install poetry tox tox-gh-actions
- name: test with tox
run:
tox
- name: Build wheels and source tarball
run: |
poetry version $(poetry version --short)-dev.$GITHUB_RUN_NUMBER
poetry version --short
poetry build
- name: publish to Test PyPI
uses: pypa/gh-action-pypi-publish@master
with:
user: __token__
password: ${{ secrets.TEST_PYPI_API_TOKEN}}
repository_url: https://test.pypi.org/legacy/
skip_existing: true

File: release workflow

@@ -12,9 +12,6 @@ on:
# Allows you to run this workflow manually from the Actions tab
workflow_dispatch:
env:
IMAGE_NAME: ${{ github.repository }}
# A workflow run is made up of one or more jobs that can run sequentially or in parallel
jobs:
# This workflow contains a single job called "release"
@@ -22,10 +19,6 @@ jobs:
name: Create Release
runs-on: ubuntu-20.04
strategy:
matrix:
python-versions: [3.9]
# Steps represent a sequence of tasks that will be executed as part of the job
steps:
- name: Get version from tag
@@ -46,7 +39,7 @@ jobs:
- uses: actions/setup-python@v2
with:
python-version: ${{ matrix.python-versions }}
python-version: 3.9
- name: Install dependencies
run: |
@@ -59,14 +52,28 @@ jobs:
- name: show temporary files
run: >-
ls -l
ls -lR
- name: build container
id: docker_build
run: docker build . --file Dockerfile --tag $IMAGE_NAME --label "runnumber=${GITHUB_RUN_ID}"
- name: Set up QEMU
uses: docker/setup-qemu-action@v1
- name: log in to container registry
run: echo "${{ secrets.GITHUB_TOKEN }}" | docker login ghcr.io -u ${{ github.actor }} --password-stdin
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v1
- name: Log in to container registry
uses: docker/login-action@v2
with:
registry: ghcr.io
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Build and push dev
uses: docker/build-push-action@v2
with:
context: .
platforms: linux/amd64,linux/arm64
push: true
tags: ghcr.io/${{ github.repository }}:${{ steps.tag_name.outputs.current_version }}, ghcr.io/${{ github.repository }}:latest
- name: create github release
id: create_release
@@ -79,26 +86,6 @@ jobs:
draft: false
prerelease: false
- name: push container image
run: |
IMAGE_ID=ghcr.io/$IMAGE_NAME
# Change all uppercase to lowercase
IMAGE_ID=$(echo $IMAGE_ID | tr '[A-Z]' '[a-z]')
# Strip git ref prefix from version
VERSION=$(echo "${{ github.ref }}" | sed -e 's,.*/\(.*\),\1,')
# Strip "v" prefix from tag name
[[ "${{ github.ref }}" == "refs/tags/"* ]] && VERSION=$(echo $VERSION | sed -e 's/^v//')
# Use Docker `latest` tag convention
[ "$VERSION" == "master" ] && VERSION=latest
echo IMAGE_ID=$IMAGE_ID
echo VERSION=$VERSION
docker tag $IMAGE_NAME $IMAGE_ID:$VERSION
docker tag $IMAGE_NAME $IMAGE_ID:latest
docker push $IMAGE_ID:$VERSION
docker push $IMAGE_ID:latest
- name: publish to PyPI
uses: pypa/gh-action-pypi-publish@release/v1
with:

File: .gitignore (vendored)

@@ -118,3 +118,5 @@ config/
data/
.envrc
clips/
*.sqlite

File: CHANGELOG.md

@@ -4,12 +4,85 @@ All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
## [0.8.7] - 2022-12-11
### Fixed
- Fix improper unpacking of upload events
## [0.8.6] - 2022-12-10
### Fixed
- Check that the current event is not None before trying to get its ID
- The downloader/uploader now clear their current event once it has been processed
## [0.8.5] - 2022-12-09
### Fixed
- Use the event ID of the currently uploading/downloading event, not the whole event object, when checking for missing events
## [0.8.4] - 2022-12-09
### Added
- Logging of remaining upload queue size
### Fixed
- Uploading files were not accounted for when checking for missing events
- Buffer size parameter is logged in human-readable format
## [0.8.3] - 2022-12-08
### Added
- Now logs time zone settings for both the host and NVR
- Color logging is now optional and defaults to disabled (to match previous behavior before v0.8.0)
- Ability to configure download buffer size (bumped default up to 512MiB)
- Event IDs to upload/download logging
### Fixed
- Log spam when many events are missing; these messages now appear only if the logging level is set to `EXTRA_DEBUG` (-vv)
- Corrected logging not showing smart detection types
- The application no longer stalls when a downloaded video is larger than the available buffer size
- Ability to set the least verbose logging for the docker container
## [0.8.2] - 2022-12-05
### Fixed
- Fixed issue where command output was being returned with added indentation intended for logging only
- Fixed issue where some command logging was not indented
- Fixed issue where the tool could crash when run in a container if /config/database didn't exist
## [0.8.1] - 2022-12-04
Version 0.8.0 was accidentally used previously, and PyPI would not accept it again, so the version was bumped by one patch level.
## [0.8.0] - 2022-12-03
Major internal refactoring. Each task is now its own class and asyncio task.
### Added
- A database of backed up events and where they are stored
- A periodic check for missed events
- This also ensures that events from before the tool was first run are backed up, going back as far as the retention period
### Fixed
- Pruning is no longer done based on file timestamps; the database is used instead. The tool will no longer delete files it didn't create.
- Pruning now runs much more frequently (every minute), so retention periods of less than a day are now possible.
## [0.7.4] - 2022-08-21
No functional changes in this version. This is just to trigger the release CI.
### Added
- Arm docker container
- rclone debugging instructions when using docker
### Fixed
- Documentation error in rclone config path of docker container.
## [0.7.3] - 2022-07-31
### Fixed
- Updated to the 4.0.0 version of pyunifiprotect
- Added rust to the container, and bumped it to alpine 3.16
## [0.7.2] - 2022-07-17
### Fixed
- Updated to the latest version of pyunifiprotect to fix issues introduced in unifi protect 2.1.1
## [0.7.1] - 2022-06-08
### Fixed
- Updated to the latest version of pyunifiprotect to fix issues introduced in unifi protect 2.0.1
- Updated documentation to include how to set up local user accounts on unifi protect
## [0.7.0] - 2022-03-26
### Added
- Added the ability to change the way the clip files are structured via a template string.
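Several of the entries above (0.8.3 through 0.8.7) revolve around the download buffer, which the code later in this diff manages with a `VideoQueue` from `unifi_protect_backup/utils.py`. Below is a minimal sketch of a queue of that shape — bounded by total bytes rather than item count, and refusing any single video larger than the whole buffer — assuming only the interface the downloader uses (`qsize()` returning buffered bytes, `maxsize` as the byte limit); the class and attribute names here are otherwise invented:

```python
import asyncio
from collections import deque


class ByteBoundedQueue:
    """Sketch of a VideoQueue-like buffer holding (event, video_bytes) pairs."""

    def __init__(self, max_bytes: int) -> None:
        self.maxsize = max_bytes          # byte capacity, mirroring VideoQueue.maxsize
        self._items: deque = deque()
        self._bytes = 0
        self._cond = asyncio.Condition()

    def qsize(self) -> int:
        return self._bytes                # bytes currently buffered, not item count

    async def put(self, item) -> None:
        event, video = item
        if len(video) > self.maxsize:
            # Mirrors commit dd69a18dbf: raise instead of stalling forever.
            raise ValueError(f"Video ({len(video)} B) is larger than the whole buffer")
        async with self._cond:
            await self._cond.wait_for(lambda: self._bytes + len(video) <= self.maxsize)
            self._items.append(item)
            self._bytes += len(video)
            self._cond.notify_all()

    async def get(self):
        async with self._cond:
            await self._cond.wait_for(lambda: len(self._items) > 0)
            event, video = self._items.popleft()
            self._bytes -= len(video)
            self._cond.notify_all()
            return event, video
```

Bounding by bytes keeps memory use predictable regardless of how large individual clips are.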

File: Dockerfile

@@ -2,13 +2,13 @@
# $ poetry build
# $ docker build -t ghcr.io/ep1cman/unifi-protect-backup .
FROM ghcr.io/linuxserver/baseimage-alpine:3.15
FROM ghcr.io/linuxserver/baseimage-alpine:3.16
LABEL maintainer="ep1cman"
WORKDIR /app
COPY dist/unifi-protect-backup-0.8.0.tar.gz sdist.tar.gz
COPY dist/unifi_protect_backup-0.8.7.tar.gz sdist.tar.gz
RUN \
echo "**** install build packages ****" && \
@@ -17,7 +17,8 @@ RUN \
musl-dev \
jpeg-dev \
zlib-dev \
python3-dev && \
python3-dev \
cargo && \
echo "**** install packages ****" && \
apk add --no-cache \
rclone \
@@ -44,8 +45,11 @@ ENV RCLONE_DESTINATION=local:/data
ENV VERBOSITY="v"
ENV TZ=UTC
ENV IGNORE_CAMERAS=""
ENV SQLITE_PATH=/config/database/events.sqlite
COPY docker_root/ /
RUN mkdir -p /config/database /config/rclone
VOLUME [ "/config" ]
VOLUME [ "/data" ]

File: README.md

@@ -23,8 +23,9 @@ retention period.
## Features
- Listens to events in real-time via the Unifi Protect websocket API
- Ensures any previous and/or missed events within the retention period are also backed up
- Supports uploading to a [wide range of storage systems using `rclone`](https://rclone.org/overview/)
- Performs nightly pruning of old clips
- Automatic pruning of old clips
## Requirements
- Python 3.9+
@@ -54,9 +55,6 @@ In order to connect to your unifi protect instance, you will first need to setup
## Usage
:warning: **Potential Data Loss**: Be very careful when setting the `rclone-destination`, at midnight every day it will
delete any files older than `retention`. It is best to give `unifi-protect-backup` its own directory.
```
Usage: unifi-protect-backup [OPTIONS]
@@ -102,30 +100,37 @@ Options:
a_name}/{event.start:%Y-%m-%d}/{event.end:%Y
-%m-%dT%H-%M-%S} {detection_type}.mp4]
-v, --verbose How verbose the logging output should be.
None: Only log info messages created by
`unifi-protect-backup`, and all warnings
-v: Only log info & debug messages
created by `unifi-protect-backup`, and
all warnings
-vv: Log info & debug messages created
by `unifi-protect-backup`, command
output, and all warnings
-vvv: Log debug messages created by
`unifi-protect-backup`, command output,
all info messages, and all warnings
-vvvv: Log debug messages created by
`unifi-protect-backup`, command output,
all info messages, all warnings, and
websocket data
-vvvvv: Log websocket data, command
output, all debug messages, all info
messages and all warnings [x>=0]
--sqlite_path TEXT Path to the SQLite database to use/create
--color-logging / --plain-logging
Set if you want to use color in logging
output [default: plain-logging]
--download-buffer-size TEXT How big the download buffer should be (you
can use suffixes like "B", "KiB", "MiB",
"GiB") [default: 512MiB]
--help Show this message and exit.
```
@@ -142,6 +147,9 @@ always take priority over environment variables):
- `IGNORE_CAMERAS`
- `DETECTION_TYPES`
- `FILE_STRUCTURE_FORMAT`
- `SQLITE_PATH`
- `DOWNLOAD_BUFFER_SIZE`
- `COLOR_LOGGING`
## File path formatting
@@ -165,6 +173,9 @@ You can optionally format the `event.start`/`event.end` timestamps as per the [`
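For example, the default template shown in the options above combines these as `{camera_name}/{event.start:%Y-%m-%d}/{event.end:%Y-%m-%dT%H-%M-%S} {detection_type}.mp4` (the first variable is truncated in the help text; `{camera_name}` is an assumption).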
You can run this tool as a container if you prefer with the following command.
Remember to change the variables to match your setup.
> **Note**
> As of version 0.8.0, the event database needs to be persisted for the tool to function properly;
> please see the updated commands below.
### Backing up locally
By default, if no rclone config is provided, clips will be backed up to `/data`.
@@ -176,6 +187,7 @@ docker run \
-e UFP_ADDRESS='UNIFI_PROTECT_IP' \
-e UFP_SSL_VERIFY='false' \
-v '/path/to/save/clips':'/data' \
-v '/path/to/save/database':/config/database/ \
ghcr.io/ep1cman/unifi-protect-backup
```
@@ -184,12 +196,12 @@ In order to backup to cloud storage you need to provide a `rclone.conf` file.
If you do not already have a `rclone.conf` file you can create one as follows:
```
$ docker run -it --rm -v $PWD:/root/.config/rclone rclone/rclone config
$ docker run -it --rm -v $PWD:/root/.config/rclone --entrypoint rclone ghcr.io/ep1cman/unifi-protect-backup config
```
Follow the interactive configuration process; this will create a `rclone.conf`
file in your current directory.
Finally start the container:
Finally, start the container:
```
docker run \
-e UFP_USERNAME='USERNAME' \
@@ -198,10 +210,36 @@ docker run \
-e UFP_SSL_VERIFY='false' \
-e RCLONE_DESTINATION='my_remote:/unifi_protect_backup' \
-v '/path/to/save/clips':'/data' \
-v `/path/to/rclone.conf':'/config/rclone.conf'
-v '/path/to/rclone.conf':'/config/rclone/rclone.conf' \
-v '/path/to/save/database':/config/database/ \
ghcr.io/ep1cman/unifi-protect-backup
```
### Debugging
If you need to debug your rclone setup, you can invoke rclone directly like so:
```
docker run \
--rm \
-v /path/to/rclone.conf:/config/rclone/rclone.conf \
-e RCLONE_CONFIG='/config/rclone/rclone.conf' \
--entrypoint rclone \
ghcr.io/ep1cman/unifi-protect-backup \
{rclone subcommand as per: https://rclone.org/docs/#subcommands}
```
For example, to check that your config file is being read properly and list the configured remotes:
```
docker run \
--rm \
-v /path/to/rclone.conf:/config/rclone/rclone.conf \
-e RCLONE_CONFIG='/config/rclone/rclone.conf' \
--entrypoint rclone \
ghcr.io/ep1cman/unifi-protect-backup \
listremotes
```
## Credits
- Heavily utilises [`pyunifiprotect`](https://github.com/briis/pyunifiprotect) by [@briis](https://github.com/briis/)

File: container run script (docker_root)

@@ -2,5 +2,8 @@
export RCLONE_CONFIG=/config/rclone/rclone.conf
echo $VERBOSITY
[[ -n "$VERBOSITY" ]] && export VERBOSITY_ARG=-$VERBOSITY || export VERBOSITY_ARG=""
exec \
s6-setuidgid abc unifi-protect-backup -${VERBOSITY}
s6-setuidgid abc unifi-protect-backup ${VERBOSITY_ARG}
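(Illustration of the fix, not part of the diff: with `VERBOSITY=vv` the container now runs `unifi-protect-backup -vv`, while an empty `VERBOSITY` yields no flag at all; the old `-${VERBOSITY}` expansion produced a bare `-` argument in the empty case, which is what broke selecting the least verbose level.)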

File: poetry.lock (generated)

@@ -1,18 +1,3 @@
[[package]]
name = "aiocron"
version = "1.8"
description = "Crontabs for asyncio"
category = "main"
optional = false
python-versions = "*"
[package.dependencies]
croniter = "*"
tzlocal = "*"
[package.extras]
test = ["coverage"]
[[package]]
name = "aiofiles"
version = "0.8.0"
@@ -41,6 +26,17 @@ yarl = ">=1.0,<2.0"
[package.extras]
speedups = ["aiodns", "brotli", "cchardet"]
[[package]]
name = "aiorun"
version = "2022.11.1"
description = "Boilerplate for asyncio applications"
category = "main"
optional = false
python-versions = ">=3.5"
[package.extras]
dev = ["pytest", "pytest-cov"]
[[package]]
name = "aioshutil"
version = "1.1"
@@ -60,6 +56,17 @@ python-versions = ">=3.6"
[package.dependencies]
frozenlist = ">=1.1.0"
[[package]]
name = "aiosqlite"
version = "0.17.0"
description = "asyncio bridge to the standard sqlite3 module"
category = "main"
optional = false
python-versions = ">=3.6"
[package.dependencies]
typing_extensions = ">=3.7.2"
[[package]]
name = "appnope"
version = "0.1.3"
@@ -68,6 +75,22 @@ category = "main"
optional = true
python-versions = "*"
[[package]]
name = "astroid"
version = "2.12.13"
description = "An abstract syntax tree for Python with inference support."
category = "main"
optional = false
python-versions = ">=3.7.2"
[package.dependencies]
lazy-object-proxy = ">=1.4.0"
typing-extensions = {version = ">=3.10", markers = "python_version < \"3.10\""}
wrapt = [
{version = ">=1.11,<2", markers = "python_version < \"3.11\""},
{version = ">=1.14,<2", markers = "python_version >= \"3.11\""},
]
[[package]]
name = "asttokens"
version = "2.0.5"
@@ -122,28 +145,24 @@ python-versions = "*"
[[package]]
name = "black"
version = "21.12b0"
version = "22.10.0"
description = "The uncompromising code formatter."
category = "main"
optional = true
python-versions = ">=3.6.2"
python-versions = ">=3.7"
[package.dependencies]
click = ">=7.1.2"
click = ">=8.0.0"
mypy-extensions = ">=0.4.3"
pathspec = ">=0.9.0,<1"
pathspec = ">=0.9.0"
platformdirs = ">=2"
tomli = ">=0.2.6,<2.0.0"
typing-extensions = [
{version = ">=3.10.0.0", markers = "python_version < \"3.10\""},
{version = "!=3.10.0.1", markers = "python_version >= \"3.10\""},
]
tomli = {version = ">=1.1.0", markers = "python_full_version < \"3.11.0a7\""}
typing-extensions = {version = ">=3.10.0.0", markers = "python_version < \"3.10\""}
[package.extras]
colorama = ["colorama (>=0.4.3)"]
d = ["aiohttp (>=3.7.4)"]
jupyter = ["ipython (>=7.8.0)", "tokenize-rt (>=3.2.0)"]
python2 = ["typed-ast (>=1.4.3)"]
uvloop = ["uvloop (>=0.15.2)"]
[[package]]
@@ -221,11 +240,11 @@ colorama = {version = "*", markers = "platform_system == \"Windows\""}
[[package]]
name = "colorama"
version = "0.4.4"
version = "0.4.6"
description = "Cross-platform colored terminal text."
category = "main"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*"
python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*,!=3.5.*,!=3.6.*,>=2.7"
[[package]]
name = "coverage"
@@ -238,17 +257,6 @@ python-versions = ">=3.7"
[package.extras]
toml = ["tomli"]
[[package]]
name = "croniter"
version = "1.3.5"
description = "croniter provides iteration for datetime object with cron like format"
category = "main"
optional = false
python-versions = ">=2.6, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
[package.dependencies]
python-dateutil = "*"
[[package]]
name = "cryptography"
version = "37.0.2"
@@ -276,6 +284,17 @@ category = "main"
optional = true
python-versions = ">=3.5"
[[package]]
name = "dill"
version = "0.3.6"
description = "serialize all of python"
category = "main"
optional = false
python-versions = ">=3.7"
[package.extras]
graph = ["objgraph (>=1.7.2)"]
[[package]]
name = "distlib"
version = "0.3.4"
@@ -441,7 +460,7 @@ name = "isort"
version = "5.10.1"
description = "A Python utility / library to sort Python imports."
category = "main"
optional = true
optional = false
python-versions = ">=3.6.1,<4.0"
[package.extras]
@@ -495,6 +514,14 @@ SecretStorage = {version = ">=3.2", markers = "sys_platform == \"linux\""}
docs = ["sphinx", "jaraco.packaging (>=9)", "rst.linker (>=1.9)", "jaraco.tidelift (>=1.4)"]
testing = ["pytest (>=6)", "pytest-checkdocs (>=2.4)", "pytest-flake8", "pytest-cov", "pytest-enabler (>=1.0.1)", "pytest-black (>=0.3.7)", "pytest-mypy (>=0.9.1)"]
[[package]]
name = "lazy-object-proxy"
version = "1.8.0"
description = "A fast and thorough lazy object proxy."
category = "main"
optional = false
python-versions = ">=3.7"
[[package]]
name = "matplotlib-inline"
version = "0.1.3"
@@ -511,7 +538,7 @@ name = "mccabe"
version = "0.6.1"
description = "McCabe checker, plugin for flake8"
category = "main"
optional = true
optional = false
python-versions = "*"
[[package]]
@@ -555,6 +582,14 @@ category = "main"
optional = true
python-versions = "*"
[[package]]
name = "orjson"
version = "3.7.10"
description = "Fast, correct Python JSON library supporting dataclasses, datetimes, and numpy"
category = "main"
optional = false
python-versions = ">=3.7"
[[package]]
name = "packaging"
version = "21.3"
@@ -633,7 +668,7 @@ name = "platformdirs"
version = "2.5.2"
description = "A small Python module for determining appropriate platform-specific dirs, e.g. a \"user data dir\"."
category = "main"
optional = true
optional = false
python-versions = ">=3.7"
[package.extras]
@@ -781,6 +816,29 @@ dev = ["sphinx", "sphinx-rtd-theme", "zope.interface", "cryptography (>=3.3.1)",
docs = ["sphinx", "sphinx-rtd-theme", "zope.interface"]
tests = ["pytest (>=6.0.0,<7.0.0)", "coverage[toml] (==5.0.4)"]
[[package]]
name = "pylint"
version = "2.15.7"
description = "python code static checker"
category = "main"
optional = false
python-versions = ">=3.7.2"
[package.dependencies]
astroid = ">=2.12.13,<=2.14.0-dev0"
colorama = {version = ">=0.4.5", markers = "sys_platform == \"win32\""}
dill = ">=0.2"
isort = ">=4.2.5,<6"
mccabe = ">=0.6,<0.8"
platformdirs = ">=2.2.0"
tomli = {version = ">=1.1.0", markers = "python_version < \"3.11\""}
tomlkit = ">=0.10.1"
typing-extensions = {version = ">=3.10.0", markers = "python_version < \"3.10\""}
[package.extras]
spelling = ["pyenchant (>=3.2,<4.0)"]
testutils = ["gitpython (>3)"]
[[package]]
name = "pyparsing"
version = "3.0.9"
@@ -848,20 +906,9 @@ category = "main"
optional = false
python-versions = "*"
[[package]]
name = "pytz-deprecation-shim"
version = "0.1.0.post0"
description = "Shims to make deprecation of pytz easier"
category = "main"
optional = false
python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*,!=3.5.*,>=2.7"
[package.dependencies]
tzdata = {version = "*", markers = "python_version >= \"3.6\""}
[[package]]
name = "pyunifiprotect"
version = "3.9.0"
version = "4.0.11"
description = "Unofficial UniFi Protect Python API and CLI"
category = "main"
optional = false
@@ -871,6 +918,7 @@ python-versions = "*"
aiofiles = "*"
aiohttp = "*"
aioshutil = "*"
orjson = "*"
packaging = "*"
pillow = "*"
pydantic = "!=1.9.1"
@@ -1011,7 +1059,15 @@ name = "tomli"
version = "1.2.3"
description = "A lil' TOML parser"
category = "main"
optional = true
optional = false
python-versions = ">=3.6"
[[package]]
name = "tomlkit"
version = "0.11.6"
description = "Style preserving TOML library"
category = "main"
optional = false
python-versions = ">=3.6"
[[package]]
@@ -1139,30 +1195,6 @@ category = "main"
optional = false
python-versions = ">=3.7"
[[package]]
name = "tzdata"
version = "2022.1"
description = "Provider of IANA time zone data"
category = "main"
optional = false
python-versions = ">=2"
[[package]]
name = "tzlocal"
version = "4.2"
description = "tzinfo object for the local timezone"
category = "main"
optional = false
python-versions = ">=3.6"
[package.dependencies]
pytz-deprecation-shim = "*"
tzdata = {version = "*", markers = "platform_system == \"Windows\""}
[package.extras]
devenv = ["black", "pyroma", "pytest-cov", "zest.releaser"]
test = ["pytest-mock (>=3.3)", "pytest (>=4.3)"]
[[package]]
name = "urllib3"
version = "1.26.9"
@@ -1210,6 +1242,14 @@ category = "main"
optional = true
python-versions = "*"
[[package]]
name = "wrapt"
version = "1.14.1"
description = "Module for decorators, wrappers and monkey patching."
category = "main"
optional = false
python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*,>=2.7"
[[package]]
name = "yarl"
version = "1.7.2"
@@ -1241,13 +1281,9 @@ test = ["pytest", "black", "isort", "mypy", "flake8", "flake8-docstrings", "pyte
[metadata]
lock-version = "1.1"
python-versions = ">=3.9.0,<4.0"
content-hash = "7e206e8e7606eef53e11c90568cc20e1fd857fab1fc6bc6782f265ee429a73e9"
content-hash = "c4291adb62da91a97e4d6d5a4aac0be648838b95f96ed93228fd8aacdfce48b0"
[metadata.files]
aiocron = [
{file = "aiocron-1.8-py3-none-any.whl", hash = "sha256:b6313214c311b62aa2220e872b94139b648631b3103d062ef29e5d3230ddce6d"},
{file = "aiocron-1.8.tar.gz", hash = "sha256:48546513faf2eb7901e65a64eba7b653c80106ed00ed9ca3419c3d10b6555a01"},
]
aiofiles = [
{file = "aiofiles-0.8.0-py3-none-any.whl", hash = "sha256:7a973fc22b29e9962d0897805ace5856e6a566ab1f0c8e5c91ff6c866519c937"},
{file = "aiofiles-0.8.0.tar.gz", hash = "sha256:8334f23235248a3b2e83b2c3a78a22674f39969b96397126cc93664d9a901e59"},
@@ -1326,6 +1362,7 @@ aiohttp = [
{file = "aiohttp-3.8.1-cp39-cp39-win_amd64.whl", hash = "sha256:1c182cb873bc91b411e184dab7a2b664d4fea2743df0e4d57402f7f3fa644bac"},
{file = "aiohttp-3.8.1.tar.gz", hash = "sha256:fc5471e1a54de15ef71c1bc6ebe80d4dc681ea600e68bfd1cbce40427f0b7578"},
]
aiorun = []
aioshutil = [
{file = "aioshutil-1.1-py3-none-any.whl", hash = "sha256:4c17e1da55cf928b4a85bd6ff5e4f1560cf21db7a16b5da5844f8f3edf3e2895"},
{file = "aioshutil-1.1.tar.gz", hash = "sha256:d2e8d6baddab13137410b27ce24f39ce9889684cb47503d5af182ea8d038b0f1"},
@@ -1334,10 +1371,12 @@ aiosignal = [
{file = "aiosignal-1.2.0-py3-none-any.whl", hash = "sha256:26e62109036cd181df6e6ad646f91f0dcfd05fe16d0cb924138ff2ab75d64e3a"},
{file = "aiosignal-1.2.0.tar.gz", hash = "sha256:78ed67db6c7b7ced4f98e495e572106d5c432a93e1ddd1bf475e1dc05f5b7df2"},
]
aiosqlite = []
appnope = [
{file = "appnope-0.1.3-py2.py3-none-any.whl", hash = "sha256:265a455292d0bd8a72453494fa24df5a11eb18373a60c7c0430889f22548605e"},
{file = "appnope-0.1.3.tar.gz", hash = "sha256:02bd91c4de869fbb1e1c50aafc4098827a7a54ab2f39d9dcba6c9547ed920e24"},
]
astroid = []
asttokens = [
{file = "asttokens-2.0.5-py2.py3-none-any.whl", hash = "sha256:0844691e88552595a6f4a4281a9f7f79b8dd45ca4ccea82e5e05b4bbdb76705c"},
{file = "asttokens-2.0.5.tar.gz", hash = "sha256:9a54c114f02c7a9480d56550932546a3f1fe71d8a02f1bc7ccd0ee3ee35cf4d5"},
@@ -1358,10 +1397,7 @@ backcall = [
{file = "backcall-0.2.0-py2.py3-none-any.whl", hash = "sha256:fbbce6a29f263178a1f7915c1940bde0ec2b2a967566fe1c65c1dfb7422bd255"},
{file = "backcall-0.2.0.tar.gz", hash = "sha256:5cbdbf27be5e7cfadb448baf0aa95508f91f2bbc6c6437cd9cd06e2a4c215e1e"},
]
black = [
{file = "black-21.12b0-py3-none-any.whl", hash = "sha256:a615e69ae185e08fdd73e4715e260e2479c861b5740057fde6e8b4e3b7dd589f"},
{file = "black-21.12b0.tar.gz", hash = "sha256:77b80f693a569e2e527958459634f18df9b0ba2625ba4e0c2d5da5be42e6f2b3"},
]
black = []
bleach = [
{file = "bleach-5.0.0-py3-none-any.whl", hash = "sha256:08a1fe86d253b5c88c92cc3d810fd8048a16d15762e1e5b74d502256e5926aa1"},
{file = "bleach-5.0.0.tar.gz", hash = "sha256:c6d6cc054bdc9c83b48b8083e236e5f00f238428666d2ce2e083eaa5fd568565"},
@@ -1438,10 +1474,7 @@ click = [
{file = "click-8.0.1-py3-none-any.whl", hash = "sha256:fba402a4a47334742d782209a7c79bc448911afe1149d07bdabdf480b3e2f4b6"},
{file = "click-8.0.1.tar.gz", hash = "sha256:8c04c11192119b1ef78ea049e0a6f0463e4c48ef00a30160c704337586f3ad7a"},
]
colorama = [
{file = "colorama-0.4.4-py2.py3-none-any.whl", hash = "sha256:9f47eda37229f68eee03b24b9748937c7dc3868f906e8ba69fbcbdd3bc5dc3e2"},
{file = "colorama-0.4.4.tar.gz", hash = "sha256:5941b2b48a20143d2267e95b1c2a7603ce057ee39fd88e7329b0c292aa16869b"},
]
colorama = []
coverage = [
{file = "coverage-6.4.1-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:f1d5aa2703e1dab4ae6cf416eb0095304f49d004c39e9db1d86f57924f43006b"},
{file = "coverage-6.4.1-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:4ce1b258493cbf8aec43e9b50d89982346b98e9ffdfaae8ae5793bc112fb0068"},
@@ -1485,10 +1518,6 @@ coverage = [
{file = "coverage-6.4.1-pp36.pp37.pp38-none-any.whl", hash = "sha256:4803e7ccf93230accb928f3a68f00ffa80a88213af98ed338a57ad021ef06815"},
{file = "coverage-6.4.1.tar.gz", hash = "sha256:4321f075095a096e70aff1d002030ee612b65a205a0a0f5b815280d5dc58100c"},
]
croniter = [
{file = "croniter-1.3.5-py2.py3-none-any.whl", hash = "sha256:4f72faca42c00beb6e30907f1315145f43dfbe5ec0ad4ada24b4c0d57b86a33a"},
{file = "croniter-1.3.5.tar.gz", hash = "sha256:7592fc0e8a00d82af98dfa2768b75983b6fb4c2adc8f6d0d7c931a715b7cefee"},
]
cryptography = [
{file = "cryptography-37.0.2-cp36-abi3-macosx_10_10_universal2.whl", hash = "sha256:ef15c2df7656763b4ff20a9bc4381d8352e6640cfeb95c2972c38ef508e75181"},
{file = "cryptography-37.0.2-cp36-abi3-macosx_10_10_x86_64.whl", hash = "sha256:3c81599befb4d4f3d7648ed3217e00d21a9341a9a688ecdd615ff72ffbed7336"},
@@ -1517,6 +1546,7 @@ decorator = [
{file = "decorator-5.1.1-py3-none-any.whl", hash = "sha256:b8c3f85900b9dc423225913c5aace94729fe1fa9763b38939a95226f02d37186"},
{file = "decorator-5.1.1.tar.gz", hash = "sha256:637996211036b6385ef91435e4fae22989472f9d571faba8927ba8253acbc330"},
]
dill = []
distlib = [
{file = "distlib-0.3.4-py2.py3-none-any.whl", hash = "sha256:6564fe0a8f51e734df6333d08b8b94d4ea8ee6b99b5ed50613f731fd4089f34b"},
{file = "distlib-0.3.4.zip", hash = "sha256:e4b58818180336dc9c529bfb9a0b58728ffc09ad92027a3f30b7cd91e3458579"},
@@ -1641,6 +1671,7 @@ keyring = [
{file = "keyring-23.5.1-py3-none-any.whl", hash = "sha256:9ef58314bcc823f426b49ec787539a2d73571b37de4cd498f839803b01acff1e"},
{file = "keyring-23.5.1.tar.gz", hash = "sha256:dee502cdf18a98211bef428eea11456a33c00718b2f08524fd5727c7f424bffd"},
]
lazy-object-proxy = []
matplotlib-inline = [
{file = "matplotlib-inline-0.1.3.tar.gz", hash = "sha256:a04bfba22e0d1395479f866853ec1ee28eea1485c1d69a6faf00dc3e24ff34ee"},
{file = "matplotlib_inline-0.1.3-py3-none-any.whl", hash = "sha256:aed605ba3b72462d64d475a21a9296f400a19c4f74a31b59103d2a99ffd5aa5c"},
@@ -1743,6 +1774,7 @@ nodeenv = [
{file = "nodeenv-1.6.0-py2.py3-none-any.whl", hash = "sha256:621e6b7076565ddcacd2db0294c0381e01fd28945ab36bcf00f41c5daf63bef7"},
{file = "nodeenv-1.6.0.tar.gz", hash = "sha256:3ef13ff90291ba2a4a7a4ff9a979b63ffdd00a464dbe04acf0ea6471517a4c2b"},
]
orjson = []
packaging = [
{file = "packaging-21.3-py3-none-any.whl", hash = "sha256:ef103e05f519cdc783ae24ea4e2e0f508a9c99b2d4969652eed6a2e1ea5bd522"},
{file = "packaging-21.3.tar.gz", hash = "sha256:dd47c42927d89ab911e606518907cc2d3a1f38bbd026385970643f9c5b8ecfeb"},
@@ -1896,6 +1928,7 @@ pyjwt = [
{file = "PyJWT-2.4.0-py3-none-any.whl", hash = "sha256:72d1d253f32dbd4f5c88eaf1fdc62f3a19f676ccbadb9dbc5d07e951b2b26daf"},
{file = "PyJWT-2.4.0.tar.gz", hash = "sha256:d42908208c699b3b973cbeb01a969ba6a96c821eefb1c5bfe4c390c01d67abba"},
]
pylint = []
pyparsing = [
{file = "pyparsing-3.0.9-py3-none-any.whl", hash = "sha256:5026bae9a10eeaefb61dab2f09052b9f4307d44aee4eda64b309723d8d206bbc"},
{file = "pyparsing-3.0.9.tar.gz", hash = "sha256:2b020ecf7d21b687f219b71ecad3631f644a47f01403fa1d1036b0c6416d70fb"},
@@ -1916,14 +1949,7 @@ pytz = [
{file = "pytz-2022.1-py2.py3-none-any.whl", hash = "sha256:e68985985296d9a66a881eb3193b0906246245294a881e7c8afe623866ac6a5c"},
{file = "pytz-2022.1.tar.gz", hash = "sha256:1e760e2fe6a8163bc0b3d9a19c4f84342afa0a2affebfaa84b01b978a02ecaa7"},
]
pytz-deprecation-shim = [
{file = "pytz_deprecation_shim-0.1.0.post0-py2.py3-none-any.whl", hash = "sha256:8314c9692a636c8eb3bda879b9f119e350e93223ae83e70e80c31675a0fdc1a6"},
{file = "pytz_deprecation_shim-0.1.0.post0.tar.gz", hash = "sha256:af097bae1b616dde5c5744441e2ddc69e74dfdcb0c263129610d85b87445a59d"},
]
pyunifiprotect = [
{file = "pyunifiprotect-3.9.0-py3-none-any.whl", hash = "sha256:b5629fe197899d6ddba6e0ff20db548f71c66207720439c489e6b2e1b4b34325"},
{file = "pyunifiprotect-3.9.0.tar.gz", hash = "sha256:a4e7beea33008207adaeb70104c68315ee35e22c32e8fc01b9bc128eef3f453c"},
]
pyunifiprotect = []
pywin32-ctypes = [
{file = "pywin32-ctypes-0.2.0.tar.gz", hash = "sha256:24ffc3b341d457d48e8922352130cf2644024a4ff09762a2261fd34c36ee5942"},
{file = "pywin32_ctypes-0.2.0-py2.py3-none-any.whl", hash = "sha256:9dc2d991b3479cc2df15930958b674a48a227d5361d413827a4cfd0b5876fc98"},
@@ -2003,6 +2029,7 @@ tomli = [
{file = "tomli-1.2.3-py3-none-any.whl", hash = "sha256:e3069e4be3ead9668e21cb9b074cd948f7b3113fd9c8bba083f48247aab8b11c"},
{file = "tomli-1.2.3.tar.gz", hash = "sha256:05b6166bff487dc068d322585c7ea4ef78deed501cc124060e0f238e89a9231f"},
]
tomlkit = []
tox = [
{file = "tox-3.25.0-py2.py3-none-any.whl", hash = "sha256:0805727eb4d6b049de304977dfc9ce315a1938e6619c3ab9f38682bb04662a5a"},
{file = "tox-3.25.0.tar.gz", hash = "sha256:37888f3092aa4e9f835fc8cc6dadbaaa0782651c41ef359e3a5743fcb0308160"},
@@ -2039,14 +2066,6 @@ typing-extensions = [
{file = "typing_extensions-4.2.0-py3-none-any.whl", hash = "sha256:6657594ee297170d19f67d55c05852a874e7eb634f4f753dbd667855e07c1708"},
{file = "typing_extensions-4.2.0.tar.gz", hash = "sha256:f1c24655a0da0d1b67f07e17a5e6b2a105894e6824b92096378bb3668ef02376"},
]
tzdata = [
{file = "tzdata-2022.1-py2.py3-none-any.whl", hash = "sha256:238e70234214138ed7b4e8a0fab0e5e13872edab3be586ab8198c407620e2ab9"},
{file = "tzdata-2022.1.tar.gz", hash = "sha256:8b536a8ec63dc0751342b3984193a3118f8fca2afe25752bb9b7fffd398552d3"},
]
tzlocal = [
{file = "tzlocal-4.2-py3-none-any.whl", hash = "sha256:89885494684c929d9191c57aa27502afc87a579be5cdd3225c77c463ea043745"},
{file = "tzlocal-4.2.tar.gz", hash = "sha256:ee5842fa3a795f023514ac2d801c4a81d1743bbe642e3940143326b3a00addd7"},
]
urllib3 = [
{file = "urllib3-1.26.9-py2.py3-none-any.whl", hash = "sha256:44ece4d53fb1706f667c9bd1c648f5469a2ec925fcf3a776667042d645472c14"},
{file = "urllib3-1.26.9.tar.gz", hash = "sha256:aabaf16477806a5e1dd19aa41f8c2b7950dd3c746362d7e3223dbe6de6ac448e"},
@@ -2063,6 +2082,7 @@ webencodings = [
{file = "webencodings-0.5.1-py2.py3-none-any.whl", hash = "sha256:a0af1213f3c2226497a97e2b3aa01a7e4bee4f403f95be16fc9acd2947514a78"},
{file = "webencodings-0.5.1.tar.gz", hash = "sha256:b36a1c245f2d304965eb4e0a82848379241dc04b865afcc4aab16748587e1923"},
]
wrapt = []
yarl = [
{file = "yarl-1.7.2-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:f2a8508f7350512434e41065684076f640ecce176d262a7d54f0da41d99c5a95"},
{file = "yarl-1.7.2-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:da6df107b9ccfe52d3a48165e48d72db0eca3e3029b5b8cb4fe6ee3cb870ba8b"},

File: pyproject.toml

@@ -1,7 +1,7 @@
[tool]
[tool.poetry]
name = "unifi-protect-backup"
version = "0.8.0"
name = "unifi_protect_backup"
version = "0.8.7"
homepage = "https://github.com/ep1cman/unifi-protect-backup"
description = "Python tool to backup unifi event clips in realtime."
authors = ["sebastian.goscik <sebastian@goscik.com>"]
@@ -24,7 +24,7 @@ packages = [
python = ">=3.9.0,<4.0"
click = "8.0.1"
black = { version = "^21.5b2", optional = true}
black = { version = "^22.10.0", optional = true}
isort = { version = "^5.8.0", optional = true}
flake8 = { version = "^3.9.2", optional = true}
flake8-docstrings = { version = "^1.6.0", optional = true }
@@ -39,11 +39,14 @@ pre-commit = {version = "^2.12.0", optional = true}
toml = {version = "^0.10.2", optional = true}
bump2version = {version = "^1.0.1", optional = true}
tox-asdf = {version = "^0.1.0", optional = true}
pyunifiprotect = "^3.2.1"
aiocron = "^1.8"
pyunifiprotect = "^4.0.11"
ipdb = {version = "^0.13.9", optional = true}
types-pytz = {version = "^2021.3.5", optional = true}
types-cryptography = {version = "^3.3.18", optional = true}
aiosqlite = "^0.17.0"
python-dateutil = "^2.8.2"
aiorun = "^2022.11.1"
pylint = {version = "^2.15.6", extras = ["dev"]}
[tool.poetry.extras]
test = [

File: unifi_protect_backup/__init__.py

@@ -2,6 +2,11 @@
__author__ = """sebastian.goscik"""
__email__ = 'sebastian@goscik.com'
__version__ = '0.8.0'
__version__ = '0.8.7'
from .unifi_protect_backup import UnifiProtectBackup
# from .unifi_protect_backup import UnifiProtectBackup
from .downloader import VideoDownloader
from .uploader import VideoUploader
from .event_listener import EventListener
from .purge import Purge
from .missing_event_checker import MissingEventChecker

File: CLI entry point

@@ -3,8 +3,11 @@
import asyncio
import click
from aiorun import run
from unifi_protect_backup import UnifiProtectBackup, __version__
from unifi_protect_backup import __version__
from unifi_protect_backup.unifi_protect_backup import UnifiProtectBackup
from unifi_protect_backup.utils import human_readable_to_float
DETECTION_TYPES = ["motion", "person", "vehicle", "ring"]
@@ -102,11 +105,31 @@ all warnings, and websocket data
-vvvvv: Log websocket data, command output, all debug messages, all info messages and all warnings
""",
)
@click.option(
'--sqlite_path',
default='events.sqlite',
envvar='SQLITE_PATH',
help="Path to the SQLite database to use/create",
)
@click.option(
'--color-logging/--plain-logging',
default=False,
show_default=True,
envvar='COLOR_LOGGING',
help="Set if you want to use color in logging output",
)
@click.option(
'--download-buffer-size',
default='512MiB',
show_default=True,
envvar='DOWNLOAD_BUFFER_SIZE',
help='How big the download buffer should be (you can use suffixes like "B", "KiB", "MiB", "GiB")',
callback=lambda ctx, param, value: human_readable_to_float(value),
)
def main(**kwargs):
"""A Python based tool for backing up Unifi Protect event clips as they occur."""
loop = asyncio.get_event_loop()
event_listener = UnifiProtectBackup(**kwargs)
loop.run_until_complete(event_listener.start())
run(event_listener.start())
if __name__ == "__main__":
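The `--download-buffer-size` option above parses values like `512MiB` via `human_readable_to_float` from `unifi_protect_backup/utils.py`. A minimal sketch of what such a parser could look like (an assumption; the real helper may accept more suffixes or decimal units):

```python
import re
from typing import Dict

_UNITS: Dict[str, int] = {"B": 1, "KiB": 1024, "MiB": 1024**2, "GiB": 1024**3, "TiB": 1024**4}


def human_readable_to_float(value: str) -> float:
    """Sketch: convert strings like '512MiB' into a number of bytes."""
    match = re.fullmatch(r"\s*([\d.]+)\s*([KMGT]iB|B)?\s*", value)
    if match is None:
        raise ValueError(f"Cannot parse size: {value!r}")
    number, unit = match.groups()
    return float(number) * _UNITS[unit or "B"]
```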

File: unifi_protect_backup/downloader.py

@@ -0,0 +1,146 @@
import asyncio
import json
import logging
import shutil
from datetime import datetime, timedelta, timezone
import pytz
from aiohttp.client_exceptions import ClientPayloadError
from pyunifiprotect import ProtectApiClient
from pyunifiprotect.data.nvr import Event
from pyunifiprotect.data.types import EventType
from unifi_protect_backup.utils import (
SubprocessException,
VideoQueue,
get_camera_name,
human_readable_size,
run_command,
setup_event_logger,
)
logger = logging.getLogger(__name__)
setup_event_logger(logger)
async def get_video_length(video: bytes) -> float:
"""Uses ffprobe to get the length of the video file passed in as a byte stream"""
returncode, stdout, stderr = await run_command(
'ffprobe -v quiet -show_streams -select_streams v:0 -of json -', video
)
if returncode != 0:
raise SubprocessException(stdout, stderr, returncode)
json_data = json.loads(stdout)
return float(json_data['streams'][0]['duration'])
class VideoDownloader:
"""Downloads event video clips from Unifi Protect"""
def __init__(self, protect: ProtectApiClient, download_queue: asyncio.Queue, upload_queue: VideoQueue):
self._protect: ProtectApiClient = protect
self.download_queue: asyncio.Queue = download_queue
self.upload_queue: VideoQueue = upload_queue
self.logger = logging.LoggerAdapter(logger, {'event': ''})
self.current_event = None
# Check if `ffprobe` is available
ffprobe = shutil.which('ffprobe')
if ffprobe is not None:
self.logger.debug(f"ffprobe found: {ffprobe}")
self._has_ffprobe = True
else:
self._has_ffprobe = False
async def start(self):
"""Main loop"""
self.logger.info("Starting Downloader")
while True:
try:
event = await self.download_queue.get()
self.current_event = event
self.logger = logging.LoggerAdapter(logger, {'event': f' [{event.id}]'})
# Fix timezones since pyunifiprotect sets all timestamps to UTC. Instead localize them to
# the timezone of the unifi protect NVR.
event.start = event.start.replace(tzinfo=pytz.utc).astimezone(self._protect.bootstrap.nvr.timezone)
event.end = event.end.replace(tzinfo=pytz.utc).astimezone(self._protect.bootstrap.nvr.timezone)
self.logger.info(f"Downloading event: {event.id}")
self.logger.debug(f"Remaining Download Queue: {self.download_queue.qsize()}")
output_queue_current_size = human_readable_size(self.upload_queue.qsize())
output_queue_max_size = human_readable_size(self.upload_queue.maxsize)
self.logger.debug(f"Video Download Buffer: {output_queue_current_size}/{output_queue_max_size}")
self.logger.debug(f" Camera: {await get_camera_name(self._protect, event.camera_id)}")
if event.type == EventType.SMART_DETECT:
self.logger.debug(f" Type: {event.type} ({', '.join(event.smart_detect_types)})")
else:
self.logger.debug(f" Type: {event.type}")
self.logger.debug(f" Start: {event.start.strftime('%Y-%m-%dT%H-%M-%S')} ({event.start.timestamp()})")
self.logger.debug(f" End: {event.end.strftime('%Y-%m-%dT%H-%M-%S')} ({event.end.timestamp()})")
duration = (event.end - event.start).total_seconds()
self.logger.debug(f" Duration: {duration}s")
# Unifi protect does not return full video clips if the clip is requested too soon.
# There are two issues at play here:
# - Protect will only cut a clip on a keyframe, which happens every 5s
# - Protect's pipeline needs a finite amount of time to make a clip available
# So we will wait 1.5x the keyframe interval to ensure that there is always ample video
# stored and Protect can return a full clip (which should be at least the length requested,
# but often longer)
time_since_event_ended = datetime.utcnow().replace(tzinfo=timezone.utc) - event.end
sleep_time = (timedelta(seconds=5 * 1.5) - time_since_event_ended).total_seconds()
if sleep_time > 0:
self.logger.debug(f" Sleeping ({sleep_time}s) to ensure clip is ready to download...")
await asyncio.sleep(sleep_time)
video = await self._download(event)
if video is None:
continue
# Get the actual length of the downloaded video using ffprobe
if self._has_ffprobe:
await self._check_video_length(video, duration)
await self.upload_queue.put((event, video))
self.logger.debug("Added to upload queue")
self.current_event = None
except Exception as e:
self.logger.warn(f"Unexpected exception occurred, abandoning event {event.id}:")
self.logger.exception(e)
async def _download(self, event: Event) -> bytes:
"""Downloads the video clip for the given event"""
self.logger.debug(" Downloading video...")
for x in range(5):
try:
video = await self._protect.get_camera_video(event.camera_id, event.start, event.end)
assert isinstance(video, bytes)
break
except (AssertionError, ClientPayloadError, TimeoutError) as e:
self.logger.warn(f" Failed download attempt {x+1}, retying in 1s")
self.logger.exception(e)
await asyncio.sleep(1)
else:
self.logger.warn(f"Download failed after 5 attempts, abandoning event {event.id}:")
return
self.logger.debug(f" Downloaded video size: {human_readable_size(len(video))}s")
return video
async def _check_video_length(self, video, duration):
"""Check if the downloaded event is at least the length of the event, warn otherwise
It is expected for events to regularly be slightly longer than the event specified"""
try:
downloaded_duration = await get_video_length(video)
msg = f" Downloaded video length: {downloaded_duration:.3f}s" f"({downloaded_duration - duration:+.3f}s)"
if downloaded_duration < duration:
self.logger.warning(msg)
else:
self.logger.debug(msg)
except SubprocessException as e:
self.logger.warn(" `ffprobe` failed")

File: unifi_protect_backup/event_listener.py

@@ -0,0 +1,120 @@
import logging
from time import sleep
import asyncio
from typing import List
from pyunifiprotect.data.websocket import WSAction, WSSubscriptionMessage
from pyunifiprotect.data.nvr import Event
from pyunifiprotect.data.types import EventType
from pyunifiprotect.api import ProtectApiClient
logger = logging.getLogger(__name__)
class EventListener:
"""Listens to the unifi protect websocket for new events to backup"""
def __init__(
self,
event_queue: asyncio.Queue,
protect: ProtectApiClient,
detection_types: List[str],
ignore_cameras: List[str],
):
self._event_queue: asyncio.Queue = event_queue
self._protect: ProtectApiClient = protect
self._unsub = None
self.detection_types: List[str] = detection_types
self.ignore_cameras: List[str] = ignore_cameras
async def start(self):
"""Main Loop"""
logger.debug("Subscribed to websocket")
self._unsub = self._protect.subscribe_websocket(self._websocket_callback)
while True:
await asyncio.sleep(60)
await self._check_websocket_and_reconnect()
def _websocket_callback(self, msg: WSSubscriptionMessage) -> None:
"""Callback for "EVENT" websocket messages.
Filters the incoming events, and puts completed events onto the download queue
Args:
msg (WSSubscriptionMessage): Incoming event data
"""
logger.websocket_data(msg) # type: ignore
assert isinstance(msg.new_obj, Event)
if msg.action != WSAction.UPDATE:
return
if msg.new_obj.camera_id in self.ignore_cameras:
return
if msg.new_obj.end is None:
return
if msg.new_obj.type not in [EventType.MOTION, EventType.SMART_DETECT, EventType.RING]:
return
if msg.new_obj.type is EventType.MOTION and "motion" not in self.detection_types:
logger.extra_debug(f"Skipping unwanted motion detection event: {msg.new_obj.id}") # type: ignore
return
if msg.new_obj.type is EventType.RING and "ring" not in self.detection_types:
logger.extra_debug(f"Skipping unwanted ring event: {msg.new_obj.id}") # type: ignore
return
elif msg.new_obj.type is EventType.SMART_DETECT:
for event_smart_detection_type in msg.new_obj.smart_detect_types:
if event_smart_detection_type not in self.detection_types:
logger.extra_debug( # type: ignore
f"Skipping unwanted {event_smart_detection_type} detection event: {msg.new_obj.id}"
)
return
# TODO: Will this even work? I think it will block the async loop
while self._event_queue.full():
logger.extra_debug("Event queue full, waiting 1s...")
sleep(1)
self._event_queue.put_nowait(msg.new_obj)
# Unifi protect has started sending the event id in the websocket as a {event_id}-{camera_id} but when the
# API is queried they only have {event_id}. Keeping track of both of these would be complicated, so
# instead we fudge the ID here to match what the API returns
if '-' in msg.new_obj.id:
msg.new_obj.id = msg.new_obj.id.split('-')[0]
logger.debug(f"Adding event {msg.new_obj.id} to queue (Current download queue={self._event_queue.qsize()})")
async def _check_websocket_and_reconnect(self):
"""Checks for websocket disconnect and triggers a reconnect"""
logger.extra_debug("Checking the status of the websocket...")
if self._protect.check_ws():
logger.extra_debug("Websocket is connected.")
else:
logger.warn("Lost connection to Unifi Protect.")
# Unsubscribe, close the session.
self._unsub()
await self._protect.close_session()
while True:
logger.warn("Attempting reconnect...")
try:
# Start the pyunifiprotect connection by calling `update`
await self._protect.close_session()
self._protect._bootstrap = None
await self._protect.update(force=True)
if self._protect.check_ws():
self._unsub = self._protect.subscribe_websocket(self._websocket_callback)
break
else:
logger.warn("Unable to establish connection to Unifi Protect")
except Exception as e:
logger.warn("Unexpected exception occurred while trying to reconnect:")
logger.exception(e)
# Back off for a little while
await asyncio.sleep(10)
logger.info("Re-established connection to Unifi Protect and to the websocket.")

File: unifi_protect_backup/missing_event_checker.py

@@ -0,0 +1,118 @@
import asyncio
import logging
from datetime import datetime
from typing import List
import aiosqlite
from dateutil.relativedelta import relativedelta
from pyunifiprotect import ProtectApiClient
from pyunifiprotect.data.types import EventType
from unifi_protect_backup import VideoDownloader, VideoUploader
logger = logging.getLogger(__name__)
class MissingEventChecker:
"""Periodically checks if any unifi protect events exist within the retention period that are not backed up"""
def __init__(
self,
protect: ProtectApiClient,
db: aiosqlite.Connection,
download_queue: asyncio.Queue,
downloader: VideoDownloader,
uploader: VideoUploader,
retention: relativedelta,
detection_types: List[str],
ignore_cameras: List[str],
interval: int = 60 * 5,
) -> None:
self._protect: ProtectApiClient = protect
self._db: aiosqlite.Connection = db
self._download_queue: asyncio.Queue = download_queue
self._downloader: VideoDownloader = downloader
self._uploader: VideoUploader = uploader
self.retention: relativedelta = retention
self.detection_types: List[str] = detection_types
self.ignore_cameras: List[str] = ignore_cameras
self.interval: int = interval
async def start(self):
"""main loop"""
logger.info("Starting Missing Event Checker")
while True:
try:
logger.extra_debug("Running check for missing events...")
# Get list of events that need to be backed up from unifi protect
unifi_events = await self._protect.get_events(
start=datetime.now() - self.retention,
end=datetime.now(),
types=[EventType.MOTION, EventType.SMART_DETECT, EventType.RING],
)
unifi_events = {event.id: event for event in unifi_events}
# Get list of events that have been backed up from the database
# events(id, type, camera_id, start, end)
async with self._db.execute("SELECT * FROM events") as cursor:
rows = await cursor.fetchall()
db_event_ids = {row[0] for row in rows}
# Prevent re-adding events currently in the download/upload queue
downloading_event_ids = {event.id for event in self._downloader.download_queue._queue}
current_download = self._downloader.current_event
if current_download is not None:
downloading_event_ids.add(current_download.id)
uploading_event_ids = {event.id for event, video in self._uploader.upload_queue._queue}
current_upload = self._uploader.current_event
if current_upload is not None:
uploading_event_ids.add(current_upload.id)
missing_event_ids = set(unifi_events.keys()) - (
db_event_ids | downloading_event_ids | uploading_event_ids
)
logger.debug(f" Total undownloaded events: {len(missing_event_ids)}")
def wanted_event_type(event_id):
event = unifi_events[event_id]
if event.start is None or event.end is None:
return False # This event is still on-going
if event.type is EventType.MOTION and "motion" not in self.detection_types:
return False
if event.type is EventType.RING and "ring" not in self.detection_types:
return False
elif event.type is EventType.SMART_DETECT:
for event_smart_detection_type in event.smart_detect_types:
if event_smart_detection_type not in self.detection_types:
return False
return True
wanted_event_ids = set(filter(wanted_event_type, missing_event_ids))
logger.debug(f" Undownloaded events of wanted types: {len(wanted_event_ids)}")
if len(wanted_event_ids) > 20:
logger.warning(f" Adding {len(wanted_event_ids)} missing events to backup queue")
missing_logger = logger.extra_debug
else:
missing_logger = logger.warning
for event_id in wanted_event_ids:
event = unifi_events[event_id]
if event.type != EventType.SMART_DETECT:
missing_logger(
f" Adding missing event to backup queue: {event.id} ({event.type}) ({event.start.strftime('%Y-%m-%dT%H-%M-%S')} - {event.end.strftime('%Y-%m-%dT%H-%M-%S')})"
)
else:
missing_logger(
f" Adding missing event to backup queue: {event.id} ({', '.join(event.smart_detect_types)}) ({event.start.strftime('%Y-%m-%dT%H-%M-%S')} - {event.end.strftime('%Y-%m-%dT%H-%M-%S')})"
)
await self._download_queue.put(event)
except Exception as e:
logger.warn(f"Unexpected exception occurred during missing event check:")
logger.exception(e)
await asyncio.sleep(self.interval)

File: unifi_protect_backup/purge.py

@@ -0,0 +1,75 @@
import asyncio
import logging
import time
from datetime import datetime
import aiosqlite
from dateutil.relativedelta import relativedelta
from unifi_protect_backup.utils import parse_rclone_retention, run_command
logger = logging.getLogger(__name__)
async def wait_until(dt):
# sleep until the specified datetime
now = datetime.now()
await asyncio.sleep((dt - now).total_seconds())
async def delete_file(file_path):
returncode, stdout, stderr = await run_command(f'rclone delete -vv "{file_path}"')
if returncode != 0:
logger.warn(f" Failed to delete file: '{file_path}'")
async def tidy_empty_dirs(base_dir_path):
returncode, stdout, stderr = await run_command(f'rclone rmdirs -vv --ignore-errors --leave-root "{base_dir_path}"')
if returncode != 0:
logger.warn(f" Failed to tidy empty dirs")
class Purge:
"""Deletes old files from rclone remotes"""
def __init__(self, db: aiosqlite.Connection, retention: relativedelta, rclone_destination: str, interval: int = 60):
self._db: aiosqlite.Connection = db
self.retention: relativedelta = retention
self.rclone_destination: str = rclone_destination
self.interval: int = interval
async def start(self):
"""Main loop - runs forever"""
while True:
try:
deleted_a_file = False
# For every event older than the retention time
retention_oldest_time = time.mktime((datetime.now() - self.retention).timetuple())
async with self._db.execute(
f"SELECT * FROM events WHERE end < {retention_oldest_time}"
) as event_cursor:
async for event_id, event_type, camera_id, event_start, event_end in event_cursor:
logger.info(f"Purging event: {event_id}.")
# For every backup for this event
async with self._db.execute(f"SELECT * FROM backups WHERE id = '{event_id}'") as backup_cursor:
async for _, remote, file_path in backup_cursor:
logger.debug(f" Deleted: {remote}:{file_path}")
await delete_file(f"{remote}:{file_path}")
deleted_a_file = True
# delete event from database
# entries in the `backups` table are automatically removed by the `ON DELETE CASCADE` foreign key
await self._db.execute(f"DELETE FROM events WHERE id = '{event_id}'")
await self._db.commit()
if deleted_a_file:
await tidy_empty_dirs(self.rclone_destination)
except Exception as e:
logger.warn(f"Unexpected exception occurred during purge:")
logger.exception(e)
await asyncio.sleep(self.interval)
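# Hedged sketch of the cutoff calculation used in the purge loop above: the relativedelta
# retention is subtracted from "now" and converted to a unix timestamp, which `events.end`
# (stored as REAL) is compared against. The 7-day value is just an example.
import time
from datetime import datetime
from dateutil.relativedelta import relativedelta

retention = relativedelta(days=7)
retention_oldest_time = time.mktime((datetime.now() - retention).timetuple())
print(f"events with end < {retention_oldest_time} fall outside retention")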

View File

@@ -1,166 +1,42 @@
"""Main module."""
import asyncio
import json
from datetime import datetime, timezone
import logging
import pathlib
import re
import os
import shutil
from asyncio.exceptions import TimeoutError
from datetime import datetime, timedelta, timezone
from typing import Callable, List, Optional
from cmath import log
from pprint import pprint
from time import sleep
from typing import Callable, List
import aiocron
import pytz
from aiohttp.client_exceptions import ClientPayloadError
from pyunifiprotect import NvrError, ProtectApiClient
from pyunifiprotect.data.nvr import Event
from pyunifiprotect.data.types import EventType, ModelType
from pyunifiprotect.data.websocket import WSAction, WSSubscriptionMessage
import aiosqlite
from pyunifiprotect import ProtectApiClient
from pyunifiprotect.data.types import ModelType
from unifi_protect_backup import EventListener, MissingEventChecker, Purge, VideoDownloader, VideoUploader
from unifi_protect_backup.utils import (
SubprocessException,
parse_rclone_retention,
run_command,
setup_logging,
human_readable_size,
VideoQueue,
)
logger = logging.getLogger(__name__)
class SubprocessException(Exception):
"""Exception class for when rclone does not exit with `0`."""
def __init__(self, stdout, stderr, returncode):
"""Exception class for when rclone does not exit with `0`.
Args:
stdout (str): What rclone output to stdout
stderr (str): What rclone output to stderr
returncode (int): The return code of the rclone process
"""
super().__init__()
self.stdout: str = stdout
self.stderr: str = stderr
self.returncode: int = returncode
def __str__(self):
"""Turns excpetion into a human readable form."""
return f"Return Code: {self.returncode}\nStdout:\n{self.stdout}\nStderr:\n{self.stderr}"
# TODO: https://github.com/cjrh/aiorun#id6 (smart shield)
def add_logging_level(levelName: str, levelNum: int, methodName: Optional[str] = None) -> None:
"""Comprehensively adds a new logging level to the `logging` module and the currently configured logging class.
`levelName` becomes an attribute of the `logging` module with the value
`levelNum`. `methodName` becomes a convenience method for both `logging`
itself and the class returned by `logging.getLoggerClass()` (usually just
`logging.Logger`).
To avoid accidental clobbering of existing attributes, this method will
raise an `AttributeError` if the level name is already an attribute of the
`logging` module or if the method name is already present
Credit: https://stackoverflow.com/a/35804945
Args:
levelName (str): The name of the new logging level (in all caps).
levelNum (int): The priority value of the logging level, lower=more verbose.
methodName (str): The name of the method used to log using this.
If `methodName` is not specified, `levelName.lower()` is used.
Example:
::
>>> add_logging_level('TRACE', logging.DEBUG - 5)
>>> logging.getLogger(__name__).setLevel("TRACE")
>>> logging.getLogger(__name__).trace('that worked')
>>> logging.trace('so did this')
>>> logging.TRACE
5
"""
if not methodName:
methodName = levelName.lower()
if hasattr(logging, levelName):
raise AttributeError('{} already defined in logging module'.format(levelName))
if hasattr(logging, methodName):
raise AttributeError('{} already defined in logging module'.format(methodName))
if hasattr(logging.getLoggerClass(), methodName):
raise AttributeError('{} already defined in logger class'.format(methodName))
# This method was inspired by the answers to Stack Overflow post
# http://stackoverflow.com/q/2183233/2988730, especially
# http://stackoverflow.com/a/13638084/2988730
def logForLevel(self, message, *args, **kwargs):
if self.isEnabledFor(levelNum):
self._log(levelNum, message, args, **kwargs)
def logToRoot(message, *args, **kwargs):
logging.log(levelNum, message, *args, **kwargs)
logging.addLevelName(levelNum, levelName)
setattr(logging, levelName, levelNum)
setattr(logging.getLoggerClass(), methodName, logForLevel)
setattr(logging, methodName, logToRoot)
def setup_logging(verbosity: int) -> None:
"""Configures loggers to provided the desired level of verbosity.
Verbosity 0: Only log info messages created by `unifi-protect-backup`, and all warnings
verbosity 1: Only log info & debug messages created by `unifi-protect-backup`, and all warnings
verbosity 2: Log info & debug messages created by `unifi-protect-backup`, command output, and
all warnings
Verbosity 3: Log debug messages created by `unifi-protect-backup`, command output, all info
messages, and all warnings
Verbosity 4: Log debug messages created by `unifi-protect-backup` command output, all info
messages, all warnings, and websocket data
Verbosity 5: Log websocket data, command output, all debug messages, all info messages and all
warnings
Args:
verbosity (int): The desired level of verbosity
"""
add_logging_level(
'EXTRA_DEBUG',
logging.DEBUG - 1,
async def create_database(path: str):
"""Creates sqlite database and creates the events abd backups tables"""
db = await aiosqlite.connect(path)
await db.execute("CREATE TABLE events(id PRIMARY KEY, type, camera_id, start REAL, end REAL)")
await db.execute(
"CREATE TABLE backups(id REFERENCES events(id) ON DELETE CASCADE, remote, path, PRIMARY KEY (id, remote))"
)
add_logging_level(
'WEBSOCKET_DATA',
logging.DEBUG - 2,
)
format = "{asctime} [{levelname}]:{name: <20}:\t{message}"
date_format = "%Y-%m-%d %H:%M:%S"
style = '{'
if verbosity == 0:
logging.basicConfig(level=logging.WARN, format=format, style=style, datefmt=date_format)
logger.setLevel(logging.INFO)
elif verbosity == 1:
logging.basicConfig(level=logging.WARN, format=format, style=style, datefmt=date_format)
logger.setLevel(logging.DEBUG)
elif verbosity == 2:
logging.basicConfig(level=logging.WARN, format=format, style=style, datefmt=date_format)
logger.setLevel(logging.EXTRA_DEBUG) # type: ignore
elif verbosity == 3:
logging.basicConfig(level=logging.INFO, format=format, style=style, datefmt=date_format)
logger.setLevel(logging.EXTRA_DEBUG) # type: ignore
elif verbosity == 4:
logging.basicConfig(level=logging.INFO, format=format, style=style, datefmt=date_format)
logger.setLevel(logging.WEBSOCKET_DATA) # type: ignore
elif verbosity == 5:
logging.basicConfig(level=logging.DEBUG, format=format, style=style, datefmt=date_format)
logger.setLevel(logging.WEBSOCKET_DATA) # type: ignore
def human_readable_size(num):
"""Turns a number into a human readable number with ISO/IEC 80000 binary prefixes.
Based on: https://stackoverflow.com/a/1094933
Args:
num (int): The number to be converted into human readable format
"""
for unit in ["B", "KiB", "MiB", "GiB", "TiB", "PiB", "EiB", "ZiB", "YiB"]:
if abs(num) < 1024.0:
return f"{num:3.1f}{unit}"
num /= 1024.0
raise ValueError("`num` too large, ran out of prefixes")
await db.commit()
return db
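# Minimal sketch (assumption: an in-memory database with the same schema as create_database
# above) showing why `PRAGMA foreign_keys = ON` matters: with it enabled, deleting an event
# cascades to its backups, which is what lets the purge task delete only from `events`.
import asyncio
import aiosqlite

async def demo():
    db = await aiosqlite.connect(":memory:")
    await db.execute("CREATE TABLE events(id PRIMARY KEY, type, camera_id, start REAL, end REAL)")
    await db.execute(
        "CREATE TABLE backups(id REFERENCES events(id) ON DELETE CASCADE, remote, path, PRIMARY KEY (id, remote))"
    )
    await db.execute("PRAGMA foreign_keys = ON;")
    await db.execute("INSERT INTO events VALUES ('e1', 'motion', 'cam1', 0.0, 1.0)")
    await db.execute("INSERT INTO backups VALUES ('e1', 'gdrive', '/clips/e1.mp4')")
    await db.execute("DELETE FROM events WHERE id = 'e1'")
    async with db.execute("SELECT COUNT(*) FROM backups") as cursor:
        print(await cursor.fetchone())  # (0,) - the backup row was cascade-deleted

asyncio.run(demo())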
class UnifiProtectBackup:
@@ -168,19 +44,6 @@ class UnifiProtectBackup:
Listens to the Unifi Protect websocket for events. When a completed motion or smart detection
event is detected, it will download the clip and back it up using rclone
Attributes:
retention (str): How long should event clips be backed up for. Format as per the
`--max-age` argument of `rclone`
(https://rclone.org/filtering/#max-age-don-t-transfer-any-file-older-than-this)
rclone_args (str): Extra args passed directly to `rclone rcat`.
ignore_cameras (List[str]): List of camera IDs for which to not backup events
verbose (int): How verbose to setup logging, see :func:`setup_logging` for details.
detection_types (List[str]): List of which detection types to backup.
file_structure_format (str): A Python format string for output file path
_download_queue (asyncio.Queue): Queue of events that need to be backed up
_unsub (Callable): Unsubscribe from the websocket callback
_has_ffprobe (bool): If ffprobe was found on the host
"""
def __init__(
@@ -196,6 +59,9 @@ class UnifiProtectBackup:
ignore_cameras: List[str],
file_structure_format: str,
verbose: int,
download_buffer_size: int,
sqlite_path: str = "events.sqlite",
color_logging=False,
port: int = 443,
):
"""Will configure logging settings and the Unifi Protect API (but not actually connect).
@@ -218,8 +84,9 @@ class UnifiProtectBackup:
ignore_cameras (List[str]): List of camera IDs for which to not backup events.
file_structure_format (str): A Python format string for output file path.
verbose (int): How verbose to setup logging, see :func:`setup_logging` for details.
sqlite_path (str): Path where to find/create sqlite database
"""
setup_logging(verbose)
setup_logging(verbose, color_logging)
logger.debug("Config:")
logger.debug(f" {address=}")
@@ -238,9 +105,11 @@ class UnifiProtectBackup:
logger.debug(f" {verbose=}")
logger.debug(f" {detection_types=}")
logger.debug(f" {file_structure_format=}")
logger.debug(f" {sqlite_path=}")
logger.debug(f" download_buffer_size={human_readable_size(download_buffer_size)}")
self.rclone_destination = rclone_destination
self.retention = retention
self.retention = parse_rclone_retention(retention)
self.rclone_args = rclone_args
self.file_structure_format = file_structure_format
@@ -262,8 +131,10 @@ class UnifiProtectBackup:
self._download_queue: asyncio.Queue = asyncio.Queue()
self._unsub: Callable[[], None]
self.detection_types = detection_types
self._has_ffprobe = False
self._sqlite_path = sqlite_path
self._db = None
self._download_buffer_size = download_buffer_size
async def start(self):
"""Bootstrap the backup process and kick off the main loop.
@@ -271,114 +142,91 @@ class UnifiProtectBackup:
You should run this to start the realtime backup of Unifi Protect clips as they are created
"""
logger.info("Starting...")
try:
logger.info("Starting...")
# Ensure `rclone` is installed and properly configured
logger.info("Checking rclone configuration...")
await self._check_rclone()
# Ensure `rclone` is installed and properly configured
logger.info("Checking rclone configuration...")
await self._check_rclone()
# Check if `ffprobe` is available
ffprobe = shutil.which('ffprobe')
if ffprobe is not None:
logger.debug(f"ffprobe found: {ffprobe}")
self._has_ffprobe = True
# Start the pyunifiprotect connection by calling `update`
logger.info("Connecting to Unifi Protect...")
await self._protect.update()
# Start the pyunifiprotect connection by calling `update`
logger.info("Connecting to Unifi Protect...")
await self._protect.update()
# Get a mapping of camera ids -> names
logger.info("Found cameras:")
for camera in self._protect.bootstrap.cameras.values():
logger.info(f" - {camera.id}: {camera.name}")
# Get a mapping of camera ids -> names
logger.info("Found cameras:")
for camera in self._protect.bootstrap.cameras.values():
logger.info(f" - {camera.id}: {camera.name}")
# Print timezone info for debugging
logger.debug(f'NVR TZ: {self._protect.bootstrap.nvr.timezone}')
logger.debug(f'Local TZ: {datetime.now(timezone.utc).astimezone().tzinfo}')
# Subscribe to the websocket
self._unsub = self._protect.subscribe_websocket(self._websocket_callback)
tasks = []
# Set up a "purge" task to run at midnight each day to delete old recordings and empty directories
logger.info("Setting up purge task...")
if not os.path.exists(self._sqlite_path):
logger.info("Database doesn't exist, creating a new one")
self._db = await create_database(self._sqlite_path)
else:
self._db = await aiosqlite.connect(self._sqlite_path)
@aiocron.crontab("0 0 * * *")
async def rclone_purge_old():
logger.info("Deleting old files...")
cmd = f'rclone delete -vv --min-age {self.retention} "{self.rclone_destination}"'
cmd += f' && rclone rmdirs -vv --leave-root "{self.rclone_destination}"'
proc = await asyncio.create_subprocess_shell(
cmd,
stdout=asyncio.subprocess.PIPE,
stderr=asyncio.subprocess.PIPE,
download_queue = asyncio.Queue()
upload_queue = VideoQueue(self._download_buffer_size)
# Enable foreign keys in the database
await self._db.execute("PRAGMA foreign_keys = ON;")
# Create downloader task
# This will download video files to its buffer
downloader = VideoDownloader(self._protect, download_queue, upload_queue)
tasks.append(asyncio.create_task(downloader.start()))
# Create upload task
# This will upload the videos in the downloader's buffer to the rclone remotes and log it in the database
uploader = VideoUploader(
self._protect,
upload_queue,
self.rclone_destination,
self.rclone_args,
self.file_structure_format,
self._db,
)
stdout, stderr = await proc.communicate()
if proc.returncode == 0:
logger.extra_debug(f"stdout:\n{stdout.decode()}")
logger.extra_debug(f"stderr:\n{stderr.decode()}")
logger.info("Successfully deleted old files")
else:
logger.warn("Failed to purge old files")
logger.warn(f"stdout:\n{stdout.decode()}")
logger.warn(f"stderr:\n{stderr.decode()}")
tasks.append(asyncio.create_task(uploader.start()))
# We need to catch websocket disconnect and trigger a reconnect.
@aiocron.crontab("* * * * *")
async def check_websocket_and_reconnect():
logger.extra_debug("Checking the status of the websocket...")
if self._protect.check_ws():
logger.extra_debug("Websocket is connected.")
else:
logger.warn("Lost connection to Unifi Protect.")
# Create event listener task
# This will connect to the unifi protect websocket and listen for events. When one is detected it will
# be added to the queue of events to download
event_listener = EventListener(download_queue, self._protect, self.detection_types, self.ignore_cameras)
tasks.append(asyncio.create_task(event_listener.start()))
# Unsubscribe, close the session.
self._unsub()
# Create purge task
# This will, every midnight, purge old backups from the rclone remotes and database
purge = Purge(self._db, self.retention, self.rclone_destination)
tasks.append(asyncio.create_task(purge.start()))
# Create missing event task
# This will check all the events within the retention period, if any have been missed and not backed up
# they will be added to the event queue
missing = MissingEventChecker(
self._protect,
self._db,
download_queue,
downloader,
uploader,
self.retention,
self.detection_types,
self.ignore_cameras,
)
tasks.append(asyncio.create_task(missing.start()))
logger.info("Starting...")
await asyncio.gather(*tasks)
except asyncio.CancelledError:
if self._protect is not None:
await self._protect.close_session()
while True:
logger.warn("Attempting reconnect...")
try:
# Start again from scratch. In principle if Unifi
# Protect has not been restarted we should just be able
# to call self._protect.update() to reconnect to the
# websocket. However, if the server has been restarted a
# call to self._protect.check_ws() returns true and some
# seconds later pyunifiprotect detects the websocket as
# disconnected again. Therefore, kill it all and try
# again!
replacement_protect = ProtectApiClient(
self.address,
self.port,
self.username,
self.password,
verify_ssl=self.verify_ssl,
subscribed_models={ModelType.EVENT},
)
# Start the pyunifiprotect connection by calling `update`
await replacement_protect.update()
if replacement_protect.check_ws():
self._protect = replacement_protect
self._unsub = self._protect.subscribe_websocket(self._websocket_callback)
break
else:
logger.warn("Unable to establish connection to Unifi Protect")
except Exception as e:
logger.warn("Unexpected exception occurred while trying to reconnect:")
logger.exception(e)
finally:
# If we get here we need to close the replacement session again
await replacement_protect.close_session()
# Back off for a little while
await asyncio.sleep(10)
logger.info("Re-established connection to Unifi Protect and to the websocket.")
# Launches the main loop
logger.info("Listening for events...")
await self._backup_events()
logger.info("Stopping...")
# Unsubscribes from the websocket
self._unsub()
if self._db is not None:
await self._db.close()
async def _check_rclone(self) -> None:
"""Check if rclone is installed and the specified remote is configured.
@@ -393,258 +241,17 @@ class UnifiProtectBackup:
raise RuntimeError("`rclone` is not installed on this system")
logger.debug(f"rclone found: {rclone}")
cmd = "rclone listremotes -vv"
proc = await asyncio.create_subprocess_shell(
cmd,
stdout=asyncio.subprocess.PIPE,
stderr=asyncio.subprocess.PIPE,
)
stdout, stderr = await proc.communicate()
logger.extra_debug(f"stdout:\n{stdout.decode()}") # type: ignore
logger.extra_debug(f"stderr:\n{stderr.decode()}") # type: ignore
if proc.returncode != 0:
raise SubprocessException(stdout.decode(), stderr.decode(), proc.returncode)
returncode, stdout, stderr = await run_command("rclone listremotes -vv")
if returncode != 0:
raise SubprocessException(stdout, stderr, returncode)
# Check if the destination is for a configured remote
for line in stdout.splitlines():
if self.rclone_destination.startswith(line.decode()):
if self.rclone_destination.startswith(line):
break
else:
remote = self.rclone_destination.split(":")[0]
raise ValueError(f"rclone does not have a remote called `{remote}`")
def _websocket_callback(self, msg: WSSubscriptionMessage) -> None:
"""Callback for "EVENT" websocket messages.
Filters the incoming events, and puts completed events onto the download queue
Args:
msg (Event): Incoming event data
"""
logger.websocket_data(msg) # type: ignore
# We are only interested in updates that end a motion/smart detection event
assert isinstance(msg.new_obj, Event)
if msg.action != WSAction.UPDATE:
return
if msg.new_obj.camera_id in self.ignore_cameras:
return
if msg.new_obj.end is None:
return
if msg.new_obj.type not in [EventType.MOTION, EventType.SMART_DETECT, EventType.RING]:
return
if msg.new_obj.type is EventType.MOTION and "motion" not in self.detection_types:
logger.extra_debug(f"Skipping unwanted motion detection event: {msg.new_obj.id}") # type: ignore
return
if msg.new_obj.type is EventType.RING and "ring" not in self.detection_types:
logger.extra_debug(f"Skipping unwanted ring event: {msg.new_obj.id}") # type: ignore
return
elif msg.new_obj.type is EventType.SMART_DETECT:
for event_smart_detection_type in msg.new_obj.smart_detect_types:
if event_smart_detection_type not in self.detection_types:
logger.extra_debug( # type: ignore
f"Skipping unwanted {event_smart_detection_type} detection event: {msg.new_obj.id}"
)
return
self._download_queue.put_nowait(msg.new_obj)
logger.debug(f"Adding event {msg.new_obj.id} to queue (Current queue={self._download_queue.qsize()})")
async def _backup_events(self) -> None:
"""Main loop for backing up events.
Waits for an event in the queue, then downloads the corresponding clip and uploads it using rclone.
If errors occur it will simply log the errors and wait for the next event. In a future release,
retries will be added.
"""
while True:
try:
event = await self._download_queue.get()
# Fix timezones since pyunifiprotect sets all timestamps to UTC. Instead localize them to
# the timezone of the unifi protect NVR.
event.start = event.start.replace(tzinfo=pytz.utc).astimezone(self._protect.bootstrap.nvr.timezone)
event.end = event.end.replace(tzinfo=pytz.utc).astimezone(self._protect.bootstrap.nvr.timezone)
logger.info(f"Backing up event: {event.id}")
logger.debug(f"Remaining Queue: {self._download_queue.qsize()}")
logger.debug(f" Camera: {await self._get_camera_name(event.camera_id)}")
if event.type == EventType.SMART_DETECT:
logger.debug(f" Type: {event.type} ({', '.join(event.smart_detect_types)})")
else:
logger.debug(f" Type: {event.type}")
logger.debug(f" Start: {event.start.strftime('%Y-%m-%dT%H-%M-%S')} ({event.start.timestamp()})")
logger.debug(f" End: {event.end.strftime('%Y-%m-%dT%H-%M-%S')} ({event.end.timestamp()})")
duration = (event.end - event.start).total_seconds()
logger.debug(f" Duration: {duration}")
# Unifi protect does not return full video clips if the clip is requested too soon.
# There are two issues at play here:
# - Protect will only cut a clip on a keyframe, which happens every 5s
# - Protect's pipeline needs a finite amount of time to make a clip available
# So we will wait 1.5x the keyframe interval to ensure that there is always ample video
# stored and Protect can return a full clip (which should be at least the length requested,
# but often longer)
time_since_event_ended = datetime.utcnow().replace(tzinfo=timezone.utc) - event.end
sleep_time = (timedelta(seconds=5 * 1.5) - time_since_event_ended).total_seconds()
if sleep_time > 0:
logger.debug(f" Sleeping ({sleep_time}s) to ensure clip is ready to download...")
await asyncio.sleep(sleep_time)
# Download video
logger.debug(" Downloading video...")
for x in range(5):
try:
video = await self._protect.get_camera_video(event.camera_id, event.start, event.end)
assert isinstance(video, bytes)
break
except (AssertionError, ClientPayloadError, TimeoutError) as e:
logger.warn(f" Failed download attempt {x+1}, retying in 1s")
logger.exception(e)
await asyncio.sleep(1)
else:
logger.warn(f"Download failed after 5 attempts, abandoning event {event.id}:")
continue
destination = await self.generate_file_path(event)
# Get the actual length of the downloaded video using ffprobe
if self._has_ffprobe:
try:
downloaded_duration = await self._get_video_length(video)
msg = (
f" Downloaded video length: {downloaded_duration:.3f}s"
f"({downloaded_duration - duration:+.3f}s)"
)
if downloaded_duration < duration:
logger.warning(msg)
else:
logger.debug(msg)
except SubprocessException as e:
logger.warn(" `ffprobe` failed")
logger.exception(e)
# Upload video
logger.debug(" Uploading video via rclone...")
logger.debug(f" To: {destination}")
logger.debug(f" Size: {human_readable_size(len(video))}")
for x in range(5):
try:
await self._upload_video(video, destination, self.rclone_args)
break
except SubprocessException as e:
logger.warn(f" Failed upload attempt {x+1}, retying in 1s")
logger.exception(e)
await asyncio.sleep(1)
else:
logger.warn(f"Upload failed after 5 attempts, abandoning event {event.id}:")
continue
logger.info("Backed up successfully!")
except Exception as e:
logger.warn(f"Unexpected exception occurred, abandoning event {event.id}:")
logger.exception(e)
async def _upload_video(self, video: bytes, destination: pathlib.Path, rclone_args: str):
"""Upload video using rclone.
In order to avoid writing to disk, the video file data is piped directly
to the rclone process and uploaded using the `rcat` function of rclone.
Args:
video (bytes): The data to be written to the file
destination (pathlib.Path): Where rclone should write the file
rclone_args (str): Optional extra arguments to pass to `rclone`
Raises:
RuntimeError: If rclone returns a non-zero exit code
"""
cmd = f'rclone rcat -vv {rclone_args} "{destination}"'
proc = await asyncio.create_subprocess_shell(
cmd,
stdin=asyncio.subprocess.PIPE,
stdout=asyncio.subprocess.PIPE,
stderr=asyncio.subprocess.PIPE,
)
stdout, stderr = await proc.communicate(video)
if proc.returncode == 0:
logger.extra_debug(f"stdout:\n{stdout.decode()}") # type: ignore
logger.extra_debug(f"stderr:\n{stderr.decode()}") # type: ignore
else:
raise SubprocessException(stdout.decode(), stderr.decode(), proc.returncode)
async def _get_video_length(self, video: bytes) -> float:
cmd = 'ffprobe -v quiet -show_streams -select_streams v:0 -of json -'
proc = await asyncio.create_subprocess_shell(
cmd,
stdin=asyncio.subprocess.PIPE,
stdout=asyncio.subprocess.PIPE,
stderr=asyncio.subprocess.PIPE,
)
stdout, stderr = await proc.communicate(video)
if proc.returncode == 0:
logger.extra_debug(f"stdout:\n{stdout.decode()}") # type: ignore
logger.extra_debug(f"stderr:\n{stderr.decode()}") # type: ignore
json_data = json.loads(stdout.decode())
return float(json_data['streams'][0]['duration'])
else:
raise SubprocessException(stdout.decode(), stderr.decode(), proc.returncode)
async def generate_file_path(self, event: Event) -> pathlib.Path:
"""Generates the rclone destination path for the provided event.
Generates the rclone destination path for the given event based upon the format string
in `self.file_structure_format`.
Provides the following fields to the format string:
event: The `Event` object as per
https://github.com/briis/pyunifiprotect/blob/master/pyunifiprotect/data/nvr.py
duration_seconds: The duration of the event in seconds
detection_type: A nicely formatted list of the event detection type and the smart detection types (if any)
camera_name: The name of the camera that generated this event
Args:
event: The event for which to create an output path
Returns:
pathlib.Path: The rclone path the event should be backed up to
"""
assert isinstance(event.camera_id, str)
assert isinstance(event.start, datetime)
assert isinstance(event.end, datetime)
format_context = {
"event": event,
"duration_seconds": (event.end - event.start).total_seconds(),
"detection_type": f"{event.type} ({' '.join(event.smart_detect_types)})"
if event.smart_detect_types
else f"{event.type}",
"camera_name": await self._get_camera_name(event.camera_id),
}
file_path = self.file_structure_format.format(**format_context)
file_path = re.sub(r'[^\w\-_\.\(\)/ ]', '', file_path) # Sanitize any invalid chars
return pathlib.Path(f"{self.rclone_destination}/{file_path}")
async def _get_camera_name(self, id: str):
try:
return self._protect.bootstrap.cameras[id].name
except KeyError:
# Refresh cameras
logger.debug(f"Unknown camera id: '{id}', checking API")
try:
await self._protect.update(force=True)
except NvrError:
logger.debug(f"Unknown camera id: '{id}'")
raise
name = self._protect.bootstrap.cameras[id].name
logger.debug(f"Found camera - {id}: {name}")
return name
# Ensure the base directory exists
await run_command(f"rclone mkdir -vv {self.rclone_destination}")

View File

@@ -0,0 +1,146 @@
import asyncio
import logging
import pathlib
import re
from datetime import datetime
import aiosqlite
from pyunifiprotect.data.nvr import Event
from pyunifiprotect import ProtectApiClient
from unifi_protect_backup.utils import get_camera_name, VideoQueue, run_command, setup_event_logger, human_readable_size
logger = logging.getLogger(__name__)
setup_event_logger(logger)
class VideoUploader:
"""Uploads videos from the video_queue to the provided rclone destination
Keeps a log of what it has uploaded in `db`
"""
def __init__(
self,
protect: ProtectApiClient,
upload_queue: VideoQueue,
rclone_destination: str,
rclone_args: str,
file_structure_format: str,
db: aiosqlite.Connection,
):
self._protect: ProtectApiClient = protect
self.upload_queue: VideoQueue = upload_queue
self._rclone_destination: str = rclone_destination
self._rclone_args: str = rclone_args
self._file_structure_format: str = file_structure_format
self._db: aiosqlite.Connection = db
self.logger = logging.LoggerAdapter(logger, {'event': ''})
self.current_event = None
async def start(self):
"""Main loop
Runs forever: waits for video data in the upload queue, uploads it using rclone, and finally records it in the database
"""
self.logger.info("Starting Uploader")
while True:
try:
event, video = await self.upload_queue.get()
self.current_event = event
self.logger = logging.LoggerAdapter(logger, {'event': f' [{event.id}]'})
self.logger.info(f"Uploading event: {event.id}")
self.logger.debug(
f" Remaining Upload Queue: {self.upload_queue.qsize_files()} ({human_readable_size(self.upload_queue.qsize())})"
)
destination = await self._generate_file_path(event)
self.logger.debug(f" Destination: {destination}")
await self._upload_video(video, destination, self._rclone_args)
await self._update_database(event, destination)
self.logger.debug(f"Uploaded")
self.current_event = None
except Exception as e:
self.logger.warn(f"Unexpected exception occurred, abandoning event {event.id}:")
self.logger.exception(e)
async def _upload_video(self, video: bytes, destination: pathlib.Path, rclone_args: str):
"""Upload video using rclone.
In order to avoid writing to disk, the video file data is piped directly
to the rclone process and uploaded using the `rcat` function of rclone.
Args:
video (bytes): The data to be written to the file
destination (pathlib.Path): Where rclone should write the file
rclone_args (str): Optional extra arguments to pass to `rclone`
Raises:
RuntimeError: If rclone returns a non-zero exit code
"""
returncode, stdout, stderr = await run_command(f'rclone rcat -vv {rclone_args} "{destination}"', video)
if returncode != 0:
self.logger.warn(f" Failed to upload file: '{destination}'")
async def _update_database(self, event: Event, destination: str):
"""
Add the backed up event to the database along with where it was backed up to
"""
await self._db.execute(
f"""INSERT INTO events VALUES
('{event.id}', '{event.type}', '{event.camera_id}', '{event.start.timestamp()}', '{event.end.timestamp()}')
"""
)
remote, file_path = str(destination).split(":")
await self._db.execute(
f"""INSERT INTO backups VALUES
('{event.id}', '{remote}', '{file_path}')
"""
)
await self._db.commit()
async def _generate_file_path(self, event: Event) -> pathlib.Path:
"""Generates the rclone destination path for the provided event.
Generates the rclone destination path for the given event based upon the format string
in `self.file_structure_format`.
Provides the following fields to the format string:
event: The `Event` object as per
https://github.com/briis/pyunifiprotect/blob/master/pyunifiprotect/data/nvr.py
duration_seconds: The duration of the event in seconds
detection_type: A nicely formatted list of the event detection type and the smart detection types (if any)
camera_name: The name of the camera that generated this event
Args:
event: The event for which to create an output path
Returns:
pathlib.Path: The rclone path the event should be backed up to
"""
assert isinstance(event.camera_id, str)
assert isinstance(event.start, datetime)
assert isinstance(event.end, datetime)
format_context = {
"event": event,
"duration_seconds": (event.end - event.start).total_seconds(),
"detection_type": f"{event.type} ({' '.join(event.smart_detect_types)})"
if event.smart_detect_types
else f"{event.type}",
"camera_name": await get_camera_name(self._protect, event.camera_id),
}
file_path = self._file_structure_format.format(**format_context)
file_path = re.sub(r'[^\w\-_\.\(\)/ ]', '', file_path) # Sanitize any invalid chars
return pathlib.Path(f"{self._rclone_destination}/{file_path}")
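# Illustrative sketch of how `file_structure_format` drives the destination path. The format
# string and values below are hypothetical examples, not the project's defaults; the real code
# passes the full Event object, so fields like {event.start} are also available.
import re

file_structure_format = "{camera_name}/{detection_type}/{event_id}.mp4"  # hypothetical
format_context = {"camera_name": "Front Door", "detection_type": "motion", "event_id": "abc123"}
file_path = file_structure_format.format(**format_context)
file_path = re.sub(r'[^\w\-_\.\(\)/ ]', '', file_path)  # sanitize, exactly as above
print(file_path)  # Front Door/motion/abc123.mp4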

View File

@@ -0,0 +1,387 @@
import logging
import re
import asyncio
from typing import Optional
from dateutil.relativedelta import relativedelta
from pyunifiprotect import ProtectApiClient
logger = logging.getLogger(__name__)
def add_logging_level(levelName: str, levelNum: int, methodName: Optional[str] = None) -> None:
"""Comprehensively adds a new logging level to the `logging` module and the currently configured logging class.
`levelName` becomes an attribute of the `logging` module with the value
`levelNum`. `methodName` becomes a convenience method for both `logging`
itself and the class returned by `logging.getLoggerClass()` (usually just
`logging.Logger`).
To avoid accidental clobbering of existing attributes, this method will
raise an `AttributeError` if the level name is already an attribute of the
`logging` module or if the method name is already present
Credit: https://stackoverflow.com/a/35804945
Args:
levelName (str): The name of the new logging level (in all caps).
levelNum (int): The priority value of the logging level, lower=more verbose.
methodName (str): The name of the method used to log using this.
If `methodName` is not specified, `levelName.lower()` is used.
Example:
::
>>> add_logging_level('TRACE', logging.DEBUG - 5)
>>> logging.getLogger(__name__).setLevel("TRACE")
>>> logging.getLogger(__name__).trace('that worked')
>>> logging.trace('so did this')
>>> logging.TRACE
5
"""
if not methodName:
methodName = levelName.lower()
if hasattr(logging, levelName):
raise AttributeError('{} already defined in logging module'.format(levelName))
if hasattr(logging, methodName):
raise AttributeError('{} already defined in logging module'.format(methodName))
if hasattr(logging.getLoggerClass(), methodName):
raise AttributeError('{} already defined in logger class'.format(methodName))
# This method was inspired by the answers to Stack Overflow post
# http://stackoverflow.com/q/2183233/2988730, especially
# http://stackoverflow.com/a/13638084/2988730
def logForLevel(self, message, *args, **kwargs):
if self.isEnabledFor(levelNum):
self._log(levelNum, message, args, **kwargs)
def logToRoot(message, *args, **kwargs):
logging.log(levelNum, message, *args, **kwargs)
def adapterLog(self, msg, *args, **kwargs):
"""
Delegate a log call at this custom level to the underlying logger.
"""
self.log(levelNum, msg, *args, **kwargs)
logging.addLevelName(levelNum, levelName)
setattr(logging, levelName, levelNum)
setattr(logging.getLoggerClass(), methodName, logForLevel)
setattr(logging, methodName, logToRoot)
setattr(logging.LoggerAdapter, methodName, adapterLog)
color_logging = False
def create_logging_handler(format):
date_format = "%Y-%m-%d %H:%M:%S"
style = '{'
sh = logging.StreamHandler()
formatter = logging.Formatter(format, date_format, style)
sh.setFormatter(formatter)
def decorate_emit(fn):
# add methods we need to the class
def new(*args):
levelno = args[0].levelno
if levelno >= logging.CRITICAL:
color = '\x1b[31;1m' # RED
elif levelno >= logging.ERROR:
color = '\x1b[31;1m' # RED
elif levelno >= logging.WARNING:
color = '\x1b[33;1m' # YELLOW
elif levelno >= logging.INFO:
color = '\x1b[32;1m' # GREEN
elif levelno >= logging.DEBUG:
color = '\x1b[36;1m' # CYAN
elif levelno >= logging.EXTRA_DEBUG:
color = '\x1b[35;1m' # MAGENTA
else:
color = '\x1b[0m'
global color_logging
if color_logging:
args[0].levelname = f"{color}{args[0].levelname:^11s}\x1b[0m"
else:
args[0].levelname = f"{args[0].levelname:^11s}"
return fn(*args)
return new
sh.emit = decorate_emit(sh.emit)
return sh
def setup_logging(verbosity: int, color_logging: bool = False) -> None:
"""Configures loggers to provided the desired level of verbosity.
Verbosity 0: Only log info messages created by `unifi-protect-backup`, and all warnings
verbosity 1: Only log info & debug messages created by `unifi-protect-backup`, and all warnings
verbosity 2: Log info & debug messages created by `unifi-protect-backup`, command output, and
all warnings
Verbosity 3: Log debug messages created by `unifi-protect-backup`, command output, all info
messages, and all warnings
Verbosity 4: Log debug messages created by `unifi-protect-backup` command output, all info
messages, all warnings, and websocket data
Verbosity 5: Log websocket data, command output, all debug messages, all info messages and all
warnings
Args:
verbosity (int): The desired level of verbosity
color_logging (bool): If colors should be used in the log (default=False)
"""
globals()['color_logging'] = color_logging
add_logging_level(
'EXTRA_DEBUG',
logging.DEBUG - 1,
)
add_logging_level(
'WEBSOCKET_DATA',
logging.DEBUG - 2,
)
format = "{asctime} [{levelname:^11s}] {name:<42} : {message}"
sh = create_logging_handler(format)
logger = logging.getLogger("unifi_protect_backup")
logger.addHandler(sh)
logger.propagate = False
if verbosity == 0:
logging.basicConfig(level=logging.WARN, handlers=[sh])
logger.setLevel(logging.INFO)
elif verbosity == 1:
logging.basicConfig(level=logging.WARN, handlers=[sh])
logger.setLevel(logging.DEBUG)
elif verbosity == 2:
logging.basicConfig(level=logging.WARN, handlers=[sh])
logger.setLevel(logging.EXTRA_DEBUG) # type: ignore
elif verbosity == 3:
logging.basicConfig(level=logging.INFO, handlers=[sh])
logger.setLevel(logging.EXTRA_DEBUG) # type: ignore
elif verbosity == 4:
logging.basicConfig(level=logging.INFO, handlers=[sh])
logger.setLevel(logging.WEBSOCKET_DATA) # type: ignore
elif verbosity >= 5:
logging.basicConfig(level=logging.DEBUG, handlers=[sh])
logger.setLevel(logging.WEBSOCKET_DATA) # type: ignore
def setup_event_logger(logger):
format = "{asctime} [{levelname:^11s}] {name:<42} :{event} {message}"
sh = create_logging_handler(format)
logger.addHandler(sh)
logger.propagate = False
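# Hedged sketch of how the uploader/downloader use the per-event `{event}` format field above:
# a LoggerAdapter carries the current event ID as extra context, which the formatter splices
# into every line. "someEventId" is a made-up ID; setup_logging() would normally run first.
import logging

uploader_logger = logging.getLogger("unifi_protect_backup.uploader")
uploader_logger.setLevel(logging.INFO)
setup_event_logger(uploader_logger)
event_logger = logging.LoggerAdapter(uploader_logger, {'event': ' [someEventId]'})
event_logger.info("Uploading event: someEventId")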
_suffixes = ["B", "KiB", "MiB", "GiB", "TiB", "PiB", "EiB", "ZiB", "YiB"]
def human_readable_size(num: float):
"""Turns a number into a human readable number with ISO/IEC 80000 binary prefixes.
Based on: https://stackoverflow.com/a/1094933
Args:
num (int): The number to be converted into human readable format
"""
for unit in _suffixes:
if abs(num) < 1024.0:
return f"{num:3.1f}{unit}"
num /= 1024.0
raise ValueError("`num` too large, ran out of prefixes")
def human_readable_to_float(num: str):
"""Converts an ISO/IEC 80000 binary-prefixed string (e.g. '100MiB') into a float number of bytes."""
pattern = r"([\d.]+)(" + "|".join(_suffixes) + ")"
result = re.match(pattern, num)
if result is None:
raise ValueError(f"Value '{num}' is not a valid ISO/IEC 80000 binary value")
value = float(result[1])
suffix = result[2]
multiplier = 1024 ** _suffixes.index(suffix)
return value * multiplier
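# Quick usage sketch for the two helpers above, round-tripping a value (numbers are examples):
assert human_readable_size(1536) == "1.5KiB"
assert human_readable_to_float("1.5KiB") == 1536.0
assert human_readable_to_float("100B") == 100.0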
async def get_camera_name(protect: ProtectApiClient, id: str):
"""
Returns the name for the camera with the given ID
If the camera ID is not known, it tries refreshing the cached data
"""
try:
return protect.bootstrap.cameras[id].name
except KeyError:
# Refresh cameras
logger.debug(f"Unknown camera id: '{id}', checking API")
await protect.update(force=True)
try:
name = protect.bootstrap.cameras[id].name
except KeyError:
logger.debug(f"Unknown camera id: '{id}'")
raise
logger.debug(f"Found camera - {id}: {name}")
return name
class SubprocessException(Exception):
def __init__(self, stdout, stderr, returncode):
"""Exception class for when rclone does not exit with `0`.
Args:
stdout (str): What rclone output to stdout
stderr (str): What rclone output to stderr
returncode (int): The return code of the rclone process
"""
super().__init__()
self.stdout: str = stdout
self.stderr: str = stderr
self.returncode: int = returncode
def __str__(self):
"""Turns exception into a human readable form."""
return f"Return Code: {self.returncode}\nStdout:\n{self.stdout}\nStderr:\n{self.stderr}"
def parse_rclone_retention(retention: str) -> relativedelta:
"""
Parses the rclone `retention` parameter into a relativedelta which can then be used
to calculate datetimes
"""
matches = {k: int(v) for v, k in re.findall(r"([\d]+)(ms|s|m|h|d|w|M|y)", retention)}
return relativedelta(
microseconds=matches.get("ms", 0) * 1000,
seconds=matches.get("s", 0),
minutes=matches.get("m", 0),
hours=matches.get("h", 0),
days=matches.get("d", 0),
weeks=matches.get("w", 0),
months=matches.get("M", 0),
years=matches.get("Y", 0),
)
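# Usage sketch: "7d" (rclone --max-age syntax) becomes a relativedelta of 7 days, which the
# purge task subtracts from "now" to find the retention cutoff.
from datetime import datetime

retention = parse_rclone_retention("7d")
cutoff = datetime.now() - retention
print(f"anything ending before {cutoff} is outside the retention window")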
async def run_command(cmd: str, data=None):
"""
Runs the given command returning the exit code, stdout and stderr
"""
proc = await asyncio.create_subprocess_shell(
cmd,
stdin=asyncio.subprocess.PIPE,
stdout=asyncio.subprocess.PIPE,
stderr=asyncio.subprocess.PIPE,
)
stdout, stderr = await proc.communicate(data)
stdout = stdout.decode()
stdout_indented = '\t' + stdout.replace('\n', '\n\t').strip()
stderr = stderr.decode()
stderr_indented = '\t' + stderr.replace('\n', '\n\t').strip()
if proc.returncode != 0:
logger.warn(f"Failed to run: '{cmd}")
logger.warn(f"stdout:\n{stdout_indented}")
logger.warn(f"stderr:\n{stderr_indented}")
else:
logger.extra_debug(f"stdout:\n{stdout_indented}")
logger.extra_debug(f"stderr:\n{stderr_indented}")
return proc.returncode, stdout, stderr
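# Usage sketch (assumes a running event loop, as with all the callers above); `rclone version`
# is just a harmless command to demonstrate the return values:
async def example():
    returncode, stdout, stderr = await run_command("rclone version")
    if returncode != 0:
        raise SubprocessException(stdout, stderr, returncode)
    return stdout.splitlines()[0]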
class VideoQueue(asyncio.Queue):
"""A queue that limits the number of bytes it can store rather than discrete entries"""
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self._bytes_sum = 0
def qsize(self):
"""Number of items in the queue."""
return self._bytes_sum
def qsize_files(self):
"""Number of items in the queue."""
return super().qsize()
def _get(self):
data = self._queue.popleft()
self._bytes_sum -= len(data[1])
return data
def _put(self, item):
# `item` is an `(event, video_bytes)` tuple; only the video's size counts towards the limit
self._queue.append(item)
self._bytes_sum += len(item[1])
def full(self, item=None):
"""Return True if there are maxsize bytes in the queue.
optionally if `item` is provided, it will return False if there is enough space to
fit it, otherwise it will return True
Note: if the Queue was initialized with maxsize=0 (the default),
then full() is never True.
"""
if self._maxsize <= 0:
return False
else:
if item is None:
return self.qsize() >= self._maxsize
else:
return self.qsize() + len(item[1]) > self._maxsize  # strict '>' so an exact-fit item can still be queued
async def put(self, item):
"""Put an item into the queue.
Put an item into the queue. If the queue is full, wait until a free
slot is available before adding item.
"""
if len(item[1]) > self._maxsize:
raise ValueError(
f"Item is larger ({human_readable_size(len(item[1]))}) than the size of the buffer ({human_readable_size(self._maxsize)})"
)
while self.full(item):
putter = self._loop.create_future()
self._putters.append(putter)
try:
await putter
except:
putter.cancel() # Just in case putter is not done yet.
try:
# Clean self._putters from canceled putters.
self._putters.remove(putter)
except ValueError:
# The putter could be removed from self._putters by a
# previous get_nowait call.
pass
if not self.full(item) and not putter.cancelled():
# We were woken up by get_nowait(), but can't take
# the call. Wake up the next in line.
self._wakeup_next(self._putters)
raise
return self.put_nowait(item)
def put_nowait(self, item):
"""Put an item into the queue without blocking.
If there is not enough free buffer space, raise QueueFull.
"""
if self.full(item):
raise asyncio.QueueFull
self._put(item)
self._unfinished_tasks += 1
self._finished.clear()
self._wakeup_next(self._getters)
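# Usage sketch: the queue is bounded by total video bytes, not entry count. Plain strings stand
# in for Event objects here; the real queue holds (Event, video_bytes) tuples.
async def demo_queue():
    q = VideoQueue(maxsize=10 * 2**20)             # 10 MiB buffer
    await q.put(("event-1", b"\x00" * 4 * 2**20))  # 4 MiB clip
    await q.put(("event-2", b"\x00" * 4 * 2**20))  # fits: 8 MiB of 10 MiB used
    print(q.qsize_files(), human_readable_size(q.qsize()))  # 2 8.0MiB

# asyncio.run(demo_queue())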