Installation
Viseron runs exclusively in Docker.
First, choose the appropriate Docker image for your machine.
Builds are published to Docker Hub.
Have a look at the supported architectures below.
Supported architectures
Viseron's images support multiple architectures, such as amd64, aarch64, and armhf.
Pulling roflcoopter/viseron:latest should automatically pull the correct image for you.
The exception is when you need a specific variant, e.g. the CUDA image.
In that case you need to specify your desired image explicitly.
The images available are:
| Image | Architecture | Description |
|---|---|---|
| roflcoopter/viseron | multiarch | Multiarch image |
| roflcoopter/aarch64-viseron | aarch64 | Generic aarch64 image, with RPi4 hardware accelerated decoding/encoding |
| roflcoopter/amd64-viseron | amd64 | Generic image |
| roflcoopter/amd64-cuda-viseron | amd64 | Image with CUDA support |
| roflcoopter/rpi3-viseron | armhf | Built specifically for the RPi3 with hardware accelerated decoding/encoding |
| roflcoopter/jetson-nano-viseron | aarch64 | Built specifically for the Jetson Nano, with GStreamer and FFmpeg hardware accelerated decoding and CUDA support |
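If you want to pin a specific variant instead of relying on the multiarch tag, a small sketch like the following can pick an image name from the host architecture. The `uname -m` values and their mapping to images are assumptions based on the table above; adjust them for your hardware.

```shell
# Sketch: map the host architecture to one of the image names above.
# The uname(1) values here are assumptions; adjust for your hardware.
ARCH=$(uname -m)
case "$ARCH" in
  x86_64)  IMAGE="roflcoopter/amd64-viseron:latest" ;;
  aarch64) IMAGE="roflcoopter/aarch64-viseron:latest" ;;
  armv7l)  IMAGE="roflcoopter/rpi3-viseron:latest" ;;
  *)       IMAGE="roflcoopter/viseron:latest" ;;   # fall back to multiarch
esac
echo "Selected image: $IMAGE"
```

You would then `docker pull "$IMAGE"` instead of the multiarch tag.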
Running Viseron
Below are a few examples of how to run Viseron.
Both docker and docker-compose examples are given.
Replace the values between the curly brackets {} with values that match your setup.
64-bit Linux machine
- Docker
- Docker-Compose
docker run --rm \
-v {segments path}:/segments \
-v {snapshots path}:/snapshots \
-v {thumbnails path}:/thumbnails \
-v {event clips path}:/event_clips \
-v {timelapse path}:/timelapse \
-v {config path}:/config \
-v /etc/localtime:/etc/localtime:ro \
-p 8888:8888 \
--name viseron \
--shm-size=1024mb \
roflcoopter/viseron:latest
services:
viseron:
image: roflcoopter/viseron:latest
container_name: viseron
shm_size: "1024mb"
volumes:
- {segments path}:/segments
- {snapshots path}:/snapshots
- {thumbnails path}:/thumbnails
- {event clips path}:/event_clips
- {timelapse path}:/timelapse
- {config path}:/config
- /etc/localtime:/etc/localtime:ro
ports:
- 8888:8888
64-bit Linux machine with VAAPI (Intel NUC for example)
- Docker
- Docker-Compose
docker run --rm \
-v {segments path}:/segments \
-v {snapshots path}:/snapshots \
-v {thumbnails path}:/thumbnails \
-v {event clips path}:/event_clips \
-v {timelapse path}:/timelapse \
-v {config path}:/config \
-v /etc/localtime:/etc/localtime:ro \
-p 8888:8888 \
--name viseron \
--shm-size=1024mb \
--device /dev/dri \
roflcoopter/viseron:latest
services:
viseron:
image: roflcoopter/viseron:latest
container_name: viseron
shm_size: "1024mb"
volumes:
- {segments path}:/segments
- {snapshots path}:/snapshots
- {thumbnails path}:/thumbnails
- {event clips path}:/event_clips
- {timelapse path}:/timelapse
- {config path}:/config
- /etc/localtime:/etc/localtime:ro
ports:
- 8888:8888
devices:
- /dev/dri
64-bit Linux machine with NVIDIA GPU
- Docker
- Docker-Compose
docker run --rm \
-v {segments path}:/segments \
-v {snapshots path}:/snapshots \
-v {thumbnails path}:/thumbnails \
-v {event clips path}:/event_clips \
-v {timelapse path}:/timelapse \
-v {config path}:/config \
-v /etc/localtime:/etc/localtime:ro \
-p 8888:8888 \
--name viseron \
--shm-size=1024mb \
--runtime=nvidia \
roflcoopter/amd64-cuda-viseron:latest
services:
viseron:
image: roflcoopter/amd64-cuda-viseron:latest
container_name: viseron
shm_size: "1024mb"
volumes:
- {segments path}:/segments
- {snapshots path}:/snapshots
- {thumbnails path}:/thumbnails
- {event clips path}:/event_clips
- {timelapse path}:/timelapse
- {config path}:/config
- /etc/localtime:/etc/localtime:ro
ports:
- 8888:8888
runtime: nvidia
Make sure the NVIDIA Container Toolkit is installed on the host.
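A quick sanity check (an informal sketch, not an official verification step) is to ask Docker whether an nvidia runtime is registered; it prints a fallback message when Docker is unavailable or the toolkit is not set up:

```shell
# Look for an "nvidia" runtime in `docker info`; falls back to a message
# if docker is unavailable or the runtime is not registered.
docker info 2>/dev/null | grep -i nvidia || echo "nvidia runtime not found"
```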
On a Jetson Nano
- Docker
- Docker-Compose
docker run --rm \
-v {segments path}:/segments \
-v {snapshots path}:/snapshots \
-v {thumbnails path}:/thumbnails \
-v {event clips path}:/event_clips \
-v {timelapse path}:/timelapse \
-v {config path}:/config \
-v /etc/localtime:/etc/localtime:ro \
-p 8888:8888 \
--name viseron \
--shm-size=1024mb \
--runtime=nvidia \
--privileged \
roflcoopter/jetson-nano-viseron:latest
You must run with --privileged so that the container gets access to all the devices needed for hardware acceleration.
You can probably get around this by manually mounting all the needed devices, but this is not something I have looked into.
services:
viseron:
image: roflcoopter/jetson-nano-viseron:latest
container_name: viseron
shm_size: "1024mb"
volumes:
- {segments path}:/segments
- {snapshots path}:/snapshots
- {thumbnails path}:/thumbnails
- {event clips path}:/event_clips
- {timelapse path}:/timelapse
- {config path}:/config
- /etc/localtime:/etc/localtime:ro
ports:
- 8888:8888
runtime: nvidia
privileged: true
You must run with privileged: true so that the container gets access to all the devices needed for hardware acceleration.
You can probably get around this by manually mounting all the needed devices, but this is not something I have looked into.
On a Raspberry Pi 4
- Docker
- Docker-Compose
docker run --rm \
--privileged \
-v {segments path}:/segments \
-v {snapshots path}:/snapshots \
-v {thumbnails path}:/thumbnails \
-v {event clips path}:/event_clips \
-v {timelapse path}:/timelapse \
-v {config path}:/config \
-v /etc/localtime:/etc/localtime:ro \
-v /dev/bus/usb:/dev/bus/usb \
-v /opt/vc/lib:/opt/vc/lib \
-p 8888:8888 \
--name viseron \
--shm-size=1024mb \
--device=/dev/video10:/dev/video10 \
--device=/dev/video11:/dev/video11 \
--device=/dev/video12:/dev/video12 \
--device /dev/bus/usb:/dev/bus/usb \
roflcoopter/viseron:latest
services:
viseron:
image: roflcoopter/viseron:latest
container_name: viseron
shm_size: "1024mb"
volumes:
- {segments path}:/segments
- {snapshots path}:/snapshots
- {thumbnails path}:/thumbnails
- {event clips path}:/event_clips
- {timelapse path}:/timelapse
- {config path}:/config
- /etc/localtime:/etc/localtime:ro
devices:
- /dev/video10:/dev/video10
- /dev/video11:/dev/video11
- /dev/video12:/dev/video12
- /dev/bus/usb:/dev/bus/usb
ports:
- 8888:8888
privileged: true
Viseron is quite RAM-intensive, mostly because of the object detection.
I do not recommend using an RPi unless you have a Google Coral EdgeTPU: the CPU is not fast enough, and you might run out of memory.
Configure a substream if you plan on running Viseron on an RPi.
On a Raspberry Pi 3b+
- Docker
- Docker-Compose
docker run --rm \
--privileged \
-v {segments path}:/segments \
-v {snapshots path}:/snapshots \
-v {thumbnails path}:/thumbnails \
-v {event clips path}:/event_clips \
-v {timelapse path}:/timelapse \
-v {config path}:/config \
-v /etc/localtime:/etc/localtime:ro \
-v /opt/vc/lib:/opt/vc/lib \
-p 8888:8888 \
--name viseron \
--shm-size=1024mb \
--device /dev/vchiq:/dev/vchiq \
--device /dev/vcsm:/dev/vcsm \
--device /dev/bus/usb:/dev/bus/usb \
roflcoopter/viseron:latest
services:
viseron:
image: roflcoopter/viseron:latest
container_name: viseron
shm_size: "1024mb"
volumes:
- {segments path}:/segments
- {snapshots path}:/snapshots
- {thumbnails path}:/thumbnails
- {event clips path}:/event_clips
- {timelapse path}:/timelapse
- {config path}:/config
- /etc/localtime:/etc/localtime:ro
- /opt/vc/lib:/opt/vc/lib
devices:
- /dev/vchiq:/dev/vchiq
- /dev/vcsm:/dev/vcsm
- /dev/bus/usb:/dev/bus/usb
ports:
- 8888:8888
privileged: true
Viseron is quite RAM-intensive, mostly because of the object detection.
I do not recommend using an RPi unless you have a Google Coral EdgeTPU: the CPU is not fast enough, and you might run out of memory.
To make use of hardware accelerated decoding/encoding, you might have to increase the allocated GPU memory.
To do this, edit /boot/config.txt, set gpu_mem=256, and reboot.
Configure a substream if you plan on running Viseron on an RPi.
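The gpu_mem change above can be sketched as follows. This runs against a temporary copy, since editing the real /boot/config.txt requires root on the Pi; the dtparam line is just placeholder content.

```shell
# Sketch: set gpu_mem=256 in a config.txt-style file (here a temp copy;
# on a real RPi, edit /boot/config.txt as root and reboot afterwards).
CONFIG=$(mktemp)                        # stand-in for /boot/config.txt
printf 'dtparam=audio=on\n' > "$CONFIG" # placeholder existing content
if grep -q '^gpu_mem=' "$CONFIG"; then
  sed -i 's/^gpu_mem=.*/gpu_mem=256/' "$CONFIG"   # update existing value
else
  echo 'gpu_mem=256' >> "$CONFIG"                 # or append a new line
fi
grep '^gpu_mem=' "$CONFIG"              # → gpu_mem=256
```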
Viseron will start up immediately and serve the Web UI on port 8888.
Please proceed to the next chapter on how to configure Viseron.
- /config - Where the configuration file, database, etc. are stored
- /segments - Where the recordings (video segments) are stored
- /snapshots - Where the snapshots from object detection, motion detection, etc. are stored
- /thumbnails - Where the thumbnails for recordings triggered by trigger_event_recording are stored
- /event_clips - Where the event clips created by create_event_clip are stored
VAAPI hardware acceleration support is built into every amd64 container.
To utilize it you need to add --device /dev/dri to your docker command.
EdgeTPU support is also included in all containers.
To use it, add -v /dev/bus/usb:/dev/bus/usb --privileged to your docker command.
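Before adding these flags, it can help to confirm that the host actually exposes the devices. This is just an informational sketch that checks whether the VAAPI and USB device paths exist:

```shell
# Check whether the device paths used by VAAPI (--device /dev/dri) and
# the EdgeTPU (-v /dev/bus/usb) exist on this host.
for dev in /dev/dri /dev/bus/usb; do
  if [ -e "$dev" ]; then
    echo "$dev: present"
  else
    echo "$dev: not found on this host"
  fi
done
```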
Running Behind a Reverse Proxy
You can run Viseron behind a reverse proxy (like Nginx, Traefik, Caddy, etc.) and you can also use a subpath (e.g., https://yourdomain.com/viseron/).
1. Viseron Configuration
Add the subpath configuration to your config.yaml:
webserver:
subpath: "/viseron" # Must start with / and match your reverse proxy path
2. Reverse Proxy Configuration
Configure your external reverse proxy to pass requests with the subpath to Viseron.
Nginx example:
location /viseron/ {
proxy_pass http://localhost:8888/; # Note: trailing slash strips the /viseron prefix
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
# WebSocket support (required for live updates)
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
}
The proxy_pass URL must end with / and NOT include the subpath.
- ✅ Correct: proxy_pass http://localhost:8888/;
- ❌ Wrong: proxy_pass http://localhost:8888/viseron/;
The trailing slash in proxy_pass tells Nginx to strip the /viseron prefix before forwarding.
This way, a request to /viseron/assets/main.js becomes /assets/main.js when forwarded to the container.
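The prefix stripping can be illustrated with plain shell parameter expansion. This only mimics what nginx does; it is not part of the actual proxying:

```shell
# Mimic nginx's prefix stripping: /viseron/... becomes /... upstream.
SUBPATH="/viseron"
REQUEST="/viseron/assets/main.js"
echo "${REQUEST#"$SUBPATH"}"   # → /assets/main.js
```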
The subpath value only needs to be set in two places:
- config.yaml: webserver: { subpath: "/viseron" }
- External reverse proxy: location /viseron/ { ... }

Both must:
- Start with a forward slash /
- Use the exact same path
- Not end with a trailing slash (except in nginx location directives)
User and Group Identifiers
When using volumes (-v flags), permission issues can occur between the host and the container.
To solve this, you can specify the user PUID and group PGID as environment variables to the container.
Docker command
docker run --rm \
-v {segments path}:/segments \
-v {snapshots path}:/snapshots \
-v {thumbnails path}:/thumbnails \
-v {event clips path}:/event_clips \
-v {timelapse path}:/timelapse \
-v {config path}:/config \
-v /etc/localtime:/etc/localtime:ro \
-p 8888:8888 \
--name viseron \
--shm-size=1024mb \
-e PUID=1000 \
-e PGID=1000 \
roflcoopter/viseron:latest
Docker Compose
Example docker-compose
services:
viseron:
image: roflcoopter/viseron:latest
container_name: viseron
shm_size: "1024mb"
volumes:
- {segments path}:/segments
- {snapshots path}:/snapshots
- {thumbnails path}:/thumbnails
- {event clips path}:/event_clips
- {timelapse path}:/timelapse
- {config path}:/config
- /etc/localtime:/etc/localtime:ro
ports:
- 8888:8888
environment:
- PUID=1000
- PGID=1000
Ensure the volumes are owned on the host by the user you specify.
In this example PUID=1000 and PGID=1000.
To find the UID and GID of your current user you can run this command on the host:
id your_username_here
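For example, you can capture the IDs directly and fix ownership of a directory before handing it to the container. The directory here is a placeholder, not a Viseron default path:

```shell
# Capture the current user's UID/GID to use as PUID/PGID, and make a
# host directory match them (mktemp stands in for your real config path).
PUID=$(id -u)
PGID=$(id -g)
echo "PUID=$PUID PGID=$PGID"

CONFIG_DIR=$(mktemp -d)            # placeholder for {config path}
chown "$PUID:$PGID" "$CONFIG_DIR"  # ensure the volume is owned by this user
```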
Viseron runs as root (PUID=0 and PGID=0) by default.
This is because it can be problematic to get hardware acceleration and/or EdgeTPUs to work properly for everyone.
The s6-overlay init scripts do a good job at fixing permissions for other users, but you may still face some issues if you choose to not run as root.
If you do have issues, please open an issue and I will do my best to fix them.