# Remote Machine Learning

To alleviate [performance issues on low-memory systems](/docs/FAQ.mdx#why-is-immich-slow-on-low-memory-systems-like-the-raspberry-pi) like the Raspberry Pi, you may also host Immich's machine-learning container on a more powerful system (e.g. your laptop or desktop computer):

- Set the URL in Machine Learning Settings on the Admin Settings page to point to the designated ML system, e.g. `http://workstation:3003` (see the sketch after this list).
- Copy the following `docker-compose.yml` to your ML system.
- Start the container by running `docker compose up -d`.
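
As a minimal sketch of the first step, if the main server is managed with Docker Compose, the ML URL can also be supplied through the server's `IMMICH_MACHINE_LEARNING_URL` environment variable instead of the Admin UI. `workstation` is a placeholder hostname for the ML machine here:

```bash
# Run on the main Immich server, not the ML machine.
# Point the server at the remote ML container instead of the local one:
echo 'IMMICH_MACHINE_LEARNING_URL=http://workstation:3003' >> .env

# Recreate the server container so it picks up the new value:
docker compose up -d
```
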
:::info
Starting with version v1.93.0, face detection and facial recognition were split into separate jobs. Face detection is done in the `immich_machine_learning` container, while facial recognition is done in the `microservices` worker.
:::

:::note
The [hwaccel.ml.yml](https://github.com/immich-app/immich/releases/latest/download/hwaccel.ml.yml) file also needs to be in the same folder if trying to use [hardware acceleration](/docs/features/ml-hardware-acceleration).
:::
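
If you do want hardware acceleration, a minimal sketch of fetching that file into the same folder as the `docker-compose.yml` below:

```bash
# Download hwaccel.ml.yml next to docker-compose.yml, then uncomment the
# `extends:` lines in the compose file to select the matching service.
wget https://github.com/immich-app/immich/releases/latest/download/hwaccel.ml.yml
```
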
```yaml
name: immich_remote_ml

services:
  immich-machine-learning:
    container_name: immich_machine_learning
    # For hardware acceleration, add one of -[armnn, cuda, openvino] to the image tag.
    # Example tag: ${IMMICH_VERSION:-release}-cuda
    image: ghcr.io/immich-app/immich-machine-learning:${IMMICH_VERSION:-release}
    # extends:
    #   file: hwaccel.ml.yml
    #   service: # set to one of [armnn, cuda, openvino, openvino-wsl] for accelerated inference - use the `-wsl` version for WSL2 where applicable
    volumes:
      - model-cache:/cache
    restart: always
    ports:
      - 3003:3003

volumes:
  model-cache:
```
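
Once the container is up, it is worth confirming that the main server can actually reach it. A minimal sketch, assuming the ML machine is reachable as `workstation` (a placeholder hostname) and that the machine-learning service exposes its simple `/ping` health check:

```bash
# On the ML machine: start the container.
docker compose up -d

# From the main Immich server: check connectivity.
# The machine-learning container replies to GET /ping with "pong".
curl http://workstation:3003/ping
```
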
Please note that version mismatches between the two hosts may cause instability and bugs, so make sure to always update them together.
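
One way to keep the two hosts in step is to pin the same `IMMICH_VERSION` in the `.env` file that Docker Compose reads on each machine, rather than relying on the default `release` tag. A minimal sketch; the version shown is a placeholder, so substitute the release your main server currently runs:

```bash
# On both machines: pin an explicit release (v1.105.1 is a placeholder).
echo 'IMMICH_VERSION=v1.105.1' >> .env

# Upgrade both machines together:
docker compose pull
docker compose up -d
```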