From 619bd72de946f1efb31b7b5802336c8e6006ca51 Mon Sep 17 00:00:00 2001
From: Mert <101130780+mertalev@users.noreply.github.com>
Date: Wed, 26 Mar 2025 19:05:48 -0400
Subject: [PATCH] docs: mention rknn among image options (#17156)

mention rknn
---
 docs/docs/features/ml-hardware-acceleration.md | 2 +-
 docs/docs/guides/remote-machine-learning.md    | 4 ++--
 2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/docs/docs/features/ml-hardware-acceleration.md b/docs/docs/features/ml-hardware-acceleration.md
index 7d001ca6e5..8371e726b9 100644
--- a/docs/docs/features/ml-hardware-acceleration.md
+++ b/docs/docs/features/ml-hardware-acceleration.md
@@ -71,7 +71,7 @@ You do not need to redo any machine learning jobs after enabling hardware accele
 
 1. If you do not already have it, download the latest [`hwaccel.ml.yml`][hw-file] file and ensure it's in the same folder as the `docker-compose.yml`.
 2. In the `docker-compose.yml` under `immich-machine-learning`, uncomment the `extends` section and change `cpu` to the appropriate backend.
-3. Still in `immich-machine-learning`, add one of -[armnn, cuda, rocm, openvino] to the `image` section's tag at the end of the line.
+3. Still in `immich-machine-learning`, add one of -[armnn, cuda, rocm, openvino, rknn] to the `image` section's tag at the end of the line.
 4. Redeploy the `immich-machine-learning` container with these updated settings.
 
 ### Confirming Device Usage
diff --git a/docs/docs/guides/remote-machine-learning.md b/docs/docs/guides/remote-machine-learning.md
index d9b644f106..72ae0e3fa1 100644
--- a/docs/docs/guides/remote-machine-learning.md
+++ b/docs/docs/guides/remote-machine-learning.md
@@ -23,12 +23,12 @@ name: immich_remote_ml
 services:
   immich-machine-learning:
     container_name: immich_machine_learning
-    # For hardware acceleration, add one of -[armnn, cuda, rocm, openvino] to the image tag.
+    # For hardware acceleration, add one of -[armnn, cuda, rocm, openvino, rknn] to the image tag.
     # Example tag: ${IMMICH_VERSION:-release}-cuda
     image: ghcr.io/immich-app/immich-machine-learning:${IMMICH_VERSION:-release}
     # extends:
     #   file: hwaccel.ml.yml
-    #   service: # set to one of [armnn, cuda, rocm, openvino, openvino-wsl] for accelerated inference - use the `-wsl` version for WSL2 where applicable
+    #   service: # set to one of [armnn, cuda, rocm, openvino, openvino-wsl, rknn] for accelerated inference - use the `-wsl` version for WSL2 where applicable
     volumes:
       - model-cache:/cache
     restart: always
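
For reference, below is a minimal sketch of what the patched guidance produces when a user actually follows it with the new `rknn` backend selected. It is not part of the patch itself: the `-rknn` tag suffix follows the `${IMMICH_VERSION:-release}-cuda` example in the patched comment, the `rknn` service name is one of the options the updated docs list for `hwaccel.ml.yml`, and the trailing top-level `volumes` definition is assumed so the snippet stands alone.

```yaml
# Illustrative compose file with the RKNN backend enabled (a sketch, not the
# exact upstream file). Assumes hwaccel.ml.yml sits next to this file and
# defines an `rknn` service, as the patched docs describe.
name: immich_remote_ml

services:
  immich-machine-learning:
    container_name: immich_machine_learning
    # "-rknn" appended to the tag selects the RKNN-enabled image, mirroring
    # the "-cuda" example tag shown in the patched comment.
    image: ghcr.io/immich-app/immich-machine-learning:${IMMICH_VERSION:-release}-rknn
    extends:
      file: hwaccel.ml.yml
      service: rknn # one of [armnn, cuda, rocm, openvino, openvino-wsl, rknn]
    volumes:
      - model-cache:/cache
    restart: always

volumes:
  model-cache:
```

As the patched steps note, the `immich-machine-learning` container must be redeployed after these edits, but no machine learning jobs need to be redone.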