[BUG] When hosting ML services on another host, this host has to be online for searching by text #1261

Closed
opened 2026-02-05 01:04:10 +03:00 by OVERLORD · 6 comments
Owner

Originally created by @boobin on GitHub (Aug 21, 2023).

The bug

I put the ML container on my desktop computer to run ML jobs faster while it is running, as described in the guide: https://immich.app/docs/guides/machine-learning.

But this means that searching by text is unavailable whenever my desktop is offline.

It would be great if searching could keep working on the main server, or if the server could fall back to a local ML container when the IMMICH_MACHINE_LEARNING_URL does not respond.

The OS that Immich Server is running on

Arch

Version of Immich Server

v1.74.0

Version of Immich Mobile App

v1.74.0

Platform with the issue

  • [x] Server
  • [ ] Web
  • [ ] Mobile

Your docker-compose.yml content

Standard file from the guide; the ML container is kept active on the main server even though it is also deployed on the desktop computer.

Your .env content

IMMICH_MACHINE_LEARNING_URL added as specified in the guide to run the ML service on another host.
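
For reference, the remote-ML setup from the guide comes down to a single variable on the main server (the address below is a placeholder; 3003 is the ML container's default port):

```
# .env on the main server: point the server at the remote ML node.
# The IP is hypothetical; substitute your desktop's address.
IMMICH_MACHINE_LEARNING_URL=http://192.168.1.50:3003
```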

Reproduction steps

1. Deploy the main server and the ML node at IMMICH_MACHINE_LEARNING_URL
2. Check that search by text and ML jobs are working
3. Stop the ML node container
4. Check that search by text no longer works

Additional information

No response


@bo0tzz commented on GitHub (Aug 21, 2023):

This is not a bug; the default search is CLIP-based, which uses ML. You can do a metadata search by prefixing your search with `m:`, like `m:query`.


@boobin commented on GitHub (Aug 21, 2023):

Thanks for the info!

It could still be improved in a few ways to prevent a 500 error:

  • falling back to a local ML container, if one exists, for CLIP search
  • falling back to metadata search if no ML container is available

@ddshd commented on GitHub (Aug 21, 2023):

> falling back to a local ML container, if one exists, for CLIP search

You can do this with an additional reverse proxy config; this is what I do with my external ML host.
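
The actual config wasn't shared in the thread. One way to get this fallback behavior is an nginx upstream with a `backup` server; the hostnames and ports below are hypothetical (3003 is the ML container's default port), so this is a sketch rather than the commenter's setup:

```nginx
# Route ML traffic to the remote desktop first; fall back to the
# local ML container when the desktop is unreachable.
upstream immich_ml {
    server desktop.lan:3003 max_fails=1 fail_timeout=10s;  # remote ML node
    server 127.0.0.1:3003 backup;                          # local fallback
}

server {
    listen 3004;
    location / {
        proxy_pass http://immich_ml;
        # Retry on the backup when the primary errors out or times out.
        proxy_next_upstream error timeout http_502 http_503;
    }
}
```

With this in place, IMMICH_MACHINE_LEARNING_URL would point at the proxy (e.g. `http://proxy-host:3004`) instead of directly at the desktop.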


@CDrummond commented on GitHub (Aug 21, 2023):

So the results of the ML are not stored in the main Immich DB? I run Immich on a Pi4, so there's not enough power for ML. I had hoped to one day run the ML on my laptop and have the results stored on the Pi, so that search would be usable from there. That's really odd.

As an aside, why the *need* to prefix metadata searches with `m:`? I can understand if you want to *limit* a query to metadata only. But for a generic search I'd expect all areas to be searched. With no ML, having to type `m:` before searches seems a little user-unfriendly.


@boobin commented on GitHub (Aug 21, 2023):

What I really intended was not to remove ML from my server, but to prevent background ML jobs from running on it, since they can keep it busy for a long time on a potentially large batch of assets.

To handle client requests like search, I'm perfectly OK with using an ML container on the server.

@ddshd that's a great idea, would you care to share your config?


@bo0tzz commented on GitHub (Aug 21, 2023):

> So the results of the ML are not stored in the main Immich DB?

They are, but a CLIP search needs to use ML to convert the search query into something that can be compared to the existing ML results in the DB.

> why the need to prefix metadata searches with m:?

Because the current search functionality is a basic first pass. As soon as someone has time to work on it, this will be improved with support for actual search filters and such.
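
To illustrate the point above: the image embeddings really are stored in the DB, but the free-text query still has to be embedded at search time before it can be compared against them, and that embedding step is what needs the ML service. A toy sketch in plain Python, with hypothetical 3-dimensional vectors standing in for real CLIP embeddings:

```python
import math

def cosine_similarity(a, b):
    """Compare two embedding vectors by the angle between them."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Image embeddings: computed once by background ML jobs, persisted in the DB.
stored = {
    "beach.jpg": [0.9, 0.1, 0.2],
    "cat.jpg": [0.1, 0.8, 0.3],
}

def search(query_embedding):
    """Rank stored assets by similarity to the query embedding."""
    return sorted(stored,
                  key=lambda k: cosine_similarity(stored[k], query_embedding),
                  reverse=True)

# At search time, the query text itself must be run through the ML model to
# produce this vector; if the ML node is offline, this step fails even though
# `stored` is intact in the DB.
query = [0.85, 0.15, 0.25]  # pretend embedding of "sunny beach"
print(search(query))  # "beach.jpg" ranks first
```

This is why stopping the ML node breaks text search but leaves previously indexed results untouched.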

Reference: immich-app/immich#1261