error with search after upgrade #2193

Closed
opened 2026-02-05 05:34:14 +03:00 by OVERLORD · 33 comments
Owner

Originally created by @spupuz on GitHub (Feb 20, 2024).

The bug

got this error when searching items with m:

Error: 500 - 500
ie@https://immich.xxxxxxx.org/_app/immutable/chunks/fetch-client.VozrW5mA.js:1:2905
ce@https://immich.xxxxxxx.org/_app/immutable/chunks/fetch-client.VozrW5mA.js:1:2829

The OS that Immich Server is running on

docker

Version of Immich Server

1.95

Version of Immich Mobile App

1.95

Platform with the issue

  • Server
  • Web
  • Mobile

Your docker-compose.yml content

version: "3.8"

services:
  immich-server:
    image: altran1502/immich-server:release

    command: ["start-server.sh"]
    volumes:
      - ${UPLOAD_LOCATION}:/usr/src/app/upload
    env_file:
      - .env
    ports:
      - 2283:3001
    environment:
      - NODE_ENV=production
      - TYPESENSE_API_KEY="xxxxxxxxxxxxxxxxxxxxxxxx"
    labels:
      com.centurylinklabs.watchtower.depends-on: "/database"
    depends_on:
      - redis
      - database
    restart: always

  immich-microservices:
    image: altran1502/immich-server:release
    command: ["start-microservices.sh"]
    devices:
      - /dev/dri:/dev/dri 
    volumes:
      - ${UPLOAD_LOCATION}:/usr/src/app/upload
    env_file:
      - .env
    environment:
      - NODE_ENV=production
      - TYPESENSE_API_KEY="xxxxxxxxxxxxxxxxxxxxxx"
    labels:
      com.centurylinklabs.watchtower.depends-on: "/database"
    depends_on:
      - redis
      - database
    restart: always
    
  immich-machine-learning:
    image: altran1502/immich-machine-learning:release-openvino
    device_cgroup_rules:
      - "c 189:* rmw"
    devices:
      - /dev/dri:/dev/dri
    volumes:
      - ${UPLOAD_LOCATION}:/usr/src/app/upload
      - /zfsmirror/immich/cache:/cache
      - /dev/bus/usb:/dev/bus/usb      
    env_file:
      - .env
    environment:
      - NODE_ENV=production
    labels:
      com.centurylinklabs.watchtower.depends-on: "/database"
    depends_on:
      - database
    restart: always

  redis:
    container_name: immich_redis
    image: redis:6.2
    restart: always

  database:
    container_name: immich_postgres
    image: tensorchord/pgvecto-rs:pg14-v0.2.0@sha256:90724186f0a3517cf6914295b5ab410db9ce23190a2d9d0b9dd6463e3fa298f0
    env_file:
      - .env
    environment:
      POSTGRES_PASSWORD: ${DB_PASSWORD}
      POSTGRES_USER: ${DB_USERNAME}
      POSTGRES_DB: ${DB_DATABASE_NAME}
      PG_DATA: /var/lib/postgresql/data
    volumes:
      - /zfsmirror/immich/pgdata:/var/lib/postgresql/data
    restart: always

Your .env content

-

Reproduction steps

Upgraded and restarted as per the release notes.

Additional information

-

@bo0tzz commented on GitHub (Feb 20, 2024):

This is not a useful error message, please post the containers' logs.


@spupuz commented on GitHub (Feb 20, 2024):

here is an extract of the server container log:

[Nest] 8  - 02/20/2024, 7:03:31 PM     LOG [CommunicationRepository] Websocket Disconnect: 6ZRoX1bljXpG5t2OAAAB

[Nest] 8  - 02/20/2024, 7:03:34 PM     LOG [CommunicationRepository] Websocket Connect:    B2so2x5IoOlDFnTFAAAD

[Nest] 8  - 02/20/2024, 7:03:39 PM   ERROR [Error: Machine learning request for clip failed with status 500: Internal Server Error

    at MachineLearningRepository.predict (/usr/src/app/dist/infra/repositories/machine-learning.repository.js:22:19)

    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)

    at async SearchService.searchSmart (/usr/src/app/dist/domain/search/search.service.js:81:27)] Failed to search smart

[Nest] 8  - 02/20/2024, 7:03:39 PM   ERROR [Error: Machine learning request for clip failed with status 500: Internal Server Error

    at MachineLearningRepository.predict (/usr/src/app/dist/infra/repositories/machine-learning.repository.js:22:19)

    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)

    at async SearchService.searchSmart (/usr/src/app/dist/domain/search/search.service.js:81:27)] Error: Machine learning request for clip failed with status 500: Internal Server Error

@spupuz commented on GitHub (Feb 20, 2024):

here is the output of the postgres container log:


PostgreSQL Database directory appears to contain a database; Skipping initialization

2024-02-20 19:01:04.386 UTC [1] LOG:  starting PostgreSQL 14.10 (Debian 14.10-1.pgdg120+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 12.2.0-14) 12.2.0, 64-bit

2024-02-20 19:01:04.386 UTC [1] LOG:  listening on IPv4 address "0.0.0.0", port 5432

2024-02-20 19:01:04.386 UTC [1] LOG:  listening on IPv6 address "::", port 5432

2024-02-20 19:01:04.398 UTC [1] LOG:  listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"

2024-02-20 19:01:04.419 UTC [27] LOG:  database system was shut down at 2024-02-20 19:01:01 UTC

2024-02-20 19:01:04.454 UTC [1] LOG:  database system is ready to accept connections

2024-02-20 19:01:05.764 UTC [41] ERROR:  pgvecto.rs: The index is not existing in the background worker.

	ADVICE: Drop or rebuild the index.

2024-02-20 19:01:05.764 UTC [41] STATEMENT:  

	          SELECT idx_status

	          FROM pg_vector_index_stat

	          WHERE indexname = $1

2024-02-20 19:01:10.116 UTC [41] ERROR:  internal error: entered unreachable code

2024-02-20 19:01:10.116 UTC [41] STATEMENT:  REINDEX INDEX clip_index

2024-02-20 19:01:34.058 UTC [41] ERROR:  pgvecto.rs: The index is not existing in the background worker.

	ADVICE: Drop or rebuild the index.

2024-02-20 19:01:34.058 UTC [41] STATEMENT:  

	          SELECT idx_status

	          FROM pg_vector_index_stat

	          WHERE indexname = $1

2024-02-20 19:01:41.248 UTC [41] ERROR:  internal error: entered unreachable code

2024-02-20 19:01:41.248 UTC [41] STATEMENT:  REINDEX INDEX face_index
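The `ADVICE: Drop or rebuild the index.` line points at one possible recovery path. A minimal sketch, assuming the `immich_postgres` container name and DB variables from the compose file above, and assuming Immich recreates the `clip_index`/`face_index` vector indexes on the next start (take a database backup first; this is not a confirmed fix for this issue):

```shell
# Sketch only: drop the broken pgvecto.rs indexes so they can be rebuilt.
# Assumes DB_USERNAME / DB_DATABASE_NAME are exported in the host shell.
docker exec -it immich_postgres \
  psql -U "$DB_USERNAME" -d "$DB_DATABASE_NAME" \
  -c 'DROP INDEX IF EXISTS clip_index;' \
  -c 'DROP INDEX IF EXISTS face_index;'

# Restart so the indexes can be recreated on startup:
docker compose restart immich-server immich-microservices
```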

@mmomjian commented on GitHub (Feb 20, 2024):

Can you please post the output of `sudo docker ps -a | grep immich`?


@spupuz commented on GitHub (Feb 20, 2024):

here it is:

root@omvba ~# sudo docker ps -a | grep immich
a2fdfe1ff7c9   altran1502/immich-machine-learning:release-openvino   "tini -- ./start.sh"     About a minute ago   Up About a minute                                                                                                                                                                                               immich-immich-machine-learning-1
8126fa031bdb   altran1502/immich-server:release                      "tini -- /bin/bash s…"   About a minute ago   Up About a minute        3001/tcp                                                                                                                                                                               immich-immich-microservices-1
dfd2cafbbaf1   altran1502/immich-server:release                      "tini -- /bin/bash s…"   About a minute ago   Up About a minute        0.0.0.0:2283->3001/tcp, :::2283->3001/tcp                                                                                                                                              immich-immich-server-1
0ec9250e6698   tensorchord/pgvecto-rs:pg14-v0.2.0                    "docker-entrypoint.s…"   About a minute ago   Up About a minute        5432/tcp                                                                                                                                                                               immich_postgres
a6595306b806   tensorchord/pgvecto-rs:pg14-v0.2.0 placeholder-removed
a6595306b806   redis:6.2                                             "docker-entrypoint.s…"   About a minute ago   Up About a minute        6379/tcp                                                                                                                                                                               immich_redis

@mmomjian commented on GitHub (Feb 20, 2024):

Sorry, can you also post `sudo docker inspect immich | grep version`? I want to be sure what version is in that repository, as the current repo is `ghcr.io/immich-app/immich-server`.


@spupuz commented on GitHub (Feb 20, 2024):

> Sorry, can you also post `sudo docker inspect immich | grep version`? I want to be sure what version is in that repository, as the current repo is `ghcr.io/immich-app/immich-server`

root@omvba ~ [1|1]# sudo docker inspect immich-immich-server-1 | grep version
"com.docker.compose.version": "2.20.2",
"org.opencontainers.image.version": "v1.95.0"


@mmomjian commented on GitHub (Feb 20, 2024):

Ok, good, so that's not the issue; you are on the latest 1.95.0. I'm unsure in that case why it is trying to run REINDEX, as I thought this update was supposed to run DROP INDEX / CREATE INDEX. @mertalev? Seems somewhat similar to my issue, but using the supported vectors container.


@Thormir84 commented on GitHub (Feb 20, 2024):

I have the same issue after the update.


@mmomjian commented on GitHub (Feb 20, 2024):

Do you have any changes to the PG database, such as added users / roles?


@spupuz commented on GitHub (Feb 20, 2024):

no changes on my side

> Do you have any changes to the PG database, such as added users / roles?


@Thormir84 commented on GitHub (Feb 20, 2024):

> Do you have any changes to the PG database, such as added users / roles?

No, no changes after the update.


@spupuz commented on GitHub (Feb 20, 2024):

Can this be related to OpenVINO?


@mmomjian commented on GitHub (Feb 20, 2024):

Changes even before the update - is your pgvectors database completely stock and only used for immich, or are there added users / permission changes / other data in the same database?


@spupuz commented on GitHub (Feb 20, 2024):

> Changes even before the update - is your pgvectors database completely stock and only used for immich, or are there added users / permission changes / other data in the same database?

PG is stock, only used for Immich.


@spupuz commented on GitHub (Feb 20, 2024):

machine learning log (truncated at the terminal width):

                             │ /usr/src/app/models/base.py:55 in load          │

                             │                                                 │

                             │    52 │   │   │   return                        │

                             │    53 │   │   self.download()                   │

                             │    54 │   │   log.info(f"Loading {self.model_ty │

                             │       to memory")                               │

                             │ ❱  55 │   │   self._load()                      │

                             │    56 │   │   self.loaded = True                │

                             │    57 │                                         │

                             │    58 │   def predict(self, inputs: Any, **mode │

                             │                                                 │

                             │ /usr/src/app/models/clip.py:146 in _load        │

                             │                                                 │

                             │   143 │   │   super().__init__(clean_name(model │

                             │   144 │                                         │

                             │   145 │   def _load(self) -> None:              │

                             │ ❱ 146 │   │   super()._load()                   │

                             │   147 │   │   self._load_tokenizer()            │

                             │   148 │   │                                     │

                             │   149 │   │   size: list[int] | int = self.prep │

                             │                                                 │

                             │ /usr/src/app/models/clip.py:36 in _load         │

                             │                                                 │

                             │    33 │   def _load(self) -> None:              │

                             │    34 │   │   if self.mode == "text" or self.mo │

                             │    35 │   │   │   log.debug(f"Loading clip text │

                             │ ❱  36 │   │   │   self.text_model = self._make_ │

                             │    37 │   │   │   log.debug(f"Loaded clip text  │

                             │    38 │   │                                     │

                             │    39 │   │   if self.mode == "vision" or self. │

                             │                                                 │

                             │ /usr/src/app/models/base.py:127 in              │

                             │ _make_session                                   │

                             │                                                 │

                             │   124 │   │   │   case ".armnn":                │

                             │   125 │   │   │   │   session = AnnSession(mode │

                             │   126 │   │   │   case ".onnx":                 │

                             │ ❱ 127 │   │   │   │   session = ort.InferenceSe │

                             │   128 │   │   │   │   │   model_path.as_posix() │

                             │   129 │   │   │   │   │   sess_options=self.ses │

                             │   130 │   │   │   │   │   providers=self.provid │

                             │                                                 │

                             │ /opt/venv/lib/python3.10/site-packages/onnxrunt │

                             │ ime/capi/onnxruntime_inference_collection.py:38 │

                             │ 8 in __init__                                   │

                             │                                                 │

                             │   385 │   │   disabled_optimizers = kwargs["dis │

                             │       kwargs else None                          │

                             │   386 │   │                                     │

                             │   387 │   │   try:                              │

                             │ ❱ 388 │   │   │   self._create_inference_sessio │

                             │       disabled_optimizers)                      │

                             │   389 │   │   except (ValueError, RuntimeError) │

                             │   390 │   │   │   if self._enable_fallback:     │

                             │   391 │   │   │   │   try:                      │

                             │                                                 │

                             │ /opt/venv/lib/python3.10/site-packages/onnxrunt │

                             │ ime/capi/onnxruntime_inference_collection.py:44 │

                             │ 0 in _create_inference_session                  │

                             │                                                 │

                             │   437 │   │   │   disabled_optimizers = set(dis │

                             │   438 │   │                                     │

                             │   439 │   │   # initialize the C++ InferenceSes │

                             │ ❱ 440 │   │   sess.initialize_session(providers │

                             │   441 │   │                                     │

                             │   442 │   │   self._sess = sess                 │

                             │   443 │   │   self._sess_options = self._sess.s │

                             ╰─────────────────────────────────────────────────╯

                             RuntimeException: [ONNXRuntimeError] : 6 :         

                             RUNTIME_EXCEPTION : Exception during               

                             initialization:                                    

                             /home/onnxruntimedev/onnxruntime/onnxruntime/core/p

                             roviders/openvino/ov_interface.cc:53               

                             onnxruntime::openvino_ep::OVExeNetwork             

                             onnxruntime::openvino_ep::OVCore::LoadNetwork(const

                             string&, std::string&, ov::AnyMap&, std::string)   

                             [OpenVINO-EP]  Exception while Loading Network for 

                             graph:                                             

                             OpenVINOExecutionProvider_OpenVINO-EP-subgraph_2_0C

                             heck 'false' failed at                             

                             src/inference/src/core.cpp:149:                    

                             invalid external data:                             

                             ExternalDataInfo(data_full_path:                   

                             a52fef30-d022-11ee-8d9a-0242c0a84004, offset: 0,   

                             data_length: 101187584)                            

                                                                                

                                                                                

[02/20/24 20:47:44] INFO     Shutting down due to inactivity.                   

[02/20/24 20:47:44] INFO     Shutting down                                      

[02/20/24 20:47:45] INFO     Waiting for application shutdown.                  

[02/20/24 20:47:45] INFO     Application shutdown complete.                     

[02/20/24 20:47:45] INFO     Finished server process [21]                       

[02/20/24 20:47:45] INFO     Worker exiting (pid: 21)                           

[02/20/24 20:47:45] INFO     Booting worker with pid: 48                        

[02/20/24 20:47:47] INFO     Started server process [48]                        

[02/20/24 20:47:47] INFO     Waiting for application startup.                   

[02/20/24 20:47:47] INFO     Created in-memory cache with unloading after 300s  

                             of inactivity.                                     

[02/20/24 20:47:47] INFO     Initialized request thread pool with 12 threads.   

[02/20/24 20:47:47] INFO     Application startup complete.          
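The `invalid external data` failure while loading the ONNX model suggests a stale or corrupted cached model rather than the search index itself. A hedged cleanup sketch, assuming the `/zfsmirror/immich/cache` path from the compose file above (models are re-downloaded on the next start; this is an assumption, not a confirmed fix):

```shell
# Sketch: clear the ML model cache so models are fetched cleanly.
docker compose stop immich-machine-learning
rm -rf /zfsmirror/immich/cache/*   # path taken from this compose file
docker compose up -d immich-machine-learning
```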

@spupuz commented on GitHub (Feb 20, 2024):

After the last update I added OpenVINO, but I never checked search. Can this be related to OpenVINO?


@Skydiver84de commented on GitHub (Feb 21, 2024):

Had the same error. After switching back from the OpenVINO container to the standard one, search is working again.
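For reference, "switching back" here means pointing the machine-learning service at the standard image instead of the OpenVINO build; a sketch against the compose file posted above:

```yaml
  immich-machine-learning:
    # standard image instead of the OpenVINO build:
    image: altran1502/immich-machine-learning:release
```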


@spupuz commented on GitHub (Feb 21, 2024):

> Had the same error. After switching back from open-vino to standard container the search is working again.

I can confirm that removing OpenVINO makes it work.


@codeicetea commented on GitHub (Feb 21, 2024):

I'm experiencing the same issue as the original poster.
Regarding docker logs from immich server I receive the same as OP.

To add more context that might be useful, I am currently operating two instances as part of a migration process for Immich from virtual machine A to virtual machine B. On instance A, I upgraded Immich from version 1.91.3 and encountered no issues. However, instance B, which is a new installation accompanied by a pg dump from instance A, is experiencing the same problem.

Please let me know if providing additional details would be beneficial!

Edit:
The issue seems to be limited to contextual searches in my case; the people filter/search, on the other hand, is working fine.


@spupuz commented on GitHub (Feb 21, 2024):

> Had the same error. After switching back from open-vino to standard container the search is working again.

at this point I'm not sure whether it was still working with open-vino the last time I upgraded, after switching to hw acceleration with open-vino


@spupuz commented on GitHub (Feb 21, 2024):

> I'm experiencing the same issue as the original poster. Regarding docker logs from immich server I receive the same as OP.
>
> To add more context that might be useful, I am currently operating two instances as part of a migration process for Immich from virtual machine A to virtual machine B. On instance A, I upgraded Immich from version 1.91.3 and encountered no issues. However, instance B, which is a new installation accompanied by a pg dump from instance A, is experiencing the same problem.
>
> Please let me know if providing additional details would be beneficial!

do you use open-vino?


@codeicetea commented on GitHub (Feb 21, 2024):

> > I'm experiencing the same issue as the original poster. Regarding docker logs from immich server I receive the same as OP.
> > To add more context that might be useful, I am currently operating two instances as part of a migration process for Immich from virtual machine A to virtual machine B. On instance A, I upgraded Immich from version 1.91.3 and encountered no issues. However, instance B, which is a new installation accompanied by a pg dump from instance A, is experiencing the same problem.
> > Please let me know if providing additional details would be beneficial!
>
> do you use open-vino?

no open-vino nor another hw acceleration here


@alextran1502 commented on GitHub (Feb 21, 2024):

If you try the default setup for machine learning, i.e. without acceleration, does it help?


@spupuz commented on GitHub (Feb 21, 2024):

> If you try the default setup for machine learning, i.e. without acceleration, does it help?

it worked in my case


@codeicetea commented on GitHub (Feb 21, 2024):

> If you try the default setup for machine learning, i.e. without acceleration, does it help?

I currently do not have any hw acceleration active for the instance which is causing the issues.
docker-compose snippet for reference:

```
immich-machine-learning:
    user: 4004:4004
    container_name: immich_machine_learning
    image: ghcr.io/immich-app/immich-machine-learning:${IMMICH_VERSION:-release}
    volumes:
      - model-cache:/cache
    env_file:
      - stack.env
    restart: always
```

@alextran1502 commented on GitHub (Feb 21, 2024):

@codeicetea what log are you seeing?


@codeicetea commented on GitHub (Feb 21, 2024):

> @codeicetea what log are you seeing?

ui:

```
Error: 500 (500)

Stacktrace
Error: Error: 500
    at Object.ce [as ok] (http://192.168.50.14:2283/_app/immutable/chunks/fetch-client.VozrW5mA.js:1:2829)
    at async ht (http://192.168.50.14:2283/_app/immutable/nodes/24.a2RJfZgk.js:1:2020)
    at async be (http://192.168.50.14:2283/_app/immutable/chunks/entry.xWy4dkDy.js:1:12873)
```

immich server:

```
[Nest] 7  - 02/20/2024, 9:12:48 PM     LOG [CommunicationRepository] Websocket Connect:    iiG4HY2xvRRRHOOyAAAD
[Nest] 7  - 02/20/2024, 9:12:54 PM   ERROR [Error: Machine learning request for clip failed with status 500: Internal Server Error
    at MachineLearningRepository.predict (/usr/src/app/dist/infra/repositories/machine-learning.repository.js:22:19)
    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
    at async SearchService.searchSmart (/usr/src/app/dist/domain/search/search.service.js:81:27)] Failed to search smart
[Nest] 7  - 02/20/2024, 9:12:54 PM   ERROR [Error: Machine learning request for clip failed with status 500: Internal Server Error
    at MachineLearningRepository.predict (/usr/src/app/dist/infra/repositories/machine-learning.repository.js:22:19)
    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
    at async SearchService.searchSmart (/usr/src/app/dist/domain/search/search.service.js:81:27)] Error: Machine learning request for clip failed with status 500: Internal Server Error
```

Edit:
fix copy paste


@alextran1502 commented on GitHub (Feb 21, 2024):

@codeicetea and the machine-learning log?


@codeicetea commented on GitHub (Feb 21, 2024):

@alextran1502 logs:

```
[02/20/24 21:41:51] INFO     Setting 'ViT-B-32__openai' execution providers to
                             ['CPUExecutionProvider'], in descending order of
                             preference
[02/20/24 21:41:51] INFO     Downloading clip model 'ViT-B-32__openai'. This may
                             take a while.
[02/20/24 21:41:51] WARNING  Failed to load clip model
                             'ViT-B-32__openai'.Clearing cache and retrying.
[02/20/24 21:41:51] WARNING  Attempted to clear cache for model
                             'ViT-B-32__openai', but cache directory does not
                             exist
[02/20/24 21:41:51] INFO     Downloading clip model 'ViT-B-32__openai'. This may
                             take a while.
[02/20/24 21:41:51] ERROR    Exception in ASGI application
```

The ASGI exception boils down to
`PermissionError: [Errno 13] Permission denied: '/cache/clip'`

Stacktrace for reference:

```
[02/20/24 21:41:51] ERROR    Exception in ASGI application

                             ╭─────── Traceback (most recent call last) ───────╮
                             │ /usr/src/app/main.py:127 in load                │
                             │                                                 │
                             │   124 │   │   │   model.load()                  │
                             │   125 │                                         │
                             │   126 │   try:                                  │
                             │ ❱ 127 │   │   await run(_load, model)           │
                             │   128 │   │   return model                      │
                             │   129 │   except (OSError, InvalidProtobuf, Bad │
                             │   130 │   │   log.warning(                      │
                             │                                                 │
                             │ /usr/src/app/main.py:115 in run                 │
                             │                                                 │
                             │   112 async def run(func: Callable[..., Any], i │
                             │   113 │   if thread_pool is None:               │
                             │   114 │   │   return func(inputs)               │
                             │ ❱ 115 │   return await asyncio.get_running_loop │
                             │   116                                           │
                             │   117                                           │
                             │   118 async def load(model: InferenceModel) ->  │
                             │                                                 │
                             │ /usr/local/lib/python3.11/concurrent/futures/th │
                             │ read.py:58 in run                               │
                             │                                                 │
                             │ /usr/src/app/main.py:124 in _load               │
                             │                                                 │
                             │   121 │                                         │
                             │   122 │   def _load(model: InferenceModel) -> N │
                             │   123 │   │   with lock:                        │
                             │ ❱ 124 │   │   │   model.load()                  │
                             │   125 │                                         │
                             │   126 │   try:                                  │
                             │   127 │   │   await run(_load, model)           │
                             │                                                 │
                             │ /usr/src/app/models/base.py:53 in load          │
                             │                                                 │
                             │    50 │   def load(self) -> None:               │
                             │    51 │   │   if self.loaded:                   │
                             │    52 │   │   │   return                        │
                             │ ❱  53 │   │   self.download()                   │
                             │    54 │   │   log.info(f"Loading {self.model_ty │
                             │       to memory")                               │
                             │    55 │   │   self._load()                      │
                             │    56 │   │   self.loaded = True                │
                             │                                                 │
                             │ /usr/src/app/models/base.py:48 in download      │
                             │                                                 │
                             │    45 │   │   │   log.info(                     │
                             │    46 │   │   │   │   f"Downloading {self.model │
                             │       '{self.model_name}'. This may take a whil │
                             │    47 │   │   │   )                             │
                             │ ❱  48 │   │   │   self._download()              │
                             │    49 │                                         │
                             │    50 │   def load(self) -> None:               │
                             │    51 │   │   if self.loaded:                   │
                             │                                                 │
                             │ /usr/src/app/models/base.py:72 in _download     │
                             │                                                 │
                             │    69 │                                         │
                             │    70 │   def _download(self) -> None:          │
                             │    71 │   │   ignore_patterns = [] if self.pref │
                             │       ["*.armnn"]                               │
                             │ ❱  72 │   │   snapshot_download(                │
                             │    73 │   │   │   get_hf_model_name(self.model_ │
                             │    74 │   │   │   cache_dir=self.cache_dir,     │
                             │    75 │   │   │   local_dir=self.cache_dir,     │
                             │                                                 │
                             │ /opt/venv/lib/python3.11/site-packages/huggingf │
                             │ ace_hub/utils/_validators.py:118 in _inner_fn   │
                             │                                                 │
                             │   115 │   │   if check_use_auth_token:          │
                             │   116 │   │   │   kwargs = smoothly_deprecate_u │
                             │       has_token=has_token, kwargs=kwargs)       │
                             │   117 │   │                                     │
                             │ ❱ 118 │   │   return fn(*args, **kwargs)        │
                             │   119 │                                         │
                             │   120 │   return _inner_fn  # type: ignore      │
                             │   121                                           │
                             │                                                 │
                             │ /opt/venv/lib/python3.11/site-packages/huggingf │
                             │ ace_hub/_snapshot_download.py:275 in            │
                             │ snapshot_download                               │
                             │                                                 │
                             │   272 │   # In that case store a ref.           │
                             │   273 │   if revision != commit_hash:           │
                             │   274 │   │   ref_path = os.path.join(storage_f │
                             │ ❱ 275 │   │   os.makedirs(os.path.dirname(ref_p │
                             │   276 │   │   with open(ref_path, "w") as f:    │
                             │   277 │   │   │   f.write(commit_hash)          │
                             │   278                                           │
                             │ in makedirs:215                                 │
                             │ in makedirs:215                                 │
                             │ in makedirs:215                                 │
                             │ in makedirs:225                                 │
                             ╰─────────────────────────────────────────────────╯
                             PermissionError: [Errno 13] Permission denied:
                             '/cache/clip'
```

@codeicetea commented on GitHub (Feb 21, 2024):

Uff, I'm sorry, this is not a bug - it was my mistake.
During the migration I forgot to adjust the permissions on the new volume mappings...

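For anyone hitting the same symptom after a migration: a small writability preflight on the cache mount would surface this immediately, before the ML service ever tries to download a model. This is a generic sketch, not part of Immich; the `/cache` path comes from the compose volume mapping above, and the probe path used here is just a stand-in.

```python
import os
import tempfile


def cache_writable(path: str) -> bool:
    """Return True if files can be created under `path`, creating it if needed."""
    try:
        os.makedirs(path, exist_ok=True)
        # Creating and deleting a real file is more reliable than os.access(),
        # which can disagree with ACLs or container user remapping.
        with tempfile.NamedTemporaryFile(dir=path):
            pass
        return True
    except OSError:  # PermissionError is a subclass of OSError
        return False


# Example: probe a scratch directory (stand-in for the container's /cache mount).
probe = os.path.join(tempfile.mkdtemp(), "clip")
print(cache_writable(probe))
```

If this prints `False` for the real cache mount, chown the host directory to the UID/GID the container runs as (e.g. `4004:4004` in the compose snippet above) before starting the stack.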

@spupuz commented on GitHub (Feb 21, 2024):

@alextran1502 so it seems to be related to openvino, could that be possible?


@mertalev commented on GitHub (Feb 21, 2024):

> Ok, good, so that's not the issue, you are on the latest 1.95.0. I'm unsure in that case why it is trying to run REINDEX, as I thought this update was supposed to run DROP INDEX / CREATE INDEX. @mertalev ? Seems somewhat similar to my issue but using the supported vectors container.

The code tries to run `REINDEX` and falls back to the `DROP INDEX`/`CREATE INDEX` approach if that fails. I think the Postgres log only shows the error for the former, making it seem like it didn't run the latter.

Reference: immich-app/immich#2193