[BUG] Web client has major performance issues with large image library #1354

Closed
opened 2026-02-05 01:26:41 +03:00 by OVERLORD · 57 comments

Originally created by @rcdailey on GitHub (Sep 22, 2023).

Originally assigned to: @alextran1502 on GitHub.

The bug

Using Immich v1.79.1, I am observing the following issues:

  1. Extremely long load times when viewing the main timeline view
  2. A lot of stuttering/lag when scrolling vertically through the timeline

After installing with the docker-compose.yml provided in the installation docs, I added an external photo directory. Immich reports about 65,000 photos. I suspect the web client is trying to load information for all of these photos, which would explain the symptoms, but I don't know for sure.

I have NOT uploaded any photos to Immich at this point. All I have done is added existing photos from a different mounted directory.

The OS that Immich Server is running on

Unraid (using docker compose)

Version of Immich Server

v1.79.1

Version of Immich Mobile App

N/A

Platform with the issue

  - [ ] Server
  - [x] Web
  - [ ] Mobile

Your docker-compose.yml content

services:
  immich-server:
    container_name: immich_server
    image: ghcr.io/immich-app/immich-server:${IMMICH_VERSION:-release}
    command: [ "start.sh", "immich" ]
    volumes:
      - ${UPLOAD_LOCATION}:/usr/src/app/upload
      - /mnt/user/photos/external:/mnt/media/external:ro
    env_file:
      - .env
    depends_on:
      - redis
      - database
      - typesense
    restart: always

  immich-microservices:
    container_name: immich_microservices
    image: ghcr.io/immich-app/immich-server:${IMMICH_VERSION:-release}
    # extends:
    #   file: hwaccel.yml
    #   service: hwaccel
    command: [ "start.sh", "microservices" ]
    volumes:
      - ${UPLOAD_LOCATION}:/usr/src/app/upload
      - /mnt/user/photos/external:/mnt/media/external:ro
    env_file:
      - .env
    depends_on:
      - redis
      - database
      - typesense
    restart: always

  immich-machine-learning:
    container_name: immich_machine_learning
    image: ghcr.io/immich-app/immich-machine-learning:${IMMICH_VERSION:-release}
    volumes:
      - ./model-cache:/cache
    env_file:
      - .env
    restart: always

  immich-web:
    container_name: immich_web
    image: ghcr.io/immich-app/immich-web:${IMMICH_VERSION:-release}
    env_file:
      - .env
    restart: always

  typesense:
    container_name: immich_typesense
    image: typesense/typesense:0.24.1@sha256:9bcff2b829f12074426ca044b56160ca9d777a0c488303469143dd9f8259d4dd
    environment:
      - TYPESENSE_API_KEY=${TYPESENSE_API_KEY}
      - TYPESENSE_DATA_DIR=/data
      # remove this to get debug messages
      - GLOG_minloglevel=1
    volumes:
      - ./tsdata:/data
    restart: always

  redis:
    container_name: immich_redis
    image: redis:6.2-alpine@sha256:70a7a5b641117670beae0d80658430853896b5ef269ccf00d1827427e3263fa3
    restart: always

  database:
    container_name: immich_postgres
    image: postgres:14-alpine@sha256:28407a9961e76f2d285dc6991e8e48893503cc3836a4755bbc2d40bcc272a441
    env_file:
      - .env
    environment:
      POSTGRES_PASSWORD: ${DB_PASSWORD}
      POSTGRES_USER: ${DB_USERNAME}
      POSTGRES_DB: ${DB_DATABASE_NAME}
    volumes:
      - ./pgdata:/var/lib/postgresql/data
    restart: always

  immich-proxy:
    container_name: immich_proxy
    image: ghcr.io/immich-app/immich-proxy:${IMMICH_VERSION:-release}
    environment:
      # Make sure these values get passed through from the env file
      - IMMICH_SERVER_URL
      - IMMICH_WEB_URL
    ports:
      - 2283:8080
    depends_on:
      - immich-server
      - immich-web
    restart: always

Your .env content

# You can find documentation for all the supported env variables at https://immich.app/docs/install/environment-variables

# The location where your uploaded files are stored
UPLOAD_LOCATION=/mnt/user/photos/immich

# The Immich version to use. You can pin this to a specific version like "v1.71.0"
IMMICH_VERSION=v1.79.1

# Connection secrets for postgres and typesense. You should change these to random passwords
TYPESENSE_API_KEY=snip
DB_PASSWORD=snip

# The values below this line do not need to be changed
###################################################################################
DB_HOSTNAME=immich_postgres
DB_USERNAME=postgres
DB_DATABASE_NAME=immich

REDIS_HOSTNAME=immich_redis

Reproduction steps

Explained in description.

Tested using:
- Microsoft Edge on Windows 10
- Microsoft Edge on MacBook Pro (2023)
- Firefox on Windows 10

All scenarios had similar or identical symptoms.

Additional information

No response


@jrasm91 commented on GitHub (Sep 23, 2023):

It definitely does not load everything for the main timeline. Can you record a video of what you are seeing? You can DM it to me on discord if needed.


@rcdailey commented on GitHub (Sep 23, 2023):

I can't take a video in this case; there are just too many personal photos of mine that would be visible, and I'm not comfortable sharing that. I don't know if it's helpful, but I can show you this from the Network tab of dev tools:

![image](https://github.com/immich-app/immich/assets/1768054/9a96b6a7-5afa-4638-b2ca-8fab0fdc9fbc)

It clearly shows which requests are taking the longest.


@alextran1502 commented on GitHub (Sep 23, 2023):

Can you click on that large request and get the request content body? Did you get the same issue on a previous version?

My guess is that you have a very large collection in the month of Dec 2022, is that the correct observation?


@rcdailey commented on GitHub (Sep 23, 2023):

The response was so big that the inspector evicted the data. I used Postman to send the request with an API key, and the total size of the JSON returned is 60 MB!

I can't even pretty-format it in VS Code without causing it to crash.

How can this not be the whole library of 65k images?
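For dumps this large, a short script is friendlier than a pretty-printer. A minimal sketch, assuming the response was saved to a file and is a JSON array of asset objects carrying an ISO-8601 `fileCreatedAt` field (the field name is an assumption about the payload, not something confirmed in this thread):

```python
import json
from collections import Counter

def summarize(path, date_key="fileCreatedAt"):
    """Count assets per month without trying to pretty-print the raw dump."""
    with open(path) as f:
        assets = json.load(f)  # even a ~60 MB array loads fine in memory
    # "2022-12-03T10:00:00.000Z"[:7] -> "2022-12"
    months = Counter(a[date_key][:7] for a in assets if a.get(date_key))
    print(f"total assets: {len(assets)}")
    for month, count in months.most_common(5):
        print(f"{month}: {count}")
    return months
```

This would confirm at a glance whether one month bucket is carrying almost the entire library.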


@rcdailey commented on GitHub (Sep 23, 2023):

> My guess is that you have a very large collection in the month of Dec 2022, is that the correct observation?

Based on the little timeline view on the right side, November 2022 appears to be very large. I don't know why the photos are all grouped under that one month, but I think that's wrong. Maybe the metadata in the image files is wrong.


@alextran1502 commented on GitHub (Sep 23, 2023):

When the page finally loaded, did you see all images grouped under that one month?


@jrasm91 commented on GitHub (Sep 23, 2023):

There are two time-bucket endpoints. One returns asset details by month; that is the very large one you are seeing. The other returns counts by month. Can you try to capture that one? It should be much smaller and give more detail about the distribution of photos.


@rcdailey commented on GitHub (Sep 23, 2023):

> There are two time-bucket endpoints. One returns asset details by month; that is the very large one you are seeing. The other returns counts by month. Can you try to capture that one? It should be much smaller and give more detail about the distribution of photos.

[
    {
        "count": 52851,
        "timeBucket": "2022-12-01T00:00:00.000Z"
    },
    {
        "count": 397,
        "timeBucket": "2018-10-01T00:00:00.000Z"
    },
    {
        "count": 11,
        "timeBucket": "2016-10-01T00:00:00.000Z"
    },
    {
        "count": 62,
        "timeBucket": "2016-09-01T00:00:00.000Z"
    },
    {
        "count": 65,
        "timeBucket": "2016-08-01T00:00:00.000Z"
    },
    {
        "count": 68,
        "timeBucket": "2016-07-01T00:00:00.000Z"
    },
    {
        "count": 23,
        "timeBucket": "2016-06-01T00:00:00.000Z"
    },
    {
        "count": 41,
        "timeBucket": "2016-05-01T00:00:00.000Z"
    },
    {
        "count": 40,
        "timeBucket": "2016-04-01T00:00:00.000Z"
    },
    {
        "count": 33,
        "timeBucket": "2016-03-01T00:00:00.000Z"
    },
    {
        "count": 172,
        "timeBucket": "2016-02-01T00:00:00.000Z"
    },
    {
        "count": 13,
        "timeBucket": "2016-01-01T00:00:00.000Z"
    },
    {
        "count": 569,
        "timeBucket": "2015-12-01T00:00:00.000Z"
    },
    {
        "count": 34,
        "timeBucket": "2015-11-01T00:00:00.000Z"
    },
    {
        "count": 30,
        "timeBucket": "2015-10-01T00:00:00.000Z"
    },
    {
        "count": 81,
        "timeBucket": "2015-09-01T00:00:00.000Z"
    },
    {
        "count": 117,
        "timeBucket": "2015-08-01T00:00:00.000Z"
    },
    {
        "count": 52,
        "timeBucket": "2015-07-01T00:00:00.000Z"
    },
    {
        "count": 12,
        "timeBucket": "2015-06-01T00:00:00.000Z"
    },
    {
        "count": 76,
        "timeBucket": "2015-05-01T00:00:00.000Z"
    },
    {
        "count": 254,
        "timeBucket": "2015-04-01T00:00:00.000Z"
    },
    {
        "count": 18,
        "timeBucket": "2015-03-01T00:00:00.000Z"
    },
    {
        "count": 26,
        "timeBucket": "2015-02-01T00:00:00.000Z"
    },
    {
        "count": 12,
        "timeBucket": "2014-12-01T00:00:00.000Z"
    },
    {
        "count": 6,
        "timeBucket": "2014-11-01T00:00:00.000Z"
    },
    {
        "count": 10,
        "timeBucket": "2014-10-01T00:00:00.000Z"
    },
    {
        "count": 4,
        "timeBucket": "2014-09-01T00:00:00.000Z"
    },
    {
        "count": 8,
        "timeBucket": "2014-07-01T00:00:00.000Z"
    },
    {
        "count": 4,
        "timeBucket": "2014-06-01T00:00:00.000Z"
    },
    {
        "count": 2,
        "timeBucket": "2014-05-01T00:00:00.000Z"
    },
    {
        "count": 68,
        "timeBucket": "2014-04-01T00:00:00.000Z"
    },
    {
        "count": 2,
        "timeBucket": "2014-03-01T00:00:00.000Z"
    },
    {
        "count": 58,
        "timeBucket": "2014-02-01T00:00:00.000Z"
    },
    {
        "count": 94,
        "timeBucket": "2014-01-01T00:00:00.000Z"
    },
    {
        "count": 990,
        "timeBucket": "2013-12-01T00:00:00.000Z"
    },
    {
        "count": 159,
        "timeBucket": "2013-11-01T00:00:00.000Z"
    },
    {
        "count": 183,
        "timeBucket": "2013-10-01T00:00:00.000Z"
    },
    {
        "count": 191,
        "timeBucket": "2013-09-01T00:00:00.000Z"
    },
    {
        "count": 283,
        "timeBucket": "2013-08-01T00:00:00.000Z"
    },
    {
        "count": 93,
        "timeBucket": "2013-07-01T00:00:00.000Z"
    },
    {
        "count": 15,
        "timeBucket": "2013-06-01T00:00:00.000Z"
    },
    {
        "count": 14,
        "timeBucket": "2013-05-01T00:00:00.000Z"
    },
    {
        "count": 164,
        "timeBucket": "2013-04-01T00:00:00.000Z"
    },
    {
        "count": 31,
        "timeBucket": "2013-03-01T00:00:00.000Z"
    },
    {
        "count": 80,
        "timeBucket": "2013-02-01T00:00:00.000Z"
    },
    {
        "count": 300,
        "timeBucket": "2013-01-01T00:00:00.000Z"
    },
    {
        "count": 410,
        "timeBucket": "2012-12-01T00:00:00.000Z"
    },
    {
        "count": 144,
        "timeBucket": "2012-11-01T00:00:00.000Z"
    },
    {
        "count": 48,
        "timeBucket": "2012-10-01T00:00:00.000Z"
    },
    {
        "count": 26,
        "timeBucket": "2012-09-01T00:00:00.000Z"
    },
    {
        "count": 48,
        "timeBucket": "2012-05-01T00:00:00.000Z"
    },
    {
        "count": 364,
        "timeBucket": "2012-04-01T00:00:00.000Z"
    },
    {
        "count": 736,
        "timeBucket": "2012-03-01T00:00:00.000Z"
    },
    {
        "count": 168,
        "timeBucket": "2012-02-01T00:00:00.000Z"
    },
    {
        "count": 6,
        "timeBucket": "2012-01-01T00:00:00.000Z"
    },
    {
        "count": 288,
        "timeBucket": "2011-12-01T00:00:00.000Z"
    },
    {
        "count": 116,
        "timeBucket": "2011-11-01T00:00:00.000Z"
    },
    {
        "count": 532,
        "timeBucket": "2011-10-01T00:00:00.000Z"
    },
    {
        "count": 12,
        "timeBucket": "2011-09-01T00:00:00.000Z"
    },
    {
        "count": 64,
        "timeBucket": "2011-08-01T00:00:00.000Z"
    },
    {
        "count": 148,
        "timeBucket": "2011-07-01T00:00:00.000Z"
    },
    {
        "count": 20,
        "timeBucket": "2011-06-01T00:00:00.000Z"
    },
    {
        "count": 12,
        "timeBucket": "2011-05-01T00:00:00.000Z"
    },
    {
        "count": 411,
        "timeBucket": "2011-04-01T00:00:00.000Z"
    },
    {
        "count": 230,
        "timeBucket": "2011-03-01T00:00:00.000Z"
    },
    {
        "count": 238,
        "timeBucket": "2011-02-01T00:00:00.000Z"
    },
    {
        "count": 41,
        "timeBucket": "2011-01-01T00:00:00.000Z"
    },
    {
        "count": 370,
        "timeBucket": "2010-12-01T00:00:00.000Z"
    },
    {
        "count": 142,
        "timeBucket": "2010-11-01T00:00:00.000Z"
    },
    {
        "count": 46,
        "timeBucket": "2010-10-01T00:00:00.000Z"
    },
    {
        "count": 12,
        "timeBucket": "2010-09-01T00:00:00.000Z"
    },
    {
        "count": 2,
        "timeBucket": "2010-08-01T00:00:00.000Z"
    },
    {
        "count": 6,
        "timeBucket": "2010-06-01T00:00:00.000Z"
    },
    {
        "count": 586,
        "timeBucket": "2010-05-01T00:00:00.000Z"
    },
    {
        "count": 4,
        "timeBucket": "2010-04-01T00:00:00.000Z"
    },
    {
        "count": 68,
        "timeBucket": "2010-03-01T00:00:00.000Z"
    },
    {
        "count": 22,
        "timeBucket": "2010-02-01T00:00:00.000Z"
    },
    {
        "count": 266,
        "timeBucket": "2010-01-01T00:00:00.000Z"
    },
    {
        "count": 114,
        "timeBucket": "2009-12-01T00:00:00.000Z"
    },
    {
        "count": 6,
        "timeBucket": "2009-10-01T00:00:00.000Z"
    },
    {
        "count": 130,
        "timeBucket": "2009-09-01T00:00:00.000Z"
    },
    {
        "count": 84,
        "timeBucket": "2009-08-01T00:00:00.000Z"
    },
    {
        "count": 84,
        "timeBucket": "2009-07-01T00:00:00.000Z"
    },
    {
        "count": 134,
        "timeBucket": "2009-05-01T00:00:00.000Z"
    },
    {
        "count": 144,
        "timeBucket": "2009-04-01T00:00:00.000Z"
    },
    {
        "count": 117,
        "timeBucket": "2009-03-01T00:00:00.000Z"
    },
    {
        "count": 320,
        "timeBucket": "2009-02-01T00:00:00.000Z"
    },
    {
        "count": 303,
        "timeBucket": "2009-01-01T00:00:00.000Z"
    },
    {
        "count": 164,
        "timeBucket": "2008-12-01T00:00:00.000Z"
    },
    {
        "count": 166,
        "timeBucket": "2008-11-01T00:00:00.000Z"
    },
    {
        "count": 16,
        "timeBucket": "2008-10-01T00:00:00.000Z"
    },
    {
        "count": 32,
        "timeBucket": "2008-09-01T00:00:00.000Z"
    },
    {
        "count": 2,
        "timeBucket": "2008-07-01T00:00:00.000Z"
    },
    {
        "count": 216,
        "timeBucket": "2008-06-01T00:00:00.000Z"
    },
    {
        "count": 8,
        "timeBucket": "2008-05-01T00:00:00.000Z"
    },
    {
        "count": 20,
        "timeBucket": "2008-04-01T00:00:00.000Z"
    },
    {
        "count": 8,
        "timeBucket": "2008-02-01T00:00:00.000Z"
    },
    {
        "count": 14,
        "timeBucket": "2008-01-01T00:00:00.000Z"
    },
    {
        "count": 2,
        "timeBucket": "2007-12-01T00:00:00.000Z"
    },
    {
        "count": 6,
        "timeBucket": "2007-11-01T00:00:00.000Z"
    },
    {
        "count": 24,
        "timeBucket": "2007-09-01T00:00:00.000Z"
    },
    {
        "count": 16,
        "timeBucket": "2007-07-01T00:00:00.000Z"
    },
    {
        "count": 12,
        "timeBucket": "2007-06-01T00:00:00.000Z"
    },
    {
        "count": 18,
        "timeBucket": "2007-05-01T00:00:00.000Z"
    },
    {
        "count": 26,
        "timeBucket": "2007-04-01T00:00:00.000Z"
    },
    {
        "count": 180,
        "timeBucket": "2007-01-01T00:00:00.000Z"
    },
    {
        "count": 2,
        "timeBucket": "2006-12-01T00:00:00.000Z"
    },
    {
        "count": 2,
        "timeBucket": "2006-09-01T00:00:00.000Z"
    },
    {
        "count": 2,
        "timeBucket": "2006-01-01T00:00:00.000Z"
    },
    {
        "count": 1,
        "timeBucket": "1969-12-01T00:00:00.000Z"
    }
]
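Taking the response above at face value, a quick sanity check shows why the timeline stalls (the bucket count and library size are copied from this thread; everything else is arithmetic):

```python
# Figures copied from the bug report above.
total_assets = 65000  # approximate library size Immich reports
dec_2022 = 52851      # count for the "2022-12-01" time bucket

share = dec_2022 / total_assets
print(f"{share:.0%} of the library sits in a single month bucket")
# Roughly 81% - so opening that one "month" forces the client to fetch
# and render ~53k assets in a single request, hence the 60 MB response.
```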

@alextran1502 commented on GitHub (Sep 24, 2023):

I suspect this is related to #4191, which might fix this in the next release. When the next release is out, can you create a new instance and import the photos again?


@rcdailey commented on GitHub (Sep 24, 2023):

I'd be more than happy to try this again. I appreciate everyone's attention to this!


@JosiahBull commented on GitHub (Sep 26, 2023):

From my current understanding, #4191 won't fix this issue? What if a user legitimately has several thousand photos on a single day, or even from a single hour?

A common case I can think of is a user who copies a Google Photos library over without applying EXIF metadata - all items will land on the same date. There needs to be some item-count-based batching instead of date-based batching to resolve this permanently.
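A hypothetical sketch of what such item-count batching could look like (this is not Immich code; the names and the page size are made up for illustration):

```python
from typing import Iterable, Iterator

def paginate_bucket(asset_ids: Iterable[str], page_size: int = 500) -> Iterator[list]:
    """Yield fixed-size pages from one (possibly huge) month bucket."""
    page = []
    for asset_id in asset_ids:
        page.append(asset_id)
        if len(page) == page_size:
            yield page
            page = []
    if page:  # final partial page
        yield page

# With this, a 52,851-item month becomes 106 requests of at most 500 assets
# each, instead of one 60 MB response.
```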


@jrasm91 commented on GitHub (Sep 26, 2023):

Practically, it hasn't happened yet that somebody has so many photos that the segment-by-month strategy fails, aside from this case. It doesn't sound like the expected result is 50k photos in one month either. For now, I am fine leaving the implementation as is and seeing if there is a legitimate use case for 5k+, 10k+, etc. photos in a single month by a single user.


@jrasm91 commented on GitHub (Sep 26, 2023):

As far as handling the specific case where an import or similar causes this, I think it would be acceptable to detect it, truncate the results to some reasonable amount, and show the user a warning. Ideally they can correct the EXIF data and then continue to use the system as normal.

We've not designed it to work with this volume of photos in one bucket, and it would require quite a bit of work to change that. Unless there are some real-life use cases for it, I think we can leave it as is for now.


@curtwagner1984 commented on GitHub (Sep 26, 2023):

> Practically, it hasn't happened yet where somebody has so many photos that the segment by month strategy has failed, minus this case. It doesn't sound like the expected results are 50k photos either. For now, I am fine leaving the implementation as is and seeing if there is a legitimate use case for 5k+, 10k+, etc. photos in a single month by a single user.

If you're a photographer and you take hundreds/thousands of pictures a day, it's more than feasible that you'll have that number of images per month.

Regardless of the use case, from my understanding, if the user has 5k images dated to the same month, the web app will try to load **all** of those images? If that's true, then it seems like a design issue, regardless of the use case.

Perhaps images should be loaded in fixed-size batches regardless of the month, with more images loaded as you scroll down (from the same month, if that particular month has a lot of images). This way, the issue would never occur, regardless of the number of images per month.

It's unreasonable for the app to freeze when it encounters a month with a lot of photos. This makes the web app unusable for users who hit this issue.

I'm also unsure if it's feasible to fix the EXIF data on 5k images; the dates of the photos might be unavailable or never recorded.


@jrasm91 commented on GitHub (Sep 26, 2023):

I think it's great that Immich even exists at all. I also think that it is OK that it doesn't serve every group of users out of the gate. If you want to call that a "design issue", that's up to you.

The groups of users it does serve, it serves really well. Unfortunately, the 5k+/month professional photographer isn't currently in that category. Fortunately, I have yet to hear of one that is running into this limitation in the first place.

Assets are grouped by month in part so that the virtual scrollbar has some semi-accurate spacing and the justified layout can be calculated for the time period. Neither of those features (justified layout or virtual scrollbar) can exist very easily when you shift to a "per amount" strategy.

There is also the possibility to explore loading statistics _per day_ and load groups of photos _per day_. I don't know if anybody wants to work on this though. Priorities on an open source project are a bit different since people work on what is interesting to them and/or what issues current users are facing.


@rcdailey commented on GitHub (Sep 26, 2023):

> Practically, it hasn't happened yet where somebody has so many photos that the segment by month strategy has failed, minus this case. It doesn't sound like the expected results are 50k photos either. For now, I am fine leaving the implementation as is and seeing if there is a legitimate use case for 5k+, 10k+, etc. photos in a single month by a single user.

I want to clarify that the results are intended, or at least, not something I'm willing or able to fix. Most of these photos were scanned in, IIRC, so the date information probably isn't right. In either case, fixing 65k images is out of the question for me.

> it seems it might be a design issue, regardless of the use case [...] It's unreasonable for the app to freeze if it encounters a month with a lot of photos. This makes the web app unusable for users who encountered this issue. I'm also unsure if it's feasible to fix the EXIF data on 5k images; the dates of the photos might be unavailable or never recorded.

I agree with all of these points. It's unfortunate that I'm apparently a corner case here and because of that there's no apparent motivation to fix this. I was excited to use Immich but I'll apparently have to continue using Google Photos or find some other alternative that properly handles my situation.


@alextran1502 commented on GitHub (Sep 26, 2023):

I think we have the option to group and fetch date buckets by day instead of by month, correct?


@jrasm91 commented on GitHub (Sep 26, 2023):

@rcdailey - we can certainly look into this if it is a blocker for you. I assumed, perhaps incorrectly, that you would look into using exiftool or another solution to correct the dates for those images. Exiftool can bulk update metadata following a pattern.


@curtwagner1984 commented on GitHub (Sep 26, 2023):

> I think it's great that Immich even exists at all. I also think that it is OK, that it doesn't service every group of users out the gate. If you want to call that a "design issue", that's up to you.

First and foremost, I'm genuinely appreciative of Immich's existence and the contributions everyone has made to it. I'm in complete agreement that it's okay for a software to not cater to every user group immediately. My mention of a "design issue" wasn't aimed at critiquing this aspect.

The core of my concern is the application's handling of unexpected volumes of input. If Immich is designed to handle, let's say, X images at once, encountering a scenario with more than X images should be managed gracefully. One possible solution could be limiting the image requests to X and providing a user with a message for the surplus (e.g., 5 images not loaded). This threshold, X, could perhaps even be user-configurable based on individual hardware capacities.

Addressing your point about assets being grouped by month for the virtual scrollbar and justified layout: might it be possible to simply deactivate the virtual scrollbar in situations where the image volume of a month necessitates batching? In my perspective, having Immich operate without the virtual scrollbar is far more preferable than having it hang to the point of tab closure.

Regarding loading images by day: I'm doubtful this will resolve the issue since, as you pointed out, a large batch of images imported/uploaded without EXIF data would be treated as coming from the same day.

To wrap up, I'd like to emphasize that my intention is never to belittle or discredit the effort that's gone into Immich. When highlighting what I see as a design challenge, it's purely from a constructive feedback standpoint and in no way a critique of the project's overall value or the dedication of its contributors.


@JosiahBull commented on GitHub (Sep 26, 2023):

Similar to other comments in this thread I'm extremely excited about Immich existing, and appreciative of the work that's gone into it. It's refreshing to have such a professional FOSS solution to host and share my family photos.

That being said, I think it's possible for non-professional photographers to exceed the limits imposed by the current design, and I think Immich should be able to gracefully handle large photo uploads across a single month.

I took a team trip a couple of years ago and we had a shared album for photos, which accumulated ~1500 photos in only 3 days. I think it's extremely feasible for 'vanilla' users to have a month that crashes the app when they scroll past it if they went traveling with a group at some point.

Notably I had a botched Google Takeout export which failed to provide json data for ~15k images. I'm aware this is an upstream issue with Google, but for me all these images are stacked on a single day, and I don't have recourse to resolve it beyond manually tagging them... :/


@jrasm91 commented on GitHub (Sep 26, 2023):

@curtwagner1984 @JosiahBull - I think those are all valid points and the system should definitely handle those situations (better at least). We can look at improving this in the future for sure. Some ideas that have been brought up include:

  • Truncating assets and showing an error message rather than crashing
  • Seeing if loading assets by day extends the range/scale in a meaningful way
  • Disabling the virtual scrollbar and using an alternative loading strategy for ranges with big groups

Thank you for your feedback!


@lachlan2k commented on GitHub (Sep 26, 2023):

Would it perhaps be viable to chunk the API requests, using some sort of offset-based pagination, for instance with optional limit and offset parameters? Such that the initial request would contain &limit=5000&offset=0 followed by &limit=5000&offset=5000, etc. until all the data has loaded.
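That loop could be sketched client-side roughly as follows. This is a hypothetical illustration, not Immich's actual API: the `FetchPage` callback, `fetchAllPaged`, and the default limit are all made up for the sketch.

```typescript
// Hypothetical client-side pager (names are illustrative, not Immich's
// actual API): fetch a large bucket in fixed-size pages until the server
// returns fewer items than the requested limit.
type Asset = { id: string };
type FetchPage = (limit: number, offset: number) => Promise<Asset[]>;

async function fetchAllPaged(fetchPage: FetchPage, limit = 5000): Promise<Asset[]> {
  const all: Asset[] = [];
  for (let offset = 0; ; offset += limit) {
    const page = await fetchPage(limit, offset);
    all.push(...page);
    if (page.length < limit) break; // last (possibly partial) page
  }
  return all;
}
```

Rendering each page as it arrives, rather than awaiting the full set, would be what actually keeps the tab responsive.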


@dodg3r commented on GitHub (Oct 25, 2023):

Hi

This error is consistent with the problem I'm having. The difference is that my main timeline view freezes completely and I can't move forward. If I quickly click on "people", "albums", or "explore" after logging in, I can scroll further, but as soon as I go back to the timeline, it freezes again. I have 140,000 pictures, and 11,000 of them are from 2023-10. Could that be the problem?

Can I edit the files directly in the library folder with EXIF?


@curtwagner1984 commented on GitHub (Oct 25, 2023):

Yeah, this is definitely the same issue. For me the browser tab completely freezes and has to be killed. I have to navigate to the explore or people tab manually.

@jrasm91 commented on GitHub (Oct 26, 2023):

I actually did some more research into this, and I don't think the problem is loading that many assets at once or sending that much data across the wire; it comes down to how granular the groups are that we use to render the assets in the web. We use an intersection observer, but it looks like it works at the per-day level, so it _will_ try to render all the assets for a given day as you are scrolling.
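One way to bound that render cost is sketched below, as a plain helper rather than Immich's actual code (the function name and chunk size are hypothetical): split an oversized day group into fixed-size sub-groups so each can be given its own intersection observer and mounted independently.

```typescript
// Hypothetical helper (not Immich's actual code): split one day's asset
// list into sub-groups of at most `chunkSize`, so each sub-group can get
// its own intersection observer and only visible chunks are rendered.
function chunkDayGroup<T>(assets: T[], chunkSize = 250): T[][] {
  const chunks: T[][] = [];
  for (let i = 0; i < assets.length; i += chunkSize) {
    chunks.push(assets.slice(i, i + chunkSize));
  }
  return chunks;
}
```

A day with 11,000 photos would then mount at most one `chunkSize` sub-group per observer callback instead of all 11,000 at once.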


@curtwagner1984 commented on GitHub (Oct 26, 2023):

It would have been really, really, really great if the user could override this to a specific count - like, render at most 20 images as I'm scrolling.

@alextran1502 commented on GitHub (Oct 26, 2023):

@curtwagner1984 It would be an optimization of this mechanism. We will find some time to work on this.


@curtwagner1984 commented on GitHub (Oct 27, 2023):

Thank you for considering my request for this optimization. 🙏


@peca89 commented on GitHub (Nov 3, 2023):

I'm also very much excited that Immich is being actively developed. Since the local library feature was introduced, I've been evaluating Immich's performance against Plex so it can become my main picture-archive viewer. The existence of this bug/feature bothers me enough that I need to point out why it would be a dealbreaker for me unless it's fixed.

My personal library contains a folder with a bunch of scanned films where all files share the same date. Because for most of the films even an approximate capture date is unknown, I decided to batch-set the file and EXIF dates for all these files to a single arbitrary date in 1990. Anything else, like randomizing the dates, would sort the pictures completely incorrectly. It's about 10,000 photos (more than 200 films were scanned) that would show as if they were from the same date.


@jrasm91 commented on GitHub (Nov 3, 2023):

> It would have been really, really, really, great if the user could override this to a specific count. Like, render at most 20 images as I'm scrolling.

That doesn't really make sense IMO. Having specific settings like "how many images to load at once" is just a symptom of a problem, not a valid solution, especially if the implementation is non-trivial. Just fix the original issue at that point.


@curtwagner1984 commented on GitHub (Dec 20, 2023):

> > It would have been really, really, really, great if the user could override this to a specific count. Like, render at most 20 images as I'm scrolling.
>
> That doesn't really make sense IMO. Having specific settings like "how many images to load at once" is just a symptom of a problem, not a valid solution, especially if the implementation is non-trivial. Just fix the original issue at that point.

Yeah, you're right. Resolving the underlying issue would definitely be preferable. However, if it's too time-consuming, then at the very least a temporary patch that allows setting the max images to fetch and render would let the app run instead of crashing the browser tab.


@mertalev commented on GitHub (Feb 3, 2024):

Building on some earlier suggestions, we could truncate the number of assets per bucket to 1000 and allow buckets to be interacted with (e.g. by making the date clickable) to change to a paginated gallery viewer without the timeline on the side.

Possible UX refinements include having the interaction at the end of a bucket instead, only displaying this button if the assets were truncated, and possibly skipping to the second page when fetching in the gallery viewer (the first page already having been viewed).


@Handrail9 commented on GitHub (Feb 28, 2024):

> From my current understanding #4191 won't fix this issue? What if a user legitimately has several thousand photos on a single day, or even from a single hour?
>
> A common instance I can think of is a user who copies a Gphotos library over without applying exif - all items will be on the same date. There needs to be some item-count based batching instead of date-based batching to resolve this permanently.

> Practically, it hasn't happened yet where somebody has so many photos that the segment by month strategy has failed, minus this case. It doesn't sound like the expected results are 50k photos either. For now, I am fine leaving the implementation as is and seeing if there is a legitimate use case for 5k+, 10k+, etc. photos in a single month by a single user.

Hello, I am popping into this issue to say this is the exact issue I'm facing at the moment. A lot of my photos carried over from a Google Photos archive from a deleted Google account, from roughly 2021, all show as being from June '23 - roughly 43k images. I can get as far as selecting all of them after about 10 minutes of waiting, but I can't move them to an album, archive them, or delete them. On the Android app only 2,009 of them show up; I tried archiving those to get more to appear, and none did, so roughly 41k just do not show up on the Android app. I'm hoping this is an issue that gets fixed rather than left for people in this scenario to "find a better solution", because Immich is a beautiful and amazing-looking app from a UI/UX perspective, and I don't want to go back to using lackluster solutions.


@alextran1502 commented on GitHub (Feb 28, 2024):

@Wave6677 We have yet to solve this issue since it currently affects a small portion of the user base. However, this is something that we are planning to fix eventually.

Right now, we are fetching the assets in buckets grouped by month. We can modify this logic to group by day when a month contains too many assets.

We have a solution; we just haven't had the resources to implement it yet.
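As a rough illustration of that grouping idea: pick the bucket granularity from the bucket's asset count. The threshold and names are assumptions, not Immich's real values:

```typescript
// Illustrative only: month buckets normally, day buckets when a month's
// asset count would overwhelm the timeline renderer.
type Granularity = "month" | "day";

const MAX_ASSETS_PER_MONTH_BUCKET = 1000; // assumed threshold

function chooseGranularity(assetCount: number): Granularity {
  return assetCount > MAX_ASSETS_PER_MONTH_BUCKET ? "day" : "month";
}
```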


@Handrail9 commented on GitHub (Feb 28, 2024):

@alextran1502

> We can modify this to give it a better logic to group it by date when the monthly asset is too many.

This might sound like a silly question, as I am in no way an experienced developer, but would that work for a large number of images coming from a zip/tar archive without EXIF data? For instance, I have 43k photos all dated "June 1st 2023".


@alextran1502 commented on GitHub (Feb 28, 2024):

@Wave6677 It means the photos don't have proper EXIF information. It could have been removed when you ingested/extracted them with different software.


@mertalev commented on GitHub (Feb 28, 2024):

I agree we should handle assets in the same month or even day gracefully. Missing EXIF data should ideally just mean the date comes up wrong on the timeline. It's unexpected for it to make the web app crawl.


@freekk commented on GitHub (Apr 2, 2024):

Hello, just a quick note to say that this is indeed a real issue for many people around me.
It's as simple as gathering photos from friends on a special event like a wedding. I easily end up with thousands of photos on the same hour!

It happens all the time in my timeline, rendering immich totally unusable for now.
I really hope this will be fixed one day as I really like your app and you did a wonderful job on all other aspects!


@rezzorix commented on GitHub (Apr 7, 2024):

Just got v1.101.0 up and running yesterday.
I added a library of ~50,000 photos, which Immich seems unable to handle in its GUI.
The web interface especially has major performance problems (tested in different browsers: FF, Chromium-based, Safari).

I have set up Immich on a VM with 8 cores and 16 GB RAM.
When opening the web interface I am simultaneously checking usage of the VM and my local machine (CPU, RAM, Disk, Network).

All looks OK, resources are sufficiently available, nothing seems to hit the limit.
However, Immich's performance is very poor, to the point of the browser being unusable despite having enough resources.

How could this be resolved?


@mertalev commented on GitHub (Apr 7, 2024):

Do all these images appear as being on the same month? If so, the current solution would be to fix their metadata with exiftool and run metadata extraction on all assets.


@curtwagner1984 commented on GitHub (Apr 21, 2024):

> Do all these images appear as being on the same month? If so, the current solution would be to fix their metadata with exiftool and run metadata extraction on all assets.

As previously discussed, this isn't always feasible in various scenarios.

For instance: there might be nothing wrong with the metadata, and the user may simply be taking/creating a lot of images in a given month. The metadata might also be unavailable altogether.

This issue renders Immich completely unusable for people who encounter it. The only workaround I see on the user side is to artificially split the images from the same month into groups across different months by manufacturing fictitious metadata.

Can't this issue be resolved with virtualization? What I mean is: only fetch the count of images for each month, calculate the sidebar (where the months appear) based on the count, and then only load a subset of that count in the viewport.

At the very bare minimum, just set a maximum number of images that can be loaded in a single month, so that instead of crashing the UI when you have too many images in a certain month, it shows the first X of that amount. At the very least, the web UI would still function with this option.
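The count-based virtualization idea can be sketched like this. Row dimensions and names are invented for illustration; this is not the real timeline implementation. It shows how the scrollbar can be sized without fetching a single asset:

```typescript
// Sketch: fetch only per-month counts, derive the total scroll height from
// them, and compute which slice of assets is in the viewport.
interface MonthCount { month: string; count: number; }

const ROW_HEIGHT = 240; // px per thumbnail row (assumed)
const PER_ROW = 5;      // thumbnails per row (assumed)

// Total scrollable height derived from counts alone, no assets loaded.
function timelineHeight(counts: MonthCount[]): number {
  const rows = counts.reduce((r, m) => r + Math.ceil(m.count / PER_ROW), 0);
  return rows * ROW_HEIGHT;
}

// Which asset indices (flattened across months) are currently visible.
function visibleRange(scrollTop: number, viewportHeight: number): [number, number] {
  const firstRow = Math.floor(scrollTop / ROW_HEIGHT);
  const lastRow = Math.ceil((scrollTop + viewportHeight) / ROW_HEIGHT);
  return [firstRow * PER_ROW, lastRow * PER_ROW];
}
```

Only the assets inside `visibleRange` (plus a small overscan margin) would ever be requested, regardless of how many share one date.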


@mertalev commented on GitHub (Apr 21, 2024):

Yes, that suggestion is just for the current state of things. We plan to make improvements in this area. It can dynamically split buckets into days if there are too many assets in a month and truncate the number shown for a day if there are still too many to display.


@curtwagner1984 commented on GitHub (Apr 21, 2024):

> Yes, that suggestion is just for the current state of things. We plan to make improvements in this area. It can dynamically split buckets into days if there are too many assets in a month and truncate the number shown for a day if there are still too many to display.

I'm really happy to hear that, because splitting into days won't solve the issue for images that simply don't have metadata; they would be counted as being from the same month, day, hour, and second.

Does this problem also exist with search results? As in, if your query returns too many images, does the UI crash?


@mertalev commented on GitHub (Apr 21, 2024):

> Does this problem exist also with search results?

No, search doesn't use the timeline so it's unaffected. It's paginated, so it'll load 100, then scrolling further will load another 100, and so on.
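That kind of offset pagination is roughly the following (illustrative names, not the actual search endpoint):

```typescript
// Each call returns the next fixed-size page; scrolling further requests
// the next page number.
const PAGE_SIZE = 100;

function getPage<T>(all: T[], page: number, size = PAGE_SIZE): T[] {
  return all.slice(page * size, (page + 1) * size);
}
```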


@curtwagner1984 commented on GitHub (May 3, 2024):

> can we compile to wasm? something like py2wasm or have components that are compiled to wasm at build time. Im getting "only" 65915 files able to be uploaded at a time when I've selected about 1.3m. We could chunk this send a certain amount to a background process/queue that is isolated from the ui so there is not freezing of the UI thread?

I fail to see what this has to do with the issue currently discussed.


@sutusa commented on GitHub (May 8, 2024):

I don't mean to dog-pile on this issue (sorry), but I'm seeing considerable slowness as well with my library, more so with the Albums page than the Timeline... but I feel my situation is mostly applicable here.

  • Running v.1.103.1 via docker-compose
  • Assets are referenced through a read-only External Library
  • 28,665 photos
  • 1,528 albums
  • Most albums are shared with 2 viewer-only users (aka: family members so they can download the photos they like).

When I take pictures, the albums I create are per that day. So, in a given year I could have up to 365 albums. I seem to average about ~200 albums each year. I don't take pictures every day, with maybe 15-30 photos with my Fuji camera as an average for the day. Maybe a lot more for special events like a kid's birthday or something. So, I don't think my use-case is out of the ordinary or unreasonable.

Not sure if my library counts as a "large library", but from my perspective, the purpose of any image database is to potentially store a lifetime of photos for easy reference. I would argue that large libraries shouldn't take a back seat and/or be 2nd-class citizens.

With that said, when navigating to the Albums page, there is a long 3-5 second pause/hang as the page downloads 4 MB JSON responses from the API (GET requests against /api/album):

[Screenshot 001: Firefox network calls]

[Screenshot 002: Firefox response headers]

Not surprisingly, I have seen the same wait times when doing manual curl requests against the same /api/album path when writing some quality-of-life scripts for myself.

Looking at the request's response ... it seems, too verbose?

[Screenshot 003: curl response, piped through jq for prettifying]

That is all the data that is returned per album (and I have ~1500 albums which will grow over time). Seems like a lot of data. Especially when the page's goal seems to be only leveraging each album's Title, Thumbnail, Date, Asset count, and User Permissions.

Not sure what a good approach might be to resolve. My first thought goes towards something like using GraphQL to help reduce unwanted information in the response, as only what is requested is what is returned. That might be a heavy lift though. Maybe custom/partial REST responses? Pagination? Dunno.

I have experience in architecture, devops and operational things... and I code for fun occasionally... but I do NOT view myself as a professional web developer or anything. So, feel free to correct me if I'm wrong here on any of this! I just wanted to give my perspective based on what I have seen.

Also, thank you for your time and effort on such a wonderful project! I dropped Flickr like a hot potato when I found your project earlier this year! I love it!
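One way to picture the "partial response" suggestion above: map the full album record down to only what the list view renders. All field names here are guesses at the payload, not Immich's actual schema:

```typescript
// Hypothetical full album record; `assets` and `description` stand in for
// the heavy fields the Albums list view never needs.
interface AlbumFull {
  id: string;
  albumName: string;
  albumThumbnailAssetId: string | null;
  createdAt: string;
  assetCount: number;
  shared: boolean;
  assets: unknown[];   // heavy field the list view never needs
  description: string; // likewise
}

// Slim DTO carrying only what the page renders: title, thumbnail, date,
// asset count, sharing state.
interface AlbumSummary {
  id: string;
  albumName: string;
  albumThumbnailAssetId: string | null;
  createdAt: string;
  assetCount: number;
  shared: boolean;
}

function toSummary(a: AlbumFull): AlbumSummary {
  const { id, albumName, albumThumbnailAssetId, createdAt, assetCount, shared } = a;
  return { id, albumName, albumThumbnailAssetId, createdAt, assetCount, shared };
}
```

Serving the summary shape from a dedicated list endpoint (or a query parameter) would shrink those multi-megabyte responses without needing GraphQL.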


@alextran1502 commented on GitHub (May 8, 2024):

@sutusa No problem. The Albums page hasn't been tuned for a use case with that many albums yet, so it doesn't have lazy loading or chunked fetching. That explains the issue you are seeing.


@curtwagner1984 commented on GitHub (Jun 24, 2024):

Hi, I just wanted to inquire about whether or not there is any progress on this issue?
I looked at the release notes of the last few releases and didn't find it mentioned. Also, the issue remains open. So it doesn't bode well.

I'm sure this isn't that high up on the to-do list, but I just want to reiterate that this is a significant issue where the application just freezes and crashes if it receives unexpected input. Fixing this to the point where the application doesn't crash would also be appreciated, if implementing the desired behavior is too time-consuming.

Maybe this doesn't affect a lot of people, but for those affected the timeline is unusable. (I understand that the Albums and People tabs are unaffected by this, but maybe that's wrong, as the People tab also has a timeline; if that's the case, it's unusable too if you have too many images of a specific person from a specific time.)


@alextran1502 commented on GitHub (Jun 24, 2024):

@curtwagner1984 I started some work on this #9935, but that approach wasn't the best, so I will implement it a different way. However, this issue will be resolved before the stable release


@curtwagner1984 commented on GitHub (Jun 24, 2024):

> @curtwagner1984 I started some work on this #9935, but that approach wasn't the best, so I will implement it a different way. However, this issue will be resolved before the stable release

Hey, thank you so much for replying so quickly and for all the hard work you're putting into this!

I saw the PR you linked, and I agree with you that this might not be the best approach: loading by intervals of days instead of months won't solve the underlying issue, because a large batch of images can still arrive within a smaller time frame. I believe the core issue is that the application can't handle a large number of assets at one time.

In my opinion, decoupling the amount of images loaded from the time span of when the image was taken is crucial for the resolution of this issue.

Not sure if this is technically feasible, but one idea is to implement virtualization for the timeline, arranging images in buckets based on image count rather than time periods. Each bucket would have start and end times to ensure correct placement in the timeline.

I have limited knowledge about how the current implementation works, but from what I gather, all the images in the library are divided into monthly buckets based on the time each image was taken, and then the timeline loads those buckets from most recent to oldest. Arranging by month helps the side scroller stay in the correct position.

I think this could still be the case if images were loaded in buckets of at most X images instead of by time period, still arranged from latest to oldest; when you load a month, you can keep track of which bucket you're currently on for that month.

By loading a maximum of one bucket of X images at a time, even if there are 100*X images from the same date, only one bucket would be loaded at once. The start and end times would ensure the buckets are correctly ordered in the timeline.

I understand this might be something you've already considered and found to be technically challenging. I'm just sharing my thoughts in the hope they might be useful.

Thank you again for your dedication to resolving this issue. Your work is highly appreciated!

EDIT: Loading buckets of X images instead of dates would also improve overall performance, because you will still have months with more images than others; even if that amount doesn't crash the UI, it will slow it down and make the UX worse. If you load 100 images at a time, the scrolling experience would be smooth as butter, no matter the month.
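A rough sketch of the count-based bucketing proposed in this comment (hypothetical shapes, not the actual Immich model): chunk a date-sorted asset list into fixed-size buckets, each carrying the timestamps of its first and last asset so the timeline can still order and label them.

```typescript
interface Asset { id: string; takenAt: string; }
interface CountBucket { assets: Asset[]; start: string; end: string; }

// `sorted` is assumed to already be ordered by takenAt (newest or oldest
// first; the bucket boundaries work either way).
function buildBuckets(sorted: Asset[], size: number): CountBucket[] {
  const buckets: CountBucket[] = [];
  for (let i = 0; i < sorted.length; i += size) {
    const chunk = sorted.slice(i, i + size);
    buckets.push({
      assets: chunk,
      start: chunk[0].takenAt,
      end: chunk[chunk.length - 1].takenAt,
    });
  }
  return buckets;
}
```

Even if every asset shares one timestamp, each bucket stays at `size` assets, so no single load can balloon.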


@alextran1502 commented on GitHub (Jun 24, 2024):

@curtwagner1984 No worries. It is good to check in often to keep us informed as well. The timeline is already lazily loaded per bucket; my plan is to return the asset count of each bucket and, depending on that count, use the corresponding grouping strategy, i.e. day vs. month.


@curtwagner1984 commented on GitHub (Jun 25, 2024):

But aren't the current buckets based on months? As in, it loads one month at a time?

I'm unsure I understand, though. Let's say the app crashes if you load more than 1000 assets at once, and the scenario is that the user has 2000 images timestamped with the same date. First you'll try the month buckets and see the asset count is 2000; then you'll try the daily buckets and still get a count of 2000. So in this case, increasing the resolution from months to days doesn't resolve the issue.

On the other hand, if you say that each unit of lazy loading can be a maximum of 500 assets, then you'll never run into the issue, regardless of the assets' creation dates.

Unless I'm missing or misunderstanding something.


@jrasm91 commented on GitHub (Jun 25, 2024):

I think that is correct. How would something like that work with the virtual scrollbar? Can you still jump to a specific point in time? It is going to be a different implementation to just use pagination, which the timeline buckets don't even support right now. I think the point is moving to per day is easy enough with the current implementation vs moving to pagination is not. So this is an easy win and we can tackle an alternative implementation later


@curtwagner1984 commented on GitHub (Jun 26, 2024):

> How would something like that work with the virtual scrollbar? Can you still jump to a specific point in time?

Depends what you mean by "works". Currently, the virtual scrollbar is time-dependent. So let's take an edge-case scenario where all your assets are timestamped with the same date and time. The virtual scrollbar will show that time, and if there's a tie in the timestamp (as there is for all images in this case), the app would just display them in the order they were added to the Immich database. So in this case you won't be able to jump to a time, because there is only one date and time.

In case only a portion of your images are timestamped at the same time, I don't see why you wouldn't be able to jump to a certain date. It's just that the date at which there are many images will be 'fatter' relative to the others.

> I think the point is moving to per day is easy enough with the current implementation vs moving to pagination is not

I think pagination would increase the overall stability and smoothness of the app, regardless of this issue, because all new batches of images would be the same size (maybe even user-configurable based on their hardware), so you have control over the number of images loaded each time. Whereas when you go by dates, you just hope that the user's images are distributed widely enough across dates that they won't slow down or crash the web UI.


@jrasm91 commented on GitHub (Jun 27, 2024):

If you are talking about pagination within date buckets that might work. If you are saying replace time buckets with pagination that is basically a rewrite of the rendering process and likely to be more involved to implement.
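As a rough illustration of what "pagination within date buckets" could mean — names and shapes below are invented for this sketch, not Immich's actual endpoints:

```typescript
// Illustrative only: page through a single date bucket with an offset/limit,
// reporting where the next request should start (or null when exhausted).
interface BucketPage<T> {
  items: T[];
  nextOffset: number | null;
}

function paginateBucket<T>(bucket: T[], offset: number, limit: number): BucketPage<T> {
  const items = bucket.slice(offset, offset + limit);
  const next = offset + items.length;
  return { items, nextOffset: next < bucket.length ? next : null };
}
```

The existing time-bucket rendering could stay intact; a huge bucket would simply be fetched in several calls instead of one.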


@curtwagner1984 commented on GitHub (Jul 3, 2024):

> If you are talking about pagination within date buckets that might work.

This is exactly what I'm talking about, and IMHO this is a good idea regardless of this issue, because you want the number of images loaded by each request to be deterministic.

Also, this solution would effectively be a replacement of buckets with pagination in the case where all a user has is a single bucket.


@The-Real-Thisas commented on GitHub (Aug 13, 2024):

@curtwagner1984 does the current scrollbar not paginate already?

I'd assume the logical setup for this is loading only a set number of images, say 30-40, and then loading more on scroll, so that it feels relatively seamless with fast load times, kind of like how TikTok's scroll works. I think this can be pretty easily added on the web client with a scroll trigger.

In this situation, since we don't have the timestamp for when the image was taken, we can paginate based on the timestamp at which the asset was added to Immich, which I'm assuming we have. If not, maybe we can do a basic sort on the bucket and return ranges based on something arbitrary but repeatable; this will have to be properly considered.

It should be pretty easy to modify the backend to add the pagination behaviour without fully removing the bucket system (which might be a pain): just add another endpoint that takes a range parameter, say 30-40, filters the bucket, and returns that range.

Also, I'm new to this project but have worked with sveltekit for a while and have experience with the tech stack so I'm willing to contribute.
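The "arbitrary but repeatable" ordering mentioned above could be as simple as breaking timestamp ties with the asset id, so that successive page requests always see the same sequence. This is a sketch with invented field names, not Immich's actual sort:

```typescript
// Sketch: stable, repeatable ordering for assets that share a timestamp.
// Sorting by (createdAt, id) makes pagination deterministic even when
// every asset in a bucket has an identical capture time.
type TimelineAsset = { id: string; createdAt: string };

function stableOrder(assets: TimelineAsset[]): TimelineAsset[] {
  return [...assets].sort(
    (a, b) => a.createdAt.localeCompare(b.createdAt) || a.id.localeCompare(b.id),
  );
}
```

Any total order works here; the only requirement is that two identical requests return identical pages.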

Reference: immich-app/immich#1354