* bug fix
* added a few more type hints
* onMount removed, renamed currentUser to user
* removed user check and resolved view-mode conflict between the options and share info modals
* format fix
---------
Co-authored-by: Alex <alex.tran1502@gmail.com>
* Update backup-and-restore.md
changelog:
Add database name to the restore command and document it in the notes
* docs: remove added database flag and change warn wording
* docs: fix forgotten warning change
Co-authored-by: Matthew Momjian <50788000+mmomjian@users.noreply.github.com>
---------
Co-authored-by: Matthew Momjian <50788000+mmomjian@users.noreply.github.com>
* fix(deps): update dependency exiftool-vendored to v28.3.0
* feat(server): parse offset from "Image_UTC_Data" (Samsung)
A Samsung phone might provide the local time (e.g. 09:00) without any timezone or
offset information. If the file also includes the non-standard trailer tag
"TimeStamp" in "Image_UTC_Data", we can use the unix timestamp contained within to
deduce the offset.
As an example, if the local date/time is "2024-09-15T09:00" and the unix timestamp is
1726408800 (which is 2024-09-15T16:00 UTC), we know that the offset is -07:00.
The actual computation/fix is done in exiftool-vendored.
Also see
0f63a78090/lib/Image/ExifTool/Samsung.pm (L996-L1001)
https://github.com/photostructure/exiftool-vendored.js/issues/209
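A minimal, hedged sketch (TypeScript, not the actual exiftool-vendored code) of how the offset can be deduced: interpret the zone-less local time as if it were UTC and compare it against the unix timestamp from the trailer tag.

```typescript
// Hypothetical helper; the name, input format, and rounding are illustrative only.
function deduceUtcOffset(localDateTime: string, unixSeconds: number): string {
  // Treat the local wall-clock time as if it were UTC ("2024-09-15T09:00" -> "2024-09-15T09:00:00Z")
  const asUtcSeconds = Date.parse(`${localDateTime}:00Z`) / 1000;
  const offsetMinutes = Math.round((asUtcSeconds - unixSeconds) / 60);
  const sign = offsetMinutes < 0 ? '-' : '+';
  const abs = Math.abs(offsetMinutes);
  const hh = String(Math.floor(abs / 60)).padStart(2, '0');
  const mm = String(abs % 60).padStart(2, '0');
  return `${sign}${hh}:${mm}`;
}

console.log(deduceUtcOffset('2024-09-15T09:00', 1726408800)); // "-07:00"
```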
---------
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
* refactor(mobile): DB repository for asset, backup, sync service
* review feedback
* fix bug found by Alex
---------
Co-authored-by: Alex <alex.tran1502@gmail.com>
* add packages
* create download task
* show progress
* save video and image
* show progress info
* live photo wip
* download and link live photos
* Update list of assets
* wip
* correct progress
* add state to download
* revert unnecessary change
* repository pattern
* translation
* remove unused code
* update method call from repository
* remove unused variable
* handle multiple livephotos download
* remove logging statement
* lint
* not removing all records
* fix(web): modal sticky bottom scrolling
* chore: minor styling tweaks
* wip: add portal so modals show on Safari in detail panel
* feat: fixed position dropdown menu
* chore: refactoring and cleanup
* feat: zooming and virtual keyboard working for iPadOS/Safari
* Revert "feat: zooming and virtual keyboard working for iPadOS/Safari"
This reverts commit cac29bac0d.
* wip: minor code cleanup
* wip: recover from visual viewport changes
* wip: ease in a little more visualviewport magic
* wip: code cleanup
* fix: only show dropdown above when viewport is zoomed out
* fix: code review suggestions for code style
Co-authored-by: Daniel Dietzler <36593685+danieldietzler@users.noreply.github.com>
* fix: better variable naming
* chore: better documentation for the bottom breakpoint
---------
Co-authored-by: Daniel Dietzler <36593685+danieldietzler@users.noreply.github.com>
* added a section for the Traefik Proxy
* minimized the configs
* replaced config with a comment.
* Update docs/docs/administration/reverse-proxy.md
changed timeout values
Co-authored-by: dvbthien <89862334+dvbthien@users.noreply.github.com>
* changed timeouts back to 10 minutes
* fixed typo and set default writeTimeout 600s
Leaving it at 0 may also be bad practice
* removed whitespace
* run `npm run format -- --check -w`
---------
Co-authored-by: dvbthien <89862334+dvbthien@users.noreply.github.com>
* fix(web): local date time for buckets
* feat(web): improve UI/UX for setting pages
* search admin settings and icon
* clean up
* fix translation file
* Update web/src/routes/admin/system-settings/+page.svelte
Co-authored-by: Ben <45583362+ben-basten@users.noreply.github.com>
* Update web/src/lib/components/shared-components/settings/setting-accordion.svelte
Co-authored-by: Ben <45583362+ben-basten@users.noreply.github.com>
* better search bar on smaller screen
* lint
* template syntax
---------
Co-authored-by: Jason Rasmussen <jason@rasm.me>
Co-authored-by: Ben <45583362+ben-basten@users.noreply.github.com>
* fix: load original panorama if specified in user settings
* fixes after merge
* chore: cleanup
---------
Co-authored-by: Saschl <noreply@saschl.com>
Co-authored-by: Jason Rasmussen <jason@rasm.me>
* feat(web): move search options into a modal
* chore: revert adding focus ring
* minor styling
---------
Co-authored-by: Alex Tran <alex.tran1502@gmail.com>
* Fixing example docker compose
Change needed so the following statement included in the docs a bit below makes sense:
NOTE: We have to use the `/mnt/media/christmas-trip` path and not the `/mnt/nas/christmas-trip` path since all paths have to be what the Docker containers see.
* another fix
* feat(web): load original panorama image when zoomed in to 75% or above
* add checks that original 360 image is web compatible and better error handling
* fix web compatibility check typing
* fix asset type
* feat: metadata in UserPreference
* feat: web metadata settings
* feat: web metadata settings
* fix: typo
* patch openapi
* fix: missing translation key
* new organization of preference structure
* feature settings on web
* localization
* added and used feature settings
* add default value to response dto
* patch openapi
* format en.json file
* implement helper method
* use tags preference logic
* Fix logic bug and add tests
* fix preference can be null in detail panel
* feat: tags
* fix: folder tree icons
* navigate to tag from detail panel
* delete tag
* Tag position and add tag button
* Tag asset in detail panel
* refactor form
* feat: navigate to tag page from clicking on a tag
* feat: delete tags from the tag page
* refactor: moving tag section in detail panel and add + tag button
* feat: tag asset action in detail panel
* refactor add tag form
* disable add tag button when there is no selection
* feat: tag bulk endpoint
* feat: tag colors
* chore: clean up
* chore: unit tests
* feat: write tags to sidecar
* Remove tag and auto focus on tag creation form opened
* chore: regenerate migration
* chore: linting
* add color picker to tag edit form
* fix: force render tags timeline on navigating back from asset viewer
* feat: read tags from keywords
* chore: clean up
---------
Co-authored-by: Alex Tran <alex.tran1502@gmail.com>
* Add proper colors to create album button
Allow creation of empty albums with names, or non-empty albums without names
* Add proper colors to create album button
Allow creation of empty albums with names, or non-empty albums without names
* Small changes
* Revert change
* Simplify logic
* lint
---------
Co-authored-by: Alex <alex.tran1502@gmail.com>
* add root resource path '/' to mobile oauth scheme
* chore: add oauth-callback path
* add root resource path '/' to mobile oauth scheme
* chore: add oauth-callback path
* fix: make sure there are three forward slash in callback URL
---------
Co-authored-by: Jason Rasmussen <jason@rasm.me>
Co-authored-by: Alex <alex.tran1502@gmail.com>
* curating assets with albums to upload
* sorting for background backup
* background upload works
* transform fields string array to javascript array
* send json array
* generate sql
* refactor upload callback
* remove albums info from upload payload
* mechanism to create album on album selection
* album creation
* Sync to upload album
* Remove unused service
* unify name changes
* Add mechanism to sync uploaded assets to albums
* Put add to album operation after updating the UI state
* clean up
* background album sync
* add to album in background context
* remove add to album in callback
* refactor
* refactor
* refactor
* fix: make sure all selected albums are selected for building upload candidate
* clean up
* add manual sync button
* lint
* revert server changes
* pr feedback
* revert time filtering
* const
* sync album on manual upload
* linting
* pr feedback and proper time filtering
* wording
* feat: loading screen, initSDK on bootstrap, fix FOUC for theme
* pulsate immich logo, don't set localstorage
* Make it spin
* Rework error handling a bit
* Cleanup
* fix test
* rename, memoize
---------
Co-authored-by: Alex Tran <alex.tran1502@gmail.com>
* fix(web): align search filter behavior to show all camera models
* fix(mobile): align search filter behavior to clear camera model when make is set
* (mobile) correctly clear the model controller
* fix(mobile) re-add text controller to dropdown
---------
Co-authored-by: Alex <alex.tran1502@gmail.com>
* Fix null name
* Fix null name and Fix button
* Remove extension correctly
* Refactoring the code and formatting
* formatting
* Fix for the extension name
* Squashed
* Change strategy - now pre-measure buckets offscreen, so don't need to worry about sub-bucket scroll preservation
* Reduce jank on scroll, delay DOM updates until after scroll
* css opt, log measure time
* Trickle out queue while scrolling, flush when stopped
* yay
* Cleanup cleanup...
* everybody...
* everywhere...
* Clean up cleanup!
* Everybody do their share
* CLEANUP!
* package-lock ?
* dynamic measure, todo
* Fix web test
* type lint
* fix e2e
* e2e test
* Better scrollbar
* Tuning, and more tunables
* Tunable tweaks, more tunables
* Scrollbar dots and viewport events
* lint
* Tweaked tunnables, use requestIdleCallback for garbage tasks, bug fixes
* New tunables, and don't update url by default
* Bug fixes
* Bug fix, with debug
* Fix flicker, fix graybox bug, reduced debug
* Refactor/cleanup
* Fix
* naming
* Final cleanup
* review comment
* Forgot to update this after naming change
* scrubber works, with debug
* cleanup
* Rename scrollbar to scrubber
* rename to
* left over rename and change to previous album bar
* bugfix addassets, comments
* missing destroy(), cleanup
---------
Co-authored-by: Alex <alex.tran1502@gmail.com>
* feat: folder view poc
* fix(folder-view): ui modifications
* fix(folder-view): improves utility return types
* fix(folder-view): update getAssetsByOriginalPath
Endpoint now only returns direct children of the path instead of all images in all subfolders. Functions renamed and scoped to "folder", endpoints renamed
* fix(folder-view): improve typing
* fix(folder-view): replaces css with tailwind
* fix(folder-view): includes folders in main panel
* feat(folder-view): folder cache implementation
* fix(folder-view): can now search for absolute paths
* fix(folder-view): sets default sort to alphabetical by filename
* refactor/styling the browser view
* double click to navigate
* folder tree
* use correct side bar icon
* styling when selected
* correct open icon
* folder layout
* return assetResponseDto
* it's alive
* update new api
* more styling for folder tree
* use query params and path viewer
* use arrow up left for parent folder backward navigation
* use arrow up left for parent folder backward navigation
* encode URL
* handle long folder name
* refactor to the view controller
* remove unused code
* clear cache when logout
* cleaning up
* cleaning up web
* clean as new
* clean as new
* pr feedback + show asset name
* add tests
* add tests
* remove generated file
* lint
* revert docker-compose.dev file
* Update server/src/services/view.service.ts
Co-authored-by: Jason Rasmussen <jason@rasm.me>
* Update server/src/services/view.service.ts
Co-authored-by: Jason Rasmussen <jason@rasm.me>
---------
Co-authored-by: Alex Tran <alex.tran1502@gmail.com>
Co-authored-by: Jason Rasmussen <jason@rasm.me>
* refactor: stacks
* mobile: get it built
* chore: feedback
* fix: sync and duplicates
* mobile: remove old stack reference
* chore: add primary asset id
* revert change to asset entity
* mobile: refactor mobile api
* mobile: sync stack info after creating stack
* mobile: update timeline after deleting stack
* server: update asset updatedAt when stack is deleted
* mobile: simplify action
* mobile: rename to match dto property
* fix: web test
---------
Co-authored-by: Alex <alex.tran1502@gmail.com>
* cropping, panel
* fix presets
* types
* prettier
* fix lint
* fix aspect ratio, performance optimization
* improved tool selection, removed placeholder
* fix the mouse's exit from canvas
* fix error
* the "save" button and change tracking
* lint, format
* the mini functionality of the save button
* fix aspect ratio
* hide editor button on mobiles
* strict equality
Co-authored-by: Michel Heusschen <59014050+michelheusschen@users.noreply.github.com>
* Use the dollar sign syntax for stores inside components
* unobtrusive grid lines, circles at the corners
* more correct image load, handleError
* more strict equality
* fix styles. unused and tailwind
Co-Authored-By: Michel Heusschen <59014050+michelheusschen@users.noreply.github.com>
* dont store isShowEditor
* if showEditor - hide navbar & shortcuts
* crop-canvas decomposition (danger)
I could have accidentally broken something, but I checked the work and it seems ok.
* fix lint
* fix ts
* callback function as props
* correctly disabling shortcuts
* convenient canvas borders
• you can use the mouse to go beyond the boundaries and freely change the crop.
• the circles on the corners of the canvas are not cut off.
* -the editor button for video files, -save button
* hide editor btn if panoramic || gif || live
* corners instead of circles (preview), fix lint&format
* confirm close editor without save
* vertical aspect ratios
* recovery after merge. editor's closing shortcut
* fix format
* move from canvas to html elements
* fix changes detections
* rotation
* hide detail panel if showing editor
* fix aspect ratios near min size
* fix crop area when changing image size when rotate
* fix of fix
* better layout - grouping
https://github.com/user-attachments/assets/48f15172-9666-4588-acb6-3cb5eda873a8
* hide the button
* fix i18n, format
* hide button
* hide button v2
---------
Co-authored-by: Michel Heusschen <59014050+michelheusschen@users.noreply.github.com>
Co-authored-by: Alex Tran <alex.tran1502@gmail.com>
* feat: add privacy step in the onboarding
* fix: remove console.log
* feat: details the implications of enabling the map on the settings page
Added a link to the guide on customizing map styles as well
* feat: add map implication
* refactor: onboarding style
* fix: tile provider
* fix: remove long explanations
* chore: cleanup
---------
Co-authored-by: pcouy <contact@pierre-couy.dev>
Co-authored-by: Jason Rasmussen <jason@rasm.me>
* docs: reword "Custom Map Style" guide
- Split setting a style.json in Immich and creating a style with
Maptiler
- Make it clearer that this is the way to change tile provider
---------
Co-authored-by: Jason Rasmussen <jason@rasm.me>
* Add Exif-Rating
* Integrate star rating as own component
* Add e2e tests for rating and validation
* Rename component and async handleChangeRating
* Display rating can be enabled in app settings
* Correct i18n reference
Co-authored-by: Michel Heusschen <59014050+michelheusschen@users.noreply.github.com>
* Star rating: change from slider to buttons
* Star rating for clarity
* Design updates.
* Renaming and code optimization
* chore: clean up
* chore: e2e formatting
* light mode border and default value
---------
Co-authored-by: Christoph Suter <christoph@suter-burri.ch>
Co-authored-by: Michel Heusschen <59014050+michelheusschen@users.noreply.github.com>
Co-authored-by: Mert <101130780+mertalev@users.noreply.github.com>
Co-authored-by: Jason Rasmussen <jrasm91@gmail.com>
Co-authored-by: Alex Tran <alex.tran1502@gmail.com>
* feat: keep screen active on backup
* show dialog
* improve dialog and use shared timer
* get rid of confirmation dialog
* fix timer logic
* fix: set timeout to 60 seconds
* fix: revert unwanted change
* fix: properly hide status bar
* remove unwanted change
* fix: properly restore status bar when waking up
* clean up
---------
Co-authored-by: Alex <alex.tran1502@gmail.com>
* date time component
* rename to info_sheet
* simplify map info
* Edit datetime sheet
* fix janking when scroll on info sheet
* Location refactor
* refactor name
* Update date time after editing
* localize rebuild to smaller component
* restore advanced bottom sheet
* reassign EXIF back to local database
* remove print statements
* feat(web): Add stacking option to deduplication utilities
* Update web/src/lib/components/utilities-page/duplicates/duplicates-compare-control.svelte
Co-authored-by: Alex <alex.tran1502@gmail.com>
* Fix prettier
* Draft for server side modifications. Endpoint for stacks (PUT,DELETE)
* Fix error
* Disable stack button if fewer or more than one asset is selected
* Remove unnecessary log
* Revert to first commit
* Further Revert
* Actually Revert to Origin
* Only one stack button
* Update +page.svelte
* Fix optional arguments
* Fix Prettier
* Fix Linting
* Add stack information to asset view
* clean up
---------
Co-authored-by: Alex <alex.tran1502@gmail.com>
* chore(server): remove get person asset limit
* sql
* remove getPersonAsset endpoint
* remove getPersonAsset endpoint
* use search endpoint to get people
* fix: server test
* mobile linter
* fix: server test
* remove debuglog
* deprecated endpoint
* change page size on mobile
* revert max size
* fix test
* feat(mobile): add support for material themes
Added support for custom theming and updated all elements accordingly.
* fix(mobile): Restored immich brand colors to default theme
* fix(mobile): make ListTile titles bold in settings main page
* feat(mobile): update bottom nav and appbar colors
* small tweaks
---------
Co-authored-by: Alex <alex.tran1502@gmail.com>
* Allow submission of null country
* Update searchAssetBuilder to handle nulls
andWhere({country:null}) produces `"exifInfo"."country" = NULL`. We want
`"exifInfo"."country" IS NULL`, so we have to treat NULL as a special
case (see the sketch after this commit list).
* Allow null country in frontend
* Make the query code a bit more straightforward
* Remove unused brackets import
* Remove log message
* Don't change whitespace for no reason
* Fix prettier style issue
* Update search.dto.ts validators per @jrasm91's recommendation
* Update api types
* Combine null country and state into one guard clause
* chore: clean up
* chore: add e2e for null/empty city, state, country search
* refactor: server returns suggestion for null values
* chore: clean up
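As referenced above, a minimal sketch of treating a null country as a special case so the query emits `IS NULL` instead of `= NULL` (assuming TypeORM's `SelectQueryBuilder`; the function name and alias are illustrative, not the exact Immich helper):

```typescript
import { SelectQueryBuilder } from 'typeorm';

// undefined: country is not part of the search; null: explicitly search for "no country"
function withCountry<T extends object>(
  qb: SelectQueryBuilder<T>,
  country: string | null | undefined,
): SelectQueryBuilder<T> {
  if (country === undefined) {
    return qb;
  }
  if (country === null) {
    // andWhere({ country: null }) would emit "exifInfo"."country" = NULL,
    // which never matches, so spell out IS NULL explicitly
    return qb.andWhere('exifInfo.country IS NULL');
  }
  return qb.andWhere('exifInfo.country = :country', { country });
}
```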
---------
Co-authored-by: Jason Rasmussen <jrasm91@gmail.com>
Co-authored-by: Alex Tran <alex.tran1502@gmail.com>
Co-authored-by: Jason Rasmussen <jason@rasm.me>
* update export code
* add uuid glob, sort model names
* add new models to ml, sort names
* add new models to server, sort by dims and name
* typo in name
* update export dependencies
* onnx save function
* format
* fix(mobile): refactor splash screen to not require online connection
* chore: bump flutter sdk path for vscode
* refactor: authentication provider always try network calls and only fail if 401 or no local user
* lint
* fix: revert change to look up serverEndpoint from the store; the Isar store implementation is very broken
* fix: clear serverUrl and serverEndpoint on logout, and await logout call
* refactor: remove unneeded extra conditions in splash screen useEffect
* revert change to remove serverEndpoint on logging out
* pr feedback
---------
Co-authored-by: Zack Pollard <zackpollard@ymail.com>
* attempt to use fqdn for og:image
The Open Graph spec expects the og:image URL to contain http or https, thus implying a FQDN.
This change uses the external domain from the server config so that the og:image URL combines the desired domain with the existing path to the thumbnail.
If the server setting is empty, the old behavior persists.
Please note that some og implementations do work with relative paths, so some og:image checkers may still pass, but not all implementations have this fallback and would otherwise not find the image. (A sketch of the URL joining follows this commit list.)
* tests and ssr for og:image value as fqdn
* formatting
* fix test
* formatting
* formatting
* fix tests
getConfig was requiring authentication; using already-initialized global stores instead
* load config in shared link service itself
* join host and pathname/params safely
* use origin instead of host for full domain string
also fixes lint and address the imageURL type which is optional
* chore: clean up
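A hedged sketch of the URL joining mentioned above (function and parameter names are illustrative, not the actual Immich code):

```typescript
// Build an absolute og:image URL from an optional external domain (origin)
// and a relative thumbnail path; fall back to the old relative-path behavior
// when no external domain is configured.
function buildOgImageUrl(externalDomain: string | undefined, thumbnailPath: string): string {
  if (!externalDomain) {
    return thumbnailPath;
  }
  // new URL() joins origin and path/query safely (no duplicated or missing slashes)
  return new URL(thumbnailPath, externalDomain).href;
}

buildOgImageUrl('https://photos.example.com', '/api/assets/123/thumbnail?size=preview');
// => 'https://photos.example.com/api/assets/123/thumbnail?size=preview'
```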
---------
Co-authored-by: eleith <eleith@lemon.localdomain>
Co-authored-by: eleith <online-github@eleith.com>
Co-authored-by: Jason Rasmussen <jason@rasm.me>
* Added Crop Feature
* Using LayoutBuilder Fix
* Using Immich Colors
* Using Immich Text Theme
* Changing dynamic datatype to nullable
* Fix for the retrieval of the image from the crop screen
* Using Hooks State
* Small edits
* Finals edits
* Saving to the mobile
* Commented final code
* Commented final code
* Comments and AutoRoute
* Fix AutoRoute Final
* Naming tools and Action when made no edits
* Updating timeline after edit
* chore: lint
* format
* Light Mode Compatible
* fix duplicate page name
* Fix Routing
* Hiding the Button
* lint
* remove unused code
---------
Co-authored-by: Alex <alex.tran1502@gmail.com>
* duplicate page assign other shortcut keys, add 'open image' shortcut
* add shortcut info page to duplicates with own list of keys
* edit translations, add translationkeys
* format fix
* remove typo
---------
Co-authored-by: Zack Pollard <zackpollard@ymail.com>
Co-authored-by: Alex <alex.tran1502@gmail.com>
* feat(web): search bar keyboard accessibility
* fix: adjust aria attributes
* fix: safari announcing the correct option count
* minor adjustments
- CircleIconButton disabled cursor
- more generic selection handler
* fix: more subtle border color in dark mode
---------
Co-authored-by: Alex <alex.tran1502@gmail.com>
With this commit I wanted to complete the react-mail
structure by properly defining the template styles,
including the Tailwind CSS framework.
The framework is extended by both react-mail and
tailwindcss-preset-email. Those packages help the rendering
for various email clients.
If in the future there is a need to target specific mail
clients, the packages `tailwindcss-email-variants` and
`tailwindcss-mso` can help too. The latter has some
workarounds for MS Outlook, which still lacks
a lot of CSS3 functionality.
Signed-off-by: hitech95 <nicveronese@gmail.com>
* fix(web): alt text translation for non-English languages
* fix: refactor to use full translation key names
* fix: calling the translation function directly
* feat(server): allow server license to activate a user
* feat(web): send server+client licenses to user activation when non-admin
* chore(server): update test to allow server license to activate user
* fix(web): correctly load user to determine where to save license
* feat: people infinite scroll
* add infinite scroll to show & hide modal
* update unit tests
* show total people count instead of currently loaded
* update personsearchdto
* Rename the Menu Entry "admin.map_settings" to "admin.map_gps_settings"
Explanation:
The main menu item is called Map & GPS-Settings, and the sub-item below it currently has the same name.
It would be correct as:
Main item:
Map & GPS-Settings
Sub-item 1:
Map settings
* Update en.json
* chore: formatting
---------
Co-authored-by: Jason Rasmussen <jrasm91@gmail.com>
title: Licensing announcement - Purchase a license to support Immich
authors: [alextran]
tags: [update, announcement, FUTO]
date: 2024-07-18T00:00
---
Hello everybody,
Firstly, on behalf of the Immich team, I'd like to thank everybody for your continuous support of Immich since the very first day! Your contributions, encouragement, and community engagement have helped bring Immich to its current state. The team and I are forever grateful for that.
Since our [last announcement of the core team joining FUTO to work on Immich full-time](https://immich.app/blog/2024/immich-core-team-goes-fulltime), one of the goals of our new position is to foster a healthy relationship between the developers and the users. We believe that this enables us to create great software, establish transparent policies and build trust.
We want to build a great software application that brings value to you and your loved ones' lives. We are not using you as a product, i.e., selling or tracking your data. We are not putting annoying ads into our software. We respect your privacy. We want to be compensated for the hard work we put in to build Immich for you.
With those notes, we have enabled a way for you to financially support the continued development of Immich, ensuring the software can move forward and will be maintained, by offering a lifetime license of the software. We think if you like and use software, you should pay for it, but _we're never going to force anyone to pay or try to limit Immich for those who don't._
There are two types of license that you can choose to purchase: **Server License** and **Individual License**.
### Server License
This is a lifetime license costing **$99.99**. The license is applied to the whole server. You and all users that use your server are licensed.
### Individual License
This is a lifetime license costing **$24.99**. The license is applied to a single user, and can be used on any server they choose to connect to.
Thank you again for your support, this will help create a strong foundation and stability for the Immich team to continue developing and maintaining the project that you love to use.
@@ -52,14 +52,25 @@ On iOS (iPhone and iPad), the operating system determines if a particular app ca
- Disable Background App Refresh for apps that don't need background tasks to run. This will reduce the competition for background task invocation for Immich.
- Use the Immich app more often.
### Why are features not working with a self-signed cert or mTLS?
Due to limitations in the upstream app/video library, using a self-signed TLS certificate or mutual TLS may break video playback or asset upload (both foreground and/or background).
We recommend using a real SSL certificate from a free provider, for example [Let's Encrypt](https://letsencrypt.org/).
---
## Assets
### Does Immich change the file?
No, Immich does not modify the original files.
All edited metadata is saved in companion `.xmp` sidecar files and the database.
However, Immich will delete original files that have been trashed when the trash is emptied in the Immich UI.
### Why do my file names appear as a random string in the file manager?
When Storage Template is off (the default), Immich stores files under random names (random UUIDs) to prevent duplicate file names. To retrieve the original file names, you must enable the Storage Template and then run the STORAGE TEMPLATE MIGRATION job.
It is recommended to read about [Storage Template](https://immich.app/docs/administration/storage-template) before activation.
### Can I add my existing photo library?
@@ -157,17 +168,30 @@ We haven't implemented an official mechanism for creating albums from external l
Duplicate checking only exists for upload libraries, using the file hash. Furthermore, duplicate checking is not global, but _per library_. Therefore, a situation where the same file appears twice in the timeline is possible, especially for external libraries.
### Why are my edits to files not being saved in read-only external libraries?
Images in read-write external libraries (the default) can be edited as normal.
In read-only libraries (`:ro` in the `docker-compose.yml`), Immich is unable to create the `.xmp` sidecar files to store edited file metadata.
For this reason, the metadata (timestamp, location, description, star rating, etc.) cannot be edited for files in read-only external libraries.
### How are deletions of files handled in external libraries?
Immich will attempt to delete original files that have been trashed when the trash is emptied.
In read-write external libraries (the default), Immich will delete the original file.
In read-only libraries (`:ro` in the `docker-compose.yml`), files can still be trashed in the UI.
However, when the trash is emptied, the files will re-appear in the main timeline since Immich is unable to delete the original file.
---
## Machine Learning
### How does smart search work?
Immich uses CLIP models. An ML model converts each image to an "embedding", which is essentially a string of numbers that semantically encodes what is in the image. The same is done for the text that you enter when you do a search, and that text embedding is then compared with those of the images to find similar ones. As such, there are no "tags", "labels", or "descriptions" generated that you can look at. For more information about CLIP and its capabilities, read about it [here](https://openai.com/research/clip).
### How does facial recognition work?
For face detection and recognition, Immich uses [InsightFace models](https://github.com/deepinsight/insightface/tree/master/model_zoo).
See [How Facial Recognition Works](/docs/features/facial-recognition#How-Facial-Recognition-Works) for details.
### How can I disable machine learning?
@@ -181,19 +205,15 @@ However, disabling all jobs will not disable the machine learning service itself
### I'm getting errors about models being corrupt or failing to download. What do I do?
You can delete the model cache volume, where models are downloaded. This will give the service a clean environment to download the model again. If models are failing to download entirely, you can manually download them from [Hugging Face][huggingface] and place them in the cache folder.
### Can I use a custom CLIP model?
No, this is not supported. Only models listed in the [Hugging Face][huggingface] page are compatible. Feel free to make a feature request if there's a model not listed here that you think should be added.
### I want to be able to search in other languages besides English. How can I do that?
You can change to a multilingual CLIP model. See [here](/docs/features/smart-search#CLIP-model) for instructions.
### Does Immich support Facial Recognition for videos?
@@ -234,7 +254,7 @@ ls clip/ facial-recognition/
### Why is Immich slow on low-memory systems like the Raspberry Pi?
Immich optionally uses transcoding and machine learning for several features. However, it can be too heavy to run on a Raspberry Pi. You can [mitigate](/docs/FAQ#can-i-lower-cpu-and-ram-usage) this or host Immich's machine-learning container on a [more powerful system](/docs/guides/remote-machine-learning), or [disable](/docs/FAQ#how-can-i-disable-machine-learning) machine learning entirely.
### Can I lower CPU and RAM usage?
@@ -243,10 +263,12 @@ The initial backup is the most intensive due to the number of jobs running. The
- Lower the job concurrency for these jobs to 1.
- Under Settings > Transcoding Settings > Threads, set the number of threads to a low number like 1 or 2.
- Under Settings > Machine Learning Settings > Facial Recognition > Model Name, you can change the facial recognition model to `buffalo_s` instead of `buffalo_l`. The former is a smaller and faster model, albeit not as good.
- For facial recognition on new images to work properly, you must re-run the Face Detection job for all images after this.
- At the container level, you can [set resource constraints](/docs/FAQ#can-i-limit-cpu-and-ram-usage) to lower usage further.
- It's recommended to only apply these constraints _after_ taking some of the measures here for best performance.
- If these changes are not enough, see [below](/docs/FAQ#how-can-i-disable-machine-learning) for instructions on how to disable machine learning.
### Can I limit CPU and RAM usage?
By default, a container has no resource constraints and can use as much of a given resource as the host's kernel scheduler allows. To limit this, you can add the following to the `docker-compose.yml` block of any containers that you want to have limited resources.
@@ -266,6 +288,8 @@ deploy:
</details>
For more details, you can look at the [original docker docs](https://docs.docker.com/config/containers/resource_constraints/) or use this [guide](https://www.baeldung.com/ops/docker-memory-limit).
Note that memory constraints work by terminating the container, so this can introduce instability if set too low.
### How can I boost machine learning speed?
:::note
@@ -275,21 +299,16 @@ This advice improves throughput, not latency. This is to say that it will make S
You can increase throughput by increasing the job concurrency for machine learning jobs (Smart Search, Face Detection). With higher concurrency, the host will work on more assets in parallel. You can do this by navigating to Administration > Settings > Job Settings and increasing concurrency as needed.
:::danger
On a normal machine, 2 or 3 concurrent jobs can probably max the CPU. Storage speed and latency can quickly become the limiting factor beyond this, particularly when using HDDs.
The concurrency can be increased more comfortably with a GPU, but should still not be above 16 in most cases.
More details can be found [here](https://discord.com/channels/979116623879368755/994044917355663450/1174711719994605708)
Do not exaggerate with the job concurrency because you're probably thoroughly overloading the server.
:::
### My server shows Server Status Offline | Version Unknown. What can I do?
You need to enable WebSockets on your reverse proxy.
---
@@ -299,6 +318,12 @@ You need to enable Websocket on your reverse proxy.
Immich components are typically deployed using docker. To see logs for deployed docker containers, you can use the [Docker CLI](https://docs.docker.com/engine/reference/commandline/cli/), specifically the `docker logs` command. For examples, see [Docker Help](/docs/guides/docker-help.md).
### How can I reduce the log verbosity of Redis?
To decrease Redis logs, you can add the following line to the `redis:` section of the `docker-compose.yml`:
` command: redis-server --loglevel warning`
### How can I run Immich as a non-root user?
You can change the user in the container by setting the `user` argument in `docker-compose.yml` for each service.
@@ -308,7 +333,11 @@ You may need to add mount points or docker volumes for the following internal co
- `immich-machine-learning:/.cache`
- `redis:/data`
The non-root user/group needs read/write access to the volume mounts, including `UPLOAD_LOCATION` and `/cache` for machine-learning.
:::note Docker Compose Volumes
The Docker Compose top-level volume element does not support non-root access; all of the above volumes must be local volume mounts.
:::
For a further hardened system, you can add the following block to every container except for `immich_postgres`.
@@ -21,6 +21,8 @@ The recommended way to backup and restore the Immich database is to use the `pg_
It is not recommended to directly backup the `DB_DATA_LOCATION` folder. Doing so while the database is running can lead to a corrupted backup that cannot be restored.
docker compose up -d # Start remainder of Immich apps
```
</TabItem>
@@ -68,6 +74,8 @@ Note that for the database restore to proceed properly, it requires a completely
Some deployment methods make it difficult to start the database without also starting the server or microservices. In these cases, you may set the environment variable `DB_SKIP_MIGRATIONS=true` before starting the services. This will prevent the server from running migrations that interfere with the restore process. Note that both the server and microservices must have this variable set to prevent the migrations from running. Be sure to remove this variable and restart the services after the database is restored.
:::
### Automatic Database Backups
The database dumps can also be automated (using [this image](https://github.com/prodrigestivill/docker-postgres-backup-local)) by editing the docker compose file to match the following:
```yaml
@@ -97,6 +105,7 @@ services:
Then you can restore with the same command but pointed at the latest dump.
```bash title='Automated Restore'
# Be sure to check the username if you changed it from default
@@ -149,9 +158,21 @@ for more info read the [release notes](https://github.com/immich-app/immich/rele
- Preview images (small thumbnails and large previews) for each asset and thumbnails for recognized faces.
- Stored in `UPLOAD_LOCATION/thumbs/<userID>`.
- **Encoded Assets:**
- Videos that have been re-encoded from the original for wider compatibility. The original is not removed.
- Stored in `UPLOAD_LOCATION/encoded-video/<userID>`.
- **Postgres**
- The Immich database containing all the information to allow the system to function properly.
**Note:** This folder will only appear to users who have made the changes mentioned in [v1.102.0](https://github.com/immich-app/immich/discussions/8930) (an optional, non-mandatory change) or who started with this version.
- Stored in `DB_DATA_LOCATION`.
:::danger
A backup of this folder does not constitute a backup of your database!
Follow the instructions listed [here](/docs/administration/backup-and-restore#database) to learn how to perform a proper backup.
@@ -187,8 +208,19 @@ When you turn off the storage template engine, it will leave the assets in `UPLO
- Files uploaded through mobile apps.
- Temporarily located in `UPLOAD_LOCATION/upload/<userID>`.
- Transferred to `UPLOAD_LOCATION/library/<userID>` upon successful upload.
- **Postgres**
- The Immich database containing all the information to allow the system to function properly.
**Note:** This folder will only appear to users who have made the changes mentioned in [v1.102.0](https://github.com/immich-app/immich/discussions/8930) (an optional, non-mandatory change) or who started with this version.
- Stored in `DB_DATA_LOCATION`.
:::danger
A backup of this folder does not constitute a backup of your database!
Follow the instructions listed [here](/docs/administration/backup-and-restore#database) to learn how to perform a proper backup.
@@ -22,7 +22,7 @@ Copy the entire `immich-server` block as a new service and make the following ch
- container_name: immich_server
...
- ports:
- - 2283:3001
- - 2283:2283
+ immich-microservices:
+ container_name: immich_microservices
```
@@ -52,4 +52,4 @@ Additionally, some jobs run on a schedule, which is every night at midnight. Thi
Storage Migration job can be run after changing the [Storage Template](/docs/administration/storage-template.mdx), in order to apply the change to the existing library.
This page contains details about using OAuth in Immich.
:::tip
Unable to set `app.immich:///oauth-callback` as a valid redirect URI? See [Mobile Redirect URI](#mobile-redirect-uri) for an alternative solution.
:::
## Overview
@@ -11,7 +11,7 @@ Unable to set `app.immich:/` as a valid redirect URI? See [Mobile Redirect URI](
Immich supports 3rd party authentication via [OpenID Connect][oidc] (OIDC), an identity layer built on top of OAuth2. OIDC is supported by most identity providers, including:
@@ -30,7 +30,7 @@ Before enabling OAuth in Immich, a new client application needs to be configured
The **Sign-in redirect URIs** should include:
-`app.immich:///oauth-callback` - for logging in with OAuth from the [Mobile App](/docs/features/mobile-app.mdx)
-`http://DOMAIN:PORT/auth/login` - for logging in with OAuth from the Web Client
-`http://DOMAIN:PORT/user-settings` - for manually linking OAuth in the Web Client
@@ -38,7 +38,7 @@ Before enabling OAuth in Immich, a new client application needs to be configured
Mobile
-`app.immich:///oauth-callback` (You **MUST** include this for iOS and Android mobile apps to work properly)
Localhost
@@ -96,16 +96,16 @@ When Auto Launch is enabled, the login page will automatically redirect the user
## Mobile Redirect URI
The redirect URI for the mobile app is `app.immich:///oauth-callback`, which is a [Custom Scheme](https://developer.apple.com/documentation/xcode/defining-a-custom-url-scheme-for-your-app). If this custom scheme is an invalid redirect URI for your OAuth Provider, you can work around this by doing the following:
1. Configure an http(s) endpoint to forward requests to `app.immich:///oauth-callback`
2. Whitelist the new endpoint as a valid redirect URI with your provider.
3. Specify the new endpoint as the `Mobile Redirect URI Override`, in the OAuth settings.
With these steps in place, you should be able to use OAuth from the [Mobile App](/docs/features/mobile-app.mdx) without a custom scheme redirect URI.
:::info
Immich has a route (`/api/oauth/mobile-redirect`) that is already configured to forward requests to `app.immich:///oauth-callback`, and can be used for step 1.
@@ -13,9 +13,9 @@ Running with a pre-existing Postgres server can unlock powerful administrative f
You must install pgvecto.rs into your instance of Postgres using their [instructions][vectors-install]. After installation, add `shared_preload_libraries = 'vectors.so'` to your `postgresql.conf`. If you already have some `shared_preload_libraries` set, you can separate each extension with a comma. For example, `shared_preload_libraries = 'pg_stat_statements, vectors.so'`.
:::note
Immich is known to work with Postgres versions 14, 15, and 16. Earlier versions are unsupported. Postgres 17 is nominally compatible, but pgvecto.rs does not have prebuilt images or packages for it as of writing.
Make sure the installed version of pgvecto.rs is compatible with your version of Immich. The current accepted range for pgvecto.rs is `>= 0.2.0, < 0.4.0`.
@@ -64,3 +64,43 @@ Below is an example config for Apache2 site configuration.
ProxyPreserveHost On
</VirtualHost>
```
### Traefik Proxy example config
The example below is for Traefik version 3.
The most important change is to increase the `respondingTimeouts` of the entrypoint used by Immich, in this example the `websecure` entrypoint for port `443`. By default it is set to 60s, which leads to video uploads failing after 1 minute (error code 499). With this config they will only fail after 10 minutes, which is enough in most cases. Increase it if needed.
`traefik.yaml`
```yaml
[...]
entryPoints:
  websecure:
    address: ":443"
    # this section needs to be added
    transport:
      respondingTimeouts:
        readTimeout: 600s
        idleTimeout: 600s
        writeTimeout: 600s
```
The second part goes in the `docker-compose.yml` file that contains Immich. Add the Traefik-specific labels as in the example.
`docker-compose.yml`
```yaml
services:
  immich-server:
    [...]
    labels:
      traefik.enable: true
      # increase readingTimeouts for the entrypoint used here
```
The folders considered for these checks include: `upload/`, `library/`, `thumbs/`, `encoded-video/`, `profile/`
:::
When Immich starts, it performs a series of checks in order to validate that it can read and write files to the volume mounts used by the storage system. If it cannot perform all the required operations, it will fail to start. The checks include:
- Creating an initial hidden file (`.immich`) in each folder
- Reading a hidden file (`.immich`) in each folder
- Overwriting a hidden file (`.immich`) in each folder
The checks are designed to catch the following situations:
- Incorrect permissions (cannot read/write files)
- Missing volume mount (`.immich` files should exist, but are missing)
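As an illustration only (assumed folder names and Node APIs; the actual server implementation differs in detail), the read/overwrite part of the checks boils down to something like:

```typescript
import { promises as fs } from 'node:fs';
import { join } from 'node:path';

const FOLDERS = ['upload', 'library', 'thumbs', 'encoded-video', 'profile'];

// Assumes the `.immich` markers were created on a previous successful start.
async function verifyMountChecks(uploadRoot: string): Promise<void> {
  for (const folder of FOLDERS) {
    const marker = join(uploadRoot, folder, '.immich');
    await fs.readFile(marker);      // read: throws ENOENT if the marker is missing
    await fs.writeFile(marker, ''); // overwrite: throws if the folder is not writable
  }
}
```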
### Common issues
:::note
`.immich` files serve as markers and help keep track of volume mounts being used by Immich. Except for the situations listed below, they should never be manually created or deleted.
:::
#### Missing `.immich` files
```
Verifying system mount folder checks (enabled=true)
...
ENOENT: no such file or directory, open 'upload/encoded-video/.immich'
```
The above error messages show that the server has previously (successfully) written `.immich` files to each folder, but now does not detect them. This could be because of any of the following:
- Permission error - unable to read the file, but it exists
- File does not exist - volume mount has changed and should be corrected
- File does not exist - user manually deleted it and should be manually re-created (`touch .immich`)
- File does not exist - user restored from a backup, but did not restore each folder (user should restore all folders or manually create `.immich` in any missing folders)
### Ignoring the checks
The checks are designed to catch common problems that we have seen users have in the past, but if you want to disable them you can set the following environment variable:
@@ -104,7 +104,7 @@ You can choose to disable a certain type of machine learning, for example smart
### Smart Search
The [smart search](/docs/features/smart-search) settings are designed to allow the search tool to use [CLIP](https://openai.com/research/clip) models that [can be changed](/docs/FAQ#can-i-use-a-custom-clip-model). Different models can give better results but may consume more processing power. When changing a model, it is mandatory to re-run the Smart Search job on all images to fully apply the change.
:::info Internet connection
@@ -113,15 +113,23 @@ After downloading, there is no need for Immich to connect to the network
Unless version checking has been enabled in the settings.
:::
### Duplicate Detection
Use CLIP embeddings to find likely duplicates. The maximum detection distance can be configured to adjust the level of accuracy.
- **Maximum detection distance -** Maximum distance between two images to consider them duplicates, ranging from 0.001-0.1. Higher values will detect more duplicates, but may result in false positives.
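For intuition only (assuming a cosine-style distance; the actual check runs as a database query over indexed embeddings), the comparison amounts to something like:

```typescript
// Two assets are considered likely duplicates when the distance between their
// CLIP embeddings is at or below the configured maximum detection distance.
function cosineDistance(a: number[], b: number[]): number {
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return 1 - dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

const isLikelyDuplicate = (a: number[], b: number[], maxDistance = 0.01): boolean =>
  cosineDistance(a, b) <= maxDistance;
```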
### Facial Recognition
Under these settings, you can change the facial recognition settings.
Editable settings:
- **Facial Recognition Model**
- **Min Detection Score**
- **Max Recognition Distance**
- **Min Recognized Faces**
You can learn more about these options on the [Facial Recognition page](/docs/features/facial-recognition#how-face-detection-works)
:::info
When changing the values in Min Detection Score, Max Recognition Distance, and Min Recognized Faces.
@@ -153,7 +161,7 @@ SMTP server setup, for user creation notifications, new albums, etc. More inform
### External Domain
Overrides the domain name in shared links and email notifications. The URL should not include a trailing slash.
import AppArchitecture from './img/app-architecture.png';
import MobileArchitecture from './img/immich_mobile_architecture.svg';
# Architecture
@@ -28,7 +29,14 @@ All three clients use [OpenAPI](./open-api.md) to auto-generate rest clients for
### Mobile App
The mobile app is written in [Dart](https://dart.dev/) using [Flutter](https://flutter.dev/). Below is an architecture overview:
The diagram shows the target architecture; the current state of the code base does not always follow it yet. New code and contributions should follow this architecture.
Currently, it uses [Isar Database](https://isar.dev/) for a local database and [Riverpod](https://riverpod.dev/) for state management (providers).
Entities and Models are the two types of data classes used. While entities are stored in the on-device database, models are ephemeral and only kept in memory.
The Repositories should be the only place where other data classes are used internally (such as OpenAPI DTOs). However, their interfaces must not use foreign data classes!
Face detection sends the generated preview image to the machine learning service for processing. The service checks if it has the relevant model downloaded and downloads it if not. The image is decoded, pre-processed and passed to the face detection model (with hardware acceleration if configured). The bounding boxes and scores outputted from this model are used to crop and preprocess the image once again to be passed to a facial recognition model (also accelerated if configured). The embeddings from the recognition model, together with the bounding boxes and scores from the face detection model, are then sent back to the server to be added to the database. The embeddings in particular are indexed so they can be searched quickly during facial recognition clustering.
## How Facial Recognition Works
The facial recognition algorithm we use is derived from [DBSCAN](https://www.youtube.com/watch?v=RDZUdRSDOok), a popular clustering algorithm. It essentially treats each detected face as a point in a graph and aims to group points that are close to each other.
:::note
An important concept is whether something is a _core point_. A core point has a minimum number of points around it within a certain distance. A non-core point can only be assigned to a cluster if it can reach a core point; a non-core point can't be used to extend a cluster even if it's part of one. In Immich, the _Minimum Recognized Faces_ setting controls the threshold to be considered a core point.
:::
For each face, it looks around it to find other faces within a certain distance. Faces within this distance are considered similar, so it then checks if any of these faces are associated with a person.
If there is an existing person, it assigns the person of the most similar face to the face being processed.
If there is none, then it has to determine something from the DBSCAN algorithm: whether the face is a _core point_. If there are a certain number of similar faces (by default 3, including the face being considered), then this face is a core point. A new person is created for this face and the face is assigned to it. When other faces are processed, if they're similar to this face, they'll see that it has an associated person and can be assigned to that person.
However, if there aren't enough similar faces, no new person will be created. Instead, the face will wait for all the other faces to be processed to see if any matches that previously didn't have an associated person now do. If they do, then the face will be assigned to that person. If not, this face will be considered an outlier, such as a stranger in the background of an image.
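The flow described above can be summarized in a simplified sketch (types, names, and defaults are assumptions for illustration, not the actual Immich service code):

```typescript
interface Face {
  id: string;
  embedding: number[];
  personId?: string;
}

// findSimilar: vector-index search returning faces within maxDistance of `face`
function recognizeFace(
  face: Face,
  findSimilar: (face: Face, maxDistance: number) => Face[],
  createPerson: () => string,
  maxDistance = 0.5,      // "Max Recognition Distance"
  minRecognizedFaces = 3, // core-point threshold, counting the face itself
): void {
  const similar = findSimilar(face, maxDistance);
  const match = similar.find((f) => f.personId);
  if (match) {
    // a similar face already belongs to a person: join that cluster
    face.personId = match.personId;
  } else if (similar.length + 1 >= minRecognizedFaces) {
    // enough similar faces: this face is a core point, so create a new person
    face.personId = createPerson();
  }
  // otherwise leave unassigned; a later pass (e.g. the nightly job) retries
  // once more faces have been assigned to people
}
```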
The algorithm has some subtle differences compared to DBSCAN:
- DBSCAN doesn't have a concept of incremental clustering: it clusters all points at once. In contrast, facial recognition has to evolve as more assets are added without re-clustering everything each time.
- The algorithm described above works within a set of queued assets. Once these faces are processed and a new round of faces are detected, the behavior will not be the same as traditional DBSCAN since it preserves the clusters (people) generated from the previous round.
- Facial recognition tries to wait for face detection and thumbnail generation to complete before starting for this reason: the larger the set of faces in the queue, the better the results will be.
- Re-running facial recognition on all assets afterwards does behave like DBSCAN, however.
- DBSCAN is designed for range-based searches (i.e. points within a distance), but high-dimensional vector indices are generally optimized for getting the closest K results. The recognition algorithm doesn't try to get _all_ similar faces within a distance for performance reasons. Instead, it searches for a small number of matches for each face. The end result should be very similar if not identical, but with possibly different performance characteristics.
- Because of this, part of the recognition process is handled during a nightly job to ensure that unassigned faces with potential matches can be recognized.
:::tip
If you didn't import your assets at once or if the server was able to process jobs faster than you could upload them, it's possible that the clustering was suboptimal. If you haven't put effort into the current results, it may be worth re-running facial recognition on all assets for the best starting point. If it's too late for that, you can also manually assign a selection of unassigned faces and queue _Missing_ for Facial Recognition to help it learn and assign more faces automatically.
:::
## Configuration
Navigating to Administration > Settings > Machine Learning Settings > Facial Recognition will show the options available.
:::tip
It's better to only tweak the parameters here than to set them to something very different unless you're ready to test a variety of options. If you do need to set a parameter to a strict setting, relaxing other settings can be a good option to compensate, and vice versa.
:::
### Facial recognition model
There are a few different models available; the default is typically considered the best. On more constrained systems where the default is too intensive, you can choose a smaller model instead.
### Minimum detection score
This setting affects whether a result from the face detection model is filtered out as a false positive. It may seem tempting to set this low to detect more faces, but it can lead to false positives that are difficult to deal with and can harm facial recognition. It is strongly recommended not to go below 0.5 for this setting. Setting it to a very high number like 0.9 is also not recommended: the default is already biased toward precision, so a threshold that high leads to many undetected faces.
After changing this setting, it will only apply to new face detection jobs. To apply the new setting to all assets, you need to re-run face detection for all assets.
### Maximum recognition distance
The distance threshold described in How Facial Recognition Works. The default works well for most people, but it may be worth lowering it if the library has twins or otherwise very similar looking people. A threshold that's too low just means needing to merge duplicate people after facial recognition, whereas a threshold too high can produce unsalvageable results. It is strongly recommended not to go below 0.3 or above 0.7.
### Minimum recognized faces
The core point threshold described in How Facial Recognition Works. This setting has a few implications. First, it takes effect immediately in that people with fewer faces than this are hidden from view. Secondly, it makes clustering more robust as it prevents loosely-related faces from being linked to each other by requiring a certain level of density.
Increasing this setting is a good idea if you increase the recognition distance or reduce the minimum detection score. Setting it to 1 effectively disables the concept of core points, but can be an option if you prefer a more hands-on approach.
- Only RK3588 supports hardware tonemapping; other SoCs use slower software tonemapping while still using hardware encoding.
- Tonemapping requires `/usr/lib/aarch64-linux-gnu/libmali.so.1` to be present on your host system. Install the [`libmali`][libmali-rockchip] release that corresponds to your Mali GPU (`libmali-valhall-g610-g13p0-gbm` on RK3588) and modify the [`hwaccel.transcoding.yml`][hw-file] file:
  - under `rkmpp` uncomment the 3 lines required for OpenCL tonemapping by removing the `#` symbol at the beginning of each line, as sketched below
    - `- /dev/mali0:/dev/mali0`
    - `- /etc/OpenCL:/etc/OpenCL:ro`
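As a rough sketch only (the `hwaccel.transcoding.yml` bundled with your Immich release is authoritative, and its exact lines and comments may differ), the `rkmpp` section looks something like this after uncommenting; the third entry is assumed here to be the `libmali.so.1` mount mentioned above:

```yaml
rkmpp:
  devices:
    # ...existing device mappings from the shipped file...
    - /dev/mali0:/dev/mali0 # uncommented for OpenCL tonemapping
  volumes:
    - /etc/OpenCL:/etc/OpenCL:ro # uncommented for OpenCL tonemapping
    # assumed third line, mounting the Mali driver mentioned above:
    - /usr/lib/aarch64-linux-gnu/libmali.so.1:/usr/lib/aarch64-linux-gnu/libmali.so.1:ro
```

The next snippet shows an `immich-server` service with a device mapping and volumes spelled out directly in `docker-compose.yml`: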
```yaml
immich-server:
  devices:
    - /dev/dri:/dev/dri
  volumes:
    - ${UPLOAD_LOCATION}:/usr/src/app/upload
    - /etc/localtime:/etc/localtime:ro
  env_file:
    - .env
  ports:
    - 2283:3001
  depends_on:
    - redis
    - database
  restart: always
  ...
```
Once this is done, you can continue to step 3 of "Basic Setup".
- You may want to choose a slower preset than for software transcoding to maintain quality and efficiency
- While you can use VAAPI with NVIDIA and Intel devices, prefer the more specific APIs since they're more optimized for their respective devices
- You can confirm the device is being recognized and used by checking its utilization (via `nvtop` for NVIDIA, `intel_gpu_top` for Intel, etc.) when transcoding. A lack of error logs when transcoding also indicates that it's being used.
External libraries track assets stored in the filesystem outside of Immich. When the external library is scanned, Immich will load videos and photos from disk and create the corresponding assets. These assets will then be shown in the main timeline, and they will look and behave like any other asset, including viewing on the map, adding to albums, etc. Later, if a file is modified outside of Immich, you need to scan the library for the changes to show up.
Immich supports the creation of libraries, which are top-level asset containers. Currently, there are two types of libraries: traditional upload libraries that can sync with a mobile device, and external libraries that keep up to date with files on disk. Libraries are different from albums in that an asset can belong to multiple albums but only one library, and deleting a library deletes all assets contained within. As of August 2023, this is a new feature and libraries have a lot of potential for future development beyond what is documented here. This document attempts to describe the current state of libraries.
If an external asset is deleted from disk, Immich will move it to trash on rescan. To restore the asset, you need to restore the original file. After 30 days the file will be removed from trash, and any changes to metadata within Immich will be lost.
## External Libraries
:::caution
If you add metadata to an external asset in any way (i.e. add it to an album or edit the description), that metadata is only stored inside Immich and will not be persisted to the external asset file. If you move an asset to another location within the library, all such metadata will be lost upon rescan. This is because the asset is considered a new asset after the move. This is a known issue and will be fixed in a future release.
:::
If a file is modified outside of Immich, the changes will not be reflected in Immich until the library is scanned again. There are different ways to scan a library depending on the use case:
- Scan Library Files: This is the default scan method and also the quickest. It will scan all files in the library and add new files to the library. It will notice if any files are missing (see below) but will not check existing assets.
- Scan All Library Files: Same as above, but will check each existing asset to see if the modification time has changed. If it has, the asset will be updated. Since it has to check each asset, this is slower than Scan Library Files.
- Force Scan All Library Files: Same as above, but will read each asset from disk no matter the modification time. This is useful in some cases where an asset has been modified externally but the modification time has not changed. This is the slowest way to scan because it reads each asset from disk.
:::caution
Due to aggressive caching, it can take some time for a refreshed asset to appear.
:::
In external libraries, the file path is used for duplicate detection. This means that if a file is moved to a different location, it will be added as a new asset. If the file is moved back to its original location, it will be added as a new asset. In contrast to upload libraries, two identical files can be uploaded if they are in different locations. This is a deliberate design choice to make Immich reflect the file system as closely as possible. Remember that duplication detection is only done within the same library, so if you have multiple external libraries, the same file can be added to multiple libraries.
:::caution
If you add assets from an external library to an album and then move the asset to another location within the library, the asset will be removed from the album upon rescan. This is because the asset is considered a new asset after the move. This is a known issue and will be fixed in a future release.
:::
### Deleted External Assets
Note: Either a manual or scheduled library scan must have been performed to identify offline assets before this process will work.
In all above scan methods, Immich will check if any files are missing. This can happen if files are deleted, or if they are on a storage location that is currently unavailable, like a network drive that is not mounted, or a USB drive that has been unplugged. In order to prevent accidental deletion of assets, Immich will not immediately delete an asset from the library if the file is missing. Instead, the asset will be internally marked as offline and will still be visible in the main timeline. If the file is moved back to its original location and the library is scanned again, the asset will be restored.
Finally, files can be deleted from Immich via the `Remove Offline Files` job. This job can be found in the three-dots menu for the associated external storage that was configured under Administration > Libraries (the same location described at [create external libraries](#create-external-libraries)). When this job is run, any assets marked as offline will then be removed from Immich. Run this job whenever files have been deleted from the file system and you want to remove them from Immich.
### Import Paths
External libraries use import paths to determine which files to scan. Each library can have multiple import paths so that files from different locations can be added to the same library. Import paths are scanned recursively, and if a file is in multiple import paths, it will only be added once. Each import path must be a readable directory that exists on the filesystem; the import path dialog will alert you of any paths that are not accessible.
Some basic examples:
- `**/Raw/**` will exclude all files in any directory named `Raw`
- `**/*.{tif,jpg}` will exclude all files with the extension `.tif` or `.jpg`
Special characters such as @ should be escaped, for instance:
- `**/\@eadir/**` will exclude all files in any directory named `@eadir`
### Automatic watching (EXPERIMENTAL)
This feature - currently hidden in the config file - is considered experimental and for advanced users only. If enabled, it will allow automatic watching of the filesystem which means new assets are automatically imported to Immich without needing to rescan.
If your photos are on a network drive, automatic file watching likely won't work. In that case, you will have to rely on a periodic library refresh to pull in your changes.
In rare cases, the library watcher can hang, preventing Immich from starting up.
### Nightly job
There is an automatic scan job that is scheduled to run once a day. This job also cleans up any libraries stuck in deletion.
## Usage
- Where and how you can get this file depends on device and vendor, but typically, the device vendor also supplies these
- The `hwaccel.ml.yml` file assumes the path to it is `/usr/lib/libmali.so`, so update accordingly if it is elsewhere
- The `hwaccel.ml.yml` file assumes an additional file `/lib/firmware/mali_csffw.bin`, so update accordingly if your device's driver does not require this file (see the sketch below)
- Optional: Configure your `.env` file, see [environment variables](/docs/install/environment-variables) for ARM NN specific settings
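Putting those paths together, the `armnn` section of `hwaccel.ml.yml` typically maps them roughly as sketched below; treat this as an illustration only and defer to the file shipped with your release:

```yaml
armnn:
  devices:
    - /dev/mali0:/dev/mali0
  volumes:
    # firmware file; may not be required by every device's driver (see note above)
    - /lib/firmware/mali_csffw.bin:/lib/firmware/mali_csffw.bin:ro
    # Mali driver; adjust the host path if libmali.so lives elsewhere on your system
    - /usr/lib/libmali.so:/usr/lib/libmali.so:ro
```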
#### CUDA
3. Still in `immich-machine-learning`, add one of `-armnn`, `-cuda`, or `-openvino` to the end of the `image` section's tag on that line.
4. Redeploy the `immich-machine-learning` container with these updated settings.
### Confirming Device Usage
You can confirm the device is being recognized and used by checking its utilization. There are many tools to display this, such as `nvtop` for NVIDIA or Intel and `intel_gpu_top` for Intel.
You can also check the logs of the `immich-machine-learning` container. When a Smart Search or Face Detection job begins, or when you search with text in Immich, you should either see a log for `Available ORT providers` containing the relevant provider (e.g. `CUDAExecutionProvider` in the case of CUDA), or a `Loaded ANN model` log entry without errors in the case of ARM NN.
#### Single Compose File
Some platforms, including Unraid and Portainer, do not support multiple Compose files as of writing. As an alternative, you can "inline" the relevant contents of the [`hwaccel.ml.yml`][hw-file] file into the `immich-machine-learning` service directly.
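For example, inlining the CUDA portion might look roughly like the sketch below. This is an illustration only; the authoritative contents are whatever the `hwaccel.ml.yml` for your release contains, and other backends (`armnn`, `openvino`) have different sections:

```yaml
immich-machine-learning:
  container_name: immich_machine_learning
  # note the -cuda suffix on the image tag, as described in the steps above
  image: ghcr.io/immich-app/immich-machine-learning:${IMMICH_VERSION:-release}-cuda
  # contents inlined from the cuda section of hwaccel.ml.yml
  deploy:
    resources:
      reservations:
        devices:
          - driver: nvidia
            count: 1
            capabilities:
              - gpu
  volumes:
    - model-cache:/cache
  env_file:
    - .env
  restart: always
```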
Once this is done, you can redeploy the `immich-machine-learning` container.
:::info
You can confirm the device is being recognized and used by checking its utilization (via `nvtop` for CUDA, `intel_gpu_top` for OpenVINO, etc.). You can also enable debug logging by setting `IMMICH_LOG_LEVEL=debug` in the `.env` file and restarting the `immich-machine-learning` container. When a Smart Search or Face Detection job begins, you should see a log for `Available ORT providers` containing the relevant provider. In the case of ARM NN, the absence of a `Could not load ANN shared libraries` log entry means it loaded successfully.
:::
#### Multi-GPU
If you want to utilize multiple NVIDIA or Intel GPUs, you can set the `MACHINE_LEARNING_DEVICE_IDS` environmental variable to a comma-separated list of device IDs and set `MACHINE_LEARNING_WORKERS` to the number of listed devices. You can run a command such as `nvidia-smi -L` or `glxinfo -B` to see the currently available devices and their corresponding IDs.
For example, if you have devices 0 and 1, set the values as follows:
```
MACHINE_LEARNING_DEVICE_IDS=0,1
MACHINE_LEARNING_WORKERS=2
```
In this example, the machine learning service will spawn two workers, one of which will allocate models to device 0 and the other to device 1. Different requests will be processed by one worker or the other.
This approach can be used to simply specify a particular device as well. For example, setting `MACHINE_LEARNING_DEVICE_IDS=1` will ensure device 1 is always used instead of device 0.
Note that you should increase job concurrencies to increase overall utilization and more effectively distribute work across multiple GPUs. Additionally, each GPU must be able to load all models. It is not possible to distribute a single model to multiple GPUs that individually have insufficient VRAM, or to delegate a specific model to one GPU.
After bringing down the containers with `docker compose down` and back up with `docker compose up -d`, a Prometheus instance will now collect metrics from the immich server and microservices containers. Note that we didn't need to expose any new ports for these containers - the communication is handled in the internal Docker network.
:::note
To see exactly what metrics are made available, you can additionally add `8081:8081` to the server container's ports and `8082:8082` to the microservices container's ports (see the sketch below). Visiting the `/metrics` endpoint for these services will show the same raw data that Prometheus collects.
:::
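As a sketch, assuming the default service names and the `2283:3001` port mapping used elsewhere in this documentation, the additions might look like this:

```yaml
immich-server:
  ports:
    - 2283:3001
    - 8081:8081 # exposes the API worker's /metrics endpoint

immich-microservices:
  ports:
    - 8082:8082 # exposes the microservices worker's /metrics endpoint
```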
Immich uses Postgres as its search database for both metadata and smart search.
Smart search is powered by the [pgvecto.rs](https://github.com/tensorchord/pgvecto.rs) extension, utilizing machine learning models like [CLIP](https://openai.com/research/clip) to provide relevant search results. This allows for freeform searches without requiring specific keywords in the image or video metadata.
Archived photos are not included in search results by default. To include them, mark the checkbox in [advanced search filters](/docs/features/smart-search#advanced-search-filters).
:::tip Alternative CLIP Models
More powerful models can be used for more accurate search results. For more information, see the related [FAQ](/docs/FAQ#can-i-use-a-custom-clip-model).
:::
:::info
Smart Search is currently limited to 5,000 results for a single search on the web.
:::
## Advanced Search Filters
In addition, Immich offers advanced search functionality, allowing you to find specific content using customizable search filters. These filters include location, one or more faces, specific albums, and more. You can try out the search filters on the [Demo site](https://demo.immich.app).
The filters you can apply in a smart search include:
- Search for one or more faces (with or without context search).
- Search by country, state, or city, or by all three.
- Search by camera make and model.
- Search by date range.
- Search by file name.
- Search by media types: image, video or all (**Note:** image includes live images).
- Search by condition: not in any album, archived, favorite, or all conditions.
Navigating to `Administration > Settings > Machine Learning Settings > Smart Search` will show the options available.
### CLIP model
More powerful models can be used for more accurate search results, but are slower and can require more server resources. Check out the models [here][huggingface-clip] for more options!
[Multilingual models][huggingface-multilingual-clip] are also available so users can search in their native language. These models support over 100 languages; the `nllb` models in particular support 200.
:::note
Multilingual models are much slower and larger and perform slightly worse for English than English-only models. For this reason, only use them if you actually intend to search in a language besides English.
As a special case, the `ViT-H-14-quickgelu__dfn5b` and `ViT-H-14-378-quickgelu__dfn5b` models are excellent at many European languages despite not specifically being multilingual. They're very intensive regardless, however - especially the latter.
:::
Once you've chosen a model, change this setting to the name of the model you chose. Be sure to re-run Smart Search on all assets after this change.
:::note
Feel free to make a feature request if there's a model you want to use that we don't currently support.
:::
After defining the locations for these files, we will edit the `docker-compose.yml` file accordingly and add the new variables to the `immich-server` container.
# Custom Map Styles
You may decide that you'd like to modify the style document which is used to
draw the maps in Immich. In addition to visual customization, this also allows
you to pick your own map tile provider instead of the default one. The default
`style.json` for [light theme](https://github.com/immich-app/immich/tree/main/server/resources/style-light.json)
and [dark theme](https://github.com/immich-app/immich/blob/main/server/resources/style-dark.json)
can be used as a basis for creating your own style.
## Steps
There are several sources for already-made `style.json` map themes, as well as
online generators you can use.
1. In **Immich**, navigate to **Administration --> Settings --> Map & GPS Settings** and expand the **Map Settings** subsection.
2. Paste the link to your JSON style in either the **Light Style** or **Dark Style**. (You can add different styles which will help make the map style more appropriate depending on whether you set **Immich** to Light or Dark mode.)
3. Save your selections. Reload the map, and enjoy your custom map style!
## Use Maptiler to build a custom style
Customizing the map style can be done easily using Maptiler, if you do not want to write an entire JSON document by hand.
1. Create a free account at https://cloud.maptiler.com
2. Once logged in, you can either create a brand new map by clicking on **New Map**, selecting a starter map, and then clicking **Customize**, OR by selecting a **Standard Map** and customizing it from there.
5. Next, **Publish** your style using the **Publish** button at the top right. This will deploy it to production, which means it is able to be exposed over the Internet. Maptiler will present an interactive side-by-side map with the original and your changes prior to publication.<br/>
6. Maptiler will warn you that changing the map will change it across all apps using the map. Since no apps are using the map yet, this is okay.
7. Clicking on the name of your new map at the top left will bring you to the item's **details** page. From here, copy the link to the JSON style under **Use vector style**. This link will automatically contain your personal API key to Maptiler.
8. In **Immich**, navigate to **Administration --> Settings --> Map & GPS Settings** and expand the **Map Settings** subsection.
9. Paste the link to your JSON style in either the **Light Style** or **Dark Style**. (You can add different styles which will help make the map style more appropriate depending on whether you set **Immich** to Light or Dark mode.)
10. Save your selections. Reload the map, and enjoy your custom map style!
# Mount the directory into the containers.
Edit `docker-compose.yml` to add one or more new mount points in the section `immich-server:` under `volumes:`.
If you want Immich to be able to delete the images in the external library or add metadata ([XMP sidecars](/docs/features/xmp-sidecars)), remove `:ro` from the end of the mount point.
```diff
 immich-server:
   volumes:
     - ${UPLOAD_LOCATION}:/usr/src/app/upload
+    - /home/user/photos1:/home/user/photos1:ro
+    - /mnt/photos2:/mnt/photos2:ro # you can delete this line if you only have one mount point, or you can add more lines if you have more than two
```
You can read the [full discussion in Discord](https://discord.com/channels/979116623879368755/1122615710846308484)
:::danger
Never forward port 2283 directly to the internet without additional configuration. This will expose the web interface via http to the internet, making you susceptible to [man in the middle](https://en.wikipedia.org/wiki/Man-in-the-middle_attack) attacks.
:::
## Option 1: VPN to home network
You may use a VPN service to open an encrypted connection to your Immich instance. OpenVPN and Wireguard are two popular VPN solutions. Here is a guide on setting up VPN access to your server - [Pihole documentation](https://docs.pi-hole.net/guides/vpn/wireguard/overview/)
### Pros
- Simple to set up and very secure.
- Single point of potential failure, i.e., the VPN software itself. Even if there is a zero-day vulnerability on Immich, you will not be at risk.
- Both Wireguard and OpenVPN are independently security-audited, so the risk of serious zero-day exploits are minimal.
### Cons
- If you don't have a static IP address, you would need to set up a [Dynamic DNS](https://www.cloudflare.com/learning/dns/glossary/dynamic-dns/). [DuckDNS](https://www.duckdns.org/) is a free DDNS provider.
- VPN software needs to be installed and active on both server-side and client-side.
If you are unable to open a port on your router for Wireguard or OpenVPN to your server, [Tailscale](https://tailscale.com/) is a good option. Tailscale mediates a peer-to-peer wireguard tunnel between your server and remote device, even if one or both of them are behind a [NAT firewall](https://en.wikipedia.org/wiki/Network_address_translation).
:::tip Video tutorial
You can learn how to set up Tailscale together with Immich with the [tutorial video](https://www.youtube.com/watch?v=Vt4PDUXB_fg) they created.
:::
### Pros
- Minimal configuration needed on server and client sides.
A reverse proxy is a service that sits between web servers and clients.
If you're hosting your own reverse proxy, [Nginx](https://docs.nginx.com/nginx/admin-guide/web-server/reverse-proxy/) is a great option. An example configuration for Nginx is provided [here](/docs/administration/reverse-proxy.md).
You'll also need your own certificate to authenticate https connections. If you're making Immich publicly accessible, [Let's Encrypt](https://letsencrypt.org/) can provide a free certificate for your domain and is the recommended option. Alternatively, a [self-signed certificate](https://en.wikipedia.org/wiki/Self-signed_certificate) allows you to encrypt your connection to Immich, but it raises a security warning on the client's browser.
A remote reverse proxy like [Cloudflare](https://www.cloudflare.com/learning/cdn/glossary/reverse-proxy/) increases security by hiding the server IP address, which makes targeted attacks like [DDoS](https://www.cloudflare.com/learning/ddos/what-is-a-ddos-attack/) harder.
- Set the URL in Machine Learning Settings on the Admin Settings page to point to the designated ML system, e.g. `http://workstation:3003`.
- Copy the following `docker-compose.yml` to your ML system.
- If using [hardware acceleration](/docs/features/ml-hardware-acceleration), the [hwaccel.ml.yml](https://github.com/immich-app/immich/releases/latest/download/hwaccel.ml.yml) file also needs to be added
- Start the container by running `docker compose up -d`.
:::info
Smart Search and Face Detection will use this feature, but Facial Recognition is handled in the server.
:::
:::note
The [hwaccel.ml.yml](https://github.com/immich-app/immich/releases/latest/download/hwaccel.ml.yml) file also needs to be in the same folder if trying to use [hardware acceleration](/docs/features/ml-hardware-acceleration).
:::
:::danger
When using remote machine learning, the thumbnails are sent to the remote machine learning container. Use this option carefully when running this on a public computer or a paid processing cloud.
:::
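The exact `docker-compose.yml` for the machine learning system is published with each release; the sketch below shows roughly what it contains (service name, image, model cache volume, and port 3003, matching the URL example above) and should not be treated as a drop-in replacement:

```yaml
name: immich_remote_ml

services:
  immich-machine-learning:
    container_name: immich_machine_learning
    # add one of -armnn, -cuda or -openvino to the tag for hardware acceleration
    image: ghcr.io/immich-app/immich-machine-learning:${IMMICH_VERSION:-release}
    volumes:
      - model-cache:/cache
    restart: always
    ports:
      - 3003:3003

volumes:
  model-cache:
```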
Please note that version mismatches between both hosts may cause instabilities and bugs, so make sure to always perform updates together.
:::caution
As an internal service, the machine learning container has no security measures whatsoever. Please be mindful of where it's deployed and who can access it.
:::
Immich is built with modern deployment practices in mind, and the backend is designed to be able to run multiple instances in parallel. When doing this, the only requirement you need to be aware of is that every instance needs to be connected to the shared infrastructure. That means they should all have access to the same Postgres and Redis instances, and have the same files mounted into the containers.
Scaling can be useful for many reasons. Maybe you have a gaming PC that you want to use for transcoding and thumbnail generation, or perhaps you run a Kubernetes cluster across a handful of powerful servers that you want to make use of.
:::info
If you only have a single machine to run Immich on, scaling to multiple containers is unlikely to provide any benefit. An Immich container will run multiple background tasks at once, and you can increase their number from the admin panel.
:::
The details of how to scale across multiple machines will vary widely between different environments and require some knowledge to set up, and as such this guide gives no specific instructions. In some cases scaling up can be as easy as incrementing the amount of replicas on a Kubernetes deployment, in others it might need you to configure network tunnels or NFS mounts. The details are left as an exercise for the reader ;)
## Workers
By default, each running `immich-server` container comes with multiple internal workers. If you're scaling up only to handle more background tasks, you can choose to disable the worker responsible for the API. See [workers](../administration/jobs-workers.md) for more detail.
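One possible way to do this is sketched below, assuming the `IMMICH_WORKERS_EXCLUDE` variable described in the workers documentation; the service name `immich-server-jobs` is just an example, and such an instance still needs the same `.env`, Postgres, Redis, and upload location as the main container:

```yaml
immich-server-jobs:
  image: ghcr.io/immich-app/immich-server:${IMMICH_VERSION:-release}
  environment:
    # run only background job workers on this instance; the API worker is disabled
    IMMICH_WORKERS_EXCLUDE: api
  volumes:
    - ${UPLOAD_LOCATION}:/usr/src/app/upload
  env_file:
    - .env
  restart: always
```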
## Scaling down
In the same way you can scale up to multiple containers, you can also choose to scale down. All state is stored in Postgres, Redis, and the filesystem so there is no risk in stopping a running immich-server container, for example if you want to use your GPU to play some games. As long as there is an API worker running you will still be able to browse Immich, and jobs will wait to be processed until there is a worker available for them.
The database is saved to your Immich upload folder in the `database-backup` subdirectory.
### Prerequisites
- Borg needs to be installed on your server as well as the remote machine. You can find instructions to install Borg [here](https://borgbackup.readthedocs.io/en/latest/installation.html).
- (Optional) To run this script as a non-root user, you should [add your username to the docker group](https://docs.docker.com/engine/install/linux-postinstall/).
- To run this script non-interactively, set up [passwordless ssh](https://www.redhat.com/sysadmin/passwordless-ssh) to your remote machine from your server. If you skipped the previous step, make sure this step is done from your root account.
To initialize the borg repository, run the following commands once.
```
borg mount "$REMOTE_HOST:$REMOTE_BACKUP_PATH"/immich-borg /tmp/immich-mountpoint
cd /tmp/immich-mountpoint
```
You can find available snapshots in separate sub-directories at `/tmp/immich-mountpoint`. Restore the files you need, and unmount the Borg repository using `borg umount /tmp/immich-mountpoint`
Optionally, you can enable hardware acceleration for machine learning and transcoding.
- Populate custom database information if necessary.
- Populate `UPLOAD_LOCATION` with your preferred location for storing backup assets.
- Consider changing `DB_PASSWORD` to a custom value. Postgres is not publicly exposed, so this password is only used for local authentication.
  To avoid issues with Docker parsing this value, it is best to use only the characters `A-Za-z0-9`.
- Set your timezone by uncommenting the `TZ=` line. A filled-in example `.env` is sketched below.
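A minimal sketch of a filled-in `.env` using the variables mentioned above; the values are placeholders, so substitute your own path, password, and timezone:

```
UPLOAD_LOCATION=/mnt/storage/immich-library
DB_PASSWORD=ChangeMeToSomethingRandom123
TZ=Etc/UTC
```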
### Step 3 - Start the containers
See the previous paragraph about installing from the official docker repository.
:::
:::info Health check start interval
If you get an error `can't set healthcheck.start_interval as feature require Docker Engine v25 or later`, it helps to comment out the line for `start_interval` in the `database` section of the `docker-compose.yml` file.
:::
:::tip
For more information on how to use the application, please refer to the [Post Installation](/docs/install/post-install.mdx) guide.
:::
| `IMMICH_MEDIA_LOCATION` | Media Location inside the container ⚠️**You probably shouldn't set this**<sup>\*2</sup>⚠️ | `./upload`<sup>\*3</sup> | server | api, microservices |
| `IMMICH_CONFIG_FILE` | Path to config file | | server | api, microservices |
| `NO_COLOR` | Set to `true` to disable color-coded log output | `false` | server, machine learning | |
| `CPU_CORES` | Amount of cores available to the immich server | auto-detected cpu core count | server | |
| `IMMICH_API_METRICS_PORT` | Port for the OTEL metrics | `8081` | server | api |
| `IMMICH_MICROSERVICES_METRICS_PORT` | Port for the OTEL metrics | `8082` | server | microservices |
| `IMMICH_PROCESS_INVALID_IMAGES` | When `true`, generate thumbnails for invalid images | | server | microservices |
| `IMMICH_TRUSTED_PROXIES` | List of comma separated IPs set as trusted proxies | | server | api |
| `IMMICH_IGNORE_MOUNT_CHECK_ERRORS` | See [System Integrity](/docs/administration/system-integrity) | | server | api, microservices |
\*1: `TZ` should be set to a `TZ identifier` from [this list][tz-list]. For example, `TZ="Etc/UTC"`.
`TZ` is used by `exiftool` as a fallback in case the timezone cannot be determined from the image metadata. It is also used for logfile timestamps and cron job execution.
:::
\*2: This path is where the Immich code looks for the files, which is internal to the docker container. Setting it to a path on your host will certainly break things; you should use the `UPLOAD_LOCATION` variable instead.
\*3: With the default `WORKDIR` of `/usr/src/app`, this path will resolve to `/usr/src/app/upload`.
It only needs to be set if the Immich deployment method is changing.
## Workers
Information on the current workers can be found [here](/docs/administration/jobs-workers).
When `DB_URL` is defined, the `DB_HOSTNAME`, `DB_PORT`, `DB_USERNAME`, `DB_PASSWORD`, and `DB_DATABASE_NAME` variables are ignored.
All `REDIS_` variables must be provided to all Immich workers, including `api` and `microservices`.
`REDIS_URL` must start with `ioredis://` and then include a `base64` encoded JSON string for the configuration.
More info can be found in the upstream [ioredis] documentation.
When `REDIS_URL` or `REDIS_SOCKET` are defined, the `REDIS_HOSTNAME`, `REDIS_PORT`, `REDIS_USERNAME`, `REDIS_PASSWORD`, and `REDIS_DBINDEX` variables are ignored.
:::
| `MACHINE_LEARNING_MODEL_TTL`| Inactivity time (s) before a model is unloaded (disabled if \<= 0) | `300` | machine learning |
| `MACHINE_LEARNING_MODEL_TTL_POLL_S`| Interval (s) between checks for the model TTL (disabled if \<= 0) | `10` | machine learning |
| `MACHINE_LEARNING_CACHE_FOLDER`| Directory where models are downloaded | `/cache` | machine learning |
| `MACHINE_LEARNING_REQUEST_THREADS`<sup>\*1</sup> | Thread count of the request thread pool (disabled if \<= 0) | number of CPU cores | machine learning |
| `MACHINE_LEARNING_MODEL_INTER_OP_THREADS` | Number of parallel model operations | `1` | machine learning |
| `MACHINE_LEARNING_MODEL_INTRA_OP_THREADS` | Number of threads for each model operation | `2` | machine learning |
| `MACHINE_LEARNING_WORKERS`<sup>\*2</sup> | Number of worker processes to spawn | `1` | machine learning |
| `MACHINE_LEARNING_HTTP_KEEPALIVE_TIMEOUT_S`<sup>\*3</sup> | HTTP Keep-alive time in seconds | `2` | machine learning |
| `MACHINE_LEARNING_WORKER_TIMEOUT` | Maximum time (s) of unresponsiveness before a worker is killed | `120` (`300` if using OpenVINO image) | machine learning |
| `MACHINE_LEARNING_PRELOAD__CLIP` | Name of a CLIP model to be preloaded and kept in cache | | machine learning |
| `MACHINE_LEARNING_PRELOAD__FACIAL_RECOGNITION` | Name of a facial recognition model to be preloaded and kept in cache | | machine learning |
| `MACHINE_LEARNING_DEVICE_IDS`<sup>\*4</sup> | Device IDs to use in multi-GPU environments | `0` | machine learning |
\*1: It is recommended to begin with this parameter when changing the concurrency levels of the machine learning service and then tune the other ones.
\*2: Since each process duplicates models in memory, changing this is not recommended unless you have abundant memory to go around.
\*3: For scenarios like HPA in K8S. https://github.com/immich-app/immich/discussions/12064
\*4: Using multiple GPUs requires `MACHINE_LEARNING_WORKERS` to be set greater than 1. A single device is assigned to each worker in round-robin priority.
:::info
Other machine learning parameters can be tuned from the admin UI.
:::
Immich requires the command `docker compose` - the similarly named `docker-compose` is deprecated and no longer compatible with Immich.
- **RAM**: Minimum 4GB, recommended 6GB.
- **CPU**: Minimum 2 cores, recommended 4 cores.
- **Storage**: Recommended Unix-compatible filesystem (EXT4, ZFS, APFS, etc.) with support for user/group ownership and permissions.
  - This can present an issue for Windows users. See below for details and an alternative setup.
- The generation of thumbnails and transcoded video can increase the size of the photo library by 10-20% on average.
- Network shares are supported for the storage of image and video assets only. It is not recommended to use a network share for your database location due to performance and possible data loss issues.
### Special requirements for Windows users
<details>
<summary>Database storage on Windows systems</summary>
The Immich Postgres database (`DB_DATA_LOCATION`) must be located on a filesystem that supports user/group
ownership and permissions (EXT2/3/4, ZFS, APFS, BTRFS, XFS, etc.). It will not work on any filesystem formatted as NTFS, exFAT, or FAT32.
It will not work in WSL (Windows Subsystem for Linux) when using a mounted host directory (commonly under `/mnt`).
If this is an issue, you can change the bind mount to a Docker volume instead as follows:
Make the following change to `.env`:
```diff
- DB_DATA_LOCATION=./postgres
+ DB_DATA_LOCATION=pgdata
```
Add the following line to the bottom of `docker-compose.yml`:
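As a sketch, the named volume is declared under the top-level `volumes:` key at the end of the file; `model-cache` is shown only because the stock Compose file already declares a volume there, and `pgdata` is the addition:

```yaml
volumes:
  model-cache:
  pgdata:
```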
:::caution
This method is experimental and not currently recommended for production use. For production, please refer to installing with [Docker Compose](/docs/install/docker-compose.mdx).
:::
:::note
The install script only supports Linux operating systems and requires Docker to be already installed on the system.
:::
In the shell, from a directory of your choice, run the following command:
:::info Permissions
The **pgData** dataset must be owned by the user `netdata` (UID 999) for postgres to start. The other datasets must be owned by the user `root` (UID 0) or a group that includes the user `root` (UID 0) for immich to have the necessary permissions.
The **library** dataset must have [ACL mode](https://www.truenas.com/docs/core/coretutorials/storage/pools/permissions/#access-control-lists) set to `Passthrough` if you plan on using a [storage template](/docs/administration/storage-template.mdx) and the dataset is configured for network sharing (its ACL type is set to `SMB/NFSv4`). When the template is applied and files need to be moved from **uploads** to **library**, Immich performs `chmod` internally and needs to be allowed to execute the command.
:::
alt="Select Plugins > Compose.Manager > Add New Stack > Label it Immich"
/>
3. Select the cogwheel ⚙️ next to Immich and click "**Edit Stack**"
4. Click "**Compose File**" and then paste the entire contents of the [Immich Docker Compose](https://github.com/immich-app/immich/releases/latest/download/docker-compose.yml) file into the Unraid editor. Remove any text that may be in the text area by default. Note that Unraid v6.12.10 uses version 24.0.9 of the Docker Engine, which does not support healthcheck `start_interval` as defined in the `database` service of the Docker compose file (version 25 or higher is needed). This parameter defines an initial waiting period before starting health checks, to give the container time to start up. Commenting out the `start_interval` and `start_period` parameters will allow the containers to start up normally. The only downside to this is that the database container will not receive an initial health check until `interval` time has passed.
## Updating Steps
Updating is extremely easy; however, it's important to be aware that containers managed via the Docker Compose Manager plugin do not integrate with Unraid's native dockerman UI, so the label "_update ready_" will always be present on containers installed via the Docker Compose Manager.
Immich also provides a mechanism to migrate between templates so that if the template you set now doesn't work in the future, you can always migrate all the existing files to the new template. The mechanism is run as a job on the Job page.
If you want to store assets in album folders, but you also have assets that do not belong to any album, you can use `{{#if album}}`, `{{else}}` and `{{/if}}` to create a conditional statement. For example, the following template will store assets in album folders if they belong to an album, and in a folder named "Other/Month" if they do not belong to an album:
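A sketch of such a template, assuming the `{{album}}`, `{{MM}}`, and `{{filename}}` variables available in the storage template editor:

```
{{#if album}}{{album}}{{else}}Other/{{MM}}{{/if}}/{{filename}}
```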