Mirror of https://github.com/immich-app/immich.git (synced 2025-12-20 17:25:35 +03:00)
feat: vectorchord (#18042)
* wip
  auto-detect available extensions
  auto-recovery, fix reindexing check
  use original image for ml
* set probes
* update image for sql checker
  update images for gha
* cascade
* fix new instance
* accurate dummy vector
* simplify dummy
* preexisting pg docs
* handle different db name
* maybe fix sql generation
* revert refreshfaces sql change
* redundant switch
* outdated message
* update docker compose files
* Update docs/docs/administration/postgres-standalone.md
  Co-authored-by: Daniel Dietzler <36593685+danieldietzler@users.noreply.github.com>
* tighten range
* avoid always printing "vector reindexing complete"
* remove nesting
* use new images
* add vchord to unit tests
* debug e2e image
* mention 1.107.2 in startup error
* support new vchord versions
---------
Co-authored-by: Daniel Dietzler <36593685+danieldietzler@users.noreply.github.com>
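Two of the changelog items above map directly to Postgres-level operations: "cascade" most likely refers to `CREATE EXTENSION ... CASCADE` (which pulls in the pgvector dependency that VectorChord builds on), and "set probes" refers to vchordrq's query-time probe setting. Below is a minimal sketch of that setup, assuming the stock VectorChord extension name (`vchord`) and the `vchordrq.probes` GUC from the VectorChord docs; the exact statements and values Immich runs may differ.

```sql
-- Sketch only: assumes the stock VectorChord extension name and GUCs,
-- not the exact migration statements Immich runs.

-- VectorChord must be preloaded before the extension can be created:
--   shared_preload_libraries = 'vchord.so'   (in postgresql.conf)

-- CASCADE also installs the pgvector extension that vchord depends on.
CREATE EXTENSION IF NOT EXISTS vchord CASCADE;

-- "set probes": how many index partitions a vchordrq scan visits;
-- higher values improve recall at the cost of query speed.
SET vchordrq.probes = 10;
```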
@@ -5,7 +5,7 @@ import TabItem from '@theme/TabItem';
 
 Immich uses Postgres as its search database for both metadata and contextual CLIP search.
 
-Contextual CLIP search is powered by the [pgvecto.rs](https://github.com/tensorchord/pgvecto.rs) extension, utilizing machine learning models like [CLIP](https://openai.com/research/clip) to provide relevant search results. This allows for freeform searches without requiring specific keywords in the image or video metadata.
+Contextual CLIP search is powered by the [VectorChord](https://github.com/tensorchord/VectorChord) extension, utilizing machine learning models like [CLIP](https://openai.com/research/clip) to provide relevant search results. This allows for freeform searches without requiring specific keywords in the image or video metadata.
 
 ## Advanced Search Filters
 
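To make the doc change above concrete: CLIP maps each image (and each text query) to a fixed-length embedding vector, and "contextual search" is a nearest-neighbor query over those vectors that a VectorChord (vchordrq) index accelerates. The sketch below is a hedged illustration; the table name `smart_search`, the `embedding` column, the 512-dimension size, and the index options are assumptions for the example rather than Immich's actual schema.

```sql
-- Illustrative schema; names and dimensions are assumptions, not
-- Immich's real tables. vector(512) matches common CLIP ViT-B/32 output.
CREATE TABLE smart_search (
  asset_id  uuid PRIMARY KEY,
  embedding vector(512) NOT NULL  -- CLIP embedding of the asset
);

-- vchordrq is VectorChord's index access method; cosine distance is the
-- usual choice for normalized CLIP embeddings. The TOML options follow
-- the VectorChord docs and may vary across versions.
CREATE INDEX smart_search_embedding_idx ON smart_search
  USING vchordrq (embedding vector_cosine_ops) WITH (options = $$
[build.internal]
lists = [1000]
$$);

-- At query time the text query is embedded with the same CLIP model and
-- passed in as $1; <=> is the cosine-distance operator.
SELECT asset_id
FROM smart_search
ORDER BY embedding <=> $1::vector(512)
LIMIT 20;
```

Roughly speaking, a vchordrq scan visits only `probes` of the `lists` partitions per query, so those two settings are the main recall/latency trade-off this kind of index exposes.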