Compare commits


6 Commits

Author SHA1 Message Date
CanbiZ (MickLesk)
d84770e215 chore(ct,tools): sync Source headers with install/ and add Github links to addon scripts 2026-02-24 14:36:19 +01:00
CanbiZ (MickLesk)
8deb4401b1 revert: restore misc/build.func and misc/core.func to main state
These error-handler fixes belong to fix/error-handler-recovery, not to
this sources-only branch.
2026-02-24 14:30:07 +01:00
CanbiZ (MickLesk)
b1edd80a96 chore(install): add Github source links to all fetch_and_deploy scripts
77 additional install scripts had fetch_and_deploy_gh_release calls but
no GitHub link in the Source header. Merged the primary app repo into
the Source header as:
  # Source: https://website.com/ | Github: https://github.com/OWNER/REPO

Where multiple fetch_and_deploy calls existed (app + dependency), the
primary app repo was selected:
- ersatztv: ErsatzTV/ErsatzTV (not ffmpeg)
- firefly: firefly-iii/firefly-iii (not data-importer)
- komga: gotson/komga (not kepubify dep)
- sabnzbd: sabnzbd/sabnzbd (not par2cmdline-turbo dep)
- signoz: SigNoz/signoz (not otel-collector)
- tunarr: chrisbenincasa/tunarr (not ffmpeg dep)

Also fixed a doubled https:// in the cosmos-install.sh Source URL.

Skipped: autocaliweb (source already on Codeberg; GitHub repos are deps only)
2026-02-24 14:28:45 +01:00
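The merge described above boils down to roughly the following sketch (hypothetical: $script, the grep pattern, and the sed invocation are illustrative rather than the tooling actually used; taking the first match picks the primary app repo ahead of dependency calls):

  # Take the first (primary) fetch_and_deploy_gh_release repo in the script
  repo=$(grep -oP 'fetch_and_deploy_gh_release "[^"]+" "\K[^"]+' "$script" | head -1)
  # Merge it into the existing Source header unless a Github link is already there
  if [[ -n "$repo" ]] && ! grep -q '| Github:' "$script"; then
    sed -i "s|^# Source: .*|& \| Github: https://github.com/${repo}|" "$script"
  fi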
CanbiZ (MickLesk)
da422b1036 chore(install): add Github source links to all setup_nodejs scripts
52 install scripts had a project website in '# Source:' but no GitHub
link. Merged the GitHub repo URL into the Source header as:
  # Source: https://website.com/ | Github: https://github.com/OWNER/REPO

Repo URLs were sourced from fetch_and_deploy_gh_release calls,
get_latest_github_release calls, or known project repos for npm/pip-installed
apps.

Two scripts (fumadocs, pve-scripts-local) had no Source line at all —
added one. Shinobi skipped (GitLab-only, no GitHub repo).
2026-02-24 14:21:43 +01:00
CanbiZ (MickLesk)
7bb14b4d22 Update build.func 2026-02-24 14:03:09 +01:00
CanbiZ (MickLesk)
75c0b1bfc9 fix(error-handler): prevent silent() from re-enabling error handling during recovery
Root cause: silent() (core.func) unconditionally calls set -Eeuo pipefail
and trap 'error_handler' ERR after every command. When build_container()
intentionally disables error handling for its recovery section, any
intermediate call through silent() re-enables it. This causes the
grep/sed pipeline for missing_cmd extraction to trigger error_handler
(grep returns exit code 1 on no match + pipefail = fatal).

Fixes:
1. silent(): Save the errexit state before disabling it, and restore it only
   if it was active. Callers that intentionally disabled error handling (e.g.
   build_container recovery) are no longer silently re-enabled.

2. build.func: Add || true to the missing_cmd grep pipeline as defense in
   depth against pipeline-failure propagation.

3. build.func: Add an explicit set +Eeuo pipefail / trap - ERR after the
   post_update_to_api() call, before the error-classification grep/sed
   section.

4. build.func: Remove stale global combined_log variable from variables()
   that used a different path format (/tmp/install-SESSION-combined.log)
   than the actual local variable (/tmp/NSAPP-CTID-SESSION.log). The global
   was never written to and caused confusion when error_handler displayed it.
2026-02-24 13:51:27 +01:00
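A minimal sketch of the silent() change from fix 1 (simplified; the real implementation lives in misc/core.func):

  silent() {
    # Remember whether errexit was active before disabling it
    local had_errexit=0
    [[ $- == *e* ]] && had_errexit=1
    set +Eeuo pipefail
    trap - ERR
    "$@" >/dev/null 2>&1
    local rc=$?
    # Restore error handling only for callers that had it enabled, so
    # build_container's recovery section is not silently re-armed
    if [[ $had_errexit -eq 1 ]]; then
      set -Eeuo pipefail
      trap 'error_handler' ERR
    fi
    return $rc
  }

Fix 2 is the standard pipefail guard: grep exits 1 when nothing matches, so ending the missing_cmd pipeline in || true keeps a non-match from being escalated to a fatal error.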
43 changed files with 305 additions and 1514 deletions

View File

@@ -13,7 +13,7 @@ permissions:
jobs:
check-node-versions:
if: github.repository == 'community-scripts/ProxmoxVE'
runs-on: coolify-runner
runs-on: ubuntu-latest
steps:
- name: Checkout Repository
@@ -110,94 +110,22 @@ jobs:
}
# Extract Node major from engines.node in package.json
# Sets: ENGINES_NODE_RAW (raw string), ENGINES_MIN_MAJOR, ENGINES_IS_MINIMUM
# Sets: ENGINES_NODE_RAW (raw string), ENGINES_MIN_MAJOR
extract_engines_node() {
local content="$1"
ENGINES_NODE_RAW=""
ENGINES_MIN_MAJOR=""
ENGINES_IS_MINIMUM="false"
ENGINES_NODE_RAW=$(echo "$content" | jq -r '.engines.node // empty' 2>/dev/null || echo "")
if [[ -z "$ENGINES_NODE_RAW" ]]; then
return
fi
# Detect if constraint is a minimum (>=, ^) vs exact pinning
if [[ "$ENGINES_NODE_RAW" =~ ^(\>=|\^|\~) ]]; then
ENGINES_IS_MINIMUM="true"
fi
# Extract the first number (major) from the constraint
# Handles: ">=24.13.1", "^22", ">=18.0.0", ">=18.15.0 <19 || ^20", etc.
ENGINES_MIN_MAJOR=$(echo "$ENGINES_NODE_RAW" | grep -oP '\d+' | head -1 || echo "")
}
# Check if our_version satisfies an engines.node constraint
# Returns 0 if satisfied, 1 if not
# Usage: version_satisfies_engines "22" ">=18.0.0" "true"
version_satisfies_engines() {
local our="$1"
local min_major="$2"
local is_minimum="$3"
if [[ -z "$min_major" || -z "$our" ]]; then
return 1
fi
if [[ "$is_minimum" == "true" ]]; then
# >= or ^ constraint: our version must be >= min_major
if [[ "$our" -ge "$min_major" ]]; then
return 0
fi
fi
return 1
}
# Search for files in subdirectories via GitHub API tree
# Usage: find_repo_file "owner/repo" "branch" "filename" => sets REPLY to raw URL or empty
find_repo_file() {
local repo="$1"
local branch="$2"
local filename="$3"
REPLY=""
# Try root first (fast)
local root_url="https://raw.githubusercontent.com/${repo}/${branch}/${filename}"
if curl -sfI "$root_url" >/dev/null 2>&1; then
REPLY="$root_url"
return
fi
# Search via GitHub API tree (recursive)
local tree_url="https://api.github.com/repos/${repo}/git/trees/${branch}?recursive=1"
local tree_json
tree_json=$(curl -sf -H "Authorization: token $GH_TOKEN" "$tree_url" 2>/dev/null || echo "")
if [[ -z "$tree_json" ]]; then
return
fi
# Find first matching path (prefer shorter/root-level paths)
local match_path
match_path=$(echo "$tree_json" | jq -r --arg fn "$filename" \
'.tree[]? | select(.path | endswith("/" + $fn) or . == $fn) | .path' 2>/dev/null \
| sort | head -1 || echo "")
if [[ -n "$match_path" ]]; then
REPLY="https://raw.githubusercontent.com/${repo}/${branch}/${match_path}"
fi
}
# Extract Node major from .nvmrc or .node-version
# Sets: NVMRC_NODE_MAJOR
extract_nvmrc_node() {
local content="$1"
NVMRC_NODE_MAJOR=""
# .nvmrc/.node-version typically has: "v22.9.0", "22", "lts/iron", etc.
local ver
ver=$(echo "$content" | tr -d '[:space:]' | grep -oP '^v?\K[0-9]+' | head -1 || echo "")
NVMRC_NODE_MAJOR="$ver"
}
# Collect results
declare -a issue_scripts=()
declare -a report_lines=()
@@ -214,12 +142,8 @@ jobs:
total=$((total + 1))
slug=$(basename "$script" | sed 's/-install\.sh$//')
# Extract Source URL (GitHub only) from the "# Source:" line
# Supports both:
# # Source: https://github.com/owner/repo
# # Source: https://example.com | Github: https://github.com/owner/repo
# NOTE: Must filter for "# Source:" line first to avoid matching the License URL
source_url=$(head -20 "$script" | grep -i '# Source:' | grep -oP 'https://github\.com/[^\s|]+' | head -1 || echo "")
# Extract Source URL (GitHub only)
source_url=$(head -20 "$script" | grep -oP '(?<=# Source: )https://github\.com/[^\s]+' | head -1 || echo "")
if [[ -z "$source_url" ]]; then
report_lines+=("| \`$slug\` | — | — | — | — | ⏭️ No GitHub source |")
continue
@@ -243,23 +167,12 @@ jobs:
fi
fi
# Determine default branch via GitHub API (fast, single call)
detected_branch=""
api_default=$(curl -sf -H "Authorization: token $GH_TOKEN" \
"https://api.github.com/repos/${repo}" 2>/dev/null \
| jq -r '.default_branch // empty' 2>/dev/null || echo "")
if [[ -n "$api_default" ]]; then
detected_branch="$api_default"
else
detected_branch="main"
fi
# Fetch upstream Dockerfile (root + subdirectories)
# Fetch upstream Dockerfile
df_content=""
find_repo_file "$repo" "$detected_branch" "Dockerfile"
if [[ -n "$REPLY" ]]; then
df_content=$(curl -sf "$REPLY" 2>/dev/null || echo "")
fi
for branch in main master dev; do
df_content=$(curl -sf "https://raw.githubusercontent.com/${repo}/${branch}/Dockerfile" 2>/dev/null || echo "")
[[ -n "$df_content" ]] && break
done
DF_NODE_MAJOR=""
DF_SOURCE=""
@@ -267,35 +180,19 @@ jobs:
extract_dockerfile_node "$df_content"
fi
# Fetch upstream package.json (root + subdirectories)
# Fetch upstream package.json
pkg_content=""
find_repo_file "$repo" "$detected_branch" "package.json"
if [[ -n "$REPLY" ]]; then
pkg_content=$(curl -sf "$REPLY" 2>/dev/null || echo "")
fi
for branch in main master dev; do
pkg_content=$(curl -sf "https://raw.githubusercontent.com/${repo}/${branch}/package.json" 2>/dev/null || echo "")
[[ -n "$pkg_content" ]] && break
done
ENGINES_NODE_RAW=""
ENGINES_MIN_MAJOR=""
ENGINES_IS_MINIMUM="false"
if [[ -n "$pkg_content" ]]; then
extract_engines_node "$pkg_content"
fi
# Fallback: check .nvmrc or .node-version
NVMRC_NODE_MAJOR=""
if [[ -z "$DF_NODE_MAJOR" && -z "$ENGINES_MIN_MAJOR" ]]; then
for nvmfile in .nvmrc .node-version; do
find_repo_file "$repo" "$detected_branch" "$nvmfile"
if [[ -n "$REPLY" ]]; then
nvmrc_content=$(curl -sf "$REPLY" 2>/dev/null || echo "")
if [[ -n "$nvmrc_content" ]]; then
extract_nvmrc_node "$nvmrc_content"
[[ -n "$NVMRC_NODE_MAJOR" ]] && break
fi
fi
done
fi
# Determine upstream recommended major version
upstream_major=""
upstream_hint=""
@@ -306,9 +203,6 @@ jobs:
elif [[ -n "$ENGINES_MIN_MAJOR" ]]; then
upstream_major="$ENGINES_MIN_MAJOR"
upstream_hint="engines: $ENGINES_NODE_RAW"
elif [[ -n "$NVMRC_NODE_MAJOR" ]]; then
upstream_major="$NVMRC_NODE_MAJOR"
upstream_hint=".nvmrc/.node-version"
fi
# Build display values
@@ -320,23 +214,13 @@ jobs:
if [[ "$our_version" == "dynamic" ]]; then
status="🔄 Dynamic"
elif [[ "$our_version" == "unset" ]]; then
if [[ -n "$upstream_major" ]]; then
status="⚠️ NODE_VERSION not set (upstream=$upstream_major via $upstream_hint)"
else
status="⚠️ NODE_VERSION not set (no upstream info found)"
fi
status="⚠️ NODE_VERSION not set"
issue_scripts+=("$slug|$our_version|$upstream_major|$upstream_hint|$repo")
drift_count=$((drift_count + 1))
elif [[ -n "$upstream_major" && "$our_version" != "$upstream_major" ]]; then
# Check if engines.node is a minimum constraint that our version satisfies
if [[ -z "$DF_NODE_MAJOR" && "$ENGINES_IS_MINIMUM" == "true" ]] && \
version_satisfies_engines "$our_version" "$ENGINES_MIN_MAJOR" "$ENGINES_IS_MINIMUM"; then
status="✅ (engines: $ENGINES_NODE_RAW — ours: $our_version satisfies)"
else
status="🔸 Drift → upstream=$upstream_major ($upstream_hint)"
issue_scripts+=("$slug|$our_version|$upstream_major|$upstream_hint|$repo")
drift_count=$((drift_count + 1))
fi
status="🔸 Drift → upstream=$upstream_major ($upstream_hint)"
issue_scripts+=("$slug|$our_version|$upstream_major|$upstream_hint|$repo")
drift_count=$((drift_count + 1))
fi
report_lines+=("| \`$slug\` | $our_version | $engines_display | $dockerfile_display | [$repo](https://github.com/$repo) | $status |")

View File

@@ -1,119 +0,0 @@
name: Close Unauthorized New Script PRs
on:
pull_request_target:
branches: ["main"]
types: [opened, labeled]
jobs:
check-new-script:
if: github.repository == 'community-scripts/ProxmoxVE'
runs-on: coolify-runner
permissions:
pull-requests: write
contents: read
steps:
- name: Close PR if unauthorized new script submission
uses: actions/github-script@v7
with:
script: |
const pr = context.payload.pull_request;
const prNumber = pr.number;
const author = pr.user.login;
const authorType = pr.user.type; // "User" or "Bot"
const owner = context.repo.owner;
const repo = context.repo.repo;
// --- Only act on PRs with the "new script" label ---
const labels = pr.labels.map(l => l.name);
if (!labels.includes("new script")) {
core.info(`PR #${prNumber} does not have "new script" label — skipping.`);
return;
}
// --- Allow our bots ---
const allowedBots = [
"push-app-to-main[bot]",
"push-app-to-main",
];
if (allowedBots.includes(author)) {
core.info(`PR #${prNumber} by allowed bot "${author}" — skipping.`);
return;
}
// --- Check if author is a member of the contributor team ---
const teamSlug = "contributor";
let isMember = false;
try {
const { status } = await github.rest.teams.getMembershipForUserInOrg({
org: owner,
team_slug: teamSlug,
username: author,
});
// status 200 means the user is a member (active or pending)
isMember = true;
} catch (error) {
if (error.status === 404) {
isMember = false;
} else {
core.warning(`Could not check team membership for ${author}: ${error.message}`);
// Fallback: check org membership
try {
await github.rest.orgs.checkMembershipForUser({
org: owner,
username: author,
});
isMember = true;
} catch {
isMember = false;
}
}
}
if (isMember) {
core.info(`PR #${prNumber} by contributor "${author}" — skipping.`);
return;
}
// --- Unauthorized: close the PR with a comment ---
core.info(`Closing PR #${prNumber} by "${author}" — not a contributor or allowed bot.`);
const comment = [
`👋 Hi @${author},`,
``,
`Thank you for your interest in contributing a new script!`,
``,
`However, **new scripts must first be submitted to our development repository** for testing and review before they can be merged here.`,
``,
`> 🛑 New scripts must be submitted to [**ProxmoxVED**](https://github.com/community-scripts/ProxmoxVED) for testing.`,
`> PRs without prior testing will be closed.`,
``,
`Please open your PR at **https://github.com/community-scripts/ProxmoxVED** instead.`,
`Once your script has been tested and approved there, it will be pushed to this repository automatically.`,
``,
`This PR will now be closed. Thank you for understanding! 🙏`,
].join("\n");
await github.rest.issues.createComment({
owner,
repo,
issue_number: prNumber,
body: comment,
});
await github.rest.pulls.update({
owner,
repo,
pull_number: prNumber,
state: "closed",
});
// Add a label to indicate why it was closed
await github.rest.issues.addLabels({
owner,
repo,
issue_number: prNumber,
labels: ["not a script issue"],
});

View File

@@ -407,98 +407,27 @@ Exercise vigilance regarding copycat or coat-tailing sites that seek to exploit
</details>
## 2026-02-26
### 🆕 New Scripts
- Kima-Hub ([#12319](https://github.com/community-scripts/ProxmoxVE/pull/12319))
### 🚀 Updated Scripts
- #### 🐞 Bug Fixes
- hotfix: overseer version [@CrazyWolf13](https://github.com/CrazyWolf13) ([#12366](https://github.com/community-scripts/ProxmoxVE/pull/12366))
- #### ✨ New Features
- [QOL] Immich: add warning regarding library compilation time [@vhsdream](https://github.com/vhsdream) ([#12345](https://github.com/community-scripts/ProxmoxVE/pull/12345))
### 📂 Github
- github: add workflow to autom. close unauthorized new-script PRs [@MickLesk](https://github.com/MickLesk) ([#12356](https://github.com/community-scripts/ProxmoxVE/pull/12356))
## 2026-02-25
### 🆕 New Scripts
- Zerobyte ([#12321](https://github.com/community-scripts/ProxmoxVE/pull/12321))
### 🚀 Updated Scripts
- #### 🐞 Bug Fixes
- fix: overseer migration [@CrazyWolf13](https://github.com/CrazyWolf13) ([#12340](https://github.com/community-scripts/ProxmoxVE/pull/12340))
- add: vikunja: daemon reload [@CrazyWolf13](https://github.com/CrazyWolf13) ([#12323](https://github.com/community-scripts/ProxmoxVE/pull/12323))
- opnsense-VM: Use ip link to verify bridge existence [@MickLesk](https://github.com/MickLesk) ([#12329](https://github.com/community-scripts/ProxmoxVE/pull/12329))
- wger: Use $http_host for proxy Host header [@MickLesk](https://github.com/MickLesk) ([#12327](https://github.com/community-scripts/ProxmoxVE/pull/12327))
- Passbolt: Update Nginx config `client_max_body_size` [@tremor021](https://github.com/tremor021) ([#12313](https://github.com/community-scripts/ProxmoxVE/pull/12313))
- Zammad: configure Elasticsearch before zammad start [@MickLesk](https://github.com/MickLesk) ([#12308](https://github.com/community-scripts/ProxmoxVE/pull/12308))
- #### 🔧 Refactor
- OpenProject: Various fixes [@tremor021](https://github.com/tremor021) ([#12246](https://github.com/community-scripts/ProxmoxVE/pull/12246))
### 💾 Core
- #### 🐞 Bug Fixes
- Fix detection of ssh keys [@1-tempest](https://github.com/1-tempest) ([#12230](https://github.com/community-scripts/ProxmoxVE/pull/12230))
- #### ✨ New Features
- tools.func: Improve GitHub/Codeberg API error handling and error output [@MickLesk](https://github.com/MickLesk) ([#12330](https://github.com/community-scripts/ProxmoxVE/pull/12330))
- #### 🔧 Refactor
- core: remove duplicate traps, consolidate error handling and harden signal traps [@MickLesk](https://github.com/MickLesk) ([#12316](https://github.com/community-scripts/ProxmoxVE/pull/12316))
### 📂 Github
- github: improvements for node drift wf [@MickLesk](https://github.com/MickLesk) ([#12309](https://github.com/community-scripts/ProxmoxVE/pull/12309))
## 2026-02-24
### 🚀 Updated Scripts
- several scripts: add additional github link in source [@MickLesk](https://github.com/MickLesk) ([#12282](https://github.com/community-scripts/ProxmoxVE/pull/12282))
- adds further documentation during the installation script. [@d12rio](https://github.com/d12rio) ([#12248](https://github.com/community-scripts/ProxmoxVE/pull/12248))
- #### 🐞 Bug Fixes
- [Fix] PatchMon: remove VITE_API_URL from frontend env [@vhsdream](https://github.com/vhsdream) ([#12294](https://github.com/community-scripts/ProxmoxVE/pull/12294))
- fix(searxng): remove orphaned fi causing syntax error [@mark-jeffrey](https://github.com/mark-jeffrey) ([#12283](https://github.com/community-scripts/ProxmoxVE/pull/12283))
- Refactor n8n [@MickLesk](https://github.com/MickLesk) ([#12264](https://github.com/community-scripts/ProxmoxVE/pull/12264))
- Firefly: PHP bump [@tremor021](https://github.com/tremor021) ([#12247](https://github.com/community-scripts/ProxmoxVE/pull/12247))
- #### ✨ New Features
- Databasus: add mariadb path for mysql/mariadb backups | add mongodb database tools [@MickLesk](https://github.com/MickLesk) ([#12259](https://github.com/community-scripts/ProxmoxVE/pull/12259))
- make searxng updateable [@shtefko](https://github.com/shtefko) ([#12207](https://github.com/community-scripts/ProxmoxVE/pull/12207))
- #### 💥 Breaking Changes
- fix: wealthfolio for v3 [@CrazyWolf13](https://github.com/CrazyWolf13) ([#11765](https://github.com/community-scripts/ProxmoxVE/pull/11765))
- #### 🔧 Refactor
- bump various scripts from Node 22 to 24 [@MickLesk](https://github.com/MickLesk) ([#12265](https://github.com/community-scripts/ProxmoxVE/pull/12265))
### 💾 Core
- #### 🐞 Bug Fixes
- core: fix broken "command not found" after err_trap [@MickLesk](https://github.com/MickLesk) ([#12280](https://github.com/community-scripts/ProxmoxVE/pull/12280))
- #### ✨ New Features
- tools.func: add get_latest_gh_tag helper function [@MickLesk](https://github.com/MickLesk) ([#12261](https://github.com/community-scripts/ProxmoxVE/pull/12261))

View File

@@ -38,31 +38,6 @@ function update_script() {
cp /opt/databasus/.env /opt/databasus.env.bak
msg_ok "Backed up Configuration"
msg_info "Ensuring Database Clients"
# Create PostgreSQL version symlinks for compatibility
for v in 12 13 14 15 16 18; do
ln -sf /usr/lib/postgresql/17 /usr/lib/postgresql/$v
done
# Install MongoDB Database Tools via direct .deb (no APT repo for Debian 13)
if ! command -v mongodump &>/dev/null; then
[[ "$(get_os_info id)" == "ubuntu" ]] && MONGO_DIST="ubuntu2204" || MONGO_DIST="debian12"
fetch_and_deploy_from_url "https://fastdl.mongodb.org/tools/db/mongodb-database-tools-${MONGO_DIST}-x86_64-100.14.1.deb"
fi
[[ -f /usr/bin/mongodump ]] && ln -sf /usr/bin/mongodump /usr/local/mongodb-database-tools/bin/mongodump
[[ -f /usr/bin/mongorestore ]] && ln -sf /usr/bin/mongorestore /usr/local/mongodb-database-tools/bin/mongorestore
# Create MariaDB and MySQL client symlinks for compatibility
ensure_dependencies mariadb-client
mkdir -p /usr/local/mariadb-{10.6,12.1}/bin /usr/local/mysql-{5.7,8.0,8.4,9}/bin /usr/local/mongodb-database-tools/bin
for dir in /usr/local/mariadb-{10.6,12.1}/bin; do
ln -sf /usr/bin/mariadb-dump "$dir/mariadb-dump"
ln -sf /usr/bin/mariadb "$dir/mariadb"
done
for dir in /usr/local/mysql-{5.7,8.0,8.4,9}/bin; do
ln -sf /usr/bin/mariadb-dump "$dir/mysqldump"
ln -sf /usr/bin/mariadb "$dir/mysql"
done
msg_ok "Ensured Database Clients"
CLEAN_INSTALL=1 fetch_and_deploy_gh_release "databasus" "databasus/databasus" "tarball" "latest" "/opt/databasus"
msg_info "Updating Databasus"
@@ -74,7 +49,6 @@ function update_script() {
$STD /root/go/bin/swag init -g cmd/main.go -o swagger
$STD env CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -o databasus ./cmd/main.go
mv /opt/databasus/backend/databasus /opt/databasus/databasus
mkdir -p /opt/databasus/ui/build
cp -r /opt/databasus/frontend/dist/* /opt/databasus/ui/build/
cp -r /opt/databasus/backend/migrations /opt/databasus/
chown -R postgres:postgres /opt/databasus

View File

@@ -1,6 +0,0 @@
__ __ _ __ __ __
/ //_/(_)___ ___ ____ _ / / / /_ __/ /_
/ ,< / / __ `__ \/ __ `/_____/ /_/ / / / / __ \
/ /| |/ / / / / / / /_/ /_____/ __ / /_/ / /_/ /
/_/ |_/_/_/ /_/ /_/\__,_/ /_/ /_/\__,_/_.___/

View File

@@ -1,6 +0,0 @@
_____ __ __
/__ / ___ _________ / /_ __ __/ /____
/ / / _ \/ ___/ __ \/ __ \/ / / / __/ _ \
/ /__/ __/ / / /_/ / /_/ / /_/ / /_/ __/
/____/\___/_/ \____/_.___/\__, /\__/\___/
/____/

View File

@@ -97,7 +97,7 @@ EOF
if [[ -f ~/.immich_library_revisions ]]; then
libraries=("libjxl" "libheif" "libraw" "imagemagick" "libvips")
cd "$BASE_DIR"
msg_warn "Checking for updates to custom image-processing libraries (recompile time: 2-15min per library)"
msg_info "Checking for updates to custom image-processing libraries"
$STD git pull
for library in "${libraries[@]}"; do
compile_"$library"

View File

@@ -28,7 +28,7 @@ function update_script() {
exit
fi
NODE_VERSION="24" NODE_MODULE="yarn,npm,pm2" setup_nodejs
NODE_VERSION=24 NODE_MODULE="yarn,npm,pm2" setup_nodejs
if check_for_gh_release "joplin-server" "laurent22/joplin"; then
msg_info "Stopping Services"

View File

@@ -1,79 +0,0 @@
#!/usr/bin/env bash
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/build.func)
# Copyright (c) 2021-2026 community-scripts ORG
# Author: MickLesk (CanbiZ)
# License: MIT | https://github.com/community-scripts/ProxmoxVE/raw/main/LICENSE
# Source: https://github.com/Chevron7Locked/kima-hub
APP="Kima-Hub"
var_tags="${var_tags:-music;streaming;media}"
var_cpu="${var_cpu:-4}"
var_ram="${var_ram:-8192}"
var_disk="${var_disk:-20}"
var_os="${var_os:-debian}"
var_version="${var_version:-13}"
var_unprivileged="${var_unprivileged:-1}"
header_info "$APP"
variables
color
catch_errors
function update_script() {
header_info
check_container_storage
check_container_resources
if [[ ! -d /opt/kima-hub ]]; then
msg_error "No ${APP} Installation Found!"
exit
fi
if check_for_gh_release "kima-hub" "Chevron7Locked/kima-hub"; then
msg_info "Stopping Services"
systemctl stop kima-frontend kima-backend kima-analyzer kima-analyzer-clap
msg_ok "Stopped Services"
msg_info "Backing up Data"
cp /opt/kima-hub/backend/.env /opt/kima-hub-backend-env.bak
cp /opt/kima-hub/frontend/.env /opt/kima-hub-frontend-env.bak
msg_ok "Backed up Data"
CLEAN_INSTALL=1 fetch_and_deploy_gh_release "kima-hub" "Chevron7Locked/kima-hub" "tarball"
msg_info "Restoring Data"
cp /opt/kima-hub-backend-env.bak /opt/kima-hub/backend/.env
cp /opt/kima-hub-frontend-env.bak /opt/kima-hub/frontend/.env
rm -f /opt/kima-hub-backend-env.bak /opt/kima-hub-frontend-env.bak
msg_ok "Restored Data"
msg_info "Rebuilding Backend"
cd /opt/kima-hub/backend
$STD npm install
$STD npm run build
$STD npx prisma generate
$STD npx prisma migrate deploy
msg_ok "Rebuilt Backend"
msg_info "Rebuilding Frontend"
cd /opt/kima-hub/frontend
$STD npm install
$STD npm run build
msg_ok "Rebuilt Frontend"
msg_info "Starting Services"
systemctl start kima-backend kima-frontend kima-analyzer kima-analyzer-clap
msg_ok "Started Services"
msg_ok "Updated successfully!"
fi
exit
}
start
build_container
description
msg_ok "Completed successfully!\n"
echo -e "${CREATING}${GN}${APP} setup has been successfully initialized!${CL}"
echo -e "${INFO}${YW} Access it using the following URL:${CL}"
echo -e "${TAB}${GATEWAY}${BGN}http://${IP}:3030${CL}"

View File

@@ -11,7 +11,7 @@ var_cpu="${var_cpu:-2}"
var_ram="${var_ram:-4096}"
var_disk="${var_disk:-8}"
var_os="${var_os:-debian}"
var_version="${var_version:-13}"
var_version="${var_version:-12}"
var_unprivileged="${var_unprivileged:-1}"
header_info "$APP"
@@ -27,11 +27,10 @@ function update_script() {
msg_error "No ${APP} Installation Found!"
exit
fi
msg_info "Updating OpenProject"
msg_info "Updating ${APP}"
$STD apt update
$STD apt install --only-upgrade -y openproject
msg_ok "Updated OpenProject"
msg_ok "Updated ${APP}"
msg_ok "Updated successfully!"
exit
}

View File

@@ -28,7 +28,7 @@ function update_script() {
exit
fi
if [[ -f "$HOME/.overseerr" ]] && [[ "$(printf '%s\n' "1.35.0" "$(cat "$HOME/.overseerr")" | sort -V | head -n1)" == "1.35.0" ]]; then
if [[ -f "$HOME/.overseerr" ]] && [[ "$(cat "$HOME/.overseerr")" == "1.34.0" ]]; then
echo
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo "Overseerr v1.34.0 detected."

View File

@@ -51,10 +51,13 @@ function update_script() {
msg_info "Updating PatchMon"
VERSION=$(get_latest_github_release "PatchMon/PatchMon")
PROTO="$(sed -n '/SERVER_PROTOCOL/s/[^=]*=//p' /opt/backend.env)"
HOST="$(sed -n '/SERVER_HOST/s/[^=]*=//p' /opt/backend.env)"
SERVER_PORT="$(sed -n '/SERVER_PORT/s/[^=]*=//p' /opt/backend.env)"
[[ "$PROTO" == "http" ]] && PORT=":3001"
sed -i 's/PORT=3399/PORT=3001/' /opt/backend.env
sed -i -e "s/VERSION=.*/VERSION=$VERSION/" \
-e '/^VITE_API_URL/d' /opt/frontend.env
-e "\|VITE_API_URL=|s|http.*|${PROTO:-http}://${HOST:-$LOCAL_IP}${PORT:-}/api/v1|" /opt/frontend.env
export NODE_ENV=production
cd /opt/patchmon
$STD npm install --no-audit --no-fund --no-save --ignore-scripts

View File

@@ -52,6 +52,7 @@ function update_script() {
systemctl start searxng
msg_ok "Started Services"
msg_ok "Updated successfully!"
fi
exit
}
start

View File

@@ -65,7 +65,6 @@ function update_script() {
msg_ok "Stopped Service"
fetch_and_deploy_gh_release "vikunja" "go-vikunja/vikunja" "binary"
$STD systemctl daemon-reload
msg_info "Starting Service"
systemctl start vikunja

View File

@@ -43,15 +43,16 @@ function update_script() {
msg_info "Building Frontend (patience)"
cd /opt/wealthfolio
export BUILD_TARGET=web
$STD pnpm install --frozen-lockfile
$STD pnpm --filter frontend... build
$STD pnpm tsc
$STD pnpm vite build
msg_ok "Built Frontend"
msg_info "Building Backend (patience)"
cd /opt/wealthfolio/src-server
source ~/.cargo/env
$STD cargo build --release --manifest-path apps/server/Cargo.toml
cp /opt/wealthfolio/target/release/wealthfolio-server /usr/local/bin/wealthfolio-server
$STD cargo build --release --manifest-path Cargo.toml
cp /opt/wealthfolio/src-server/target/release/wealthfolio-server /usr/local/bin/wealthfolio-server
chmod +x /usr/local/bin/wealthfolio-server
msg_ok "Built Backend"
@@ -62,7 +63,7 @@ function update_script() {
msg_ok "Restored Data"
msg_info "Cleaning Up"
rm -rf /opt/wealthfolio/target
rm -rf /opt/wealthfolio/src-server/target
rm -rf /root/.cargo/registry
rm -rf /opt/wealthfolio/node_modules
msg_ok "Cleaned Up"

View File

@@ -1,71 +0,0 @@
#!/usr/bin/env bash
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/build.func)
# Copyright (c) 2021-2026 community-scripts ORG
# Author: community-scripts
# License: MIT | https://github.com/community-scripts/ProxmoxVE/raw/main/LICENSE
# Source: https://github.com/nicotsx/zerobyte
APP="Zerobyte"
var_tags="${var_tags:-backup;encryption;restic}"
var_cpu="${var_cpu:-2}"
var_ram="${var_ram:-6144}"
var_disk="${var_disk:-10}"
var_os="${var_os:-debian}"
var_version="${var_version:-13}"
var_unprivileged="${var_unprivileged:-1}"
header_info "$APP"
variables
color
catch_errors
function update_script() {
header_info
check_container_storage
check_container_resources
if [[ ! -d /opt/zerobyte ]]; then
msg_error "No ${APP} Installation Found!"
exit
fi
if check_for_gh_release "zerobyte" "nicotsx/zerobyte"; then
msg_info "Stopping Service"
systemctl stop zerobyte
msg_ok "Stopped Service"
msg_info "Backing up Configuration"
cp /opt/zerobyte/.env /opt/zerobyte.env.bak
msg_ok "Backed up Configuration"
NODE_VERSION="24" setup_nodejs
CLEAN_INSTALL=1 fetch_and_deploy_gh_release "zerobyte" "nicotsx/zerobyte" "tarball"
msg_info "Building Zerobyte"
export NODE_OPTIONS="--max-old-space-size=3072"
cd /opt/zerobyte
$STD bun install
$STD node ./node_modules/vite/bin/vite.js build
msg_ok "Built Zerobyte"
msg_info "Restoring Configuration"
cp /opt/zerobyte.env.bak /opt/zerobyte/.env
rm -f /opt/zerobyte.env.bak
msg_ok "Restored Configuration"
msg_info "Starting Service"
systemctl start zerobyte
msg_ok "Started Service"
msg_ok "Updated successfully!"
fi
exit
}
start
build_container
description
msg_ok "Completed Successfully!\n"
echo -e "${CREATING}${GN}${APP} setup has been successfully initialized!${CL}"
echo -e "${INFO}${YW} Access it using the following URL:${CL}"
echo -e "${TAB}${GATEWAY}${BGN}http://${IP}:4096${CL}"

View File

@@ -29,7 +29,7 @@ function update_script() {
fi
if check_for_gh_release "Zigbee2MQTT" "Koenkk/zigbee2mqtt"; then
NODE_VERSION="24" NODE_MODULE="pnpm@$(curl -fsSL https://raw.githubusercontent.com/Koenkk/zigbee2mqtt/master/package.json | jq -r '.packageManager | split("@")[1]')" setup_nodejs
NODE_VERSION=24 NODE_MODULE="pnpm@$(curl -fsSL https://raw.githubusercontent.com/Koenkk/zigbee2mqtt/master/package.json | jq -r '.packageManager | split("@")[1]')" setup_nodejs
msg_info "Stopping Service"
systemctl stop zigbee2mqtt
msg_ok "Stopped Service"

View File

@@ -1,5 +1,5 @@
{
"generated": "2026-02-26T18:16:20Z",
"generated": "2026-02-24T12:15:44Z",
"versions": [
{
"slug": "2fauth",
@@ -39,9 +39,9 @@
{
"slug": "ampache",
"repo": "ampache/ampache",
"version": "7.9.1",
"version": "7.9.0",
"pinned": false,
"date": "2026-02-25T08:52:58Z"
"date": "2026-02-19T07:01:25Z"
},
{
"slug": "argus",
@@ -109,9 +109,9 @@
{
"slug": "bazarr",
"repo": "morpheus65535/bazarr",
"version": "v1.5.6",
"version": "v1.5.5",
"pinned": false,
"date": "2026-02-26T11:33:11Z"
"date": "2026-02-01T18:00:34Z"
},
{
"slug": "bentopdf",
@@ -151,9 +151,9 @@
{
"slug": "booklore",
"repo": "booklore-app/BookLore",
"version": "v2.0.2",
"version": "v2.0.1",
"pinned": false,
"date": "2026-02-25T19:59:20Z"
"date": "2026-02-24T04:15:33Z"
},
{
"slug": "bookstack",
@@ -200,9 +200,9 @@
{
"slug": "cleanuparr",
"repo": "Cleanuparr/Cleanuparr",
"version": "v2.7.5",
"version": "v2.7.4",
"pinned": false,
"date": "2026-02-24T17:11:50Z"
"date": "2026-02-23T18:41:16Z"
},
{
"slug": "cloudreve",
@@ -214,9 +214,9 @@
{
"slug": "comfyui",
"repo": "comfyanonymous/ComfyUI",
"version": "v0.15.0",
"version": "v0.14.2",
"pinned": false,
"date": "2026-02-24T20:56:09Z"
"date": "2026-02-18T06:12:02Z"
},
{
"slug": "commafeed",
@@ -242,9 +242,9 @@
{
"slug": "cosmos",
"repo": "azukaar/Cosmos-Server",
"version": "v0.21.2",
"version": "v0.20.2",
"pinned": false,
"date": "2026-02-26T11:32:33Z"
"date": "2026-01-24T00:12:39Z"
},
{
"slug": "cronicle",
@@ -270,16 +270,16 @@
{
"slug": "databasus",
"repo": "databasus/databasus",
"version": "v3.16.3",
"version": "v3.16.2",
"pinned": false,
"date": "2026-02-25T19:57:26Z"
"date": "2026-02-22T21:10:12Z"
},
{
"slug": "dawarich",
"repo": "Freika/dawarich",
"version": "1.3.0",
"version": "1.2.0",
"pinned": false,
"date": "2026-02-25T19:30:25Z"
"date": "2026-02-15T22:33:56Z"
},
{
"slug": "discopanel",
@@ -361,16 +361,16 @@
{
"slug": "endurain",
"repo": "endurain-project/endurain",
"version": "v0.17.5",
"version": "v0.17.4",
"pinned": false,
"date": "2026-02-24T14:51:03Z"
"date": "2026-02-11T04:54:22Z"
},
{
"slug": "ersatztv",
"repo": "ErsatzTV/ErsatzTV",
"version": "v26.3.0",
"version": "v26.2.0",
"pinned": false,
"date": "2026-02-24T21:36:34Z"
"date": "2026-02-02T20:54:26Z"
},
{
"slug": "excalidraw",
@@ -452,9 +452,9 @@
{
"slug": "gitea-mirror",
"repo": "RayLabsHQ/gitea-mirror",
"version": "v3.9.5",
"version": "v3.9.4",
"pinned": false,
"date": "2026-02-26T05:32:12Z"
"date": "2026-02-24T06:17:56Z"
},
{
"slug": "glance",
@@ -606,16 +606,16 @@
{
"slug": "invoiceninja",
"repo": "invoiceninja/invoiceninja",
"version": "v5.12.68",
"version": "v5.12.66",
"pinned": false,
"date": "2026-02-25T19:38:19Z"
"date": "2026-02-24T09:12:50Z"
},
{
"slug": "jackett",
"repo": "Jackett/Jackett",
"version": "v0.24.1218",
"version": "v0.24.1193",
"pinned": false,
"date": "2026-02-26T05:55:11Z"
"date": "2026-02-24T05:58:04Z"
},
{
"slug": "jellystat",
@@ -627,9 +627,9 @@
{
"slug": "joplin-server",
"repo": "laurent22/joplin",
"version": "v3.5.13",
"version": "v3.5.12",
"pinned": false,
"date": "2026-02-25T21:19:11Z"
"date": "2026-01-17T14:20:33Z"
},
{
"slug": "jotty",
@@ -666,19 +666,12 @@
"pinned": false,
"date": "2026-02-20T09:19:45Z"
},
{
"slug": "kima-hub",
"repo": "Chevron7Locked/kima-hub",
"version": "v1.5.7",
"pinned": false,
"date": "2026-02-23T23:58:59Z"
},
{
"slug": "kimai",
"repo": "kimai/kimai",
"version": "2.50.0",
"version": "2.49.0",
"pinned": false,
"date": "2026-02-25T20:13:51Z"
"date": "2026-02-15T20:40:19Z"
},
{
"slug": "kitchenowl",
@@ -718,9 +711,9 @@
{
"slug": "kubo",
"repo": "ipfs/kubo",
"version": "v0.40.0",
"version": "v0.39.0",
"pinned": false,
"date": "2026-02-25T23:16:17Z"
"date": "2025-11-27T03:47:38Z"
},
{
"slug": "kutt",
@@ -753,9 +746,9 @@
{
"slug": "libretranslate",
"repo": "LibreTranslate/LibreTranslate",
"version": "v1.9.4",
"version": "v1.9.3",
"pinned": false,
"date": "2026-02-24T17:06:05Z"
"date": "2026-02-21T19:08:33Z"
},
{
"slug": "lidarr",
@@ -816,9 +809,9 @@
{
"slug": "mail-archiver",
"repo": "s1t5/mail-archiver",
"version": "2602.4",
"version": "2602.3",
"pinned": false,
"date": "2026-02-26T08:43:01Z"
"date": "2026-02-22T20:24:18Z"
},
{
"slug": "managemydamnlife",
@@ -830,9 +823,9 @@
{
"slug": "manyfold",
"repo": "manyfold3d/manyfold",
"version": "v0.133.1",
"version": "v0.132.1",
"pinned": false,
"date": "2026-02-26T15:50:34Z"
"date": "2026-02-09T22:02:28Z"
},
{
"slug": "mealie",
@@ -970,9 +963,9 @@
{
"slug": "oauth2-proxy",
"repo": "oauth2-proxy/oauth2-proxy",
"version": "v7.14.3",
"version": "v7.14.2",
"pinned": false,
"date": "2026-02-26T14:10:21Z"
"date": "2026-01-18T00:26:09Z"
},
{
"slug": "ombi",
@@ -984,9 +977,9 @@
{
"slug": "open-archiver",
"repo": "LogicLabs-OU/OpenArchiver",
"version": "v0.4.2",
"version": "v0.4.1",
"pinned": false,
"date": "2026-02-24T20:47:40Z"
"date": "2026-01-17T12:24:31Z"
},
{
"slug": "opencloud",
@@ -1002,13 +995,6 @@
"pinned": false,
"date": "2026-02-03T09:00:43Z"
},
{
"slug": "openproject",
"repo": "jemalloc/jemalloc",
"version": "5.3.0",
"pinned": false,
"date": "2022-05-06T19:14:21Z"
},
{
"slug": "ots",
"repo": "Luzifer/ots",
@@ -1061,9 +1047,9 @@
{
"slug": "paperless-gpt",
"repo": "icereed/paperless-gpt",
"version": "v0.25.1",
"version": "v0.25.0",
"pinned": false,
"date": "2026-02-26T14:50:11Z"
"date": "2026-02-16T08:31:48Z"
},
{
"slug": "paperless-ngx",
@@ -1173,9 +1159,9 @@
{
"slug": "prometheus",
"repo": "prometheus/prometheus",
"version": "v3.10.0",
"version": "v3.9.1",
"pinned": false,
"date": "2026-02-26T01:19:51Z"
"date": "2026-01-07T17:05:53Z"
},
{
"slug": "prometheus-alertmanager",
@@ -1222,9 +1208,9 @@
{
"slug": "pulse",
"repo": "rcourtman/Pulse",
"version": "v5.1.14",
"version": "v5.1.13",
"pinned": false,
"date": "2026-02-25T00:11:58Z"
"date": "2026-02-22T12:40:41Z"
},
{
"slug": "pve-scripts-local",
@@ -1271,9 +1257,9 @@
{
"slug": "radicale",
"repo": "Kozea/Radicale",
"version": "v3.6.1",
"version": "v3.6.0",
"pinned": false,
"date": "2026-02-24T06:36:23Z"
"date": "2026-01-10T06:56:46Z"
},
{
"slug": "rclone",
@@ -1299,9 +1285,9 @@
{
"slug": "recyclarr",
"repo": "recyclarr/recyclarr",
"version": "v8.3.2",
"version": "v8.2.1",
"pinned": false,
"date": "2026-02-25T22:39:51Z"
"date": "2026-02-22T19:19:49Z"
},
{
"slug": "reitti",
@@ -1348,9 +1334,9 @@
{
"slug": "scanopy",
"repo": "scanopy/scanopy",
"version": "v0.14.8",
"version": "v0.14.7",
"pinned": false,
"date": "2026-02-24T16:45:30Z"
"date": "2026-02-23T01:36:44Z"
},
{
"slug": "scraparr",
@@ -1383,9 +1369,9 @@
{
"slug": "semaphore",
"repo": "semaphoreui/semaphore",
"version": "v2.17.14",
"version": "v2.17.12",
"pinned": false,
"date": "2026-02-24T14:27:03Z"
"date": "2026-02-20T09:16:50Z"
},
{
"slug": "shelfmark",
@@ -1397,9 +1383,9 @@
{
"slug": "signoz",
"repo": "SigNoz/signoz-otel-collector",
"version": "v0.144.2",
"version": "v0.142.1",
"pinned": false,
"date": "2026-02-26T05:57:26Z"
"date": "2026-02-21T18:04:07Z"
},
{
"slug": "silverbullet",
@@ -1607,9 +1593,9 @@
{
"slug": "tunarr",
"repo": "chrisbenincasa/tunarr",
"version": "v1.1.17",
"version": "v1.1.16",
"pinned": false,
"date": "2026-02-25T19:56:36Z"
"date": "2026-02-23T21:24:47Z"
},
{
"slug": "uhf",
@@ -1635,9 +1621,9 @@
{
"slug": "upgopher",
"repo": "wanetty/upgopher",
"version": "v1.14.0",
"version": "v1.13.0",
"pinned": false,
"date": "2026-02-24T22:43:34Z"
"date": "2026-01-16T20:26:34Z"
},
{
"slug": "upsnap",
@@ -1670,9 +1656,9 @@
{
"slug": "vikunja",
"repo": "go-vikunja/vikunja",
"version": "v2.0.0",
"version": "v1.1.0",
"pinned": false,
"date": "2026-02-25T13:58:47Z"
"date": "2026-02-09T10:34:29Z"
},
{
"slug": "wallabag",
@@ -1693,7 +1679,7 @@
"repo": "meilisearch/meilisearch",
"version": "v1.36.0",
"pinned": false,
"date": "2026-02-23T08:13:32Z"
"date": ""
},
{
"slug": "warracker",
@@ -1726,9 +1712,9 @@
{
"slug": "wealthfolio",
"repo": "afadil/wealthfolio",
"version": "v3.0.0",
"version": "v2.1.0",
"pinned": false,
"date": "2026-02-24T22:37:05Z"
"date": "2025-12-01T21:57:36Z"
},
{
"slug": "web-check",
@@ -1761,9 +1747,9 @@
{
"slug": "wizarr",
"repo": "wizarrrr/wizarr",
"version": "v2026.2.1",
"version": "v2026.2.0",
"pinned": false,
"date": "2026-02-25T01:07:56Z"
"date": "2026-02-23T19:25:28Z"
},
{
"slug": "writefreely",
@@ -1782,16 +1768,9 @@
{
"slug": "yubal",
"repo": "guillevc/yubal",
"version": "v0.6.2",
"version": "v0.6.1",
"pinned": false,
"date": "2026-02-24T15:15:46Z"
},
{
"slug": "zerobyte",
"repo": "restic/restic",
"version": "v0.18.1",
"pinned": false,
"date": "2025-09-21T18:24:38Z"
"date": "2026-02-18T23:24:16Z"
},
{
"slug": "zigbee2mqtt",
@@ -1810,9 +1789,9 @@
{
"slug": "zitadel",
"repo": "zitadel/zitadel",
"version": "v4.11.1",
"version": "v4.11.0",
"pinned": false,
"date": "2026-02-25T06:13:13Z"
"date": "2026-02-16T09:48:38Z"
},
{
"slug": "zoraxy",

View File

@@ -51,10 +51,6 @@
{
"text": "Logs: `/var/log/immich`",
"type": "info"
},
{
"text": "During first install, 5 custom libraries need to be compiled from source. Depending on your CPU, this can take anywhere between 15 minutes and 2 hours. Please be patient. Touch grass or something.",
"type": "warning"
}
]
}

View File

@@ -1,48 +0,0 @@
{
"name": "Kima-Hub",
"slug": "kima-hub",
"categories": [
13
],
"date_created": "2026-02-26",
"type": "ct",
"updateable": true,
"privileged": false,
"interface_port": 3030,
"documentation": "https://github.com/Chevron7Locked/kima-hub#readme",
"website": "https://github.com/Chevron7Locked/kima-hub",
"logo": "https://cdn.jsdelivr.net/gh/selfhst/icons@main/webp/kima-hub.webp",
"config_path": "/opt/kima-hub/backend/.env",
"description": "Self-hosted, on-demand audio streaming platform with AI-powered vibe matching, mood detection, smart playlists, and Lidarr/Audiobookshelf integration.",
"install_methods": [
{
"type": "default",
"script": "ct/kima-hub.sh",
"resources": {
"cpu": 4,
"ram": 8192,
"hdd": 20,
"os": "Debian",
"version": "13"
}
}
],
"default_credentials": {
"username": null,
"password": null
},
"notes": [
{
"text": "First user to register becomes the administrator.",
"type": "info"
},
{
"text": "Mount your music library to /music in the container.",
"type": "warning"
},
{
"text": "Audio analysis (mood/vibe detection) requires significant RAM (2-4GB per worker).",
"type": "info"
}
]
}

View File

@@ -23,7 +23,7 @@
"ram": 4096,
"hdd": 8,
"os": "Debian",
"version": "13"
"version": "12"
}
}
],
@@ -31,10 +31,5 @@
"username": "admin",
"password": "admin"
},
"notes": [
{
"text": "If you want to update from v15.x to v17.x, please read `https://www.openproject.org/docs/installation-and-operations/operation/upgrading/#major-upgrades` before doing so.",
"type": "warning"
}
]
"notes": []
}

View File

@@ -1,40 +0,0 @@
{
"name": "Zerobyte",
"slug": "zerobyte",
"categories": [
7
],
"date_created": "2026-02-25",
"type": "ct",
"updateable": true,
"privileged": false,
"interface_port": 4096,
"documentation": "https://github.com/nicotsx/zerobyte#readme",
"website": "https://github.com/nicotsx/zerobyte",
"logo": "https://cdn.jsdelivr.net/gh/selfhst/icons@main/webp/zerobyte.webp",
"config_path": "/opt/zerobyte/.env",
"description": "Zerobyte is a backup automation tool built on top of Restic that provides a modern web interface to schedule, manage, and monitor encrypted backups across multiple storage backends including NFS, SMB, WebDAV, SFTP, S3, and local directories.",
"install_methods": [
{
"type": "default",
"script": "ct/zerobyte.sh",
"resources": {
"cpu": 2,
"ram": 6144,
"hdd": 10,
"os": "Debian",
"version": "13"
}
}
],
"default_credentials": {
"username": null,
"password": null
},
"notes": [
{
"text": "For remote mount support (NFS, SMB, WebDAV, SFTP), enable FUSE device passthrough on the LXC container. (FUSE is pre-configured)",
"type": "info"
}
]
}

View File

@@ -39,7 +39,7 @@ $STD apt install -y \
texlive-xetex
msg_ok "Installed Dependencies"
NODE_VERSION="22" NODE_MODULE="bun" setup_nodejs
NODE_VERSION=22 NODE_MODULE="bun" setup_nodejs
fetch_and_deploy_gh_release "ConvertX" "C4illin/ConvertX" "tarball" "latest" "/opt/convertx"
msg_info "Installing ConvertX"

View File

@@ -15,40 +15,14 @@ update_os
msg_info "Installing Dependencies"
$STD apt install -y \
nginx \
valkey \
mariadb-client \
rclone
nginx \
valkey
msg_ok "Installed Dependencies"
PG_VERSION="17" setup_postgresql
setup_go
NODE_VERSION="24" setup_nodejs
msg_info "Installing Database Clients"
# Create PostgreSQL version symlinks for compatibility
for v in 12 13 14 15 16 18; do
ln -sf /usr/lib/postgresql/17 /usr/lib/postgresql/$v
done
# Install MongoDB Database Tools via direct .deb (no APT repo for Debian 13)
[[ "$(get_os_info id)" == "ubuntu" ]] && MONGO_DIST="ubuntu2204" || MONGO_DIST="debian12"
MONGO_VERSION=$(get_latest_gh_tag "mongodb/mongo-tools" "100." || echo "100.14.1")
fetch_and_deploy_from_url "https://fastdl.mongodb.org/tools/db/mongodb-database-tools-${MONGO_DIST}-x86_64-${MONGO_VERSION}.deb"
mkdir -p /usr/local/mongodb-database-tools/bin
[[ -f /usr/bin/mongodump ]] && ln -sf /usr/bin/mongodump /usr/local/mongodb-database-tools/bin/mongodump
[[ -f /usr/bin/mongorestore ]] && ln -sf /usr/bin/mongorestore /usr/local/mongodb-database-tools/bin/mongorestore
# Create MariaDB and MySQL client symlinks for compatibility
mkdir -p /usr/local/mariadb-{10.6,12.1}/bin /usr/local/mysql-{5.7,8.0,8.4,9}/bin
for dir in /usr/local/mariadb-{10.6,12.1}/bin; do
ln -sf /usr/bin/mariadb-dump "$dir/mariadb-dump"
ln -sf /usr/bin/mariadb "$dir/mariadb"
done
for dir in /usr/local/mysql-{5.7,8.0,8.4,9}/bin; do
ln -sf /usr/bin/mariadb-dump "$dir/mysqldump"
ln -sf /usr/bin/mariadb "$dir/mysql"
done
msg_ok "Installed Database Clients"
fetch_and_deploy_gh_release "databasus" "databasus/databasus" "tarball" "latest" "/opt/databasus"
msg_info "Building Databasus (Patience)"
@@ -73,6 +47,10 @@ msg_ok "Built Databasus"
msg_info "Configuring Databasus"
JWT_SECRET=$(openssl rand -hex 32)
ENCRYPTION_KEY=$(openssl rand -hex 32)
# Create PostgreSQL version symlinks for compatibility
for v in 12 13 14 15 16 18; do
ln -sf /usr/lib/postgresql/17 /usr/lib/postgresql/$v
done
# Install goose for migrations
$STD go install github.com/pressly/goose/v3/cmd/goose@latest
ln -sf /root/go/bin/goose /usr/local/bin/goose

View File

@@ -72,7 +72,7 @@ cat <<EOF >/etc/apache2/sites-available/firefly.conf
</VirtualHost>
EOF
chown www-data:www-data /opt/firefly/storage/oauth-*.key
$STD a2enmod php8.5
$STD a2enmod php8.4
$STD a2enmod rewrite
$STD a2ensite firefly.conf
$STD a2dissite 000-default.conf

View File

@@ -154,7 +154,7 @@ sed -i -e "/^#shared_preload/s/^#//;/^shared_preload/s/''/'vchord.so'/" /etc/pos
systemctl restart postgresql.service
PG_DB_NAME="immich" PG_DB_USER="immich" PG_DB_GRANT_SUPERUSER="true" PG_DB_SKIP_ALTER_ROLE="true" setup_postgresql_db
msg_warn "Compiling Custom Photo-processing Libraries (can take anywhere from 15min to 2h)"
msg_info "Compiling Custom Photo-processing Library (extreme patience)"
LD_LIBRARY_PATH=/usr/local/lib
export LD_RUN_PATH=/usr/local/lib
STAGING_DIR=/opt/staging

View File

@@ -21,7 +21,7 @@ msg_ok "Installed Dependencies"
PG_VERSION="17" setup_postgresql
PG_DB_NAME="joplin" PG_DB_USER="joplin" setup_postgresql_db
NODE_VERSION="24" NODE_MODULE="yarn,npm,pm2" setup_nodejs
NODE_VERSION=24 NODE_MODULE="yarn,npm,pm2" setup_nodejs
mkdir -p /opt/pm2
export PM2_HOME=/opt/pm2
$STD pm2 install pm2-logrotate

View File

@@ -1,212 +0,0 @@
#!/usr/bin/env bash
# Copyright (c) 2021-2026 community-scripts ORG
# Author: MickLesk (CanbiZ)
# License: MIT | https://github.com/community-scripts/ProxmoxVE/raw/main/LICENSE
# Source: https://github.com/Chevron7Locked/kima-hub
source /dev/stdin <<<"$FUNCTIONS_FILE_PATH"
color
verb_ip6
catch_errors
setting_up_container
network_check
update_os
msg_info "Installing Dependencies"
$STD apt-get install -y \
build-essential \
git \
openssl \
ffmpeg \
python3 \
python3-pip \
python3-dev \
python3-numpy \
redis-server
msg_ok "Installed Dependencies"
PG_VERSION="16" PG_MODULES="pgvector" setup_postgresql
PG_DB_NAME="kima" PG_DB_USER="kima" PG_DB_GRANT_SUPERUSER="true" setup_postgresql_db
NODE_VERSION="20" setup_nodejs
msg_info "Configuring Redis"
systemctl enable -q --now redis-server
msg_ok "Configured Redis"
fetch_and_deploy_gh_release "kima-hub" "Chevron7Locked/kima-hub" "tarball"
msg_info "Installing Python Dependencies"
export PIP_BREAK_SYSTEM_PACKAGES=1
$STD pip3 install --no-cache-dir \
tensorflow \
essentia-tensorflow \
redis \
psycopg2-binary \
laion-clap \
torch \
torchaudio \
librosa \
transformers \
pgvector \
python-dotenv \
requests
msg_ok "Installed Python Dependencies"
msg_info "Downloading Essentia ML Models"
mkdir -p /opt/kima-hub/models
cd /opt/kima-hub/models
curl -fsSL -o msd-musicnn-1.pb "https://essentia.upf.edu/models/autotagging/msd/msd-musicnn-1.pb"
curl -fsSL -o mood_happy-msd-musicnn-1.pb "https://essentia.upf.edu/models/classification-heads/mood_happy/mood_happy-msd-musicnn-1.pb"
curl -fsSL -o mood_sad-msd-musicnn-1.pb "https://essentia.upf.edu/models/classification-heads/mood_sad/mood_sad-msd-musicnn-1.pb"
curl -fsSL -o mood_relaxed-msd-musicnn-1.pb "https://essentia.upf.edu/models/classification-heads/mood_relaxed/mood_relaxed-msd-musicnn-1.pb"
curl -fsSL -o mood_aggressive-msd-musicnn-1.pb "https://essentia.upf.edu/models/classification-heads/mood_aggressive/mood_aggressive-msd-musicnn-1.pb"
curl -fsSL -o mood_party-msd-musicnn-1.pb "https://essentia.upf.edu/models/classification-heads/mood_party/mood_party-msd-musicnn-1.pb"
curl -fsSL -o mood_acoustic-msd-musicnn-1.pb "https://essentia.upf.edu/models/classification-heads/mood_acoustic/mood_acoustic-msd-musicnn-1.pb"
curl -fsSL -o mood_electronic-msd-musicnn-1.pb "https://essentia.upf.edu/models/classification-heads/mood_electronic/mood_electronic-msd-musicnn-1.pb"
curl -fsSL -o danceability-msd-musicnn-1.pb "https://essentia.upf.edu/models/classification-heads/danceability/danceability-msd-musicnn-1.pb"
curl -fsSL -o voice_instrumental-msd-musicnn-1.pb "https://essentia.upf.edu/models/classification-heads/voice_instrumental/voice_instrumental-msd-musicnn-1.pb"
msg_ok "Downloaded Essentia ML Models"
msg_info "Downloading CLAP Model"
curl -fsSL -o /opt/kima-hub/models/music_audioset_epoch_15_esc_90.14.pt "https://huggingface.co/lukewys/laion_clap/resolve/main/music_audioset_epoch_15_esc_90.14.pt"
msg_ok "Downloaded CLAP Model"
msg_info "Building Backend"
cd /opt/kima-hub/backend
$STD npm ci
$STD npm run build
msg_ok "Built Backend"
msg_info "Configuring Backend"
SESSION_SECRET=$(openssl rand -hex 32)
ENCRYPTION_KEY=$(openssl rand -hex 32)
cat <<EOF >/opt/kima-hub/backend/.env
NODE_ENV=production
DATABASE_URL=postgresql://${PG_DB_USER}:${PG_DB_PASS}@localhost:5432/${PG_DB_NAME}
REDIS_URL=redis://localhost:6379
PORT=3006
MUSIC_PATH=/music
TRANSCODE_CACHE_PATH=/opt/kima-hub/cache/transcodes
SESSION_SECRET=${SESSION_SECRET}
SETTINGS_ENCRYPTION_KEY=${ENCRYPTION_KEY}
INTERNAL_API_SECRET=$(openssl rand -hex 16)
EOF
msg_ok "Configured Backend"
msg_info "Running Database Migrations"
cd /opt/kima-hub/backend
$STD npx prisma generate
$STD npx prisma migrate deploy
msg_ok "Ran Database Migrations"
msg_info "Building Frontend"
cd /opt/kima-hub/frontend
$STD npm ci
export NEXT_PUBLIC_BACKEND_URL=http://127.0.0.1:3006
$STD npm run build
msg_ok "Built Frontend"
msg_info "Configuring Frontend"
cat <<EOF >/opt/kima-hub/frontend/.env
NODE_ENV=production
BACKEND_URL=http://localhost:3006
PORT=3030
EOF
msg_ok "Configured Frontend"
msg_info "Creating Directories"
mkdir -p /opt/kima-hub/cache/transcodes
mkdir -p /music
msg_ok "Created Directories"
msg_info "Creating Services"
cat <<EOF >/etc/systemd/system/kima-backend.service
[Unit]
Description=Kima Hub Backend
After=network.target postgresql.service redis-server.service
Wants=postgresql.service redis-server.service
[Service]
Type=simple
User=root
WorkingDirectory=/opt/kima-hub/backend
EnvironmentFile=/opt/kima-hub/backend/.env
ExecStart=/usr/bin/node /opt/kima-hub/backend/dist/index.js
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF
cat <<EOF >/etc/systemd/system/kima-frontend.service
[Unit]
Description=Kima Hub Frontend
After=network.target kima-backend.service
[Service]
Type=simple
User=root
WorkingDirectory=/opt/kima-hub/frontend
EnvironmentFile=/opt/kima-hub/frontend/.env
ExecStart=/usr/bin/npm start
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF
cat <<EOF >/etc/systemd/system/kima-analyzer.service
[Unit]
Description=Kima Hub Audio Analyzer (Essentia)
After=network.target postgresql.service redis-server.service kima-backend.service
[Service]
Type=simple
User=root
WorkingDirectory=/opt/kima-hub/services/audio-analyzer
Environment=DATABASE_URL=postgresql://${PG_DB_USER}:${PG_DB_PASS}@localhost:5432/${PG_DB_NAME}
Environment=REDIS_URL=redis://localhost:6379
Environment=MUSIC_PATH=/music
Environment=BATCH_SIZE=10
Environment=SLEEP_INTERVAL=5
Environment=NUM_WORKERS=2
Environment=THREADS_PER_WORKER=1
ExecStart=/usr/bin/python3 /opt/kima-hub/services/audio-analyzer/analyzer.py
Restart=on-failure
RestartSec=10
[Install]
WantedBy=multi-user.target
EOF
cat <<EOF >/etc/systemd/system/kima-analyzer-clap.service
[Unit]
Description=Kima Hub CLAP Audio Analyzer
After=network.target postgresql.service redis-server.service kima-backend.service kima-analyzer.service
[Service]
Type=simple
User=root
WorkingDirectory=/opt/kima-hub/services/audio-analyzer-clap
Environment=DATABASE_URL=postgresql://${PG_DB_USER}:${PG_DB_PASS}@localhost:5432/${PG_DB_NAME}
Environment=REDIS_URL=redis://localhost:6379
Environment=BACKEND_URL=http://localhost:3006
Environment=MUSIC_PATH=/music
Environment=SLEEP_INTERVAL=5
Environment=NUM_WORKERS=1
ExecStart=/usr/bin/python3 /opt/kima-hub/services/audio-analyzer-clap/analyzer.py
Restart=on-failure
RestartSec=10
[Install]
WantedBy=multi-user.target
EOF
systemctl enable -q --now kima-backend kima-frontend kima-analyzer kima-analyzer-clap
msg_ok "Created Services"
motd_ssh
customize
cleanup_lxc

View File

@@ -14,30 +14,19 @@ network_check
update_os
msg_info "Installing Dependencies"
$STD apt install -y \
apt-transport-https \
build-essential \
autoconf
$STD apt install -y apt-transport-https
msg_ok "Installed Dependencies"
PG_VERSION="17" setup_postgresql
PG_DB_NAME="openproject" PG_DB_USER="openproject" setup_postgresql_db
API_KEY=$(openssl rand -base64 18 | tr -dc 'a-zA-Z0-9' | cut -c1-13)
echo "OpenProject API Key: $API_KEY" >>~/openproject.creds
fetch_and_deploy_gh_release "jemalloc" "jemalloc/jemalloc" "tarball"
msg_info "Compiling jemalloc (Patience)"
cd /opt/jemalloc
$STD ./autogen.sh
$STD make
$STD make install
msg_ok "Compiled jemalloc"
setup_deb822_repo \
"openproject" \
"https://packages.openproject.com/srv/deb/opf/openproject/gpg-key.gpg" \
"https://packages.openproject.com/srv/deb/opf/openproject/stable/17/debian/" \
"12"
msg_info "Setting up OpenProject Repository"
curl -fsSL "https://dl.packager.io/srv/opf/openproject/key" | gpg --dearmor >/etc/apt/trusted.gpg.d/packager-io.gpg
curl -fsSL "https://dl.packager.io/srv/opf/openproject/stable/15/installer/debian/12.repo" -o "/etc/apt/sources.list.d/openproject.list"
$STD apt update
msg_ok "Setup OpenProject Repository"
msg_info "Installing OpenProject"
$STD apt install -y openproject
@@ -71,11 +60,6 @@ openproject/admin_email admin@example.net
openproject/default_language en
EOF
$STD sudo openproject configure
systemctl stop openproject-web-1
if ! grep -qF 'Environment=LD_PRELOAD=/usr/local/lib/libjemalloc.so.2' /etc/systemd/system/openproject-web-1.service; then
sed -i '/^\[Service\]/a Environment=LD_PRELOAD=/usr/local/lib/libjemalloc.so.2' /etc/systemd/system/openproject-web-1.service
fi
systemctl start openproject-web-1
msg_ok "Configured OpenProject"
motd_ssh

View File

@@ -44,8 +44,6 @@ echo passbolt-ce-server passbolt/nginx-domain string $LOCAL_IP | debconf-set-sel
echo passbolt-ce-server passbolt/nginx-certificate-file string /etc/ssl/passbolt/passbolt.crt | debconf-set-selections
echo passbolt-ce-server passbolt/nginx-certificate-key-file string /etc/ssl/passbolt/passbolt.key | debconf-set-selections
$STD apt install -y --no-install-recommends passbolt-ce-server
sed -i 's/client_max_body_size[[:space:]]\+[0-9]\+M;/client_max_body_size 15M;/' /etc/nginx/sites-enabled/nginx-passbolt.conf
systemctl reload nginx
msg_ok "Setup Passbolt"
motd_ssh

View File

@@ -34,6 +34,7 @@ $STD npm install --no-audit --no-fund --no-save --ignore-scripts
cd /opt/patchmon/frontend
cat <<EOF >./.env
VITE_API_URL=http://${LOCAL_IP}:3001/api/v1
VITE_APP_NAME=PatchMon
VITE_APP_VERSION=${VERSION}
EOF

View File

@@ -21,7 +21,7 @@ $STD apt install -y \
expect
msg_ok "Dependencies installed."
NODE_VERSION="24" setup_nodejs
NODE_VERSION=24 setup_nodejs
fetch_and_deploy_gh_release "ProxmoxVE-Local" "community-scripts/ProxmoxVE-Local" "tarball"
msg_info "Installing PVE Scripts local"

View File

@@ -28,15 +28,15 @@ fetch_and_deploy_gh_release "wealthfolio" "afadil/wealthfolio" "tarball"
msg_info "Building Frontend (patience)"
cd /opt/wealthfolio
export BUILD_TARGET=web
$STD pnpm install --frozen-lockfile
$STD pnpm --filter frontend... build
$STD pnpm tsc
$STD pnpm vite build
msg_ok "Built Frontend"
msg_info "Building Backend (patience)"
source ~/.cargo/env
$STD cargo build --release --manifest-path apps/server/Cargo.toml
cp /opt/wealthfolio/target/release/wealthfolio-server /usr/local/bin/wealthfolio-server
cd /opt/wealthfolio/src-server
$STD cargo build --release --manifest-path Cargo.toml
cp /opt/wealthfolio/src-server/target/release/wealthfolio-server /usr/local/bin/wealthfolio-server
chmod +x /usr/local/bin/wealthfolio-server
msg_ok "Built Backend"
@@ -58,7 +58,7 @@ echo "WF_PASSWORD=${WF_PASSWORD}" >~/wealthfolio.creds
msg_ok "Configured Wealthfolio"
msg_info "Cleaning Up"
rm -rf /opt/wealthfolio/target
rm -rf /opt/wealthfolio/src-server/target
rm -rf /root/.cargo/registry
rm -rf /opt/wealthfolio/node_modules
msg_ok "Cleaned Up"

View File

@@ -164,7 +164,7 @@ server {
location / {
proxy_pass http://127.0.0.1:8000;
proxy_set_header Host $http_host;
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_redirect off;

View File

@@ -28,23 +28,12 @@ setup_deb822_repo \
"stable" \
"main"
$STD apt install -y elasticsearch
sed -i 's/^#\{0,2\} *-Xms[0-9]*g.*/-Xms2g/' /etc/elasticsearch/jvm.options
sed -i 's/^#\{0,2\} *-Xmx[0-9]*g.*/-Xmx2g/' /etc/elasticsearch/jvm.options
cat <<EOF >>/etc/elasticsearch/elasticsearch.yml
discovery.type: single-node
xpack.security.enabled: false
bootstrap.memory_lock: false
EOF
sed -i 's/^-Xms.*/-Xms2g/' /etc/elasticsearch/jvm.options
sed -i 's/^-Xmx.*/-Xmx2g/' /etc/elasticsearch/jvm.options
$STD /usr/share/elasticsearch/bin/elasticsearch-plugin install ingest-attachment -b
systemctl daemon-reload
systemctl enable -q elasticsearch
systemctl restart -q elasticsearch
for i in $(seq 1 30); do
if curl -s http://localhost:9200 >/dev/null 2>&1; then
break
fi
sleep 2
done
msg_ok "Setup Elasticsearch"
msg_info "Installing Zammad"

View File

@@ -1,96 +0,0 @@
#!/usr/bin/env bash
# Copyright (c) 2021-2026 community-scripts ORG
# Author: community-scripts
# License: MIT | https://github.com/community-scripts/ProxmoxVE/raw/main/LICENSE
# Source: https://github.com/nicotsx/zerobyte
source /dev/stdin <<<"$FUNCTIONS_FILE_PATH"
color
verb_ip6
catch_errors
setting_up_container
network_check
update_os
msg_info "Installing Dependencies"
echo "davfs2 davfs2/suid_file boolean false" | debconf-set-selections
$STD apt-get install -y \
bzip2 \
fuse3 \
sshfs \
davfs2 \
openssh-client
msg_ok "Installed Dependencies"
fetch_and_deploy_gh_release "restic" "restic/restic" "singlefile" "latest" "/usr/local/bin" "restic_*_linux_amd64.bz2"
mv /usr/local/bin/restic /usr/local/bin/restic.bz2
bzip2 -d /usr/local/bin/restic.bz2
chmod +x /usr/local/bin/restic
fetch_and_deploy_gh_release "rclone" "rclone/rclone" "prebuild" "latest" "/opt/rclone" "rclone-*-linux-amd64.zip"
ln -sf /opt/rclone/rclone /usr/local/bin/rclone
fetch_and_deploy_gh_release "shoutrrr" "nicholas-fedor/shoutrrr" "prebuild" "latest" "/opt/shoutrrr" "shoutrrr_linux_amd64_*.tar.gz"
ln -sf /opt/shoutrrr/shoutrrr /usr/local/bin/shoutrrr
msg_info "Installing Bun"
export BUN_INSTALL="/root/.bun"
curl -fsSL https://bun.sh/install | $STD bash
ln -sf /root/.bun/bin/bun /usr/local/bin/bun
ln -sf /root/.bun/bin/bunx /usr/local/bin/bunx
msg_ok "Installed Bun"
NODE_VERSION="24" setup_nodejs
fetch_and_deploy_gh_release "zerobyte" "nicotsx/zerobyte" "tarball"
msg_info "Building Zerobyte (Patience)"
cd /opt/zerobyte
export VITE_RESTIC_VERSION=$(cat ~/.restic)
export VITE_RCLONE_VERSION=$(cat ~/.rclone)
export VITE_SHOUTRRR_VERSION=$(cat ~/.shoutrrr)
export NODE_OPTIONS="--max-old-space-size=3072"
$STD bun install
$STD node ./node_modules/vite/bin/vite.js build
msg_ok "Built Zerobyte"
msg_info "Configuring Zerobyte"
mkdir -p /var/lib/zerobyte/{data,restic/cache,repositories,volumes}
APP_SECRET=$(openssl rand -hex 32)
cat <<EOF >/opt/zerobyte/.env
BASE_URL=http://${LOCAL_IP}:4096
APP_SECRET=${APP_SECRET}
PORT=4096
ZEROBYTE_DATABASE_URL=/var/lib/zerobyte/data/zerobyte.db
RESTIC_CACHE_DIR=/var/lib/zerobyte/restic/cache
ZEROBYTE_REPOSITORIES_DIR=/var/lib/zerobyte/repositories
ZEROBYTE_VOLUMES_DIR=/var/lib/zerobyte/volumes
MIGRATIONS_PATH=/opt/zerobyte/app/drizzle
NODE_ENV=production
EOF
msg_ok "Configured Zerobyte"
msg_info "Creating Service"
cat <<EOF >/etc/systemd/system/zerobyte.service
[Unit]
Description=Zerobyte Backup Automation
After=network.target
[Service]
Type=simple
User=root
WorkingDirectory=/opt/zerobyte
EnvironmentFile=/opt/zerobyte/.env
ExecStart=/usr/local/bin/bun .output/server/index.mjs
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF
systemctl enable -q --now zerobyte
msg_ok "Created Service"
motd_ssh
customize
cleanup_lxc


@@ -312,8 +312,7 @@ json_escape() {
s=${s//$'\r'/}
s=${s//$'\t'/\\t}
# Remove any remaining control characters (0x00-0x1F except those already handled)
# Also remove DEL (0x7F) and invalid high bytes that break JSON parsers
s=$(printf '%s' "$s" | tr -d '\000-\010\013\014\016-\037\177')
s=$(printf '%s' "$s" | tr -d '\000-\010\013\014\016-\037')
printf '%s' "$s"
}
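# The only difference between the two tr delete-sets above is the trailing \177
# (DEL); a quick way to see the effect:
printf 'a\001b\177c' | tr -d '\000-\010\013\014\016-\037\177' | od -c   # abc: control byte and DEL both removed
printf 'a\001b\177c' | tr -d '\000-\010\013\014\016-\037' | od -c       # ab\177c: DEL survives and can still break strict JSON parsers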
@@ -983,7 +982,7 @@ EOF
fi
# All 3 attempts failed — do NOT set POST_UPDATE_DONE=true.
# This allows the EXIT trap (on_exit in error_handler.func) to retry.
# This allows the EXIT trap (api_exit_script) to retry with 3 fresh attempts.
# No infinite loop risk: EXIT trap fires exactly once.
}


@@ -38,18 +38,17 @@
# - Captures app-declared resource defaults (CPU, RAM, Disk)
# ------------------------------------------------------------------------------
variables() {
NSAPP=$(echo "${APP,,}" | tr -d ' ') # This function sets the NSAPP variable by converting the value of the APP variable to lowercase and removing any spaces.
var_install="${NSAPP}-install" # sets the var_install variable by appending "-install" to the value of NSAPP.
INTEGER='^[0-9]+([.][0-9]+)?$' # it defines the INTEGER regular expression pattern.
PVEHOST_NAME=$(hostname) # gets the Proxmox Hostname and sets it to Uppercase
DIAGNOSTICS="no" # Safe default: no telemetry until user consents via diagnostics_check()
METHOD="default" # sets the METHOD variable to "default", used for the API call.
RANDOM_UUID="$(cat /proc/sys/kernel/random/uuid)" # generates a random UUID and sets it to the RANDOM_UUID variable.
EXECUTION_ID="${RANDOM_UUID}" # Unique execution ID for telemetry record identification (unique-indexed in PocketBase)
SESSION_ID="${RANDOM_UUID:0:8}" # Short session ID (first 8 chars of UUID) for log files
BUILD_LOG="/tmp/create-lxc-${SESSION_ID}.log" # Host-side container creation log
# NOTE: combined_log is constructed locally in build_container() and ensure_log_on_host()
# as "/tmp/${NSAPP}-${CTID}-${SESSION_ID}.log" (requires CTID, not available here)
NSAPP=$(echo "${APP,,}" | tr -d ' ') # This function sets the NSAPP variable by converting the value of the APP variable to lowercase and removing any spaces.
var_install="${NSAPP}-install" # sets the var_install variable by appending "-install" to the value of NSAPP.
INTEGER='^[0-9]+([.][0-9]+)?$' # it defines the INTEGER regular expression pattern.
PVEHOST_NAME=$(hostname) # gets the Proxmox Hostname and sets it to Uppercase
DIAGNOSTICS="no" # Safe default: no telemetry until user consents via diagnostics_check()
METHOD="default" # sets the METHOD variable to "default", used for the API call.
RANDOM_UUID="$(cat /proc/sys/kernel/random/uuid)" # generates a random UUID and sets it to the RANDOM_UUID variable.
EXECUTION_ID="${RANDOM_UUID}" # Unique execution ID for telemetry record identification (unique-indexed in PocketBase)
SESSION_ID="${RANDOM_UUID:0:8}" # Short session ID (first 8 chars of UUID) for log files
BUILD_LOG="/tmp/create-lxc-${SESSION_ID}.log" # Host-side container creation log
combined_log="/tmp/install-${SESSION_ID}-combined.log" # Combined log (build + install) for failed installations
CTTYPE="${CTTYPE:-${CT_TYPE:-1}}"
# Parse dev_mode early
@@ -118,7 +117,7 @@ maxkeys_check() {
# Exit if kernel parameters are unavailable
if [[ "$per_user_maxkeys" -eq 0 || "$per_user_maxbytes" -eq 0 ]]; then
msg_error "Unable to read kernel key parameters. Ensure proper permissions."
echo -e "${CROSS}${RD} Error: Unable to read kernel parameters. Ensure proper permissions.${CL}"
exit 1
fi
@@ -135,19 +134,19 @@ maxkeys_check() {
# Check if key or byte usage is near limits
failure=0
if [[ "$used_lxc_keys" -gt "$threshold_keys" ]]; then
msg_warn "Key usage is near the limit (${used_lxc_keys}/${per_user_maxkeys})"
echo -e "${CROSS}${RD} Warning: Key usage is near the limit (${used_lxc_keys}/${per_user_maxkeys}).${CL}"
echo -e "${INFO} Suggested action: Set ${GN}kernel.keys.maxkeys=${new_limit_keys}${CL} in ${BOLD}/etc/sysctl.d/98-community-scripts.conf${CL}."
failure=1
fi
if [[ "$used_lxc_bytes" -gt "$threshold_bytes" ]]; then
msg_warn "Key byte usage is near the limit (${used_lxc_bytes}/${per_user_maxbytes})"
echo -e "${CROSS}${RD} Warning: Key byte usage is near the limit (${used_lxc_bytes}/${per_user_maxbytes}).${CL}"
echo -e "${INFO} Suggested action: Set ${GN}kernel.keys.maxbytes=${new_limit_bytes}${CL} in ${BOLD}/etc/sysctl.d/98-community-scripts.conf${CL}."
failure=1
fi
# Provide next steps if issues are detected
if [[ "$failure" -eq 1 ]]; then
msg_error "Kernel key limits exceeded - see suggestions above"
echo -e "${INFO} To apply changes, run: ${BOLD}service procps force-reload${CL}"
exit 1
fi
@@ -693,7 +692,7 @@ find_host_ssh_keys() {
/^[[:space:]]*#/ {next}
/^[[:space:]]*$/ {next}
{print}
' | grep -E -c "$re" || true)
' | grep -E -c '"$re"' || true)
if ((c > 0)); then
files+=("$f")
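# Context for the || true: grep -c exits 1 whenever it counts zero matches, so
# under set -e a zero count would otherwise abort the script. Minimal reproduction:
set -e
c=$(printf 'no match here\n' | grep -E -c 'needle' || true)
echo "count=$c"   # prints count=0 instead of killing the shell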
@@ -1851,7 +1850,7 @@ advanced_settings() {
# ═══════════════════════════════════════════════════════════════════════════
# STEP 2: Root Password
# ════════════════════════════════════════════════════════════════════════<EFBFBD><EFBFBD><EFBFBD>══
# ══════════════════════════════════════════════════════════════════════════
2)
if PW1=$(whiptail --backtitle "Proxmox VE Helper Scripts [Step $STEP/$MAX_STEP]" \
--title "ROOT PASSWORD" \
@@ -2034,7 +2033,6 @@ advanced_settings() {
((STEP++))
else
whiptail --msgbox "Default bridge 'vmbr0' not found!\n\nPlease configure a network bridge in Proxmox first." 10 58
msg_error "Default bridge 'vmbr0' not found"
exit 1
fi
else
@@ -3050,7 +3048,7 @@ install_script() {
CHOICE=""
;;
*)
msg_error "Invalid option: $CHOICE"
echo -e "${CROSS}${RD}Invalid option: $CHOICE${CL}"
exit 1
;;
esac
@@ -3129,12 +3127,12 @@ check_container_resources() {
current_cpu=$(nproc)
if [[ "$current_ram" -lt "$var_ram" ]] || [[ "$current_cpu" -lt "$var_cpu" ]]; then
msg_warn "Under-provisioned: Required ${var_cpu} CPU/${var_ram}MB RAM, Current ${current_cpu} CPU/${current_ram}MB RAM"
echo -e "\n${INFO}${HOLD} ${GN}Required: ${var_cpu} CPU, ${var_ram}MB RAM ${CL}| ${RD}Current: ${current_cpu} CPU, ${current_ram}MB RAM${CL}"
echo -e "${YWB}Please ensure that the ${APP} LXC is configured with at least ${var_cpu} vCPU and ${var_ram} MB RAM for the build process.${CL}\n"
echo -ne "${INFO}${HOLD} May cause data loss! ${INFO} Continue update with under-provisioned LXC? <yes/No> "
read -r prompt
if [[ ! ${prompt,,} =~ ^(yes)$ ]]; then
msg_error "Aborted: under-provisioned LXC (${current_cpu} CPU/${current_ram}MB RAM < ${var_cpu} CPU/${var_ram}MB RAM)"
echo -e "${CROSS}${HOLD} ${YWB}Exiting based on user input.${CL}"
exit 1
fi
else
@@ -3153,11 +3151,11 @@ check_container_storage() {
local used_size=$(df /boot --output=used | tail -n 1)
usage=$((100 * used_size / total_size))
if ((usage > 80)); then
msg_warn "Storage is dangerously low (${usage}% used on /boot)"
echo -e "${INFO}${HOLD} ${YWB}Warning: Storage is dangerously low (${usage}%).${CL}"
echo -ne "Continue anyway? <y/N> "
read -r prompt
if [[ ! ${prompt,,} =~ ^(y|yes)$ ]]; then
msg_error "Aborted: storage too low (${usage}% used)"
echo -e "${CROSS}${HOLD}${YWB}Exiting based on user input.${CL}"
exit 1
fi
fi
@@ -3547,16 +3545,10 @@ build_container() {
# Build PCT_OPTIONS as string for export
TEMP_DIR=$(mktemp -d)
pushd "$TEMP_DIR" >/dev/null
local _func_url
if [ "$var_os" == "alpine" ]; then
_func_url="https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/alpine-install.func"
export FUNCTIONS_FILE_PATH="$(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/alpine-install.func)"
else
_func_url="https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/install.func"
fi
export FUNCTIONS_FILE_PATH="$(curl -fsSL "$_func_url")"
if [[ -z "$FUNCTIONS_FILE_PATH" || ${#FUNCTIONS_FILE_PATH} -lt 100 ]]; then
msg_error "Failed to download install functions from: $_func_url"
exit 1
export FUNCTIONS_FILE_PATH="$(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/install.func)"
fi
# Core exports for install.func
@@ -3927,9 +3919,7 @@ EOF
fi
sleep 1
if [ "$i" -eq 10 ]; then
local ct_status
ct_status=$(pct status "$CTID" 2>/dev/null || echo "unknown")
msg_error "LXC Container did not reach running state (status: ${ct_status})"
msg_error "LXC Container did not reach running state"
exit 1
fi
done
@@ -3953,7 +3943,7 @@ EOF
if [ -z "$ip_in_lxc" ]; then
msg_error "No IP assigned to CT $CTID after 20s"
msg_custom "🔧" "${YW}" "Troubleshooting:"
echo -e "${YW}Troubleshooting:${CL}"
echo " • Verify bridge ${BRG} exists and has connectivity"
echo " • Check if DHCP server is reachable (if using DHCP)"
echo " • Verify static IP configuration (if using static IP)"
@@ -3975,7 +3965,8 @@ EOF
done
if [ "$ping_success" = false ]; then
msg_warn "Network configured (IP: $ip_in_lxc) but connectivity test failed - installation will continue"
msg_warn "Network configured (IP: $ip_in_lxc) but connectivity test failed"
echo -e "${YW}Container may have limited internet access. Installation will continue...${CL}"
else
msg_ok "Network in LXC is reachable (ping)"
fi
@@ -4019,10 +4010,7 @@ EOF
http://dl-cdn.alpinelinux.org/alpine/latest-stable/main
http://dl-cdn.alpinelinux.org/alpine/latest-stable/community
EOF'
pct exec "$CTID" -- ash -c "apk add bash newt curl openssh nano mc ncurses jq >/dev/null" || {
msg_error "Failed to install base packages in Alpine container"
exit 1
}
pct exec "$CTID" -- ash -c "apk add bash newt curl openssh nano mc ncurses jq >/dev/null"
else
sleep 3
LANG=${LANG:-en_US.UTF-8}
@@ -4109,12 +4097,6 @@ EOF'
# Installation failed?
if [[ $install_exit_code -ne 0 ]]; then
# Prevent job-control signals from suspending the script during recovery.
# In non-interactive shells (bash -c), background processes (spinner) can
# trigger terminal-related signals that stop the entire process group.
# TSTP = Ctrl+Z, TTIN = bg read from tty, TTOU = bg write to tty (tostop)
trap '' TSTP TTIN TTOU
msg_error "Installation failed in container ${CTID} (exit code: ${install_exit_code})"
# Copy install log from container BEFORE API call so get_error_text() can read it
@@ -4190,19 +4172,7 @@ EOF'
fi
# Report failure to telemetry API (now with log available on host)
# NOTE: Do NOT use msg_info/spinner here — the background spinner process
# causes SIGTSTP in non-interactive shells (bash -c "$(curl ...)"), which
# stops the entire process group and prevents the recovery dialog from appearing.
$STD echo -e "${TAB}⏳ Reporting failure to telemetry..."
post_update_to_api "failed" "$install_exit_code"
$STD echo -e "${TAB}${CM:-} Failure reported"
# Defense-in-depth: Ensure error handling stays disabled during recovery.
# Some functions (e.g. silent/$STD) unconditionally re-enable set -Eeuo pipefail
# and trap 'error_handler' ERR. If any code path above called such a function,
# the grep/sed pipelines below would trigger error_handler on non-match (exit 1).
set +Eeuo pipefail
trap - ERR
# Show combined log location
if [[ -n "$CTID" && -n "${SESSION_ID:-}" ]]; then
@@ -4317,7 +4287,7 @@ EOF'
if [[ "$is_cmd_not_found" == true ]]; then
local missing_cmd=""
if [[ -f "$combined_log" ]]; then
missing_cmd=$(grep -oiE '[a-zA-Z0-9_.-]+: command not found' "$combined_log" 2>/dev/null | tail -1 | sed 's/: command not found//') || true
missing_cmd=$(grep -oiE '[a-zA-Z0-9_.-]+: command not found' "$combined_log" | tail -1 | sed 's/: command not found//')
fi
if [[ -n "$missing_cmd" ]]; then
echo -e "${TAB}${INFO} Missing command: ${GN}${missing_cmd}${CL}"
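# The failure mode guarded against here is easy to reproduce: with pipefail and
# an ERR trap active, a grep that merely finds nothing takes the pipeline (and
# the script) down. A minimal sketch - the log path is a placeholder:
set -Eeuo pipefail
trap 'echo "error_handler fired"' ERR
missing=$(grep -o 'command not found' /etc/hostname | sed 's/:.*//')   # grep exits 1 -> pipefail -> ERR trap
echo "never reached"   # appending '|| true' to the pipeline is what avoids this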
@@ -4559,12 +4529,8 @@ EOF'
# Force one final status update attempt after cleanup
# This ensures status is updated even if the first attempt failed (e.g., HTTP 400)
$STD echo -e "${TAB}⏳ Finalizing telemetry report..."
post_update_to_api "failed" "$install_exit_code" "force"
$STD echo -e "${TAB}${CM:-} Telemetry finalized"
# Restore default job-control signal handling before exit
trap - TSTP TTIN TTOU
exit $install_exit_code
fi
@@ -4919,7 +4885,8 @@ create_lxc_container() {
return 0
fi
msg_info "An update for the Proxmox LXC stack is available"
echo
echo "An update for the Proxmox LXC stack is available:"
echo " pve-container: installed=${_pvec_i:-n/a} candidate=${_pvec_c:-n/a}"
echo " lxc-pve : installed=${_lxcp_i:-n/a} candidate=${_lxcp_c:-n/a}"
echo
@@ -4971,6 +4938,7 @@ create_lxc_container() {
exit 205
}
if qm status "$CTID" &>/dev/null || pct status "$CTID" &>/dev/null; then
echo -e "ID '$CTID' is already in use."
unset CTID
msg_error "Cannot use ID that is already in use."
exit 206
@@ -5028,40 +4996,17 @@ create_lxc_container() {
msg_info "Validating storage '$CONTAINER_STORAGE'"
STORAGE_TYPE=$(grep -E "^[^:]+: $CONTAINER_STORAGE$" /etc/pve/storage.cfg | cut -d: -f1 | head -1)
if [[ -z "$STORAGE_TYPE" ]]; then
msg_error "Storage '$CONTAINER_STORAGE' not found in /etc/pve/storage.cfg"
exit 213
fi
case "$STORAGE_TYPE" in
iscsidirect)
msg_error "Storage '$CONTAINER_STORAGE' uses iSCSI-direct which does not support container rootfs."
exit 212
;;
iscsi | zfs)
msg_error "Storage '$CONTAINER_STORAGE' ($STORAGE_TYPE) does not support container rootdir content."
exit 213
;;
cephfs)
msg_error "Storage '$CONTAINER_STORAGE' uses CephFS which is not supported for LXC rootfs."
exit 219
;;
pbs)
msg_error "Storage '$CONTAINER_STORAGE' is a Proxmox Backup Server — cannot be used for containers."
exit 224
;;
iscsidirect) exit 212 ;;
iscsi | zfs) exit 213 ;;
cephfs) exit 219 ;;
pbs) exit 224 ;;
linstor | rbd | nfs | cifs)
if ! pvesm status -storage "$CONTAINER_STORAGE" &>/dev/null; then
msg_error "Storage '$CONTAINER_STORAGE' ($STORAGE_TYPE) is not accessible or inactive."
exit 217
fi
pvesm status -storage "$CONTAINER_STORAGE" &>/dev/null || exit 217
;;
esac
if ! pvesm status -content rootdir 2>/dev/null | awk 'NR>1{print $1}' | grep -qx "$CONTAINER_STORAGE"; then
msg_error "Storage '$CONTAINER_STORAGE' ($STORAGE_TYPE) does not support 'rootdir' content."
exit 213
fi
pvesm status -content rootdir 2>/dev/null | awk 'NR>1{print $1}' | grep -qx "$CONTAINER_STORAGE" || exit 213
msg_ok "Storage '$CONTAINER_STORAGE' ($STORAGE_TYPE) validated"
msg_info "Validating template storage '$TEMPLATE_STORAGE'"
@@ -5134,7 +5079,8 @@ create_lxc_container() {
# If still no template, try to find alternatives
if [[ -z "$TEMPLATE" ]]; then
msg_warn "No template found for ${PCT_OSTYPE} ${PCT_OSVERSION}, searching for alternatives..."
echo ""
echo "[DEBUG] No template found for ${PCT_OSTYPE} ${PCT_OSVERSION}, searching for alternatives..."
# Get all available versions for this OS type
AVAILABLE_VERSIONS=()
@@ -5408,19 +5354,13 @@ create_lxc_container() {
if [[ ! -s "$TEMPLATE_PATH" || "$(stat -c%s "$TEMPLATE_PATH" 2>/dev/null || echo 0)" -lt 1000000 ]]; then
msg_info "Template file missing or too small downloading"
rm -f "$TEMPLATE_PATH"
pveam download "$TEMPLATE_STORAGE" "$TEMPLATE" >/dev/null 2>&1 || {
msg_error "Failed to download template '$TEMPLATE' to storage '$TEMPLATE_STORAGE'"
exit 222
}
pveam download "$TEMPLATE_STORAGE" "$TEMPLATE" >/dev/null 2>&1
msg_ok "Template downloaded"
elif ! tar -tf "$TEMPLATE_PATH" &>/dev/null; then
if [[ -n "$ONLINE_TEMPLATE" ]]; then
msg_info "Template appears corrupted re-downloading"
rm -f "$TEMPLATE_PATH"
pveam download "$TEMPLATE_STORAGE" "$TEMPLATE" >/dev/null 2>&1 || {
msg_error "Failed to re-download template '$TEMPLATE'"
exit 222
}
pveam download "$TEMPLATE_STORAGE" "$TEMPLATE" >/dev/null 2>&1
msg_ok "Template re-downloaded"
else
msg_warn "Template appears corrupted, but no online version exists. Skipping re-download."
@@ -5462,17 +5402,20 @@ create_lxc_container() {
if ! pct create "$CTID" "local:vztmpl/${TEMPLATE}" $PCT_OPTIONS >>"$LOGFILE" 2>&1; then
# Local fallback also failed - check for LXC stack version issue
if grep -qiE 'unsupported .* version' "$LOGFILE"; then
msg_warn "pct reported 'unsupported version' LXC stack might be too old for this template"
echo
echo "pct reported 'unsupported ... version' your LXC stack might be too old for this template."
echo "We can try to upgrade 'pve-container' and 'lxc-pve' now and retry automatically."
offer_lxc_stack_upgrade_and_maybe_retry "yes"
rc=$?
case $rc in
0) : ;; # success - container created, continue
2)
msg_error "Upgrade declined. Please update and re-run: apt update && apt install --only-upgrade pve-container lxc-pve"
echo "Upgrade was declined. Please update and re-run:
apt update && apt install --only-upgrade pve-container lxc-pve"
exit 231
;;
3)
msg_error "Upgrade and/or retry failed. Please inspect: $LOGFILE"
echo "Upgrade and/or retry failed. Please inspect: $LOGFILE"
exit 231
;;
esac
@@ -5491,17 +5434,20 @@ create_lxc_container() {
else
# Already on local storage and still failed - check LXC stack version
if grep -qiE 'unsupported .* version' "$LOGFILE"; then
msg_warn "pct reported 'unsupported version' LXC stack might be too old for this template"
echo
echo "pct reported 'unsupported ... version' your LXC stack might be too old for this template."
echo "We can try to upgrade 'pve-container' and 'lxc-pve' now and retry automatically."
offer_lxc_stack_upgrade_and_maybe_retry "yes"
rc=$?
case $rc in
0) : ;; # success - container created, continue
2)
msg_error "Upgrade declined. Please update and re-run: apt update && apt install --only-upgrade pve-container lxc-pve"
echo "Upgrade was declined. Please update and re-run:
apt update && apt install --only-upgrade pve-container lxc-pve"
exit 231
;;
3)
msg_error "Upgrade and/or retry failed. Please inspect: $LOGFILE"
echo "Upgrade and/or retry failed. Please inspect: $LOGFILE"
exit 231
;;
esac
@@ -5654,21 +5600,44 @@ ensure_log_on_host() {
fi
}
# ==============================================================================
# TRAP MANAGEMENT
# ==============================================================================
# All traps (ERR, EXIT, INT, TERM, HUP) are set by catch_errors() in
# error_handler.func — called at the top of this file after sourcing.
# ------------------------------------------------------------------------------
# api_exit_script()
#
# Do NOT set duplicate traps here. The handlers in error_handler.func
# (on_exit, on_interrupt, on_terminate, on_hangup, error_handler) already:
# - Send telemetry via post_update_to_api / _send_abort_telemetry
# - Stop orphaned containers via _stop_container_if_installing
# - Collect logs via ensure_log_on_host
# - Clean up lock files and spinner processes
#
# Previously, inline traps here overwrote catch_errors() traps, causing:
# - error_handler() never fired (no error output, no cleanup dialog)
# - on_hangup() never fired (SSH disconnect → stuck records)
# - Duplicated logic in two places (hard to debug)
# ==============================================================================
# - Exit trap handler for reporting to API telemetry
# - Captures exit code and reports to PocketBase using centralized error descriptions
# - Uses explain_exit_code() from api.func for consistent error messages
# - ALWAYS sends telemetry FIRST before log collection to prevent pct pull
# hangs from blocking status updates (container may be dead/unresponsive)
# - For non-zero exit codes: posts "failed" status
# - For zero exit codes where post_update_to_api was never called:
# catches orphaned "installing" records (e.g., script exited cleanly
# but description() was never reached)
# ------------------------------------------------------------------------------
api_exit_script() {
local exit_code=$?
if [ $exit_code -ne 0 ]; then
# ALWAYS send telemetry FIRST - ensure status is reported even if
# ensure_log_on_host hangs (e.g. pct pull on dead container)
post_update_to_api "failed" "$exit_code" 2>/dev/null || true
# Best-effort log collection (non-critical after telemetry is sent)
if declare -f ensure_log_on_host >/dev/null 2>&1; then
ensure_log_on_host 2>/dev/null || true
fi
# Stop orphaned container if we're in the install phase
if [[ "${CONTAINER_INSTALLING:-}" == "true" && -n "${CTID:-}" ]] && command -v pct &>/dev/null; then
pct stop "$CTID" 2>/dev/null || true
fi
elif [[ "${POST_TO_API_DONE:-}" == "true" && "${POST_UPDATE_DONE:-}" != "true" ]]; then
# Script exited with 0 but never sent a completion status
# exit_code=0 is never an error — report as success
post_update_to_api "done" "0"
fi
}
if command -v pveversion >/dev/null 2>&1; then
trap 'api_exit_script' EXIT
fi
trap 'local _ec=$?; if [[ $_ec -ne 0 ]]; then post_update_to_api "failed" "$_ec" 2>/dev/null || true; if declare -f ensure_log_on_host &>/dev/null; then ensure_log_on_host 2>/dev/null || true; fi; fi' ERR
trap 'post_update_to_api "failed" "129" 2>/dev/null || true; if [[ -n "${CTID:-}" ]] && command -v pct &>/dev/null; then pct stop "$CTID" 2>/dev/null || true; fi; exit 129' SIGHUP
trap 'post_update_to_api "failed" "130" 2>/dev/null || true; if [[ -n "${CTID:-}" ]] && command -v pct &>/dev/null; then pct stop "$CTID" 2>/dev/null || true; fi; exit 130' SIGINT
trap 'post_update_to_api "failed" "143" 2>/dev/null || true; if [[ -n "${CTID:-}" ]] && command -v pct &>/dev/null; then pct stop "$CTID" 2>/dev/null || true; fi; exit 143' SIGTERM
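# The overwrite problem described above is a property of bash itself: trap does
# not stack handlers, the most recent trap for a signal silently replaces any
# earlier one. A two-line demonstration:
trap 'echo "handler A"' EXIT
trap 'echo "handler B"' EXIT   # replaces A; only "handler B" prints on exit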


@@ -276,7 +276,7 @@ shell_check() {
msg_error "Your default shell is currently not set to Bash. To use these scripts, please switch to the Bash shell."
echo -e "\nExiting..."
sleep 2
exit 1
exit
fi
}
@@ -293,7 +293,7 @@ root_check() {
msg_error "Please run this script as root."
echo -e "\nExiting..."
sleep 2
exit 1
exit
fi
}
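# Note on the bare-exit variants above: exit without an argument propagates the
# status of the last command, so after a successful sleep 2 it reports 0
# (success) to the caller; only the explicit exit 1 actually signals failure:
( false; sleep 2; exit );  echo "rc=$?"    # rc=0 - bare exit inherits sleep's success
( false; sleep 2; exit 1 ); echo "rc=$?"   # rc=1 - explicit code survives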
@@ -345,10 +345,11 @@ pve_check() {
# ------------------------------------------------------------------------------
arch_check() {
if [ "$(dpkg --print-architecture)" != "amd64" ]; then
msg_error "This script will not work with PiMox (ARM architecture detected)."
msg_warn "Visit https://github.com/asylumexp/Proxmox for ARM64 support."
echo -e "\n ${INFO}${YWB}This script will not work with PiMox! \n"
echo -e "\n ${YWB}Visit https://github.com/asylumexp/Proxmox for ARM64 support. \n"
echo -e "Exiting..."
sleep 2
exit 1
exit
fi
}
@@ -489,8 +490,6 @@ log_section() {
# - Executes command with output redirected to active log file
# - On error: displays last 20 lines of log and exits with original exit code
# - Temporarily disables error trap to capture exit code correctly
# - Saves and restores previous error handling state (so callers that
# intentionally disabled error handling aren't silently re-enabled)
# - Sources explain_exit_code() for detailed error messages
# ------------------------------------------------------------------------------
silent() {
@@ -508,30 +507,19 @@ silent() {
return 0
fi
# Save current error handling state before disabling
# This prevents re-enabling error handling when the caller intentionally
# disabled it (e.g. build_container recovery section)
local _restore_errexit=false
[[ "$-" == *e* ]] && _restore_errexit=true
set +Eeuo pipefail
trap - ERR
"$@" >>"$logfile" 2>&1
local rc=$?
# Restore error handling ONLY if it was active before this call
if $_restore_errexit; then
set -Eeuo pipefail
trap 'error_handler' ERR
fi
set -Eeuo pipefail
trap 'error_handler' ERR
if [[ $rc -ne 0 ]]; then
# Source explain_exit_code if needed
if ! declare -f explain_exit_code >/dev/null 2>&1; then
if ! source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/error_handler.func); then
explain_exit_code() { echo "unknown (error_handler.func download failed)"; }
fi
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/error_handler.func)
fi
local explanation
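# The save/restore above hinges on $-, which lists the shell options currently
# in effect; probing it is a one-liner:
set +e; [[ $- == *e* ]] && echo "errexit active" || echo "errexit inactive"   # -> errexit inactive
set -e; [[ $- == *e* ]] && echo "errexit active"                              # -> errexit active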
@@ -608,7 +596,6 @@ stop_spinner() {
unset SPINNER_PID SPINNER_MSG
stty sane 2>/dev/null || true
stty -tostop 2>/dev/null || true
}
# ==============================================================================
@@ -786,8 +773,8 @@ fatal() {
# ------------------------------------------------------------------------------
exit_script() {
clear
msg_error "User exited script"
exit 0
echo -e "\n${CROSS}${RD}User exited script${CL}\n"
exit
}
# ------------------------------------------------------------------------------
@@ -808,7 +795,6 @@ get_header() {
if [ ! -s "$local_header_path" ]; then
if ! curl -fsSL "$header_url" -o "$local_header_path"; then
msg_warn "Failed to download header: $header_url"
return 1
fi
fi
@@ -849,10 +835,10 @@ header_info() {
ensure_tput() {
if ! command -v tput >/dev/null 2>&1; then
if grep -qi 'alpine' /etc/os-release; then
apk add --no-cache ncurses >/dev/null 2>&1 || msg_warn "Failed to install ncurses (tput may be unavailable)"
apk add --no-cache ncurses >/dev/null 2>&1
elif command -v apt-get >/dev/null 2>&1; then
apt-get update -qq >/dev/null
apt-get install -y -qq ncurses-bin >/dev/null 2>&1 || msg_warn "Failed to install ncurses-bin (tput may be unavailable)"
apt-get install -y -qq ncurses-bin >/dev/null 2>&1
fi
fi
}
@@ -1312,7 +1298,6 @@ prompt_select() {
# Validate options
if [[ $num_options -eq 0 ]]; then
msg_warn "prompt_select called with no options"
echo "" >&2
return 1
fi
@@ -1555,30 +1540,22 @@ check_or_create_swap() {
local swap_size_mb
swap_size_mb=$(prompt_input "Enter swap size in MB (e.g., 2048 for 2GB):" "2048" 60)
if ! [[ "$swap_size_mb" =~ ^[0-9]+$ ]]; then
msg_error "Invalid swap size: '${swap_size_mb}' (must be a number in MB)"
msg_error "Invalid size input. Aborting."
return 1
fi
local swap_file="/swapfile"
msg_info "Creating ${swap_size_mb}MB swap file at $swap_file"
if ! dd if=/dev/zero of="$swap_file" bs=1M count="$swap_size_mb" status=progress; then
msg_error "Failed to allocate swap file (dd failed)"
if dd if=/dev/zero of="$swap_file" bs=1M count="$swap_size_mb" status=progress &&
chmod 600 "$swap_file" &&
mkswap "$swap_file" &&
swapon "$swap_file"; then
msg_ok "Swap file created and activated successfully"
else
msg_error "Failed to create or activate swap"
return 1
fi
if ! chmod 600 "$swap_file"; then
msg_error "Failed to set permissions on $swap_file"
return 1
fi
if ! mkswap "$swap_file"; then
msg_error "Failed to format swap file (mkswap failed)"
return 1
fi
if ! swapon "$swap_file"; then
msg_error "Failed to activate swap (swapon failed)"
return 1
fi
msg_ok "Swap file created and activated successfully"
}
# ------------------------------------------------------------------------------
@@ -1660,7 +1637,7 @@ function get_lxc_ip() {
LOCAL_IP="$(get_current_ip || true)"
if [[ -z "$LOCAL_IP" ]]; then
msg_error "Could not determine LOCAL_IP (checked: eth0, hostname -I, ip route, IPv6 targets)"
msg_error "Could not determine LOCAL_IP"
return 1
fi
fi


@@ -199,16 +199,11 @@ error_handler() {
return 0
fi
# Stop spinner and restore cursor FIRST — before any output
# This prevents spinner text overlapping with error messages
if declare -f stop_spinner >/dev/null 2>&1; then
stop_spinner 2>/dev/null || true
fi
printf "\e[?25h"
local explanation
explanation="$(explain_exit_code "$exit_code")"
printf "\e[?25h"
# ALWAYS report failure to API immediately - don't wait for container checks
# This ensures we capture failures that occur before/after container exists
if declare -f post_update_to_api &>/dev/null; then
@@ -364,39 +359,9 @@ _send_abort_telemetry() {
command -v curl &>/dev/null || return 0
[[ "${DIAGNOSTICS:-no}" == "no" ]] && return 0
[[ -z "${RANDOM_UUID:-}" ]] && return 0
# Collect last 20 log lines for error diagnosis (best-effort)
local error_text=""
if [[ -n "${INSTALL_LOG:-}" && -s "${INSTALL_LOG}" ]]; then
error_text=$(tail -n 20 "$INSTALL_LOG" 2>/dev/null | sed 's/\x1b\[[0-9;]*[a-zA-Z]//g; s/\\/\\\\/g; s/"/\\"/g; s/\r//g' | tr '\n' '|' | sed 's/|$//' | tr -d '\000-\010\013\014\016-\037\177') || true
fi
# Calculate duration if start time is available
local duration=""
if [[ -n "${DIAGNOSTICS_START_TIME:-}" ]]; then
duration=$(($(date +%s) - DIAGNOSTICS_START_TIME))
fi
# Build JSON payload with error context
local payload
payload="{\"random_id\":\"${RANDOM_UUID}\",\"execution_id\":\"${EXECUTION_ID:-${RANDOM_UUID}}\",\"type\":\"${TELEMETRY_TYPE:-lxc}\",\"nsapp\":\"${NSAPP:-${app:-unknown}}\",\"status\":\"failed\",\"exit_code\":${exit_code}"
[[ -n "$error_text" ]] && payload="${payload},\"error\":\"${error_text}\""
[[ -n "$duration" ]] && payload="${payload},\"duration\":${duration}"
payload="${payload}}"
local api_url="${TELEMETRY_URL:-https://telemetry.community-scripts.org/telemetry}"
# 2 attempts (retry once on failure) — original had no retry
local attempt
for attempt in 1 2; do
if curl -fsS -m 5 -X POST "$api_url" \
-H "Content-Type: application/json" \
-d "$payload" &>/dev/null; then
return 0
fi
[[ $attempt -eq 1 ]] && sleep 1
done
return 0
curl -fsS -m 5 -X POST "${TELEMETRY_URL:-https://telemetry.community-scripts.org/telemetry}" \
-H "Content-Type: application/json" \
-d "{\"random_id\":\"${RANDOM_UUID}\",\"execution_id\":\"${EXECUTION_ID:-${RANDOM_UUID}}\",\"type\":\"${TELEMETRY_TYPE:-lxc}\",\"nsapp\":\"${NSAPP:-${app:-unknown}}\",\"status\":\"failed\",\"exit_code\":${exit_code}}" &>/dev/null || true
}
# ------------------------------------------------------------------------------
@@ -472,12 +437,6 @@ on_exit() {
# - Exits with code 130 (128 + SIGINT=2)
# ------------------------------------------------------------------------------
on_interrupt() {
# Stop spinner and restore cursor before any output
if declare -f stop_spinner >/dev/null 2>&1; then
stop_spinner 2>/dev/null || true
fi
printf "\e[?25h" 2>/dev/null || true
_send_abort_telemetry "130"
_stop_container_if_installing
if declare -f msg_error >/dev/null 2>&1; then
@@ -497,12 +456,6 @@ on_interrupt() {
# - Exits with code 143 (128 + SIGTERM=15)
# ------------------------------------------------------------------------------
on_terminate() {
# Stop spinner and restore cursor before any output
if declare -f stop_spinner >/dev/null 2>&1; then
stop_spinner 2>/dev/null || true
fi
printf "\e[?25h" 2>/dev/null || true
_send_abort_telemetry "143"
_stop_container_if_installing
if declare -f msg_error >/dev/null 2>&1; then
@@ -525,11 +478,6 @@ on_terminate() {
# - Exits with code 129 (128 + SIGHUP=1)
# ------------------------------------------------------------------------------
on_hangup() {
# Stop spinner (no cursor restore needed — terminal is already gone)
if declare -f stop_spinner >/dev/null 2>&1; then
stop_spinner 2>/dev/null || true
fi
_send_abort_telemetry "129"
_stop_container_if_installing
exit 129


@@ -201,7 +201,6 @@ install_packages_with_retry() {
fi
done
msg_error "Failed to install packages after $((max_retries + 1)) attempts: ${packages[*]}"
return 1
}
@@ -232,7 +231,6 @@ upgrade_packages_with_retry() {
fi
done
msg_error "Failed to upgrade packages after $((max_retries + 1)) attempts: ${packages[*]}"
return 1
}
@@ -677,7 +675,6 @@ verify_repo_available() {
if curl -fsSL --max-time 10 "${repo_url}/dists/${suite}/Release" &>/dev/null; then
return 0
fi
msg_warn "Repository not available: ${repo_url} (suite: ${suite})"
return 1
}
@@ -786,25 +783,16 @@ github_api_call() {
for attempt in $(seq 1 $max_retries); do
local http_code
http_code=$(curl -sSL -w "%{http_code}" -o "$output_file" \
http_code=$(curl -fsSL -w "%{http_code}" -o "$output_file" \
-H "Accept: application/vnd.github+json" \
-H "X-GitHub-Api-Version: 2022-11-28" \
"${header_args[@]}" \
"$url" 2>/dev/null) || true
"$url" 2>/dev/null || echo "000")
case "$http_code" in
200)
return 0
;;
401)
msg_error "GitHub API authentication failed (HTTP 401)."
if [[ -n "${GITHUB_TOKEN:-}" ]]; then
msg_error "Your GITHUB_TOKEN appears to be invalid or expired."
else
msg_error "The repository may require authentication. Try: export GITHUB_TOKEN=\"ghp_your_token\""
fi
return 1
;;
403)
# Rate limit - check if we can retry
if [[ $attempt -lt $max_retries ]]; then
@@ -813,22 +801,11 @@ github_api_call() {
retry_delay=$((retry_delay * 2))
continue
fi
msg_error "GitHub API rate limit exceeded (HTTP 403)."
msg_error "To increase the limit, export a GitHub token before running the script:"
msg_error " export GITHUB_TOKEN=\"ghp_your_token_here\""
msg_error "GitHub API rate limit exceeded. Set GITHUB_TOKEN to increase limits."
return 1
;;
404)
msg_error "GitHub repository or release not found (HTTP 404): $url"
return 1
;;
000 | "")
if [[ $attempt -lt $max_retries ]]; then
sleep "$retry_delay"
continue
fi
msg_error "GitHub API connection failed (no response)."
msg_error "Check your network/DNS: curl -sSL https://api.github.com/rate_limit"
msg_error "GitHub API endpoint not found: $url"
return 1
;;
*)
@@ -836,13 +813,12 @@ github_api_call() {
sleep "$retry_delay"
continue
fi
msg_error "GitHub API call failed (HTTP $http_code)."
msg_error "GitHub API call failed with HTTP $http_code"
return 1
;;
esac
done
msg_error "GitHub API call failed after ${max_retries} attempts: ${url}"
return 1
}
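# Both API helpers use the same doubling backoff on retryable failures; reduced
# to a standalone sketch (the endpoint here is a placeholder):
url="https://api.github.com/rate_limit"
retry_delay=2
for attempt in 1 2 3; do
  curl -fsS -m 10 "$url" >/dev/null && break
  [ "$attempt" -lt 3 ] && sleep "$retry_delay"
  retry_delay=$((retry_delay * 2))   # 2s, then 4s, then give up
done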
@@ -857,18 +833,14 @@ codeberg_api_call() {
for attempt in $(seq 1 $max_retries); do
local http_code
http_code=$(curl -sSL -w "%{http_code}" -o "$output_file" \
http_code=$(curl -fsSL -w "%{http_code}" -o "$output_file" \
-H "Accept: application/json" \
"$url" 2>/dev/null) || true
"$url" 2>/dev/null || echo "000")
case "$http_code" in
200)
return 0
;;
401)
msg_error "Codeberg API authentication failed (HTTP 401)."
return 1
;;
403)
# Rate limit - retry
if [[ $attempt -lt $max_retries ]]; then
@@ -877,20 +849,11 @@ codeberg_api_call() {
retry_delay=$((retry_delay * 2))
continue
fi
msg_error "Codeberg API rate limit exceeded (HTTP 403)."
msg_error "Codeberg API rate limit exceeded."
return 1
;;
404)
msg_error "Codeberg repository or release not found (HTTP 404): $url"
return 1
;;
000 | "")
if [[ $attempt -lt $max_retries ]]; then
sleep "$retry_delay"
continue
fi
msg_error "Codeberg API connection failed (no response)."
msg_error "Check your network/DNS: curl -sSL https://codeberg.org"
msg_error "Codeberg API endpoint not found: $url"
return 1
;;
*)
@@ -898,13 +861,12 @@ codeberg_api_call() {
sleep "$retry_delay"
continue
fi
msg_error "Codeberg API call failed (HTTP $http_code)."
msg_error "Codeberg API call failed with HTTP $http_code"
return 1
;;
esac
done
msg_error "Codeberg API call failed after ${max_retries} attempts: ${url}"
return 1
}
@@ -1374,9 +1336,7 @@ setup_deb822_repo() {
[[ -n "$enabled" ]] && echo "Enabled: $enabled"
} >/etc/apt/sources.list.d/${name}.sources
$STD apt update || {
msg_warn "apt update failed after adding repository: ${name}"
}
$STD apt update
}
# ------------------------------------------------------------------------------
@@ -1384,16 +1344,12 @@ setup_deb822_repo() {
# ------------------------------------------------------------------------------
hold_package_version() {
local package="$1"
$STD apt-mark hold "$package" || {
msg_warn "Failed to hold package version: ${package}"
}
$STD apt-mark hold "$package"
}
unhold_package_version() {
local package="$1"
$STD apt-mark unhold "$package" || {
msg_warn "Failed to unhold package version: ${package}"
}
$STD apt-mark unhold "$package"
}
# ------------------------------------------------------------------------------
@@ -1423,7 +1379,6 @@ enable_and_start_service() {
local service="$1"
if ! systemctl enable "$service" &>/dev/null; then
msg_error "Failed to enable service: $service"
return 1
fi
@@ -1466,7 +1421,6 @@ extract_version_from_json() {
version=$(echo "$json" | jq -r ".${field} // empty")
if [[ -z "$version" ]]; then
msg_warn "JSON field '${field}' is empty in API response"
return 1
fi
@@ -1486,7 +1440,6 @@ get_latest_github_release() {
local temp_file=$(mktemp)
if ! github_api_call "https://api.github.com/repos/${repo}/releases/latest" "$temp_file"; then
msg_warn "GitHub API call failed for ${repo}"
rm -f "$temp_file"
return 1
fi
@@ -1496,7 +1449,6 @@ get_latest_github_release() {
rm -f "$temp_file"
if [[ -z "$version" ]]; then
msg_error "Could not determine latest version for ${repo}"
return 1
fi
@@ -1513,7 +1465,6 @@ get_latest_codeberg_release() {
# Codeberg API: get all releases and pick the first non-draft/non-prerelease
if ! codeberg_api_call "https://codeberg.org/api/v1/repos/${repo}/releases" "$temp_file"; then
msg_warn "Codeberg API call failed for ${repo}"
rm -f "$temp_file"
return 1
fi
@@ -1529,7 +1480,6 @@ get_latest_codeberg_release() {
rm -f "$temp_file"
if [[ -z "$version" ]]; then
msg_error "Could not determine latest version for ${repo}"
return 1
fi
@@ -1617,34 +1567,13 @@ get_latest_gh_tag() {
"${header_args[@]}" \
"https://api.github.com/repos/${repo}/tags?per_page=100" 2>/dev/null) || true
if [[ "$http_code" == "401" ]]; then
msg_error "GitHub API authentication failed (HTTP 401)."
if [[ -n "${GITHUB_TOKEN:-}" ]]; then
msg_error "Your GITHUB_TOKEN appears to be invalid or expired."
else
msg_error "The repository may require authentication. Try: export GITHUB_TOKEN=\"ghp_your_token\""
fi
rm -f /tmp/gh_tags.json
return 1
fi
if [[ "$http_code" == "403" ]]; then
msg_error "GitHub API rate limit exceeded (HTTP 403)."
msg_error "To increase the limit, export a GitHub token before running the script:"
msg_error " export GITHUB_TOKEN=\"ghp_your_token_here\""
rm -f /tmp/gh_tags.json
return 1
fi
if [[ "$http_code" == "000" || -z "$http_code" ]]; then
msg_error "GitHub API connection failed (no response)."
msg_error "Check your network/DNS: curl -sSL https://api.github.com/rate_limit"
msg_warn "GitHub API rate limit exceeded while fetching tags for ${repo}"
rm -f /tmp/gh_tags.json
return 1
fi
if [[ "$http_code" != "200" ]] || [[ ! -s /tmp/gh_tags.json ]]; then
msg_error "Unable to fetch tags for ${repo} (HTTP ${http_code})"
rm -f /tmp/gh_tags.json
return 1
fi
@@ -1661,7 +1590,6 @@ get_latest_gh_tag() {
sort -V | tail -n1)
if [[ -z "$latest" ]]; then
msg_warn "No matching tags found for ${repo}${prefix:+ (prefix: $prefix)}"
return 1
fi
@@ -1731,15 +1659,6 @@ check_for_gh_release() {
if [[ "$http_code" == "200" ]] && [[ -s /tmp/gh_check.json ]]; then
releases_json="[$(</tmp/gh_check.json)]"
elif [[ "$http_code" == "401" ]]; then
msg_error "GitHub API authentication failed (HTTP 401)."
if [[ -n "${GITHUB_TOKEN:-}" ]]; then
msg_error "Your GITHUB_TOKEN appears to be invalid or expired."
else
msg_error "The repository may require authentication. Try: export GITHUB_TOKEN=\"ghp_your_token\""
fi
rm -f /tmp/gh_check.json
return 1
elif [[ "$http_code" == "403" ]]; then
msg_error "GitHub API rate limit exceeded (HTTP 403)."
msg_error "To increase the limit, export a GitHub token before running the script:"
@@ -1760,26 +1679,12 @@ check_for_gh_release() {
if [[ "$http_code" == "200" ]] && [[ -s /tmp/gh_check.json ]]; then
releases_json=$(</tmp/gh_check.json)
elif [[ "$http_code" == "401" ]]; then
msg_error "GitHub API authentication failed (HTTP 401)."
if [[ -n "${GITHUB_TOKEN:-}" ]]; then
msg_error "Your GITHUB_TOKEN appears to be invalid or expired."
else
msg_error "The repository may require authentication. Try: export GITHUB_TOKEN=\"ghp_your_token\""
fi
rm -f /tmp/gh_check.json
return 1
elif [[ "$http_code" == "403" ]]; then
msg_error "GitHub API rate limit exceeded (HTTP 403)."
msg_error "To increase the limit, export a GitHub token before running the script:"
msg_error " export GITHUB_TOKEN=\"ghp_your_token_here\""
rm -f /tmp/gh_check.json
return 1
elif [[ "$http_code" == "000" || -z "$http_code" ]]; then
msg_error "GitHub API connection failed (no response)."
msg_error "Check your network/DNS: curl -sSL https://api.github.com/rate_limit"
rm -f /tmp/gh_check.json
return 1
else
msg_error "Unable to fetch releases for ${app} (HTTP ${http_code})"
rm -f /tmp/gh_check.json
@@ -1897,7 +1802,7 @@ check_for_codeberg_release() {
releases_json=$(curl -fsSL --max-time 20 \
-H 'Accept: application/json' \
"https://codeberg.org/api/v1/repos/${source}/releases" 2>/dev/null) || {
msg_error "Unable to fetch releases for ${app} (codeberg.org/api/v1/repos/${source}/releases)"
msg_error "Unable to fetch releases for ${app}"
return 1
}
@@ -2030,12 +1935,12 @@ function download_with_progress() {
if [[ -z "$content_length" ]]; then
if ! curl -fL# -o "$output" "$url"; then
msg_error "Download failed: $url"
msg_error "Download failed"
return 1
fi
else
if ! curl -fsSL "$url" | pv -s "$content_length" >"$output"; then
msg_error "Download failed: $url"
msg_error "Download failed"
return 1
fi
fi
@@ -2578,10 +2483,7 @@ _gh_scan_older_releases() {
-H 'Accept: application/vnd.github+json' \
-H 'X-GitHub-Api-Version: 2022-11-28' \
"${header[@]}" \
"https://api.github.com/repos/${repo}/releases?per_page=15" 2>/dev/null) || {
msg_warn "Failed to fetch older releases for ${repo}"
return 1
}
"https://api.github.com/repos/${repo}/releases?per_page=15" 2>/dev/null) || return 1
local count
count=$(echo "$releases_list" | jq 'length')
@@ -2706,22 +2608,12 @@ function fetch_and_deploy_gh_release() {
done
if ! $success; then
if [[ "$http_code" == "401" ]]; then
msg_error "GitHub API authentication failed (HTTP 401)."
if [[ -n "${GITHUB_TOKEN:-}" ]]; then
msg_error "Your GITHUB_TOKEN appears to be invalid or expired."
else
msg_error "The repository may require authentication. Try: export GITHUB_TOKEN=\"ghp_your_token\""
fi
elif [[ "$http_code" == "403" ]]; then
if [[ "$http_code" == "403" ]]; then
msg_error "GitHub API rate limit exceeded (HTTP 403)."
msg_error "To increase the limit, export a GitHub token before running the script:"
msg_error " export GITHUB_TOKEN=\"ghp_your_token_here\""
elif [[ "$http_code" == "000" || -z "$http_code" ]]; then
msg_error "GitHub API connection failed (no response)."
msg_error "Check your network/DNS: curl -sSL https://api.github.com/rate_limit"
else
msg_error "Failed to fetch release metadata (HTTP $http_code)"
msg_error "Failed to fetch release metadata from $api_url after $max_retries attempts (HTTP $http_code)"
fi
return 1
fi
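# When the 403 branch fires, the quickest way to confirm rate limiting (and that
# an exported token is actually picked up) is the rate_limit endpoint, which
# does not count against the quota; omit the header if no token is set:
curl -fsSL -H "Authorization: Bearer ${GITHUB_TOKEN}" https://api.github.com/rate_limit | jq '.rate'
# unauthenticated callers get 60 requests/hour; a valid token raises that to 5000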
@@ -3123,9 +3015,7 @@ function setup_composer() {
# Scenario 1: Already installed - just self-update
if [[ -n "$INSTALLED_VERSION" ]]; then
msg_info "Update Composer $INSTALLED_VERSION"
$STD "$COMPOSER_BIN" self-update --no-interaction || {
msg_warn "Composer self-update failed, continuing with current version"
}
$STD "$COMPOSER_BIN" self-update --no-interaction || true
local UPDATED_VERSION
UPDATED_VERSION=$("$COMPOSER_BIN" --version 2>/dev/null | awk '{print $3}')
cache_installed_version "composer" "$UPDATED_VERSION"
@@ -3161,9 +3051,7 @@ function setup_composer() {
fi
chmod +x "$COMPOSER_BIN"
$STD "$COMPOSER_BIN" self-update --no-interaction || {
msg_warn "Composer self-update failed after fresh install"
}
$STD "$COMPOSER_BIN" self-update --no-interaction || true
local FINAL_VERSION
FINAL_VERSION=$("$COMPOSER_BIN" --version 2>/dev/null | awk '{print $3}')
@@ -5234,9 +5122,7 @@ function setup_mysql() {
ensure_apt_working || return 1
# Perform upgrade with retry logic (non-fatal if fails)
upgrade_packages_with_retry "mysql-server" "mysql-client" || {
msg_warn "MySQL package upgrade had issues, continuing with current version"
}
upgrade_packages_with_retry "mysql-server" "mysql-client" || true
cache_installed_version "mysql" "$MYSQL_VERSION"
msg_ok "Update MySQL $MYSQL_VERSION"
@@ -5426,9 +5312,7 @@ function setup_nodejs() {
}
# Force APT cache refresh after repository setup
$STD apt update || {
msg_warn "apt update failed after Node.js repository setup"
}
$STD apt update
ensure_dependencies curl ca-certificates gnupg
@@ -5671,10 +5555,7 @@ EOF
if [[ "$DISTRO_ID" == "ubuntu" ]]; then
# Ubuntu: Use ondrej/php PPA
msg_info "Adding ondrej/php PPA for Ubuntu"
$STD apt install -y software-properties-common || {
msg_error "Failed to install software-properties-common"
return 1
}
$STD apt install -y software-properties-common
# Don't use $STD for add-apt-repository as it uses background processes
add-apt-repository -y ppa:ondrej/php >>"$(get_active_logfile)" 2>&1
else
@@ -5685,9 +5566,7 @@ EOF
}
fi
ensure_apt_working || return 1
$STD apt update || {
msg_warn "apt update failed after PHP repository setup"
}
$STD apt update
# Get available PHP version from repository
local AVAILABLE_PHP_VERSION=""
@@ -5982,9 +5861,7 @@ function setup_postgresql() {
}
fi
$STD systemctl enable --now postgresql 2>/dev/null || {
msg_warn "Failed to enable/start PostgreSQL service"
}
$STD systemctl enable --now postgresql 2>/dev/null || true
# Add PostgreSQL binaries to PATH
if ! grep -q '/usr/lib/postgresql' /etc/environment 2>/dev/null; then
@@ -5998,9 +5875,7 @@ function setup_postgresql() {
if [[ -n "$PG_MODULES" ]]; then
IFS=',' read -ra MODULES <<<"$PG_MODULES"
for module in "${MODULES[@]}"; do
$STD apt install -y "postgresql-${PG_VERSION}-${module}" 2>/dev/null || {
msg_warn "Failed to install PostgreSQL module: ${module}"
}
$STD apt install -y "postgresql-${PG_VERSION}-${module}" 2>/dev/null || true
done
fi
}
@@ -6659,9 +6534,7 @@ function setup_clickhouse() {
ensure_apt_working || return 1
# Perform upgrade with retry logic (non-fatal if fails)
upgrade_packages_with_retry "clickhouse-server" "clickhouse-client" || {
msg_warn "ClickHouse package upgrade had issues, continuing with current version"
}
upgrade_packages_with_retry "clickhouse-server" "clickhouse-client" || true
cache_installed_version "clickhouse" "$CLICKHOUSE_VERSION"
msg_ok "Update ClickHouse $CLICKHOUSE_VERSION"
return 0
@@ -6796,9 +6669,7 @@ function setup_rust() {
}
# Update to latest patch version
$STD rustup update "$RUST_TOOLCHAIN" </dev/null || {
msg_warn "Rust toolchain update had issues"
}
$STD rustup update "$RUST_TOOLCHAIN" </dev/null || true
# Ensure PATH is updated for current shell session
export PATH="$CARGO_BIN:$PATH"
@@ -7200,10 +7071,7 @@ function setup_docker() {
docker-ce-cli \
containerd.io \
docker-buildx-plugin \
docker-compose-plugin || {
msg_error "Failed to update Docker packages"
return 1
}
docker-compose-plugin
msg_ok "Updated Docker to $DOCKER_LATEST_VERSION"
else
msg_ok "Docker is up-to-date ($DOCKER_CURRENT_VERSION)"
@@ -7215,10 +7083,7 @@ function setup_docker() {
docker-ce-cli \
containerd.io \
docker-buildx-plugin \
docker-compose-plugin || {
msg_error "Failed to install Docker packages"
return 1
}
docker-compose-plugin
DOCKER_CURRENT_VERSION=$(docker --version | grep -oP '\d+\.\d+\.\d+' | head -1)
msg_ok "Installed Docker $DOCKER_CURRENT_VERSION"


@@ -28,7 +28,7 @@ INSTALL_PATH="/opt/immich-proxy"
CONFIG_PATH="/opt/immich-proxy/app"
DEFAULT_PORT=3000
# Initialize all core functions (colors, formatting, icons, $STD mode)
# Initialize all core functions (colors, formatting, icons, STD mode)
load_functions
init_tool_telemetry "" "addon"


@@ -288,8 +288,8 @@ function default_settings() {
echo -e "${DGN}Using Hostname: ${BGN}${HN}${CL}"
echo -e "${DGN}Allocated Cores: ${BGN}${CORE_COUNT}${CL}"
echo -e "${DGN}Allocated RAM: ${BGN}${RAM_SIZE}${CL}"
if ! ip link show "${BRG}" &>/dev/null; then
msg_error "Bridge '${BRG}' does not exist"
if ! grep -q "^iface ${BRG}" /etc/network/interfaces; then
msg_error "Bridge '${BRG}' does not exist in /etc/network/interfaces"
exit
else
echo -e "${DGN}Using LAN Bridge: ${BGN}${BRG}${CL}"
@@ -305,8 +305,8 @@ function default_settings() {
if [ "$NETWORK_MODE" = "dual" ]; then
echo -e "${DGN}Network Mode: ${BGN}Dual Interface (Firewall)${CL}"
echo -e "${DGN}Using WAN MAC Address: ${BGN}${WAN_MAC}${CL}"
if ! ip link show "${WAN_BRG}" &>/dev/null; then
msg_error "Bridge '${WAN_BRG}' does not exist"
if ! grep -q "^iface ${WAN_BRG}" /etc/network/interfaces; then
msg_error "Bridge '${WAN_BRG}' does not exist in /etc/network/interfaces"
exit
else
echo -e "${DGN}Using WAN Bridge: ${BGN}${WAN_BRG}${CL}"
@@ -424,8 +424,8 @@ function advanced_settings() {
if [ -z $BRG ]; then
BRG="vmbr0"
fi
if ! ip link show "${BRG}" &>/dev/null; then
msg_error "Bridge '${BRG}' does not exist"
if ! grep -q "^iface ${BRG}" /etc/network/interfaces; then
msg_error "Bridge '${BRG}' does not exist in /etc/network/interfaces"
exit
fi
echo -e "${DGN}Using LAN Bridge: ${BGN}$BRG${CL}"
@@ -474,8 +474,8 @@ function advanced_settings() {
if [ -z $WAN_BRG ]; then
WAN_BRG="vmbr1"
fi
if ! ip link show "${WAN_BRG}" &>/dev/null; then
msg_error "WAN Bridge '${WAN_BRG}' does not exist"
if ! grep -q "^iface ${WAN_BRG}" /etc/network/interfaces; then
msg_error "WAN Bridge '${WAN_BRG}' does not exist in /etc/network/interfaces"
exit
fi
echo -e "${DGN}Using WAN Bridge: ${BGN}$WAN_BRG${CL}"
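# The two checks swapped in this hunk are not equivalent: ip link show queries
# the kernel's live interface list, while grepping /etc/network/interfaces only
# sees bridges statically declared in that one file (it misses SDN bridges and
# anything under /etc/network/interfaces.d/). Side by side:
ip link show vmbr0 &>/dev/null && echo "vmbr0 exists (runtime)"           # any bridge the kernel knows about
grep -q '^iface vmbr0' /etc/network/interfaces && echo "vmbr0 declared"   # only bridges written in this file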