Sonarr fails to install LXC #1049

Closed
opened 2026-02-04 22:47:33 +03:00 by OVERLORD · 7 comments
Owner

Originally created by @gabrielrinaldi on GitHub (Jun 1, 2025).

✅ Have you read and understood the above guidelines?

yes

📜 What is the name of the script you are using?

Sonarr

📂 What was the exact command used to execute the script?

bash -c "$(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/ct/sonarr.sh)"

⚙️ What settings are you using?

  [ ] Default Settings
  [x] Advanced Settings

🖥️ Which Linux distribution are you using?

Debian 12

📝 Provide a clear and concise description of the issue.

The Sonarr script fails to install the LXC; other scripts using the same image install without any issue.

🔄 Steps to reproduce the issue.

  1. Run the script
  2. Choose storage (both of my options, local and cache/NAS, fail)

❌ Paste the full error output (if available).

   _____                            
  / ___/____  ____  ____ ___________
  \__ \/ __ \/ __ \/ __ `/ ___/ ___/
 ___/ / /_/ / / / / /_/ / /  / /    
/____/\____/_/ /_/\__,_/_/  /_/     
                                    
  🧩  Using Advanced Settings on node tatooine
  🖥️  Operating System: debian
  🌟  Version: 12
  📦  Container Type: Unprivileged
  🔐  Root Password: ********
  🆔  Container ID: 117
  🏠  Hostname: sonarr
  💾  Disk Size: 4 GB
  🧠  CPU Cores: 2
  🛠️  RAM Size: 1024 MiB
  🌉  Bridge: vmbr0
  📡  IP Address: 10.10.0.117/24
  🌐  Gateway IP Address: 10.10.0.1
  📡  APT-Cacher IP Address: 10.10.0.155
  🚫  Disable IPv6: yes
  ⚙️  Interface MTU Size: Default
  🔍  DNS Search Domain: Host
  📡  DNS Server IP Address: Host
  🏷️  Vlan: Default
  📡  Tags: service
  🔑  Root SSH Access: no
  🗂️  Enable FUSE Support: no
  🔍  Verbose Mode: yes
  🚀  Creating a Sonarr LXC using the above advanced settings
  ✔️  Using cache for Template Storage.
  ✔️  Using ceph for Container Storage.
  ✔️  Updated LXC Template List
  ✔️  LXC Template is ready to use.
  ✖️  Container creation failed. Checking if template is corrupted.
  ✖️  Container creation failed, but template is not corrupted.
curl: (22) The requested URL returned error: 400

[ERROR] in line 1123: exit code 0: while executing command bash -c "$(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/ct/create_lxc.sh)" $?
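
When the wrapper only reports a generic "Container creation failed", running the creation step by hand with the same template and storage usually surfaces the underlying pct/pvesm error. A hypothetical command built from the settings above, not the script's exact invocation:

pct create 117 cache:vztmpl/debian-12-standard_12.7-1_amd64.tar.zst \
  --hostname sonarr --unprivileged 1 \
  --cores 2 --memory 1024 \
  --rootfs ceph:4 \
  --net0 name=eth0,bridge=vmbr0,ip=10.10.0.117/24,gw=10.10.0.1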

🖼️ Additional context (optional).

Based on my testing, the script is failing at LXC creation; I could not find the vztmpl on my Proxmox or NAS after running the script.

Disabling AdGuard does not seem to have any effect, nor does manually setting a DNS server like 1.1.1.1.
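
A quick way to confirm whether the template volume actually exists and where it resolves on disk (hypothetical checks using the storage and template names from this report; adjust to your setup):

pveam list cache                                                  # template volumes on the "cache" storage
pvesm path cache:vztmpl/debian-12-standard_12.7-1_amd64.tar.zst   # filesystem path of that volume
ls -lh /var/lib/vz/template/cache/                                # default vztmpl location for the "local" storage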

OVERLORD added the bug label 2026-02-04 22:47:33 +03:00

@MickLesk commented on GitHub (Jun 1, 2025):

Have you updated your Proxmox (OS)? This is usually part of the problem; it is not an issue with the script, since it fails before the LXC is even generated.

First, try rebooting your Proxmox.

If that does not work, run:
pveam available
pveam list local
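
If the template index itself cannot be refreshed, re-running the index update by hand usually shows the underlying download error (a hypothetical extra check, not part of the script):

pveam update                                        # refresh the appliance index; any download error is reported here
pveam available --section system | grep debian-12   # confirm the Debian 12 template is listed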


@gabrielrinaldi commented on GitHub (Jun 1, 2025):

Thanks for the reply!

I am on Proxmox 8.4.1, fresh install.

Rebooting did not fix the issue. I also have 3 nodes in my cluster using ceph, and none of them can complete the install.

pveam available -section system does work and returns the correct image:

system          almalinux-9-default_20240911_amd64.tar.xz
system          alpine-3.19-default_20240207_amd64.tar.xz
system          alpine-3.20-default_20240908_amd64.tar.xz
system          alpine-3.21-default_20241217_amd64.tar.xz
system          archlinux-base_20240911-1_amd64.tar.zst
system          centos-9-stream-default_20240828_amd64.tar.xz
system          debian-11-standard_11.7-1_amd64.tar.zst
system          debian-12-standard_12.7-1_amd64.tar.zst
system          devuan-5.0-standard_5.0_amd64.tar.gz
system          fedora-41-default_20241118_amd64.tar.xz
system          fedora-42-default_20250428_amd64.tar.xz
system          gentoo-current-openrc_20250508_amd64.tar.xz
system          openeuler-24.03-default_20250507_amd64.tar.xz
system          openeuler-25.03-default_20250507_amd64.tar.xz
system          opensuse-15.6-default_20240910_amd64.tar.xz
system          rockylinux-9-default_20240912_amd64.tar.xz
system          ubuntu-20.04-standard_20.04-1_amd64.tar.gz
system          ubuntu-22.04-standard_22.04-1_amd64.tar.zst
system          ubuntu-24.04-standard_24.04-2_amd64.tar.zst
system          ubuntu-24.10-standard_24.10-1_amd64.tar.zst
system          ubuntu-25.04-standard_25.04-1.1_amd64.tar.zst

And I can see it on my cache using pveam list cache | grep debian-12-standard_12.7-1:

cache:vztmpl/debian-12-standard_12.7-1_amd64.tar.zst         120.65MB

All other *arr LXCs I've tried worked; this is the only one I am having an issue with.
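
To rule out a damaged archive, the volume can be resolved to its on-disk path and test-decompressed, or simply fetched again (hypothetical commands, using the names from the listing above):

zstd -t "$(pvesm path cache:vztmpl/debian-12-standard_12.7-1_amd64.tar.zst)"   # test-decompress only, writes nothing
pveam download cache debian-12-standard_12.7-1_amd64.tar.zst                   # re-download to the same storage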


@Ninja-FSE commented on GitHub (Jun 1, 2025):

Sorry for bumping your thread, but I have the same issue, only with the Alpine OS script.


@MickLesk commented on GitHub (Jun 1, 2025):

And with a default install?


@Ninja-FSE commented on GitHub (Jun 1, 2025):

yes


@MickLesk commented on GitHub (Jun 1, 2025):

I meant the OP.

There are 3 possible causes:

pveam available doesn't work
ZFS doesn't work
a network issue reaching GitHub

We cannot do more than check these 3 things.
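
A hypothetical way to check each of the three (generic commands; adjust storage names to your setup):

pveam available --section system | head -n 5     # 1) template catalogue reachable?
pvesm status                                     # 2) storages (ZFS, Ceph, ...) active and healthy?
curl -sSL -o /dev/null -w '%{http_code}\n' \
  https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/ct/create_lxc.sh   # 3) GitHub reachable?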


@gabrielrinaldi commented on GitHub (Jun 2, 2025):

@MickLesk thanks, that did point me in the right direction. Default settings did work, so I started eliminating things; it seems the issue was the root password, which started with a -.
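
For anyone who hits the same thing later: a value that begins with - is easily mistaken for a flag by whatever option parser ends up receiving it. A small, hypothetical bash sketch of that failure mode and the usual guard (the toy set_password function is a stand-in, not the install script's actual code):

#!/usr/bin/env bash
# Hypothetical illustration only, not the install script's code.
PASSWORD='-secret123'    # a root password beginning with "-"

# A minimal option loop of the kind many CLI tools use internally:
set_password() {
  case "$1" in
    --password=*) echo "password accepted: ${1#--password=}" ;;
    -*)           echo "error: unknown option '$1'" >&2; return 1 ;;
    *)            echo "password accepted: $1" ;;
  esac
}

set_password "$PASSWORD"               # fails: the bare value is mistaken for an option
set_password --password="$PASSWORD"    # works: the value is attached to its flag, so it cannot be misread

Until such values are handled by the script, the simplest workaround is to choose a root password that does not start with -.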

Reference: starred/ProxmoxVE#1049