Diffstat (limited to 'proxmox')
-rw-r--r--  proxmox/disk-performance.md  31
-rw-r--r--  proxmox/helper-scripts.md    23
2 files changed, 54 insertions, 0 deletions
diff --git a/proxmox/disk-performance.md b/proxmox/disk-performance.md
new file mode 100644
index 0000000..21d3a53
--- /dev/null
+++ b/proxmox/disk-performance.md
@@ -0,0 +1,31 @@
+# Disk performance
+
+Taken from https://forum.proxmox.com/threads/virtio-scsi-vs-virtio-scsi-single.28426/
+
+> VirtIO SCSI vs VirtIO SCSI Single boils down to a simple architectural choice that has real performance implications:
+>
+> Standard VirtIO SCSI uses one controller that handles up to 16 disks, while Single dedicates one controller per disk. This matters most when using IOThreads (iothread=1), because threads work at the controller level.
+>
+> When using IOThreads, Single shows significantly better performance (often 30-50% improvement) because each disk gets its own dedicated processing thread. Without IOThreads, the performance difference is minimal (typically less than 5%).
+>
+> So the choice is straightforward:
+>
+> - Want maximum disk performance? Use Single + iothread=1
+> - Managing lots of disks with limited resources? Standard might be better to avoid thread overhead
+> - From the VM's perspective, they work exactly the same
+>
+> This explains why benchmarks consistently show better I/O performance with Single + iothread=1, while keeping the underlying architectural differences clear.
+
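+To sanity-check the claimed improvement inside a guest, a quick fio random-write run works well. This is only a sketch — fio must be installed in the guest, and the test file path and size here are assumptions; repeat the run with and without `iothread=1` and compare:
+
+```sh
+# 4k random writes with direct I/O for 30 seconds against a 1 GiB test file
+fio --name=iothread-test --filename=/tmp/fio-test.bin --size=1G \
+    --direct=1 --rw=randwrite --bs=4k --iodepth=32 \
+    --runtime=30 --time_based --group_reporting
+```
+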
+Which results in the following Packer configuration:
+
+```hcl
+ ...
+ # VM Hard Disk Settings
+ scsi_controller = "virtio-scsi-single"
+
+ disks {
+ ...
+ io_thread = true
+ }
+ ...
+```
diff --git a/proxmox/helper-scripts.md b/proxmox/helper-scripts.md
new file mode 100644
index 0000000..b3b557c
--- /dev/null
+++ b/proxmox/helper-scripts.md
@@ -0,0 +1,23 @@
+# Helper scripts for Proxmox
+
+[Proxmox Clean Orphaned LVM](https://community-scripts.github.io/ProxmoxVE/scripts?id=clean-orphaned-lvm)
+
+> This script helps Proxmox users identify and remove orphaned LVM volumes that are no longer associated with any VM or LXC container. It scans all LVM volumes, detects unused ones, and provides an interactive prompt to delete them safely. System-critical volumes like root, swap, and data are excluded to prevent accidental deletion.
+
+[Source Code](https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/clean-orphaned-lvm.sh)
+
+As a privileged user:
+```sh
+bash -c "$(wget -qLO - https://github.com/community-scripts/ProxmoxVE/raw/main/misc/clean-orphaned-lvm.sh)"
+```
+
+[Proxmox VE Post Install](https://community-scripts.github.io/ProxmoxVE/scripts?id=post-pve-install)
+
+> This script provides options for managing Proxmox VE repositories, including disabling the Enterprise Repo, adding or correcting PVE sources, enabling the No-Subscription Repo, adding the test Repo, disabling the subscription nag, updating Proxmox VE, and rebooting the system.
+
+[Source Code](https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/post-pve-install.sh)
+
+As a privileged user:
+```sh
+bash -c "$(wget -qLO - https://github.com/community-scripts/ProxmoxVE/raw/main/misc/post-pve-install.sh)"
+```