From 197249a17f054671434734e2899bd6c8f4f0d37c Mon Sep 17 00:00:00 2001
From: clyhtsuriva
Date: Sat, 1 Feb 2025 02:08:17 +0100
Subject: Add pages for ansible and proxmox

ansible: testing playbooks + managing firewall rules with ufw
proxmox: disk performance w/ io_thread and scsi single + useful helper
scripts found online
---
 ansible/managing-ufw-rules.md | 13 +++++++++++++
 ansible/testing-playbooks.md  |  9 +++++++++
 proxmox/disk-performance.md   | 31 +++++++++++++++++++++++++++++++
 proxmox/helper-scripts.md     | 23 +++++++++++++++++++++++
 4 files changed, 76 insertions(+)
 create mode 100644 ansible/managing-ufw-rules.md
 create mode 100644 ansible/testing-playbooks.md
 create mode 100644 proxmox/disk-performance.md
 create mode 100644 proxmox/helper-scripts.md

diff --git a/ansible/managing-ufw-rules.md b/ansible/managing-ufw-rules.md
new file mode 100644
index 0000000..c48b56b
--- /dev/null
+++ b/ansible/managing-ufw-rules.md
@@ -0,0 +1,13 @@
+# UFW rules management using Ansible
+
+Since the `ufw` module is part of the community.general collection, ensure the collection is installed on the Ansible control machine:
+
+```sh
+ansible-galaxy collection install community.general
+```
+
+By default, ufw is installed and enabled with ALLOW rules for SSH, HTTP, and HTTPS on all images generated for this homelab.
+
+The tasks are defined in `ansible/roles/common/tasks/ufw.yml`.
+
+They are called by `ansible/playbooks/common.yml`.
diff --git a/ansible/testing-playbooks.md b/ansible/testing-playbooks.md
new file mode 100644
index 0000000..b07f5d5
--- /dev/null
+++ b/ansible/testing-playbooks.md
@@ -0,0 +1,9 @@
+# Testing playbooks
+
+Use `-C` (check mode) to dry-run any playbook against the chosen hosts without applying changes.
+
+e.g.:
+
+```sh
+ansible-playbook -C playbooks/common.yml -i hosts
+```
diff --git a/proxmox/disk-performance.md b/proxmox/disk-performance.md
new file mode 100644
index 0000000..21d3a53
--- /dev/null
+++ b/proxmox/disk-performance.md
@@ -0,0 +1,31 @@
+# Disk performance
+
+Taken from https://forum.proxmox.com/threads/virtio-scsi-vs-virtio-scsi-single.28426/
+
+> VirtIO SCSI vs VirtIO SCSI Single boils down to a simple architectural choice that has real performance implications:
+>
+> Standard VirtIO SCSI uses one controller that handles up to 16 disks, while Single dedicates one controller per disk. This matters most when using IOThreads (iothread=1), because threads work at the controller level.
+>
+> When using IOThreads, Single shows significantly better performance (often 30-50% improvement) because each disk gets its own dedicated processing thread. Without IOThreads, the performance difference is minimal (typically less than 5%).
+>
+> So the choice is straightforward:
+>
+> > Want maximum disk performance? Use Single + iothread=1
+> > Managing lots of disks with limited resources? Standard might be better to avoid thread overhead
+> > From the VM's perspective, they work exactly the same
+>
+> This explains why benchmarks consistently show better I/O performance with Single + iothread=1, while keeping the underlying architectural differences clear.
+
+This results in the following Packer configuration:
+
+```hcl
+  ...
+  # VM Hard Disk Settings
+  scsi_controller = "virtio-scsi-single"
+
+  disks {
+    ...
+    io_thread = true
+  }
+  ...
+```
diff --git a/proxmox/helper-scripts.md b/proxmox/helper-scripts.md
new file mode 100644
index 0000000..b3b557c
--- /dev/null
+++ b/proxmox/helper-scripts.md
@@ -0,0 +1,23 @@
+# Helper scripts for Proxmox
+
+[Proxmox Clean Orphaned LVM](https://community-scripts.github.io/ProxmoxVE/scripts?id=clean-orphaned-lvm)
+
+> This script helps Proxmox users identify and remove orphaned LVM volumes that are no longer associated with any VM or LXC container. It scans all LVM volumes, detects unused ones, and provides an interactive prompt to delete them safely. System-critical volumes like root, swap, and data are excluded to prevent accidental deletion.
+
+[Source Code](https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/clean-orphaned-lvm.sh)
+
+As a privileged user:
+```sh
+bash -c "$(wget -qLO - https://github.com/community-scripts/ProxmoxVE/raw/main/misc/clean-orphaned-lvm.sh)"
+```
+
+[Proxmox VE Post Install](https://community-scripts.github.io/ProxmoxVE/scripts?id=post-pve-install)
+
+> This script provides options for managing Proxmox VE repositories, including disabling the Enterprise Repo, adding or correcting PVE sources, enabling the No-Subscription Repo, adding the test Repo, disabling the subscription nag, updating Proxmox VE, and rebooting the system.
+
+[Source Code](https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/post-pve-install.sh)
+
+As a privileged user:
+```sh
+bash -c "$(wget -qLO - https://github.com/community-scripts/ProxmoxVE/raw/main/misc/post-pve-install.sh)"
+```
--
cgit v1.2.3
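
Note: the `ufw.yml` role file referenced in ansible/managing-ufw-rules.md is not part of this patch. A rough sketch of what such tasks could look like, using the `community.general.ufw` module (port numbers 22/80/443 are assumptions inferred from the stated SSH/HTTP/HTTPS ALLOW rules, not the actual role contents):

```yaml
# Hypothetical sketch of ansible/roles/common/tasks/ufw.yml -- not the real file.
- name: Allow SSH, HTTP and HTTPS
  community.general.ufw:
    rule: allow
    port: "{{ item }}"
    proto: tcp
  loop:
    - "22"   # SSH
    - "80"   # HTTP
    - "443"  # HTTPS

- name: Enable ufw
  community.general.ufw:
    state: enabled
```

Such tasks can be dry-run with `ansible-playbook -C` as described in ansible/testing-playbooks.md.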