Deploying Proxmox VE 8.4 on KVM


Learn how to build a fully featured Proxmox VE 8.4 node inside a KVM lab, using Open vSwitch for VLAN-aware bridges and virt-install for an unattended deployment. This article walks through network design, ISO installation, first-boot configuration and post-install hardening—ideal for architects validating Proxmox before production roll-out on bare metal.

Introduction

Proxmox Virtual Environment (VE) 8.4 represents a significant milestone in open-source virtualization, combining the robustness of KVM and LXC technologies with an intuitive web-based management interface. Released in April 2025, this version introduces groundbreaking features like live migration with NVIDIA vGPUs, virtiofs directory passthrough, and an API for third-party backup solutions.

For enterprise architects and DevOps teams, deploying Proxmox VE within a KVM environment offers a powerful sandbox for testing advanced configurations, validating clustering strategies, and prototyping infrastructure changes before production deployment. This guide demonstrates how EC INTELLIGENCE successfully implements nested Proxmox environments for client proof-of-concepts, utilizing Open vSwitch for sophisticated VLAN-aware networking.

Why Virtualize Proxmox?

Running Proxmox VE inside KVM provides several strategic advantages:

  • Risk-free testing: Validate clustering, storage configurations, and API integrations without affecting production
  • Rapid prototyping: Snapshot and rollback capabilities enable quick iteration
  • Training environments: Create isolated labs for team education
  • Migration planning: Test upgrade paths and migration strategies safely
  • Development sandboxes: Perfect for CI/CD pipeline integration

Prerequisites and System Requirements

Host System Requirements

  • CPU: Intel VT-x or AMD-V with nested virtualization support
  • RAM: Minimum 16 GB (8 GB for host + 8 GB for Proxmox VM)
  • Storage: 200 GB free space for comfortable testing
  • Network: Gigabit Ethernet for optimal performance
  • OS: Ubuntu 22.04 LTS or newer (tested configuration)

Software Requirements

  • KVM/QEMU with libvirt
  • Open vSwitch 2.17.0 or newer
  • virt-install utility
  • Proxmox VE 8.4 ISO

 

1. Preparing the KVM Host

Enable Nested Virtualization

First, verify and enable nested virtualization on your host:

# For Intel processors
cat /sys/module/kvm_intel/parameters/nested
# If output is N, enable it:
echo "options kvm-intel nested=Y" | sudo tee /etc/modprobe.d/kvm-intel.conf
# For AMD processors
cat /sys/module/kvm_amd/parameters/nested
# If output is 0, enable it:
echo "options kvm-amd nested=1" | sudo tee /etc/modprobe.d/kvm-amd.conf
# Reload kernel modules (make sure no VMs are running first)
sudo modprobe -r kvm_intel kvm  # or kvm_amd for AMD
sudo modprobe kvm_intel         # or kvm_amd; the kvm module loads as a dependency

Download Proxmox VE 8.4 ISO

# Create directory for ISOs
mkdir -p ~/iso
cd ~/iso
# Download latest Proxmox VE 8.4
wget https://enterprise.proxmox.com/iso/proxmox-ve_8.4-1.iso
# Verify checksum (optional but recommended)
wget https://enterprise.proxmox.com/iso/proxmox-ve_8.4-1.iso.sha256
sha256sum -c proxmox-ve_8.4-1.iso.sha256

Install Required Packages

 

# Update system
sudo apt update && sudo apt upgrade -y
# Install virtualization stack
sudo apt install -y qemu-kvm libvirt-daemon-system libvirt-clients \
    bridge-utils virt-manager virtinst openvswitch-switch
# Add user to libvirt group
sudo usermod -aG libvirt $USER
newgrp libvirt
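Before moving on, it is worth confirming that the host is actually ready to run KVM guests. The `virt-host-validate` tool ships with the libvirt client packages installed above:

```shell
# Check CPU virtualization support, cgroups and device access for QEMU guests
sudo virt-host-validate qemu
# Confirm the libvirt daemon is running
sudo systemctl is-active libvirtd
```

Any line reported as FAIL (for example missing IOMMU support) should be resolved before creating the Proxmox VM.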

 

2. Network Architecture with Open vSwitch

EC INTELLIGENCE's recommended network topology segregates traffic across three logical networks:

Network      Bridge name    VLAN-aware  IP range         Description
Management   management     N/A         10.10.0.0/16     Web UI and SSH access
Access LAN   s1accessnet    Yes         10.10.100.0/24   VM guest traffic
Cluster      s1clusternet   Yes         10.10.200.0/24   Corosync, replication

 

Create Open vSwitch Bridges

 

# Start Open vSwitch
sudo systemctl enable --now openvswitch-switch
# Create VLAN-aware bridges
sudo ovs-vsctl add-br s1accessnet
sudo ovs-vsctl add-br s1clusternet
# No bridge-level trunk setting is needed: OVS ports carry all VLANs
# (trunk mode) by default, and per-port tagging is applied through the
# libvirt <vlan> portgroup definitions below.
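A quick sanity check confirms both bridges exist in the OVS database:

```shell
# Both bridges should appear in the listing
sudo ovs-vsctl list-br
# Full view of bridges, ports and interfaces
sudo ovs-vsctl show
```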

 

Define libvirt Networks

Create XML definitions for each network:

management.xml (NAT network for management):

<network>
  <name>management</name>
  <forward mode='nat'>
    <nat>
      <port start='1024' end='65535'/>
    </nat>
  </forward>
  <bridge name='virbr0' stp='on' delay='0'/>
  <ip address='10.10.0.1' netmask='255.255.0.0'>
    <dhcp>
      <range start='10.10.0.100' end='10.10.0.254'/>
    </dhcp>
  </ip>
</network>

 

s1accessnet-network.xml (OVS bridge for VM traffic):

<network>
  <name>s1accessnet</name>
  <forward mode='bridge'/>
  <bridge name='s1accessnet'/>
  <virtualport type='openvswitch'/>
  <portgroup name='vlan-all' default='yes'>
    <vlan trunk='yes'>
      <tag id='10'/>
      <tag id='20'/>
      <tag id='100'/>
      <tag id='200'/>
    </vlan>
  </portgroup>
</network>

s1clusternet-network.xml (OVS bridge for cluster traffic):

<network>
  <name>s1clusternet</name>
  <forward mode='bridge'/>
  <bridge name='s1clusternet'/>
  <virtualport type='openvswitch'/>
</network>

Activate the networks:

# Define networks
sudo virsh net-define management.xml
sudo virsh net-define s1accessnet-network.xml
sudo virsh net-define s1clusternet-network.xml
# Start and enable autostart
for net in management s1accessnet s1clusternet; do
    sudo virsh net-start $net
    sudo virsh net-autostart $net
done
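Verify that all three networks are active and set to start with libvirt:

```shell
# Expect management, s1accessnet and s1clusternet listed as
# active with autostart enabled
sudo virsh net-list --all
```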

 

3. Deploy Proxmox VE with virt-install

Automated Installation

 

virt-install --name s1proxmox01 \
  --ram 8192 --vcpus 8 \
  --osinfo debian12 \
  --disk path=/var/lib/libvirt/images/s1proxmox01.img,size=150 \
  --network network=management \
  --network bridge=s1accessnet,mac=52:54:00:20:56:3c,virtualport_type=openvswitch,model=virtio,driver.name=vhost \
  --network bridge=s1clusternet,mac=52:54:00:3e:78:dc,virtualport_type=openvswitch,model=virtio,driver.name=vhost \
  --console pty,target_type=serial \
  --cdrom ~/iso/proxmox-ve_8.4-1.iso \
  --graphics vnc,listen=0.0.0.0,port=60200,keymap=fr
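Once virt-install has launched the VM, confirm it is running and that the graphics device was configured as expected:

```shell
# The domain should appear as "running"
sudo virsh list
# Print the VNC display assigned to the VM
sudo virsh vncdisplay s1proxmox01
```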

 

Connect to Installation

Point any VNC client at the KVM host on port 60200, the port set by the --graphics flag above, for example:

vncviewer <kvm-host-ip>:60200

From there, follow the graphical Proxmox installer.

 

4. Proxmox VE Installation Configuration

During the graphical installer, configure:

 

Setting               Recommended value           Notes
Keyboard layout       Your locale (e.g., French)  Auto-detected
Target disk           /dev/sda (150 GB)           First disk for OS
Filesystem            ZFS (RAID0)                 Best for single disk
Country/Timezone      Your location               For repository selection
Root password         Strong passphrase           Minimum 8 characters
Email                 admin@domain.com            For notifications
Management interface  ens3 (first NIC)            Connected to management network
Hostname              s1proxmox01.lab.local       FQDN format
IP address            10.10.44.10/16              Must fall within the 10.10.0.0/16 management network
Gateway               10.10.0.1                   libvirt NAT gateway
DNS server            10.10.0.1                   8.8.8.8 works as an alternative

 

5. Post-Installation Configuration

Initial System Updates

# SSH into Proxmox
ssh root@10.10.44.10
# Update repositories for non-subscription use
cat > /etc/apt/sources.list.d/pve-no-subscription.list << EOF
deb http://download.proxmox.com/debian/pve bookworm pve-no-subscription
EOF
# Comment out the enterprise repositories
sed -i 's/^deb/#deb/' /etc/apt/sources.list.d/pve-enterprise.list
[ -f /etc/apt/sources.list.d/ceph.list ] && \
  sed -i 's/^deb/#deb/' /etc/apt/sources.list.d/ceph.list
# Update system
apt update && apt dist-upgrade -y
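After the upgrade, confirm the node is healthy before adding storage or networking:

```shell
# Show the running Proxmox VE version
pveversion
# Core services should all be active
systemctl is-active pve-cluster pvedaemon pveproxy
```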

Configure Additional Storage

# Initialize a second disk for VM storage. The virt-install command above
# defines only one disk; add a second --disk there, or attach one later with
# "virsh attach-disk", so that it appears as /dev/sdb in the guest.
parted /dev/sdb mklabel gpt
parted /dev/sdb mkpart primary 1MiB 100%
# Create LVM thin pool
pvcreate /dev/sdb1
vgcreate vmdata /dev/sdb1
lvcreate -L 180G -T vmdata/thin_pool
# Add to Proxmox storage
pvesm add lvmthin vm-storage \
  --vgname vmdata \
  --thinpool thin_pool \
  --content rootdir,images
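Confirm that the new thin pool is registered and usable:

```shell
# vm-storage should be listed alongside the default local storages
pvesm status
# Inspect the thin pool itself
lvs vmdata
```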

 

Network Configuration for VLANs

Configure VLAN interfaces via web UI or CLI:

# Example: Create VLAN interface
cat >> /etc/network/interfaces << EOF
auto ens4.100
iface ens4.100 inet static
    address 10.10.100.10/24
    vlan-raw-device ens4
auto ens5.200
iface ens5.200 inet static
    address 10.10.200.10/24
    vlan-raw-device ens5
EOF
# Apply the configuration (Proxmox VE uses ifupdown2)
ifreload -a
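Check that the VLAN sub-interfaces came up with the intended addresses:

```shell
# The -d flag shows VLAN details (protocol 802.1Q, VLAN id)
ip -d link show ens4.100
ip addr show ens5.200
```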

 
