
Run OpenWRT in Proxmox
You want to virtualize OpenWRT in Proxmox to run a compact router VM that handles LAN and WAN traffic without extra hardware, and I’ll show a clean, repeatable path to get you there.
I’ll walk you through picking the correct x86_64 image, why you must gunzip before resizing or importing, and the practical difference between UEFI (OVMF) and SeaBIOS boots.
We avoid the common pitfall of importing a compressed .gz that looks fine but breaks UEFI boot. On your Proxmox host you’ll use wget, gzip -d, qemu-img resize, then qm importdisk — a small sequence that saves a lot of time.
Along the way you’ll map Proxmox bridges to OpenWRT eth0 (LAN) and eth1 (WAN), enable LuCI after first boot, and learn console tips so you can copy commands and configs without pain. Thanks to these steps, your router VM will behave predictably and you’ll know when PCIe passthrough is truly needed.
Key Takeaways
- Choose the correct x86_64 image (UEFI vs SeaBIOS) before you start.
- Always gunzip the image on the Proxmox host before resize or import.
- Use qm importdisk then attach Unused Disk 0 and set boot order.
- Map bridges for LAN and WAN so the router VM routes traffic.
- Enable LuCI and confirm time and package manager access on first boot.
What you’ll need on your Proxmox host before you begin
I recommend one quick checklist: CPU architecture, firmware mode, storage backend, and download link. I’ll keep this short so you can verify each item and move on.
Supported hardware, firmware choices, and storage
Use Proxmox VE on an x86-64 machine. For future PCIe passthrough flexibility choose OVMF (UEFI); pick SeaBIOS if you want a simple BIOS path.
Enable AMD‑V/IOMMU or Intel VT‑d in BIOS and add kernel params on the Proxmox host (example for AMD: amd_iommu=on iommu=pt). Seeing “AMD IOMMUv2 functionality not available” is usually a warning, not a blocker.
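If you boot Proxmox with GRUB, a minimal sketch of adding those parameters looks like this (swap in intel_iommu=on on Intel hosts; systemd-boot setups edit /etc/kernel/cmdline instead):

```
# On the Proxmox host, edit /etc/default/grub and extend the kernel command line:
#   GRUB_CMDLINE_LINUX_DEFAULT="quiet amd_iommu=on iommu=pt"
# Then apply the change and reboot:
update-grub
reboot
# After reboot, confirm the IOMMU came up:
dmesg | grep -i -e iommu -e amd-vi
```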
Storage: local-lvm (LVM‑thin) is simple and fast for router disks; ZFS works too but has different performance tradeoffs.
Downloads — grab the correct image
- Go to the stable release page → targets/x86/64.
- UEFI: use generic-ext4-combined-efi.img.gz; BIOS: generic-ext4-combined.img.gz.
- Right-click to copy the direct link, wget on the host, then gunzip before any resize (a checksum check is sketched at the end of this subsection).
Option | When to pick | Notes |
---|---|---|
OVMF (UEFI) | Need PCIe passthrough later | Enable IOMMU; more flexible for PCIe |
SeaBIOS | Simple BIOS boot | Fewer firmware steps; good for quick tests |
LVM‑thin | Default router disk | Easy import to local-lvm |
ZFS | When you need snapshots | Mind pool and virtio-scsi tradeoffs |
Optional notes — if a forum post or gist comment (for example, one signed “ryuheechul commented”) supplies a download link, treat it as context only and always copy the official release link for downloads.
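Before decompressing, it’s worth confirming the download is intact against the sha256sums file published in the same target directory; a quick sketch, assuming the 23.05.3 release used below:

```
# Fetch the published checksums for targets/x86/64 and verify the local image
wget https://downloads.openwrt.org/releases/23.05.3/targets/x86/64/sha256sums
sha256sum -c --ignore-missing sha256sums
```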
Step-by-step: prepare and boot the UEFI image
This section gives a compact, practical sequence for preparing the UEFI image, creating the VM, and confirming a clean first boot.
Grab the exact UEFI image via the direct link and decompress it on the host:
- wget the file: https://downloads.openwrt.org/releases/23.05.3/targets/x86/64/openwrt-23.05.3-x86-64-generic-ext4-combined-efi.img.gz
- Then decompress: gzip -d openwrt-23.05.3-x86-64-generic-ext4-combined-efi.img.gz
- Resize to your target capacity, e.g., qemu-img resize -f raw openwrt-23.05.3-x86-64-generic-ext4-combined-efi.img 2G
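Chained together, that host-side preparation is three commands; a copy-paste sketch of the steps above:

```
# Download, decompress, and grow the UEFI image on the Proxmox host
wget https://downloads.openwrt.org/releases/23.05.3/targets/x86/64/openwrt-23.05.3-x86-64-generic-ext4-combined-efi.img.gz
gzip -d openwrt-23.05.3-x86-64-generic-ext4-combined-efi.img.gz
qemu-img resize -f raw openwrt-23.05.3-x86-64-generic-ext4-combined-efi.img 2G
```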
Create a new VM with OVMF firmware. Do not add any drives or ISO media. If Proxmox auto-creates an EFI disk, delete it — this keeps the import clean. Do not start the VM.
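If you prefer the CLI to the wizard, a rough equivalent looks like this (VMID 101, the VM name, and the bridge assignments are assumptions to adapt; no --efidisk0 is passed, matching the advice to keep the import clean):

```
# Create an OVMF (UEFI) VM with no disks and two bridged NICs
# net0 → LAN bridge, net1 → WAN bridge (see the host-integration section)
qm create 101 --name openwrt --ostype l26 \
  --bios ovmf --machine q35 \
  --cores 2 --memory 512 \
  --net0 virtio,bridge=vmbr2 \
  --net1 virtio,bridge=vmbr1
```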
Import the prepared raw image into your storage and attach it as the VM disk:
- Run: qm importdisk 101 openwrt-23.05.3-x86-64-generic-ext4-combined-efi.img local-lvm
- Go to VM > Hardware, select Unused Disk 0, click Edit, then Add to attach it as a disk (a CLI equivalent follows the table below)
Step | Command or UI Action | Why it matters |
---|---|---|
Download & decompress | wget …generic-ext4-combined-efi.img.gz && gzip -d | Prevents importing a .gz that won’t support UEFI boot |
Resize | qemu-img resize -f raw … 2G | Gives the disk enough space for packages and logs |
Import disk | qm importdisk 101 … local-lvm | Places the disk into storage so the VM can use it |
Attach unused disk | VM > Hardware → Add Unused Disk 0 | Makes the imported disk visible to the VM for disk boot |
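If you’d rather script the attach and boot-order steps, a sketch (the disk name vm-101-disk-0 and the SCSI bus are assumptions; check qm config 101 for the actual unused0 entry):

```
# Attach the imported disk and make it the only boot device
qm set 101 --scsihw virtio-scsi-pci
qm set 101 --scsi0 local-lvm:vm-101-disk-0
qm set 101 --boot order=scsi0
```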
Start the VM. On first boot log in via console, verify time is correct (TLS and package fetch depend on it), then enable the web interface (LuCI) so you can manage via LAN. Make sure your VM has two bridged NICs—one for WAN and one for LAN—so eth1 is uplink and eth0 is local.
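On the console, those first-boot checks boil down to a few commands; official stable x86 images usually ship with LuCI, so the install line is only needed if your build lacks it:

```
# Inside the OpenWRT VM console
date                                # TLS and opkg fail if the clock is wrong
ping -c 3 downloads.openwrt.org     # confirms WAN uplink and DNS
opkg update                         # refresh package lists
opkg install luci                   # only if the web UI is missing
/etc/init.d/uhttpd enable && /etc/init.d/uhttpd start
```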
Alternative path: SeaBIOS install using the generic-ext4-combined image
If you prefer the classic BIOS path, the SeaBIOS route is simple and reliable for a lean router VM.
Download and prepare the non‑EFI image
Grab the non‑EFI file: generic-ext4-combined.img.gz via the direct link and run wget on the host. Then gunzip the archive before any resize.
Resize the raw file to 8G with:
qemu-img resize -f raw openwrt-*.img 8G
This gives the disk room for packages and logs.
VM creation and disk tweaks
Create a new VM using SeaBIOS—no ISO and no drives. Import the prepared image with:
qm importdisk VMID openwrt-*.img STORAGEID.
In VM > Hardware edit the Unused Disk 0, set bus to VirtIO, and enable Discard plus IO Thread. Then add the disk.
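Scripted, the import, the disk tweaks, and the boot-order change from the next step look roughly like this (VMID and STORAGEID are the same placeholders as above; the vm-VMID-disk-0 name is an assumption to check with qm config):

```
# Import, attach as VirtIO with Discard and IO Thread, then boot from it
qm importdisk VMID openwrt-*.img STORAGEID
qm set VMID --virtio0 STORAGEID:vm-VMID-disk-0,discard=on,iothread=1
qm set VMID --boot order=virtio0
```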
Boot order and final checks
Go to VM > Options > Boot Order and set virtio0 as the top — uncheck all others. This avoids a disk boot failure.
- Keep the same WAN/LAN bridge mapping so interfaces remain eth1 (WAN) and eth0 (LAN).
- Start the VM, open console, check time, and enable the web interface to finish setup.
Networking devices, boot order, and host integration that just work
Getting WAN and LAN right on the host keeps your router predictable and easy to troubleshoot.
I map physical NICs to vmbr bridges on the Proxmox host. Example entries in /etc/network/interfaces are vmbr0 (management), vmbr1 (WAN → enp1s0f0), and vmbr2 (LAN → enp1s0f1).
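A sketch of the matching stanzas in /etc/network/interfaces, using the NIC names from that example (adjust to your hardware):

```
# /etc/network/interfaces on the Proxmox host
auto vmbr1
iface vmbr1 inet manual            # WAN bridge → uplink NIC
    bridge-ports enp1s0f0
    bridge-stp off
    bridge-fd 0

auto vmbr2
iface vmbr2 inet manual            # LAN bridge → internal NIC
    bridge-ports enp1s0f1
    bridge-stp off
    bridge-fd 0
```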
Ports, interfaces, and DHCP vs static
Inside OpenWRT set eth0 as LAN with a static IP (192.168.1.1/24). Set eth1 as WAN using DHCP. If you run a private DNS, set peerdns to 0 on WAN and specify your own servers.
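In /etc/config/network terms, a simplified sketch of that layout (OpenWRT’s defaults actually wrap LAN in a br-lan bridge device, and the DNS server here is a placeholder):

```
# /etc/config/network inside the OpenWRT VM (simplified)
config interface 'lan'
    option device 'eth0'
    option proto 'static'
    option ipaddr '192.168.1.1'
    option netmask '255.255.255.0'

config interface 'wan'
    option device 'eth1'
    option proto 'dhcp'
    option peerdns '0'             # ignore ISP-provided DNS
    list dns '9.9.9.9'             # your own resolver(s)
```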
Boot order and console management
If the VM stalls or tries PXE, edit Options → Boot Order and place the imported disk first. Uncheck ‘net’ so it won’t PXE.
Add a serial port and use the xterm.js console for reliable copy and paste. If a command lives in another browser tab or window, copy it and paste it into the console, then reload the session if the web UI lags.
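Adding the serial device is a one-liner; a sketch assuming VMID 101:

```
# Give the VM a serial socket, then pick the xterm.js console in the GUI
qm set 101 --serial0 socket
# Or attach straight from the host shell:
qm terminal 101
```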
- Map NICs to vmbr for clear WAN vs LAN separation.
- Confirm eth0/eth1 by link status — devices can swap names on some hardware.
- Ensure the Proxmox host provides DNS so opkg can fetch packages; then verify with ping and the web UI on the LAN IP (see the sketch below).
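A quick verification sketch from a LAN client (the LAN IP matches the static address set earlier):

```
# From a machine on the LAN bridge
ping -c 3 192.168.1.1                # router reachable on LAN
nslookup openwrt.org 192.168.1.1     # DNS resolution through the router
curl -I http://192.168.1.1/          # LuCI answers on the LAN IP
```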
Task | Action | Why |
---|---|---|
Bridge mapping | vmbr1=WAN, vmbr2=LAN | Separates uplink and internal traffic |
Boot order | Disk first, uncheck net | Avoids PXE and boot failures |
Console | Add serial + xterm.js | Easy copy, paste, and recovery |
Performance notes, PCIe passthrough realities, and next steps to keep routing
Measure first — then change. Before you chase PCIe passthrough, verify whether virtio NICs meet your needs for throughput and latency on the router VM.
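A simple way to do that is an iperf3 run through the virtio NICs; a sketch, with the server/client roles and the LAN IP as assumptions:

```
# On the OpenWRT VM:
opkg update && opkg install iperf3
iperf3 -s                            # run as the server

# On a LAN client:
iperf3 -c 192.168.1.1 -t 30          # 30-second TCP throughput test
```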
Many AMD hosts show an IOMMU warning like “AMD IOMMUv2 functionality not available.” That can block clean passthrough, so check dmesg and device grouping first.
Real tests show virtio-scsi writes near 1.0 GB/s versus ~1.4 GB/s on native ZFS. For typical WAN routing and small-file workloads, the difference rarely matters.
Keep snapshots and backups, place VM disks on LVM‑thin or ZFS per your recovery plan, and size the VM—4 vCPUs, host CPU type, modest RAM—based on services you enable.
Next steps: enable SQM for WAN, add VLANs on LAN, keep packages current, and monitor interfaces and boot order so the router stays reliable over time.