Virtualize OpenWrt in Proxmox
Home Lab Enhancements
William Patterson  

Run OpenWrt in Proxmox

You want to virtualize OpenWRT in Proxmox to run a compact router VM that handles LAN and WAN traffic without extra hardware, and I’ll show a clean, repeatable path to get you there.

I’ll walk you through picking the correct x86_64 image, why you must gunzip before resizing or importing, and the practical difference between UEFI (OVMF) and SeaBIOS boots.

We avoid the common pitfall of importing a compressed .gz that looks fine but breaks UEFI boot. On your Proxmox host you’ll use wget, gzip -d, qemu-img resize, then qm importdisk: a small sequence that saves a lot of time.

Along the way you’ll map Proxmox bridges to OpenWrt’s eth0 (LAN) and eth1 (WAN), enable LuCI after first boot, and learn console tips so you can copy commands and configs without pain. Thanks to these steps, your router VM will behave predictably and you’ll know when PCIe passthrough is truly needed.

Key Takeaways

  • Choose the correct x86_64 image (UEFI vs SeaBIOS) before you start.
  • Always gunzip the image on the Proxmox host before resizing or importing.
  • Use qm importdisk then attach Unused Disk 0 and set boot order.
  • Map bridges for LAN and WAN so the router VM routes traffic.
  • Enable LuCI and confirm time and package manager access on first boot.

What you’ll need on your Proxmox host before you begin

I recommend one quick checklist: CPU architecture, firmware mode, storage backend, and download link. I’ll keep this short so you can verify each item and move on.

Supported hardware, firmware choices, and storage

Use Proxmox VE on an x86-64 machine. For future PCIe passthrough flexibility choose OVMF (UEFI); pick SeaBIOS if you want a simple BIOS path.

Enable AMD‑V/IOMMU or Intel VT‑d in BIOS and add kernel params on the Proxmox host (example for AMD: amd_iommu=on iommu=pt). Seeing “AMD IOMMUv2 functionality not available” is usually a warning, not a blocker.

Storage: local-lvm (LVM‑thin) is simple and fast for router disks; ZFS works too but has different performance tradeoffs.

Downloads — grab the correct image

  • Go to the stable release page → targets/x86/64.
  • UEFI: use generic-ext4-combined-efi.img.gz; BIOS: generic-ext4-combined.img.gz.
  • Right-click to copy the direct link, wget on the host, then gunzip before any resize.
| Option | When to pick | Notes |
| --- | --- | --- |
| OVMF (UEFI) | Need PCIe passthrough later | Enable IOMMU; more flexible for PCIe |
| SeaBIOS | Simple BIOS boot | Fewer firmware steps; good for quick tests |
| LVM‑thin | Default router disk | Easy import to local-lvm |
| ZFS | When you need snapshots | Mind pool and virtio-scsi tradeoffs |

Optional note: forum threads and comments that mention a specific link or author are useful context, but always copy the download link from the official release page.

Step-by-step: prepare and boot the UEFI image

This section gives a compact, practical sequence for preparing the UEFI image, creating the VM, and confirming a clean first boot.

Grab the exact UEFI image via the direct link and decompress it on the host:

  • wget the file: https://downloads.openwrt.org/releases/23.05.3/targets/x86/64/openwrt-23.05.3-x86-64-generic-ext4-combined-efi.img.gz
  • Then decompress: gzip -d openwrt-23.05.3-x86-64-generic-ext4-combined-efi.img.gz
  • Resize to your target capacity, e.g., qemu-img resize -f raw openwrt-23.05.3-x86-64-generic-ext4-combined-efi.img 2G
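Taken together, the three steps above form one short sequence on the Proxmox host (23.05.3 is the release used in this article; substitute the current stable version):

```shell
# Fetch, decompress, and grow the UEFI image (version from the article; adjust as needed)
wget https://downloads.openwrt.org/releases/23.05.3/targets/x86/64/openwrt-23.05.3-x86-64-generic-ext4-combined-efi.img.gz
gzip -d openwrt-23.05.3-x86-64-generic-ext4-combined-efi.img.gz
qemu-img resize -f raw openwrt-23.05.3-x86-64-generic-ext4-combined-efi.img 2G

# Optional sanity check: virtual size should now read 2 GiB
qemu-img info openwrt-23.05.3-x86-64-generic-ext4-combined-efi.img
```

The qemu-img info check at the end is optional but confirms the resize took effect before you import anything.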

Create a new VM with OVMF firmware. Do not add any drives or ISO media. If Proxmox auto-creates an EFI disk, delete it; this keeps the import clean. Do not start the VM.
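For a repeatable setup, the same VM can be created from the shell. The VM ID (101), name, and sizing below are examples, and net0/net1 are attached to the LAN and WAN bridges used later in this guide:

```shell
# Create an empty OVMF (UEFI) VM: no ISO, no disks (ID 101 and sizing are examples)
qm create 101 \
  --name openwrt \
  --ostype l26 \
  --machine q35 \
  --bios ovmf \
  --cores 2 --memory 512 \
  --net0 virtio,bridge=vmbr2 \
  --net1 virtio,bridge=vmbr1

# qm config 101 should show no efidisk0 line; if the GUI added one, remove it
```

Created this way, no EFI disk exists to delete, so the import stays clean.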

Import the prepared raw image into your storage and attach it as the VM disk:

  • Run: qm importdisk 101 openwrt-23.05.3-x86-64-generic-ext4-combined-efi.img local-lvm
  • Go to VM > Hardware, select Unused Disk 0, click Edit, then Add to attach it as a disk
| Step | Command or UI action | Why it matters |
| --- | --- | --- |
| Download & decompress | wget …generic-ext4-combined-efi.img.gz, then gzip -d | Prevents importing a .gz that won’t support UEFI boot |
| Resize | qemu-img resize -f raw … 2G | Gives the disk enough space for packages and logs |
| Import disk | qm importdisk 101 … local-lvm | Places the disk into storage so the VM can use it |
| Attach unused disk | VM > Hardware → Add Unused Disk 0 | Makes the imported disk visible to the VM for disk boot |
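If you prefer the CLI for the attach step, qm set handles both the attachment and the boot order. The volume name below is the typical result of importdisk into local-lvm for VM 101, but check qm config 101 for the exact name:

```shell
# Attach the imported volume as the VM's disk and make it the only boot device
qm set 101 --virtio0 local-lvm:vm-101-disk-0
qm set 101 --boot order=virtio0
```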

Start the VM. On first boot log in via console, verify time is correct (TLS and package fetch depend on it), then enable the web interface (LuCI) so you can manage via LAN. Make sure your VM has two bridged NICs—one for WAN and one for LAN—so eth1 is uplink and eth0 is local.
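A minimal first-boot check on the OpenWrt console could look like the following; note that LuCI ships in the official x86 release images, so the install step only applies to custom builds:

```shell
# Run inside the OpenWrt VM console
date                          # TLS and opkg both need a sane clock
opkg update                   # exercises DNS, routing, and mirror access in one go
opkg install luci             # only if your image lacks the web UI
/etc/init.d/uhttpd enable
/etc/init.d/uhttpd start      # LuCI then answers on http://192.168.1.1
```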

Alternative path: SeaBIOS install using the generic-ext4-combined image

If you prefer the classic BIOS path, the SeaBIOS route is simple and reliable for a lean router VM.

Download and prepare the non‑EFI image

Grab the non‑EFI file, generic-ext4-combined.img.gz, via the direct link and run wget on the host. Then gunzip the archive before any resize.

Resize the raw file to 8G with qemu-img resize -f raw openwrt-*.img 8G. This gives the disk room for packages and logs.

VM creation and disk tweaks

Create a new VM using SeaBIOS, with no ISO and no drives. Import the prepared image with qm importdisk VMID openwrt-*.img STORAGEID.

In VM > Hardware edit the Unused Disk 0, set bus to VirtIO, and enable Discard plus IO Thread. Then add the disk.
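The same tweaks can be applied from the shell, assuming the usual volume name produced by importdisk (verify with qm config VMID):

```shell
# Attach as a VirtIO disk with Discard and IO Thread enabled
qm set VMID --virtio0 STORAGEID:vm-VMID-disk-0,discard=on,iothread=1
```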

Boot order and final checks

Go to VM > Options > Boot Order, set virtio0 first, and uncheck all others. This avoids a disk boot failure.

  • Keep the same WAN/LAN bridge mapping so interfaces remain eth1 (WAN) and eth0 (LAN).
  • Start the VM, open console, check time, and enable the web interface to finish setup.

Networking devices, boot order, and host integration that just work

Getting WAN and LAN right on the host keeps your router predictable and easy to troubleshoot.

I map physical NICs to vmbr bridges on the Proxmox host. Example entries in /etc/network/interfaces are vmbr0 (management), vmbr1 (WAN → enp1s0f0), and vmbr2 (LAN → enp1s0f1).
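As a sketch, the matching /etc/network/interfaces stanzas look roughly like this; the NIC names come from the example above and will differ on your hardware:

```
auto vmbr1
iface vmbr1 inet manual
    bridge-ports enp1s0f0    # WAN uplink NIC
    bridge-stp off
    bridge-fd 0

auto vmbr2
iface vmbr2 inet manual
    bridge-ports enp1s0f1    # LAN NIC
    bridge-stp off
    bridge-fd 0
```

Run ifreload -a (or reboot) after editing, and leave vmbr0 untouched so you don’t cut off management access.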

Ports, interfaces, and DHCP vs static

Inside OpenWrt set eth0 as LAN with a static IP (192.168.1.1/24). Set eth1 as WAN using DHCP. If you run a private DNS, set peerdns to 0 on WAN and specify your own servers.
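In UCI terms, /etc/config/network inside the VM ends up roughly like this (OpenWrt 21.02 and later use option device, older releases use option ifname; the DNS address is a placeholder):

```
config interface 'lan'
    option device 'eth0'
    option proto 'static'
    option ipaddr '192.168.1.1'
    option netmask '255.255.255.0'

config interface 'wan'
    option device 'eth1'
    option proto 'dhcp'
    option peerdns '0'         # ignore DNS pushed by the upstream DHCP server
    list dns '192.168.1.10'    # placeholder private DNS server
```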

Boot order and console management

If the VM stalls or tries PXE, edit Options → Boot Order and place the imported disk first. Uncheck ‘net’ so it won’t PXE.

Add a serial port and use the xterm.js console for reliable copy/paste. If a command lives in another tab or window, copy it and paste it into the console; reload the page to refresh the session if the web UI lags.
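Adding that serial console from the CLI takes one command (VM ID 101 as elsewhere in this guide):

```shell
# Add a serial port and point the display at it so xterm.js attaches to it
qm set 101 --serial0 socket --vga serial0
```

If the serial console stays blank, OpenWrt may need console=ttyS0,115200 on the kernel line in its grub.cfg, though the official x86 images usually ship with it already.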

  • Map NICs to vmbr for clear WAN vs LAN separation.
  • Confirm eth0/eth1 by link status — devices can swap names on some hardware.
  • Ensure the Proxmox host provides DNS so opkg can fetch packages; then verify with ping and the web UI on the LAN IP.
| Task | Action | Why |
| --- | --- | --- |
| Bridge mapping | vmbr1 = WAN, vmbr2 = LAN | Separates uplink and internal traffic |
| Boot order | Disk first, uncheck net | Avoids PXE attempts and boot failures |
| Console | Add serial + xterm.js | Easy copy, paste, and recovery |

Performance notes, PCIe passthrough realities, and next steps to keep routing

Measure first, then change. Before you chase PCIe passthrough, verify whether virtio NICs meet your needs for throughput and latency on the router VM.

Many AMD hosts show an IOMMU warning like “AMD IOMMUv2 functionality not available.” That can block clean passthrough, so check dmesg and device grouping first.
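Two read-only checks answer both questions before you commit to passthrough:

```shell
# 1) Confirm the IOMMU actually initialized
dmesg | grep -i -e iommu -e vfio

# 2) List IOMMU groups; a NIC you want to pass through should sit in its own group
for d in /sys/kernel/iommu_groups/*/devices/*; do
  n=${d#*/iommu_groups/}; n=${n%%/*}
  printf 'IOMMU group %s: ' "$n"
  lspci -nns "${d##*/}"
done
```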

Real tests show virtio-scsi writes near 1.0 GB/s versus ~1.4 GB/s on native ZFS. For typical WAN routing and small-file workloads, the difference rarely matters.

Keep snapshots and backups, place VM disks on LVM‑thin or ZFS per your recovery plan, and size the VM (4 vCPUs, host CPU type, modest RAM) based on the services you enable.

Next steps: enable SQM for WAN, add VLANs on LAN, keep packages current, and monitor interfaces and boot order so the router stays reliable over time.

FAQ

What do I need on my Proxmox host before I begin?

You’ll need a modern x86_64 host with virtualization enabled in BIOS/UEFI, sufficient disk and RAM for the VM, and network bridges configured (vmbr) for WAN and LAN mapping. Install qemu-server tools and ensure the Proxmox host can resolve DNS so downloads and opkg work. If you plan PCIe passthrough, confirm IOMMU and the necessary vfio modules are enabled on the host.

Which firmware should I choose — OVMF (UEFI) or SeaBIOS?

Choose OVMF (UEFI) if you prefer the generic-ext4-combined-efi image and want UEFI features and secure boot flexibility. Choose SeaBIOS when using the non-efi generic-ext4-combined image or if you need legacy BIOS compatibility. OVMF often simplifies disk handling for modern images; SeaBIOS can be simpler for older setups.

Where do I find the right x86_64 OpenWRT image and how do I copy the link?

Download from the official OpenWrt downloads page and pick the x86_64 generic-ext4-combined (.img or .img.gz) variant. Copy the direct download URL from your browser, or fetch it with wget or curl on the Proxmox host; the direct image URL is all you need to get the disk file.

How do I prepare and resize the generic-ext4-combined-efi image?

If the image is gzipped, gunzip it on the host (do not use text editors). Use qemu-img to convert or resize the resulting img file: qemu-img resize image.img +SIZE. Confirm partitions inside with fdisk or parted and expand the filesystem after boot if needed. Always keep a copy before resizing.

What’s the correct way to create the VM for UEFI (OVMF)?

Create the VM with no drive attached initially, select OVMF firmware, set CPU and memory, and don’t start it. This prevents the VM from booting an empty disk. After creation, import the prepared disk image using qm importdisk into the VM’s storage and attach it as a VirtIO or scsi device depending on your performance needs.

How do I import the disk image and attach it to the VM?

Use qm importdisk to import the file into Proxmox storage. Then edit the VM hardware and add the imported disk (usually appears as Unused Disk) to virtio0 or scsi0. Configure cache and discard options for best performance and reliability.

What boot order settings avoid disk boot problems?

In Options > Boot Order set virtio0 (or the disk you attached) as the first device so the VM boots from that disk. For SeaBIOS installs with VirtIO disks, ensure virtio0 is first to prevent the VM from trying other devices or an absent ISO.

How do I boot and verify the first login and basic settings?

Start the VM, open the serial or noVNC console, and log in using the default root account. Check and set the system time, enable the LuCI web interface if desired, and configure a password. Confirm network interfaces show up (eth0/lan, eth1/wan) and that WAN can reach the internet for package updates.

What differs for a SeaBIOS install using the generic-ext4-combined image?

For SeaBIOS, use the non-efi generic-ext4-combined image, unpack it, and qemu-img resize if needed. Create the VM with SeaBIOS firmware, use a VirtIO disk with Discard and IOThread enabled for performance, and confirm the boot order points to the virtio disk so the VM boots the installed system.

How should I map physical NIC ports to VM bridges for WAN and LAN?

Create Linux bridges (vmbr) on the Proxmox host and bind each physical NIC to the appropriate bridge. In the VM hardware, add network devices that attach to those vmbr interfaces. Map eth0 to your LAN bridge and eth1 to the WAN bridge — this keeps traffic separation simple and mirrors a physical router layout.

Should I use DHCP or static IPs on OpenWrt interfaces?

Use DHCP on the WAN interface for automatic upstream addressing in most environments. Use a static IP on the LAN interface for predictable management and DHCP server control. Configure peerdns on the WAN if you want to use upstream DNS, or set custom DNS servers on the LAN for clients.

What tips help with console access, copy/paste, and session refresh?

Use the Proxmox web console (noVNC or xterm.js) for quick access — right-click and paste may vary by browser, so use keyboard shortcuts or the Proxmox clipboard tool. If the console stalls, reload or open another tab/window and reconnect; sometimes a page refresh restores the session. For long commands, keep a local copy to re-send if the session drops.

How do I ensure opkg and package bootstrap work inside the VM?

Ensure the VM has DNS and outbound network access. On the host, confirm DNS resolution works and that the VM’s gateway routes traffic correctly. Update package lists with opkg update and set the correct /etc/resolv.conf if necessary. If mirrors are blocked, try alternative OpenWrt mirror URLs or download packages manually.

What are performance considerations and PCIe passthrough realities?

For throughput, use VirtIO drivers, enable cache=writeback where appropriate, and assign multiple vCPUs if needed. PCIe passthrough (VFIO) can provide near-native performance but requires host support, reserved IOMMU groups, and careful device assignment; the host may lose access to passed devices. Test pass-through on noncritical hardware first.

How do I handle unused disks and multiple storage devices for routing?

Keep unused disks detached until needed — mark them as Unused Disk in the VM hardware. For storage-heavy setups, attach additional disks as separate virtio devices and set a clear boot order. Use individual disks for logging, caching, or specialized packages to isolate I/O and simplify backups.

Can I use GitHub to save configs or download images — do I need to sign in?

You can copy image URLs or clone repositories from GitHub without signing in for public content. Signing in or using GitHub Desktop helps when you want to star, fork, or push changes. For downloading official images, direct HTTP/HTTPS links are usually sufficient and can be fetched by wget or curl on the Proxmox host.

Any quick troubleshooting steps if the VM won’t boot from the disk?

Check boot order first — ensure the correct disk is first. Verify the disk was imported and attached to the VM (not left as Unused Disk). Confirm firmware choice matches the image type (OVMF for efi images, SeaBIOS for legacy images). If the VM still fails, attach the disk to another VM or mount the image on the host to inspect partitions and filesystems.