Dual boot with Proxmox or not


I have purchased a new small form factor PC to run as a homelab but also as a small gaming PC for entertainment.
I’m torn about how I’m going to do the things I want to do.
A. I want to play some simple games on it; the machine is more than capable of running them.
B. This machine should also be my development rig, to develop software and spin up VMs for Kubernetes testing etc…

One idea is to set up a dual boot so I can get the full benefit of Windows and the integrated graphics, partitioning the NVMe into 1TB for Windows and 1TB for Proxmox.

The other idea is to keep it simple: install only Proxmox and spin up a Windows VM to play the games in directly. But I hear and read about so many problems with this that I’m holding back on the idea.
On the other hand, I really like this idea more, since I can keep the machine running Proxmox and have Windows and the other VMs for development and Kubernetes running at the same time, without having to reboot all the time.

It’s a Minisforum Elite HX90 barebone with an AMD Ryzen™ 9 5900HX.
I have put in 64GB RAM and a 2TB Samsung 980 Pro NVMe drive (7000MB/s read / 5000MB/s write).
It has integrated Radeon (Vega 8) graphics on board.

So the machine has the power, no doubt about that, but I’m unsure about the dual boot, yes or no, and if no, what kind of rabbit hole I’d be going down with Windows.
I don’t mind tinkering a little bit with things, and I have pretty good experience with Ubuntu.
But Proxmox is a first for me.

Anybody who can share some personal experience with a similar setup/concept?
Why did you go for dual boot, and how is it going? Or why did you go for Proxmox with a Windows VM, and how does it game?


Welcome to the forum!

This is just my opinion, but dual boot sucks a**. It’s always terrible to reboot when you want to play, then, when taking a break from gaming, reboot again so you can access your browser with your logged-in accounts and saved tabs. I used to do it a long time ago, until I went full-time Linux about 3 or 4 years ago. It was always such a pain that I ended up staying booted into Windows most of the time.

What I like to tell people is that, if running a VM and doing PCIe passthrough of a GPU is not an option, then the next best thing is to own 2 computers. The Linux box doesn’t have to be a beast; it can just be an old laptop (ThinkPads and Dell Latitudes play nicely with Linux). Something like a 3rd gen (Ivy Bridge) dual-core i5 with 8 GB of DDR3 RAM and integrated graphics runs Fedora really smoothly (at least that was my experience with KDE Plasma and then with sway). Actually, I’d recommend not getting a laptop with a dGPU if you plan to run Linux on it. Those things can be as cheap as $90, rarely going over $170. If you upgrade a laptop from 8 GB to 16 GB, it can become quite a decent dev machine and can run a bunch of Linux containers under LXD, or OCI containers (Docker / podman). If your IDE and program take too much memory, I guess you’d need 32 GB; I’ve seen scenarios where 16 GB runs out very fast if you don’t use an external build server.

And speaking of build servers: instead of a laptop, you can just buy an old desktop that is going to be way more powerful, and run it headless with Proxmox on it. LXC is still what I’d recommend though, because it saves you a lot of CPU and can share the RAM. But if your workflow involves OCI containers, I guess a VM (or 5, in a cluster) would suffice. The downside is that your dev workstation will be running Windows, while the build server and the dev environment for OCI containers will be separate.

Unfortunately, I don’t think the Minisforum can do PCIe passthrough. If you had a desktop motherboard you’d probably have a better chance, but even those have issues with IOMMU groups. You could take a peek at the Arch wiki for the lspci script that shows whether you can pass through the GPU (but I kinda doubt it). I heard some people did PCIe passthrough with the integrated graphics of the Ryzen 5700G, but I cannot vouch for that; I do know that before the Zen APUs, GPU passthrough was impossible with integrated graphics. Maybe, if you’re lucky, you could pass through an M.2 port and use that with a dGPU, but that is going to be a PITA anyway.
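For reference, that script looks roughly like this. It’s adapted from the Arch wiki’s “PCI passthrough via OVMF” page; the function wrapper and the lspci fallback are my additions, so treat it as a sketch:

```shell
#!/usr/bin/env bash
# Print each IOMMU group and the devices inside it. Boot a live distro
# with the IOMMU enabled in the UEFI first (on AMD, `amd_iommu=on` on
# the kernel command line helps).
list_iommu_groups() {
    local base="${1:-/sys/kernel/iommu_groups}"
    shopt -s nullglob              # no groups -> the loop simply runs zero times
    for group in "$base"/*; do
        echo "IOMMU Group ${group##*/}:"
        for dev in "$group"/devices/*; do
            # lspci -nn -s shows the device name plus [vendor:device] IDs;
            # fall back to the raw PCI address if lspci isn't installed
            lspci -nns "${dev##*/}" 2>/dev/null || echo "  ${dev##*/}"
        done
    done
}

list_iommu_groups
```

If nothing prints at all, the IOMMU is off or unsupported; for passthrough you want the GPU (and its HDMI audio function) isolated in a group of its own.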

In any case, I would suggest you use this machine for gaming and run only Windows on it, and if there is no alternative, run something like Fedora in a VM and use that for OCI containers and maybe for running gitlab-runners or a CI/CD environment with Jenkins. Well, I guess you could run Debian or Ubuntu if you’re more experienced with those, but I’ve found Fedora works better for developers, IMO.

If you didn’t already have the 64GB of RAM, I would have suggested doing a 2nd build, slapping Linux on it, and using that as your daily machine for your dev workstation and OCI container needs.

Lastly, being a Ryzen and AMD GPU build, you could try Fedora or Pop!_OS (or the latest Ubuntu interim release; what’s it at now, 21.10 I think). I would suggest Pop! if you are going for gaming and development; I think they just released the new version of Pop!_OS based on Ubuntu 21.10. Many games are available through Proton, and you could get your hands dirty with WINE if no Lutris script is available. If it works for you, maybe you’ll be able to ditch Windows entirely.

If you really need VMs for some reason you can just install virt-manager, but I would recommend installing LXD and running OCI containers inside Linux containers. That way you can also use some LXCs for other services you might find useful, like Jenkins, or maybe a GitLab / Gitea server if you don’t want to use GitHub.
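A rough sketch of that setup (my example, not anyone’s running config: the container name `dev`, the Ubuntu image, and the package choice are arbitrary; `security.nesting=true` is the key setting that lets Docker run inside an LXD container). The commands are echoed by default so nothing runs by accident; flip DRY_RUN to 0 on a host that actually has LXD:

```shell
#!/usr/bin/env bash
# DRY_RUN=1 (default) prints each command instead of executing it,
# so this sketch is safe to paste and review first.
run() {
    if [ "${DRY_RUN:-1}" = 1 ]; then echo "$*"; else "$@"; fi
}

# Nesting-enabled container: allows running Docker/podman inside it
run lxc launch ubuntu:22.04 dev -c security.nesting=true
run lxc exec dev -- apt-get update
run lxc exec dev -- apt-get install -y docker.io
# Quick smoke test from inside the container
run lxc exec dev -- docker run --rm hello-world
```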

Hello @Biky

Thanks for your feedback.
I’m also not the biggest fan of dual boot, for the exact same reason. I ran a dual boot many years ago with WinXP and Ubuntu. It works, but it’s just a hassle to always reboot.
And back in those days gaming on Linux was basically impossible; at least, nothing like it is today with Steam and the whole plethora of tools that work on Linux now.

That’s a bit of a bummer, as I was really hoping I could use that machine for IOMMU. I didn’t read up much beforehand on the limitations of the board/device in that area, so that’s coming back to bite me now, hahaha.

I already have the full machine here with 64GB RAM, and I can’t return it anymore. I had ordered all the parts separately due to the shortages everywhere and received all the bits and pieces over the past 3 months, so the return window passed long ago.

Then I think I’d better keep the device as a gaming machine only, with Windows 11.

Now I still need a development machine, but if you say it can be a low-end machine, could it also be a Raspberry Pi?
Because I still have a CM4 with 8GB RAM on a nice carrier board from Oratek (the TOFU), and it also has an NVMe connected. I don’t need insanely large clusters or anything; I just want to simulate a 3-to-5-node cluster running k3s (edge) to see how everything works: playing with CI/CD and Argo CD, learning to deploy Rancher, learning to deploy applications like WordPress on Kubernetes, etc… stuff like that.
There is no intention of running anything in production. It’s just a development rig.

Hence my initial idea to buy 1 nice device and use it for dual purpose.


1 Like

Yeah, IOMMU groups are essential for GPU passthrough. But you can still live-boot any Linux distro and give it a spin to see whether it has well-split IOMMU groups. Might be worth a shot. But again, chances are very slim with custom motherboards, and even more so when mobile chips are involved.

Still, you’ve got a pretty good beast if you run W11 and a Linux VM on Hyper-V (or VirtualBox). I wouldn’t recommend WSL; I tried it, and it’s not really there yet.

I’m using an RPi 4 with 8 GB of RAM and a USB enclosure with an NVMe SSD that holds my root partition. The Pi is pretty snappy, but I would not recommend trying to do that kind of stuff on it unless you have a ton of patience. The Pi can run quite a few LXD containers, and you can probably run k3s in 5 LXCs on it; it should work for that purpose. But if you plan to run build automation tools like Jenkins… well, it will run, but compile times will not be great. Because LibreOffice is not in the main repo of my distro, I compile LO manually, and it takes 2 to 3 days to finish. This is just to give you some perspective on build times; LibreOffice is a big suite, and smaller programs like the suckless tools would probably compile faster. Still, the Pi can’t really handle that kind of load, especially if you throw a lot of compile tasks at it. And mine has a thick radiator case and is OC’ed to 2GHz, so that paints an even worse picture for the Pi.

A cheap 2nd-hand laptop or PC can do a lot more than the Pi. Granted, it is going to use more power, but it should still be fine.

If I were in your situation, I would run Linux on the Minisforum nonetheless and see how games perform (or whether they even work); and if that wasn’t an option, slap Windows on it, run a Linux VM (or multiple VMs), and deploy a virtualized k3s and whatever else. The Ryzen 9 5900HX is more than capable of that.

I have Windows 11 on the machine now and I’m installing Steam; it’s taking a while… games these days are all 10+GB each, lol.
But my god, this thing runs extremely fast. It responds instantly; I’ve never experienced such performance in Windows, so that is definitely a very good start.
I also connected my Quest 2 just to check whether the GPU is supported, and it is, so PCVR is now finally possible as well.

I’m pretty familiar with WSL2 from my Lenovo laptop. I use it a lot for development with Node.js and Next.js, and it actually works pretty well.
I also tried spinning up a k3s cluster with k3d and Rancher on that laptop some time ago, but that didn’t work out for me. I think WSL2 is indeed not the perfect fit for that.

I haven’t tried that Linux VM route before. Is that something new in Windows?
Because that might solve my problem too.
Any link or documentation so I can read up on it?
I simply need 3 to 5 VMs running Ubuntu, and that’s it. Just for playing around with k3s and Rancher.
I don’t like VirtualBox, to be honest. That’s why I went for WSL2 on my laptop: it integrates with my Win10, so I can easily move between both without the hassle of running things in a separate VM.
My Ryzen 9 5900HX with 64GB RAM is extreme overkill for this, but I maxed out the RAM on purpose, just to be sure I can throw anything at it and it will handle it fine.

By the way, I also found a project for Proxmox on the RPi, but I’m not sure how good the performance will be.
It’s called Pimox.

I can give that a try over the weekend, as it actually looks very good; maybe just for some small things to run at home.

1 Like

Nothing new, and I don’t have any specific documentation. If you have Windows 11 Pro, you can just enable Hyper-V in “Turn Windows features on or off” (or Win+R → OptionalFeatures.exe). If you are running W11 Home, use Oracle VirtualBox. I can give you some hints on how to set up a bridged network and basically make the VMs you create talk to each other, so you can run k3s across multiple VMs. But I’d suggest making one giant VM, running LXD on it, and using k3s inside Linux containers; that would save a lot of memory and especially CPU (no emulating multiple sets of virtual hardware). I guess you could run Proxmox in a VM just for LXC if you want an easy GUI, but I’d still suggest a GUI-less distro running LXD instead; it saves some resources.

And remember to enable SVM (AMD-V, AMD’s equivalent of Intel VT-x) in the UEFI.

WSL2 works for some things, but it’s nowhere near as polished as LXD / LXC is. As for virtualization, if it’s available, I always recommend Hyper-V over other hypervisor software on Windows.

Hah! I didn’t know about this. Again, if you want to run a k3s environment on a Pi, you can do it in LXD, or if you try Pimox, in LXC (Proxmox just uses the “classic” lxc-* commands; LXD is more advanced IMO. I especially hate the fact that Proxmox cannot run a Void LXC, while on LXD the images are available straight from Canonical / linuxcontainers.org, which is a massive plus for me).

LXD / LXC should work flawlessly on the Pi for a k3s cluster, likely with more than just 5 nodes. You could probably make 3 master nodes and 5 worker nodes inside LXCs and still have resources to spare for your software running on your workers in OCI containers (OCI is just the official generic name for containers like Docker, which includes podman and the containers running under k8s, k3s, k0s, microk8s et al.). So you can run whatever deploy scripts on your Windows workstation, then use Rancher on the Pi to deploy them.
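To make the 3-master / 5-worker idea concrete, here’s a hypothetical sketch. The profile name, image, and node names are my own inventions, and k3s inside LXD usually needs more tweaking than this (privileged/nesting flags at minimum, often some `raw.lxc` apparmor and cgroup settings too), so this only prints a command checklist rather than executing anything:

```shell
#!/usr/bin/env bash
# Print (not run) the LXD commands for a 3-master / 5-worker k3s lab.
k3s_nodes() {
    for i in 1 2 3; do echo "k3s-master-$i"; done
    for i in 1 2 3 4 5; do echo "k3s-worker-$i"; done
}

# Profile with the settings k3s containers commonly need
cat <<'EOF'
lxc profile create k3s
lxc profile set k3s security.privileged true
lxc profile set k3s security.nesting true
EOF

for node in $(k3s_nodes); do
    echo "lxc launch ubuntu:22.04 $node -p default -p k3s"
done
```

From there, the usual k3s installer (`curl -sfL https://get.k3s.io | sh -`) goes inside each node, with the server/agent roles split via its environment variables.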

I’m not sure if I’m joining the conversation too late, but one idea that worked well for me at one point was to turn my gaming PC into a console. I get all my work done on my Linux PCs, but I do have a Windows PC hooked up to my TV; I put it in a small case that can fit a GPU, and I made Steam start up in Big Picture mode with a Steam Controller attached. You barely get to see the Windows interface, and it’s dedicated to gaming. Ultimately I’m more of a console gamer anyway, but I do power the gaming PC on every now and then.

As was already mentioned here, I’m not a big fan of dual booting. It’s okay, but I don’t like the context switching. I often leave my Linux PCs logged in for a very long time, with all my projects open. It’s a pain to shut everything down, boot into another OS, then reboot and open up all the same apps and set up my workflow again. Not everyone uses a workflow-centric style, though.

Another thing I did in the past, which I should probably dust off and see if it still works today, is a gaming container via Docker. This container included the Linux version of Steam, as well as the Windows version of Steam via CrossOver. It worked out so well that I could run Skyrim on integrated graphics through the container (this was pre-Special Edition, though). I loaded the container up with all my Windows and Linux Steam games, and it was great. The difficult part, on my Nvidia PC, was mapping the host’s GPU into the container via symlinks. A major pain in the behind, but once I got it working it was great. That was quite a long time ago, however; it may not work as well now as it did then, so I mention it here just to stir curiosity and see if anyone wants to try it. The beauty of it was that I had it all automated via a Dockerfile, so if anything happened to it I wouldn’t have to do the work manually ever again. I could recreate the container in minutes, and then all I’d have to do is redownload my games.


That would be cool to try, but I pretty much decided that just having a separate Windows PC for gaming was easiest, so when I want to relax and play, I can do it w/o fussing about. :smiley_cat: :truck: :small_airplane:

1 Like

I also prefer having separate boxes and treating a Windows PC like a console. But the only PC I currently own, besides my work laptop, is a Raspberry Pi (which I’m using to write this message, do my daily web browsing and email reading, and sometimes edit a spreadsheet).

I might do a container passthrough, but instead of an OCI container I’ll most likely use LXC, because I prefer having a full Linux environment. But I still tend towards having a dedicated PC for that. Sometimes…

1 Like