Not sure what you mean by virtualization strategies. It’s really up to the admins to decide how that goes. In big enough infrastructures, one may not even be running VMs, but running on bare metal instead, because some applications (like database servers) need a lot of dedicated performance.
There are a lot of things to consider, so I can’t really say what is “the best,” because there is no one-size-fits-all solution.
But I’ll take the second half of your comment as a starting point.
Again, it depends on what you’re running in production. I’ll talk about the “what” first.
If you are a small company creating and hosting an application like an inventory system for your own store, then you would likely do it with one small host running a VM or two on local storage. If you get bigger and having the application down would mean losing money (people can’t do anything while waiting for it to come back up), then you would run at least 3 hypervisors with a NAS or SAN behind them and set the VM up in HA mode. If one host dies, another takes over the VMs and keeps running them as if nothing happened; nobody notices anything (except the admins, who see that a host is down).
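To make that concrete: on Proxmox VE, for instance (other hypervisors have their equivalents), marking a VM as an HA resource is basically a one-liner. The VM ID here is made up:

```sh
# Proxmox VE sketch (hypothetical VM 100): register the VM as an HA
# resource so the cluster restarts it on another node if its host dies.
# Requires a cluster (ideally 3+ nodes) and shared storage for the disk.
ha-manager add vm:100 --state started

# See what the HA stack is currently tracking
ha-manager status
```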
If you get even bigger, the platform now needs to run in parallel and load balance, so you move your stack to Kubernetes and keep the DB separate. You buy 2 physical servers for the DB and run it on local, very fast storage, with the DBs load balanced. Then you get 12 other servers to deploy your software on: 3 lower-powered ones as the control plane, the rest as worker nodes. You now have a big infrastructure to manage just for the application itself, and you still need other servers for monitoring, management, deployment and so on, plus backup servers.
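As a rough sketch of the Kubernetes side of that (the app name and image are placeholders), the stateless part becomes a deployment with replicas and a service load balancing in front:

```sh
# Hypothetical stateless app: 3 replicas spread across the worker nodes,
# with a Service balancing traffic between them.
kubectl create deployment myapp --image=registry.example.lan/myapp:1.0 --replicas=3
kubectl expose deployment myapp --port=80 --target-port=8080 --type=LoadBalancer
```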
Those were just examples; now let’s talk theory a bit. VMs still have the advantage of live migration and high availability: they can move from one hypervisor to another without impacting the application or the OS running on them. Despite the resource efficiency of OCI and Linux containers, VMs still dominate in this department. And that is all assuming you want to run your software on Linux; if you need another OS, like Windows or a BSD, you need VMs.
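With plain KVM/libvirt, for example (assuming shared storage and a hypothetical second host), a live migration is a single command and the guest keeps running the whole time:

```sh
# Move the running VM "db-vm" to host2 without shutting it down.
# Assumes both hosts can reach the same shared storage for the VM disk.
virsh migrate --live --persistent db-vm qemu+ssh://host2.example.lan/system
```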
LXC is not that widely used in the industry, or is just a stepping stone toward the final goal. Docker’s popularity came from the idea that the same software can be packaged the same way for everyone, so that dev, UAT, prod, or other people’s infrastructures would all be running the code the exact same way. Docker and OCI containers in general became so popular that the big players adopted them, and they evolved to become largely self-manageable.
But one advantage OCI containers and LXC do have over VMs: if your software can be parallelized and isn’t affected by one instance dying, you can be more efficient with your resources. Instead of building an HA environment with a VM essentially “running” on 2 physical servers, you just create 2 separate containers on 2 different servers and load balance them. Web servers are the classic example of something that load balances well, which is why nginx containers are so popular and why there are so many custom nginx images.
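Here’s a minimal sketch of that idea with Docker (all names hypothetical, and in real life web1 and web2 would live on different physical hosts): two nginx instances with a third nginx balancing between them.

```sh
docker network create webnet
docker run -d --name web1 --network webnet nginx
docker run -d --name web2 --network webnet nginx

# Simple round-robin upstream config for the load balancer
cat > lb.conf <<'EOF'
upstream web {
    server web1:80;
    server web2:80;
}
server {
    listen 80;
    location / {
        proxy_pass http://web;
    }
}
EOF

docker run -d --name lb --network webnet -p 80:80 \
  -v "$PWD/lb.conf:/etc/nginx/conf.d/default.conf:ro" nginx
```

If web1 dies, the balancer keeps sending traffic to web2 and nobody notices, which is the whole point.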
Now let’s talk about the “how.” Classically, you would have admins create new VMs, deploy the software, and launch it. When that is a rare job, it’s fine, but when you have to deploy and delete a lot of instances depending on traffic, OCI container orchestration comes into play. You could in theory do it with LXC and even VMs, but OCI containers have the advantage of not running a full OS inside, so in theory they should be more resource-efficient, which adds up when you run thousands of them.
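In Kubernetes, that “deploy and delete depending on traffic” part can literally be one autoscaling rule (a sketch; assumes the hypothetical deployment from earlier and metrics-server installed in the cluster):

```sh
# Keep between 2 and 10 replicas of "myapp", scaling on average CPU usage
kubectl autoscale deployment myapp --min=2 --max=10 --cpu-percent=80
```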
As mentioned by KI7MT, there are tools like Terraform or MaaS (metal-as-a-service) used to provision servers and containers, then tools like Juju and Ansible (notice the “or” and the “and”) used to manage the application automatically. I’m not familiar with the high-level stuff and massive infrastructures, so I can’t speak for those scenarios, but you can use any tool you have at your disposal to provision and configure VMs and containers.
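The usual split looks something like this (file names are placeholders): one tool provisions the machines, the other configures what runs on them.

```sh
# Provision the servers/VMs described in your .tf files
terraform init
terraform apply

# Then configure the software on them; inventory.ini and site.yml are
# hypothetical. Ansible connects over SSH and applies the playbook.
ansible-playbook -i inventory.ini site.yml
```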
I am still “““stuck””” in the era of manual provisioning and configuration, because I don’t manage large infrastructures. I have automated some configuration tasks with shell scripting after manual provisioning, like automatically adding hosts to the monitoring server (Zabbix autodiscovery), but I haven’t used the “big boy toys” yet and it doesn’t look like I’ll get the chance any time soon.
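For what it’s worth, even that level of automation helps. My post-provision script is roughly this shape (the server name is hypothetical, paths are the Debian/Ubuntu ones; adjust the sed patterns if your distro’s default config differs):

```sh
#!/bin/sh
# Run once on a freshly provisioned host: install the Zabbix agent and
# point it at the monitoring server so autodiscovery can pick it up.
apt-get install -y zabbix-agent
sed -i 's/^Server=.*/Server=zabbix.example.lan/' /etc/zabbix/zabbix_agentd.conf
sed -i "s/^Hostname=.*/Hostname=$(hostname -f)/" /etc/zabbix/zabbix_agentd.conf
systemctl enable --now zabbix-agent
```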
Actually, this comment became a rambly mess, I’m not sure I even want to post it, but I spent too much time writing it, so whatever.