How to install ESXi nested inside ProxMox VE
So, one of the tasks I had to complete this past week was installing an ESXi hypervisor in a virtual machine on top of a Proxmox VE 4 infrastructure. This post documents the process: since I wasn’t able to find a complete guide, I wrote one : )
Why ESXi inside of Proxmox?
Test? Laboratory? Proof of concept? The reasons behind this setup may vary, but whatever yours is, avoid it in production. In my tests the performance was comparable, but you will still run into plenty of headaches and lose many of the benefits a vSphere environment has to offer. But first things first:
Requirements
The first thing to look at is the requirements. You will need:
- A Proxmox node capable of Hardware Assisted Virtualization. (Intel: VT-x ; AMD: AMD-V)
- QEMU >= 2.3.0
- Kernel >= 3.19.0-21
- At least 4GB of RAM for Proof of Concept; 16GB of RAM to add vCenter Server Appliance and another ESXi host.
- Patience and this guide : )
During this process I used an Intel CPU, Proxmox VE 4.2, pve-qemu-kvm 2.4_14, kernel 4.4.6-1-pve and ESXi 6.0.
Although you shouldn’t encounter any problems with a slightly different environment or a different version of ESXi, it is always best to use the latest software.
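A quick sanity check of these requirements can be run from a shell on the node; pveversion is the standard Proxmox CLI for version reporting, and the vmx/svm flags in /proc/cpuinfo reveal hardware-assisted virtualization (the grep filters below are just illustrations):

```shell
# Proxmox kernel and QEMU versions (pveversion exists only on a PVE node)
command -v pveversion >/dev/null && pveversion -v | grep -Ei 'kernel|qemu'

# Count CPU flag lines advertising hardware virtualization:
# vmx = Intel VT-x, svm = AMD-V; 0 means no HW-assisted virtualization
grep -cE 'vmx|svm' /proc/cpuinfo || true
```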
Enable nested KVM on host
The first thing you need to do is SSH into the node you plan to install ESXi on. Once in, determine the processor:
model name : Intel(R) Xeon(R) CPU E3-1245 v5 @ 3.50GHz
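A line like the one above can be pulled out of /proc/cpuinfo with grep (-m1 stops at the first of the per-core duplicates):

```shell
# Print the first "model name" line from /proc/cpuinfo
grep -m1 'model name' /proc/cpuinfo || echo "no model name line found"
```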
In this case I had an Intel CPU; if you have AMD it is not a problem, just follow the appropriate tab:
Using your favourite editor, add the following content to a file under /etc/modprobe.d/ (e.g. /etc/modprobe.d/kvm-intel.conf):
options kvm ignore_msrs=y
options kvm-intel nested=Y ept=Y
Then reload the modules:
# modprobe -r kvm-intel kvm; modprobe -a kvm kvm-intel
If this doesn’t work for you, you will have to restart the node.
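Before going further, you can verify that nesting actually took effect by reading the module parameter back from sysfs; it should print Y after the reload (or the reboot):

```shell
# Y means the kvm_intel module was loaded with nested support enabled
cat /sys/module/kvm_intel/parameters/nested 2>/dev/null \
  || echo "kvm_intel not loaded"
```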
Using your favourite editor, add the following content to a file under /etc/modprobe.d/ (e.g. /etc/modprobe.d/kvm-amd.conf). Note that ept is an Intel-only option; on AMD the nested parameter is numeric:
options kvm ignore_msrs=y
options kvm-amd nested=1
Then reload the modules:
# modprobe -r kvm-amd kvm; modprobe -a kvm kvm-amd
If this doesn’t work for you, you will have to restart the node.
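As on Intel, you can read the parameter back from sysfs to confirm; note that kvm_amd reports a numeric value, so expect 1 instead of Y:

```shell
# 1 means the kvm_amd module was loaded with nested support enabled
cat /sys/module/kvm_amd/parameters/nested 2>/dev/null \
  || echo "kvm_amd not loaded"
```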
Creating the Virtual Machine
This step is quite straightforward: create the new virtual machine as you normally would, but pay attention to three options:
- OS Type must be "Other OS Types".
- CPU Type must be "host".
- Network Type must be "VMware vmxnet3".
Now, take note of the VM ID you chose during creation. SSH into the node hosting the newly created virtual machine and edit the file /etc/pve/qemu-server/YOURVMID.conf (of course, replace YOURVMID with the ID of the virtual machine you created). Add at the end of the file:
args: -machine vmport=off
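For reference, a hypothetical /etc/pve/qemu-server/100.conf might then look something like this (the VM ID 100, the local-lvm storage, the ISO name and the MAC address are all made-up examples; only the highlighted options and the args: line matter):

```
cores: 4
cpu: host
ide2: local:iso/VMware-ESXi-6.0.iso,media=cdrom
memory: 4096
net0: vmxnet3=DE:AD:BE:EF:00:01,bridge=vmbr0
ostype: other
scsi0: local-lvm:vm-100-disk-1,size=40G
args: -machine vmport=off
```

The vmport=off argument turns off QEMU’s emulation of the VMware I/O port, which otherwise confuses the ESXi installer.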
This way you should be able to complete the ESXi installation without problems. However, once ESXi is installed, you will notice that you can’t start virtual machines inside the ESXi host.
Enabling Nested Virtualization inside ESXi
You won’t be able to start virtual machines inside your new ESXi host because, by default, you would have to add vmx.allowNested = "TRUE" to each and every virtual machine on that host. That doesn’t sit well with me, so let’s enable it globally instead. The first thing you need to do is enable SSH on the ESXi host.
Enabling SSH isn’t a big deal. First access the host’s console, then press F2 and enter your password. From the menu, select Troubleshooting Options and press Enter, then enable SSH.
Now SSH into the ESXi host and edit the file /etc/vmware/config (nano and vi are available), appending the following line: vmx.allowNested = "TRUE". Reboot the host and voilà! You should now be able to spawn machines correctly.
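If you prefer doing the edit non-interactively from the same SSH session, appending the line with echo works as well. The sketch below guards the append and keeps a backup; the reboot is left commented out so nothing happens by accident:

```shell
CONFIG="${CONFIG:-/etc/vmware/config}"
if [ -f "$CONFIG" ]; then
  cp "$CONFIG" "$CONFIG.bak"                      # keep a backup copy
  echo 'vmx.allowNested = "TRUE"' >> "$CONFIG"    # enable nesting globally
  grep allowNested "$CONFIG"                      # sanity check
else
  echo "no $CONFIG here; run this on the ESXi host"
fi
# reboot   # apply the change by rebooting the host
```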
Conclusion
It might not be the best for performance, but this setup is pretty good for proofs of concept and laboratories if you already have an existing environment. I was even able to install vCenter Server Appliance, though with a bit of pain. Thanks to Jitze Couperus for the amazing image at the top.
Thanks to: Matt’s blog and The Perils and Triumphs of Being a Geek blog, from which I gathered much of the information that is part of this guide.
Thank you for this write up. I am migrating my lab from ESXi to Proxmox but I still want to be able to run a vSphere environment for a couple of things. This is perfect. This also solved a problem I have had where a certain VM REQUIRES a floppy drive, which can’t be added via Proxmox but is supported by KVM. Finding the args: directive allowed me to attach a floppy via the native KVM arguments. I hadn’t found it documented anywhere else.
I was able to Storage vMotion my VMs to an NFS export on the Proxmox server and attach my vmdk files to my VMs. This worked for all of my VMs except for vCenter. My vCenter appliance has 10 drives. I can start the VM with a VirtIO SCSI adapter but the vCenter appliance doesn’t seem to have the correct drivers. If I use a default SCSI adapter then it errors out on having more than 8 devices. Can you share some pointers on how I might be able to get vCenter running?
Hello Jason, I’m glad that you’ve found my post useful : ) Unfortunately, I haven’t tested a vCenter installation with your configuration (8+ drives), therefore I don’t have any direct answer. I think you will probably have no luck with VirtIO drivers. What I suggest you do is consolidate three or more drives into one; that would be by far the easiest path. Let me know if you encounter other errors : )
Hi! Before finding your web site I was trying to install ESXi on Proxmox without luck because it did not find any drive. Now, after following your suggestions… it is the same… can you help me? Thanks
Hello Mario,
disk problems are usually associated with drivers/the ESXi version. Can you tell me what version of ESXi you are using? Also, as a quick tip, try changing the virtual bus interface from Proxmox (when you create a disk there is a Bus option) and set it to SCSI or SATA (VirtIO usually creates some problems).
If it doesn’t work and you need further help, don’t hesitate to contact me : )
Hi,
thanks for sharing this article.
I started following it, but I got this error message:
TASK ERROR: KVM virtualisation configured, but not available. Either disable in VM configuration or enable in BIOS.
Can you help? Thanks
Hello ardi, where are you getting this error? I suspect it is due to the fact that you don’t have hardware virtualization (VT-x/AMD-V) enabled in the BIOS/UEFI on the host machine.
Hi Mark, thanks for replying to my question.
I managed to install vSphere successfully after rebooting my Proxmox node.
Thanks anyway!
Thanks for this. I just setup Proxmox recently, and really enjoying it. Can you share how you installed vCenter on it?
Cheers!
Hello Zubin, I am sorry but I don’t have access to this setup anymore and can’t currently reproduce one. I can tell you that installing vCenter was the same as installing a vCenter appliance on a bare-metal cluster, except for the fact that you aren’t on bare metal.
Hello Zubin, I had no problems at all installing vCenter on Proxmox because, as of today, you just need a Windows Server (simply a KVM machine) with vCenter installed on it. Hope that helps.
Hello everyone,
my ESXi installed properly on PVE but I’m currently having network problems.
I’ve got an IP subnet that I’m using for all my VMs, and the gateway IP is on a different subnet.
The problem is I can’t ping my gateway IP from the ESXi machines. I’m using this gateway for all my other VMs and have never encountered any problems with it. So this is weird… Any ideas?