Installing XenDesktop is a pretty frustrating experience. I believe that’s why Citrix came out with VDI-in-a-Box to help those interested in a POC build a quick environment. I haven’t looked at the VDI-in-a-Box solution as I’m more interested in mimicking a production environment. I posted an earlier teaser in which I proposed using a provisioning server as opposed to a Desktop Studio provisioned VM on a hypervisor. This gives you the advantage of being able to create this lab in VMware Workstation on a CPU that doesn’t support running nested VMs inside a guest vSphere instance.
I approached this lab in a couple of different ways. I wanted what I believed was a best-of-breed deployment: XenDesktop as the broker and vSphere as the hypervisor. I was hoping to get away with building just one beefy Windows server to support this environment, but I was rudely reminded that you still need a minimum of 3 Windows servers to support this architecture, because XenDesktop requires a domain controller and neither vCenter nor XenDesktop Studio can be installed on a domain controller. Why not just install vCenter and XenDesktop Studio on the same virtual machine? Installing XenDesktop can be a chore in itself, and trying to change default ports that conflict with vCenter is not something I wanted to tackle as part of this lab. I also discovered that trimming this lab by one Windows server didn’t have a large impact on performance.
The big challenge with this approach was finding the right balance of resources dedicated to the infrastructure while leaving me enough resources to actually launch a VDI session and use it. What’s the point of creating a lab that can’t be used? My lab machine is a Dell XPS 15 laptop with 8GB of RAM and an i7 processor. You can get more details about my lab setup here.
So, I ended up with a total of 4 infrastructure machines, plus the nested VDI host:
- Domain Controller (1GB, 1 CPU)
- vCenter (1.5GB, 2 CPUs)
- XenDesktop Studio (1GB, 2 CPUs)
- ESXi (2GB, 2 CPUs)
- Nested Windows XP VDI host (1GB, 2 CPUs)
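For reference, these allocations map directly to per-VM settings in the VMware Workstation .vmx files. Here’s an illustrative fragment for the ESXi VM — the exact keys and values can vary by Workstation version and guest OS, so treat this as a sketch rather than a copy-paste config:

```
# Illustrative .vmx fragment for the ESXi VM above
memsize = "2048"       # 2GB of RAM
numvcpus = "2"         # 2 vCPUs
vhv.enable = "TRUE"    # expose hardware virtualization to the guest,
                       # needed for ESXi to run its own nested VMs
```

The `vhv.enable` flag is what lets the nested XP host boot inside the guest ESXi instance even though this whole stack is sitting on Workstation.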
I assigned a NAT’d IP to all of the infrastructure machines, including the production VM NIC on my lone ESXi server. This allows the guest running on my ESXi host to communicate with all the infrastructure machines and my XPS. My XPS served as my VDI client. Below is the network picture for this lab.
I don’t think I have to tell you that this lab bogged down my pretty beefy laptop. This is a situation where the general rule of virtualization resources comes into play – “Buy as much RAM as you can afford and then ask for more money for more RAM”. Even with the nested VDI host running, my CPU utilization didn’t really take a big hit. Memory, however, was just about maxed out when I had all of my infrastructure components running.
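The memory squeeze is easy to see with some back-of-the-envelope arithmetic over the allocations listed above. One assumption here: I’m guessing roughly 2GB of overhead for the Windows 7 host OS plus Workstation itself, which is an estimate, not a measured figure. Note that the nested XP host’s 1GB comes out of the ESXi VM’s 2GB, so it isn’t counted twice:

```python
# Back-of-the-envelope memory budget for the lab, in GB.
# host_overhead is a rough estimate, not a measured number.
workstation_vms = {
    "Domain Controller": 1.0,
    "vCenter": 1.5,
    "XenDesktop Studio": 1.0,
    "ESXi": 2.0,  # the nested XP VDI host's 1GB is carved out of this
}
host_overhead = 2.0   # estimated: Windows 7 host OS + VMware Workstation
physical_ram = 8.0

committed = sum(workstation_vms.values()) + host_overhead
headroom = physical_ram - committed
print(f"Committed: {committed} GB, headroom: {headroom} GB")
# Committed: 7.5 GB, headroom: 0.5 GB
```

Half a gigabyte of headroom on paper is effectively zero in practice once Windows caching and Workstation’s own overhead kick in, which matches what I saw.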
The bottom line is that this lab failed. Too many VMs for my configuration. I’m fairly certain this lab would have worked if I had 16GB of RAM, but 8GB is not enough. My next step is to try and run this lab on an ESXi host with 8GB of RAM. The two GB freed up from the overhead of the Windows 7 host may be enough. I also have the option of using the ESXi host as my VDI hypervisor.
Better luck next time.