ESXi on Dell 6410

Just a quick post to confirm that I was able to successfully install and run ESXi 4.1 on my Dell 6410 with an i5 processor and 4GB of RAM.  Not much to talk about other than it works.  The install took all of 15 minutes and required no special adjustments.  ESXi seems to generally like Dell PC-class hardware.  So far I've been able to run ESXi on my Dell XPS 420 and now a 6410.  I actually use my XPS 420 as a production host on my home network.  It runs a Windows Home Server, a Windows 7 image from an older machine I had, and a Windows 7 machine I use as a jump box into my network.

Windows 8 Developer Preview on HP TouchSmart Laptop

Windows 8 on HP TouchSmart

I've been toying with the idea of installing the Windows 8 Developer Preview on my HP TouchSmart 1025dx laptop.  In theory this should be a decent preview of the Windows 8 tablets to come.  The TouchSmart is a multitouch- and pen-capable Windows 7 laptop and should make for an interesting experience.

Install

The target system has an AMD Turion X2 64-bit CPU, 4GB of RAM, and a 320GB HD.  It previously ran a 64-bit copy of Windows 7 Professional, and I wanted to install the 64-bit version of Windows 8.  I've installed Windows 8 in a VM before, and it took about 45 minutes.  I wanted to see if there was an upgrade option, so I launched the install from within Windows 7.

There was no option to upgrade from my current OS to the Developer Preview.  I had to perform a clean install and as a result lost all my data.  It took approximately 75 minutes from the launch of setup to the completion of the install.

Experience

The overall experience was pretty lacking.  Windows 8 is currently only a developer preview.  Windows 8 itself is an exciting new approach to desktop computing, but a little raw.  The larger problem is that the hardware and software weren't strongly integrated in this case.  None of the hardware shortcut buttons worked, and automatic screen re-orientation when rotating didn't work either.  All of the native Windows 8 applications launched, but I did have some issues with IE.  Once I launched it, no keystroke/touch combination would let me exit.  I had to put the machine to sleep and wake it up to switch between IE and the new Windows 8 Start screen.  I had other, similar experiences, such as artifacts in IE when scrolling a webpage.

These are all OEM software and driver related issues.  The 1025 is a couple of years old, so, as expected, all the basic system drivers were there for things like wireless and video.  Multitouch did work, and I was able to switch between touch and pen input at will.  I was also able to swipe the screen, which gave me a preview of what the full experience would be on hardware designed for Windows 8.  I look forward to it.

My XPS 15 VMware Workstation 8 Lab

I originally intended to use my old Dell XPS 420 as a lab machine.  I installed ESXi 4.1 on it, started loading it with VMs, and it turned into my production home server.  The VMs became too valuable to shut down to free resources for my home lab.  And to boot, it can only be expanded to 6GB of RAM, which is extremely light for an ESXi host.

So, I decided to put my shiny new Dell XPS 15 to work using VMware Workstation 7.0.  I've since migrated to 8.0 and couldn't be happier (well, I could use more RAM).  I'm able to run 3 ESXi hosts, vCenter, and an Openfiler SAN.

I've been extremely impressed with the labs I've been able to run with this setup.  I've been able to run nested VMs within the ESXi hosts with the storage located on the virtual Openfiler SAN.  This has allowed me to run advanced scenarios, including vMotion, FT, and HA labs.  For the HA labs I've had to scale down to 2 ESXi hosts to allow 4GB of RAM for each host.  I've found you need a minimum of 4GB of RAM per host to do HA.

This is actually my only complaint about running the lab on my production laptop: 16GB of RAM would allow me to do a great deal more within VMware Workstation.  Oversubscription obviously works very well in Workstation 8.  I have a total of 2TB of disk space thin provisioned on a laptop with a 750GB HD and only use 100GB of total space, which includes my production data.  The VMs total 8.5GB of RAM, not including the overhead of my host OS, and I've assigned a total of 9 vCPUs.  My total utilization rarely goes over 20%, even with some of the advanced labs.

Just like in production networks, my bottleneck is almost always memory.  Disk I/O can be an issue at times, but not as much as memory.  This is why I'm still leaning toward purchasing a desktop with a minimum of 16GB of RAM.  I'm debating because my laptop's performance has pleasantly surprised me.  I've taken to using it as my primary machine; I cannot complain about its performance for everyday computing, and it almost meets all my needs for a virtualization lab.

Holding both a CCIE and a VCP

A while back I bought the VMware vSphere training videos from TrainSignal.  I was surprised to see the instructor held both his CCIE and VCP.

These two certifications are probably among the most highly sought certifications in the IT industry.  I remember the horror stories of people trying to write the CCIE and their many failed attempts.  I even had delusions of grandeur, considering going for the certification myself.  I soon discovered I didn't have the love for networking needed to commit to it.  It may no longer be the guaranteed meal ticket it once was, but it's still a highly sought certification.

Over the past few years, I've noticed a huge uptick in the number of job postings looking for a VCP.  The VCP is a difficult certification to achieve: the candidate has to take an official VMware course, which can be a minimum $2,500 investment.  As a result, many self-taught people (such as me) are filtered out from being able to sit for the exam.

I did a quick search on TheLadders.com for the keywords "CCIE" and "VCP."  I found it interesting, and not at all surprising, that the hiring companies for both certifications were primarily IT service providers, or telcos in the case of the CCIE.  However, the CCIE still carries a bit of weight in the enterprise.  I saw several job posts from Fortune 500/non-IT companies, such as GMC and financial institutions, looking for candidates with the CCIE.  It may still be a couple of years before the same can be said of the VMware certifications.

I don't know how practical it is to hold both certifications.  I believe virtualization has grown into its own category/discipline within the IT industry.  VMware even offers a CCIE-like certification in the VCDX.  There are obviously some synergies between the disciplines and advantages to being certified in both.  I've studied for and obtained Cisco certifications in the past, and it takes a great deal of regular hands-on experience to maintain the CCNA and CCNP, let alone the CCIE and VCP.

I'm of the opinion that a combination of the VCP, CCNA/CCNP, and a storage certification would be more valuable and maintainable for an infrastructure engineer/architect than the combination of the CCIE and VCP.  It's my experience that, from a practical-knowledge perspective, an infrastructure architect doesn't need to be an expert in all three areas (virtualization, storage, and network) but rather an expert in one area and strong in the other two.  It would be a rare and undesirable situation where one person is called upon to be the SME for all three disciplines.

This topic has made me look at my bookshelf and think about dusting off my CCNP study guides.  I'm glad that thought has passed.

Bring your own device is a fail? (or is VDI a failed approach?)

I read a post on ZDNet about how virtualization is the answer to Bring Your Own Device (BYOD).  That got me thinking: are the technologies being billed as enablers of BYOD actually the technologies that employees want?  On VirtualizedGeek we've talked about many of these technologies, application virtualization and VDI among them, and of course this is primarily a virtualization blog.  But are these technologies actually the enabler of BYOD that many organizations are aiming for?

These technologies are deployed to keep corporate data within the boundaries of the enterprise.  They all help isolate corporate data from the end user's consumer environment and let the organization deliver existing applications without making major upgrades beyond the pace the organization is comfortable with.

One advantage is that if users want to bring an iPad to work, they can.  The disadvantage: they bring an iPad to work and get a Windows interface.  If these users wanted a Windows interface, they would have bought Windows devices, and the Windows interface doesn't work well on an iPad.  It's not just an issue of iPad vs. Windows; it's also an issue of Windows XP vs. Windows 7.  Users invest in technologies so that they can have the latest and greatest experience.  A friend of mine was frustrated because she brought her MacBook Pro to work and couldn't use any of its native interface and features.  She experienced many of the same interface issues that drove her to buy a MacBook in the first place.  This is why consumers buy Windows 7 PCs.

This is one of the main challenges with Windows XP.  There has been an explosion of advanced applications and services tied to and introduced over the long life of XP, which has made migrating an entire enterprise to Windows 7 extremely difficult (a good reason for VDI).  In the meantime, the consumer market has marched along with Vista, Windows 7, iOS, Mac OS X 10.7, and 14 new versions of Google Chrome.  There's a huge gap between where end users would like their experience to be and what BYOD technologies deliver today.

Users ultimately want to be productive in the manner in which they are most productive.  What's the answer?  One solution is the "Cloud."  Organizations need to start building their applications and data stores without a specific endpoint in mind.  This will be a difficult shift.  Office 365 and Google Apps are good examples of how software and service providers are starting to offer platform-independent alternatives to legacy applications that are viable options for BYOD.  But there is still a lot of work to be done.  I believe Google has a good notion of where this needs to go with its Chromebook platform, but the execution and ecosystem are not where they need to be at this point.  Microsoft has taken a much more enterprise-friendly approach with Office 365.  It's a familiar interface with features geared more toward BYOD than the previously mentioned technologies.

If you are thinking of starting a BYOD program, or have implemented one, what technologies have you considered, and how successful has the program been from a popularity and support perspective?

Running Nested Windows 8 inside Hyper-V in VMware Workstation 8

With all the exciting news about vSphere 5.0 and the Windows 8 beta being released over the past couple of weeks, one product announcement may have gone unnoticed: VMware Workstation 8 was released.  For us virtualization geeks this is a big deal.

Related post – Physical vs. Virtual lab 

One of the nice surprises in the Workstation 8.0 release is the addition of virtualized Intel VT-x and AMD-V/RVI.  What does this mean?  Well, now you can run a 64-bit nested VM within a virtual instance of vSphere, and you can run other bare-metal hypervisors such as Hyper-V.

I upgraded an existing Windows Server 2008 R2 VM from Workstation 7 hardware to hardware version 8, enabled "Virtualize Intel VT-x/EPT or AMD-V/RVI," and was able to successfully add the Hyper-V role to the server.  To change the hardware version, right-click the VM, select "Manage," and then "Change Hardware Compatibility."

The big test was to try to run a nested VM within the Hyper-V host.  I was disappointed that it just didn't work.  Two virtual drivers needed to run Hyper-V services would not start: Virtual Machine Bus and Virtual Infrastructure Driver both indicated a problem in the device manager of the Hyper-V host.  This made me turn to "The Google," where I found an extremely helpful blog post here.

Of all the advice on that page, the one thing that made the two drivers work was adding the following line to the Hyper-V VM's configuration (.vmx) file:

hypervisor.cpuid.v0 = "FALSE"

Once I added this setting, the error code for the two drivers disappeared, and I was then able to boot my nested virtual machine.  I actually got extremely ambitious and decided to install 64-bit Windows 8 as the nested VM.
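As an aside, if you'd rather script the .vmx edit than make it by hand, here's a minimal sketch of the idea in Python.  The file path is made up, and vhv.enable is my assumption for the key the "Virtualize Intel VT-x/EPT or AMD-V/RVI" checkbox writes; verify against your own .vmx, back it up first, and only edit it while the VM is powered off.

# Minimal sketch: add nested-virtualization settings to a powered-off VM's .vmx.
# The path is hypothetical, and vhv.enable is an assumed key name; check your file.
from pathlib import Path

vmx = Path(r"C:\VMs\HyperV-Host\HyperV-Host.vmx")  # hypothetical path

settings = {
    "vhv.enable": "TRUE",            # expose VT-x/EPT to the guest (assumed key)
    "hypervisor.cpuid.v0": "FALSE",  # hide the hypervisor so Hyper-V's drivers start
}

lines = vmx.read_text().splitlines()
# Drop any existing copies of these keys, then append the desired values.
lines = [l for l in lines if l.split("=")[0].strip() not in settings]
lines += [f'{key} = "{value}"' for key, value in settings.items()]
vmx.write_text("\n".join(lines) + "\n")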

I had given the Hyper-V host a lot of resources: 4GB of RAM and 2 CPUs.  The nested Windows 8 machine had 1GB of RAM and 1 CPU.  It took about an hour to install the Windows 8 VM, and it was one of the slowest nested machines I've run to date, but it worked.  This is an incredible leap in desktop virtualization technology and will make future labs extremely flexible.  To get an idea of what typical nested performance is like, take a look at my post on vSphere performance inside of Workstation 7.  Time to buy more RAM 🙂

Visit my YouTube Channel for more Virtualization and Tech Talk

What Makes a Virtualized Environment a Cloud?

It’s all about the Cloud Manager.

According to NIST, the characteristics of a cloud include rapid elasticity, broad network access, measured service (pay as you go), self-provisioning, and resource pooling.  All of the cloud deployment models can meet these 5 characteristics.  But what makes a virtualized environment a "cloud"?

This is a fairly common question.  After all, you could slap together a hypervisor-based infrastructure, have 3 or 4 of the characteristics of a cloud, and be able to offer Infrastructure as a Service.  However, to meet NIST's definition, and what I think has become the common definition of a cloud, you need all 5 attributes.

This is where the Cloud Manager comes into play.  All of the major hypervisor providers will give you the tools you need to provide an elastic, pooled service with broad network access.  It's the measured service and self-provisioning that introduce the challenge.

Most people think of the cloud manager as the interface to the cloud provider’s service portal.  The cloud manager is actually the orchestration layer that ties the entire infrastructure together to enable your cloud offering.

The Cloud Manager enables the accounting for measured service, as well as the ability to orchestrate the provisioning of services once a user requests resources.  The provisioning attribute in particular is an extremely complicated workflow to automate.

Think about everything that needs to happen in Infrastructure as a Service.  A user requests a VM with 80GB of disk space, 2GB of RAM, 2 CPU cores, and a public IP.  All of these resources need to be provisioned in multiple systems, and the accounting needs to be tracked throughout the life of the VM.  The Cloud Manager needs to communicate with the hypervisor, storage, and network.  In addition, if the environment is built on VMware, the orchestration layer needs to be aware of DRS and vMotion.
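To make that concrete, here's a rough sketch of the happy path in Python.  Every function below is a hypothetical stand-in for a real integration point (storage array, hypervisor, network gear, billing); none of it is any particular product's API.

# Illustrative sketch of a cloud manager's provisioning workflow.  Each helper
# is a hypothetical placeholder for a real integration, not a real product API.
import uuid
from dataclasses import dataclass

@dataclass
class VMRequest:
    disk_gb: int     # e.g. 80
    ram_gb: int      # e.g. 2
    cores: int       # e.g. 2
    public_ip: bool

def allocate_storage(size_gb: int) -> str:
    return f"lun-{uuid.uuid4().hex[:8]}"       # carve a LUN from the storage pool

def create_vm(ram_gb: int, cores: int, lun: str) -> str:
    return f"vm-{uuid.uuid4().hex[:8]}"        # place the VM (DRS/vMotion-aware on VMware)

def attach_network(vm_id: str, public_ip: bool) -> None:
    pass                                       # VLAN, firewall rules, public IP assignment

def start_metering(vm_id: str, owner: str) -> None:
    pass                                       # begin the pay-as-you-go accounting

def provision(req: VMRequest, owner: str) -> str:
    # The cloud manager's real job: orchestrate every layer, then track it.
    lun = allocate_storage(req.disk_gb)
    vm_id = create_vm(req.ram_gb, req.cores, lun)
    attach_network(vm_id, req.public_ip)
    start_metering(vm_id, owner)
    return vm_id

print(provision(VMRequest(disk_gb=80, ram_gb=2, cores=2, public_ip=True), "user1"))

And that's only the happy path.  The rollback logic (what happens when the network step fails after the LUN has already been carved?) is where the real complexity lives.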

Now let's complicate it a bit more.  Let's say you want to offer varying levels of service.  You might decide to offer fast storage vs. slow, or highly redundant vs. economical compute.  This means the orchestration between the cloud manager and the infrastructure needs to be even more capable.  In IaaS environments, selecting a product to fulfill all of these needs can be difficult.  If you add the requirement that your solution be open to multiple hypervisor platforms, it further limits the list of available products.  This is one area where an all-VMware solution may not be an option, as vCloud Director has its shortcomings in addition to being dependent on an all-VMware compute layer.

Software as a Service and Platform as a Service can be even more complex, depending on how the backend of the solution is provisioned.  I have yet to see a mature off-the-shelf solution that addresses SaaS and PaaS.

So, it's all about the Cloud Manager.  When investigating a private, community, or public cloud, don't underestimate the importance of the "user interface."  It's actually the heart of your project and the area where you'll most likely spend the most time.

XenClient Install and First Look

A while back I wrote about how I thought Windows 8 should be a bare-metal hypervisor.  I won't revisit the many advantages, including VDI, but I'd be really excited to see a client-based bare-metal hypervisor with wide support.

That's why I got really excited when I heard about XenClient.  I didn't even know it existed, which for a virtualization geek is a bad thing.  I imagined that of all the major vendors, VMware was the closest to delivering a client-based hypervisor for general distribution.  Citrix, however, has announced the tech preview of the second major release of its client-based hypervisor OS, XenClient.

I was extremely anxious to get it into my lab and play around with it.  I have a pretty decent home and work lab, but most of my resources are geared toward server-based OSs.  It took a great deal of trial and error to find a machine that could run the OS.

The biggest obstacle I ran into is that XenClient only really supports an all-Intel platform.  Intel and Citrix market machines supporting Intel vPro as the solution.  I didn't have anything in my home lab that had vPro, and I figured a minimum of VT-x would be needed to run a 64-bit hypervisor.  I assumed my Dell XPS 420 with a Core 2 Quad Q9300 would do the trick.  The problem was the video card.

The XenClient install completed, but since the NVIDIA GPU isn't supported, the OS failed to boot completely; my 420 sports an NVIDIA video card.  I also had an older Core 2 Duo laptop with Intel graphics, but that Core 2 Duo doesn't support VT-x, and the same goes for my general-purpose white-box server.  The only machine at my disposal that met the specs was my brand-new, 3-day-old Dell XPS 15.  It has an Intel i7-2630QM processor and NVIDIA discrete graphics.  However, the great thing about the i7s is that they have Intel graphics built into the chip.

Luckily, I had bought an Intel 40GB SSD a few weeks ago on an impulse.  That turned out to be the perfect device to throw in to test the configuration.  Installing multiple operating systems on a really fast drive makes a world of difference.

One of the great features of XenClient is support for 3D graphics.  Unfortunately, XenClient relies on Intel VT-d technology to pass 3D capability along to the guest OS, and my XPS 15 doesn't seem to support VT-d.  No matter what I tried, I couldn't get 3D features working, and I couldn't find any reference to the technology in my BIOS.  For my test environment 3D would have just been a bonus, but in production I would think it would be a hard requirement.  I still wanted to see just how far Citrix had come with XenClient.

Installation

Once I found a hardware platform that would work, installation was fairly straightforward.  There aren't a lot of options or ways to go wrong.  XenClient is obviously Linux-based.  The entire install took about 10 minutes.

VT-d error message

Guest OS Install

The number of supported operating systems is limited.  I'm not sure why, but Vista 64-bit wasn't listed as an option, while 64-bit Windows 7 was.  I decided to install a Vista image and an XP image.  I found the build times to be rather long, but I believe this has more to do with the fact that the XP and Vista installs both take a long time.

I did notice that the default number of virtual processors was not nearly enough CPU.  I had to upgrade the Vista partition from a single core to 4 cores before the delay in the guest system stopped being noticeable.  I could, however, notice the lack of graphics acceleration in Vista.  The interface was beautiful on my 15.6-inch 1080p display, but sluggish at times.  XP was, well, XP.

OS Selection Menu

Networking is handled at the hypervisor level.  XenClient found my Intel-based wireless adapter, and I had no problem connecting to my WiFi network.  XenClient presents the network as wired Ethernet to the virtual guests.

Conclusion

I really want to like XenClient, and I actually like what I've seen so far.  My biggest complaint is the same one from my original article: the lack of universal support across multiple machines will limit the potential of the solution.  I commend Citrix for undertaking this challenge and putting it out there for geeks like me to play with.  But the hardware requirements are steep.  I really want the flexibility of using the solution as a bring-your-own-PC-to-work solution, but the vPro requirement prices most home users out of this platform.  The lack of AMD and NVIDIA support also makes for a limiting solution.

Citrix has done an admirable job with XenClient.  For a technical preview, it's a really cool example of what can be done with client-side virtualization.  I just really wish they had the resources to expand the HCL beyond the few officially supported systems.  Until there's broader support, I can only see this as a niche solution.

If you're interested in playing around with it, you can download it here.

Update: 5/31/2011

Thanks to Dominic Pedroza for pointing out another solution that may be a little more mature: MokaFive Baremetal.  A short article on The Register can be found here.

VDI vs Application Virtualization

I was asked a pretty interesting question the other day: if an organization deploys virtualized applications or desktops via a virtualized application solution such as XenApp, why would it need to deploy virtual desktops using VDI, or vice versa?

I gave a breakdown of application virtualization and desktop virtualization in an earlier post.  The basic difference is that application virtualization is based on a solution such as Windows Terminal Server, while desktop virtualization is based on server-side hypervisors such as vSphere or XenServer, with the desktops delivered using a provisioning/broker service such as XenDesktop or VMware View.

A better question is why not use both together.  On the surface, VDI may seem to solve the same problems as application virtualization, but VDI isn't a replacement for virtualized applications.  In my opinion, the best use of VDI is to add a greater level of flexibility to your virtual infrastructure.  One of the largest challenges with supporting virtualized desktops in a virtualized application environment is that you are trying to deliver client desktops using a server operating system.  Also, resource management can become an issue, as terminal services are not as scalable as hypervisor-based solutions.

VDI allows you to deliver a much richer desktop experience to end users without a large investment in desktop resources and the associated administrative overhead.  However, application delivery is still a challenge.  On the surface there is no technical challenge in delivering applications with the desktop image; the challenge comes in maintaining the applications.  To make a simple change to the applications within the desktop image while maintaining a single "golden image," you must redeploy the entire image to all the VDI instances.  This requires downtime and can take hours with several hundred desktops.  Most virtualized applications allow you to make changes on the fly without downtime, or even the user noticing the update.

Cost savings may or may not be a driver in implementing VDI.  In looking at a VDI project, you need to do a deep dive on what problem you are actually trying to solve and whether you'd be displacing virtualized applications or augmenting the solution and adding functionality.

vSphere inside of VMWare Workstation Performance

I've been debating over the past few months whether to buy a new desktop.  My current desktop is still a relatively decent machine: a 3-year-old Dell XPS 420 with a Core 2 Quad Q9300 and 6GB of RAM; a decent machine for today's power user.  I have no complaints when it comes to my day-to-day computing.

I've even been able to do some pretty basic vSphere labs inside of VMware Workstation.  The problem I always run into is RAM.  My machine is maxed out at 6GB, which for virtualization labs is on the lower end of what I'd like to see.  I recently purchased a Sony Vaio laptop with a first-generation Intel i3 processor and upgraded it to 8GB of RAM.  How does this compare to the equipment I normally use?  Most of my production VMware machines have at least 72GB of RAM.

The lab I manage at work has 3 HP DL370s with 16GB each, and I complain all the time about the limited memory.  So, I've been a little more than skeptical about running my home lab with less than 16GB of RAM.  I'm from the school of "buy as much RAM as you can afford, and then beg for some more money to buy more RAM."  That's why I've been looking into Dell's XPS 9100 with 24GB of RAM.  That should make for a pretty decent home lab.

I'm not a huge fan of running virtualization labs on a laptop, but this machine does have some pretty decent specs.  I've seen other bloggers post positive opinions of lesser-powered MacBook Pros with 8GB of RAM.  I still had the perception that this just isn't that much RAM for nested virtual machines.

But reading more and more about laptops with modern processors running a nested vSphere lab within Workstation kept me wondering if I'm over-spec'ing my new desktop.  So, I decided to go ahead and build the lab on my Vaio and post the results.

I wanted as realistic a lab as possible.  I decided on the following layout.

Server          Hypervisor     Environment Specs
vCenter         Workstation    1.5GB RAM, 40GB thin disk, 1 CPU, Windows 2008
Openfiler 2.3   Workstation    1GB RAM, 100GB thin disk, 1 CPU
vSphere         Workstation    2GB RAM, 40GB thin disk, 2 CPUs, ESXi 4.1 Update 1
vSphere         Workstation    2GB RAM, 40GB thin disk, 2 CPUs, ESXi 4.1 Update 1
Windows 2008    ESXi           1GB RAM, 40GB thin disk, 1 CPU

My Vaio is running Windows 7 Home Premium and Workstation 7.1.2.  Without taking the host operating system into account, my laptop's memory is pretty close to being oversubscribed.
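To put a number on that, here's a quick back-of-the-envelope check of the allocations above.  The 1.5GB for the Windows 7 host itself is my rough assumption, and the nested 2008 server's 1GB comes out of the ESXi hosts' allocations, so it isn't counted twice.

# Quick sanity check of the lab's memory budget against the Vaio's 8GB.
allocations_gb = {
    "vCenter": 1.5,
    "Openfiler": 1.0,
    "ESXi host 1": 2.0,
    "ESXi host 2": 2.0,  # the nested 2008 VM's 1GB comes out of the ESXi hosts
}
host_os_gb = 1.5  # rough assumption: Windows 7 plus Workstation overhead

committed = sum(allocations_gb.values()) + host_os_gb
print(f"Committed: {committed:.1f}GB of 8GB physical")  # Committed: 8.0GB of 8GB physical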

Results

I futzed around and got everything installed in a few hours.  I've never used Openfiler before, so that slowed me down.  I also didn't have all of my ISOs readily available at home, so that really slowed me down.

Memory

Overall, I was surprised by the results.  Prior to powering up the VMs, my laptop sits idle with 1.8GB of RAM used.  After firing up all the VMs, including the single nested 2008 server, my used RAM went up to 6.75GB during the installation of the nested machine.  The highest memory usage I observed was 7.4GB, when I performed a vMotion between the virtual ESXi servers.

CPU

I typed this post on the same laptop while all of this was running in the background, and I have to say I didn't even notice a performance hit.  My 4 logical cores (the i3 is a dual-core CPU with HT enabled) were relatively idle with the virtual machines idle.  I did begin to stress it a bit when I installed the nested instance of 2008 in the vSphere cluster.  I saw overall CPU usage spike to 75%, spread across all 4 logical cores, when performing a vMotion.

Conclusion

You won't mistake the experience for a production system with multiple physical hypervisors, but it works for a lab.  I even had DRS enabled and successfully performed a vMotion or two.

The bottom line is that I could realistically get a Dell XPS 8300 with 16GB of RAM and be fairly happy with it for a home lab.  But I've been really looking forward to running some complex lab scenarios with VMware and GNS3, and if you've ever used GNS3, you know it's both a memory and CPU hog.

This is a great example of that 5% engineering rule: I'd be building an infrastructure based on 5% of my usage pattern, which is just not a smart way to do things.  Let's see which logic wins over the next couple of months.

Update 5-6-2011

I stumbled upon a great post on running 8 nested ESXi nodes and 60 virtual machines on a single physical host with 8GB of RAM.
