Is the iPad a Better Thin Client than a Workstation?

With the release of the Citrix client for the iPad, I have to ask: is the iPad a better Windows 7 tablet than any native device that PC manufacturers can create?  I want to look at the iPad as a serious thin client device.

I recently purchased an iPad to play around with at home.  I’ve always wanted a device like this that I could use in my day-to-day work as an infrastructure support person.  I’ve often wished for a light tablet form factor that gave me access to Outlook, SharePoint, shared data and Office while I’m in the data center (DC).

Other than configuring a router via a serial cable, I don’t have much use for a laptop in the DC.  What I do need is the ability to access mail, shared documents, administration tools and remote desktop applications on a screen larger than a mobile phone’s.  There have been many times when I’d spend a couple of hours documenting something in my DC, just to come back a couple of weeks later unable to read my own handwriting.  Windows-based tablets and phones never did the trick.  They were either too heavy or too small to be useful.

I’ve had several Windows-based convertible laptops.  I like the theory behind them.  A lot of people argue that Windows on a touch-based device is a flawed concept.  I disagree.  The problem isn’t (just) Windows.  The problem is the lack of applications designed for a touch interface.  Windows does have problems scaling down power consumption and system requirements for a device this size, but that’s another discussion.

I downloaded the free Remote Lite application from the iTunes store tonight to play around with connectivity to my desktop.

I was pretty impressed with this basic application.  After doing some research I discovered that a few of the paid applications add some pretty interesting gesture-based options to the interface.  I was able to move around in Windows pretty well.  Playing video wasn’t very effective, but I believe this could be optimized.  It has the potential to solve a good number of my complaints about the iPad: no Flash, no multitasking, limited storage, no printing, etc.

This got me thinking.  I could see a larger developer like VMware or Citrix really enhancing this experience.  If they created a client and coupled it with one of their VDI solutions, you’d have a really special combination of hardware and software: not just a great thin client, but a game-changing remote computing platform when coupled with 3G.

Am I chasing a solution to a problem that doesn’t exist?  What other applications would make the investment worth it?

Keith Townsend

Virtual Host Security

Security is a never-ending battle for those of us in the business of IT infrastructure.  There are always new threats to consider at every layer of the network.  Now that virtualization is becoming a huge part of the infrastructure, it’s a good idea to extend our security policy to include virtualization challenges.

I wanted to take a look at some of the common challenges to consider within VMware, specifically the VI3 platform, since I’m running into it at 90% of the places I go, versus vSphere, which has a completely new model and API available for securing your virtual environment.  I will take a separate look at Hyper-V, XenServer and vSphere at a later date.  Since VI3 is so prevalent, it’s the audience I believe I can reach the most.  It’s important to note that these principles can apply to the other platforms as well.

So, what are the security challenges with hypervisors?  Out of the box, the kernel and console are pretty secure.  There aren’t a lot of services running by default that could be exploited.  There’s a firewall enabled by default, and communication is over SSH and SSL.  These are all things we should expect, but here are three areas of concern.
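As a quick sanity check of that default posture, a small script can confirm which management ports actually answer on a host.  This is a generic sketch, not a VMware tool, and the port list is an assumption based on common ESX defaults (SSH on 22, SSL on 443 and the VI client agent on 902):

```python
import socket

def port_open(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Ports an out-of-the-box ESX host typically exposes (assumed list):
# 22 = SSH to the service console, 443 = SSL web/API, 902 = VI client agent.
MGMT_PORTS = {22: "SSH", 443: "SSL", 902: "VI client"}

def audit_host(host):
    """Report which expected management ports are reachable."""
    return {name: port_open(host, port) for port, name in MGMT_PORTS.items()}
```

Anything answering outside that short list is worth a closer look.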

Guest OS

One of the first areas to look at would be the guest OS and its services.  The vulnerabilities of the guest OS can easily become the not-so-obvious vulnerabilities of the hypervisor.  I’m not going to pick on any one operating system, as these issues are common among all OSes that provide services.  One thing to really consider is a DoS attack against the VMware host through a susceptible guest OS or service.

An attacker could direct a DoS at a service running on one guest OS, which could affect the performance of the physical hardware.  This in turn could affect other guest operating systems.  This is why it’s important to have system monitoring in place for your hardware and applications.  This is where tools like vMotion could really pay for themselves, as you can isolate servers that are experiencing high utilization or suspicious activity.
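The monitoring idea can be reduced to a simple rule: flag a guest whose utilization stays pegged rather than merely spiking.  A minimal sketch, with thresholds that are illustrative rather than taken from any VMware product:

```python
def sustained_high(samples, threshold=90.0, window=5):
    """Return True if `window` consecutive utilization samples
    (percentages) all meet or exceed `threshold` -- a crude signal
    that a guest may be under a DoS rather than a normal spike."""
    run = 0
    for s in samples:
        run = run + 1 if s >= threshold else 0
        if run >= window:
            return True
    return False
```

A real system would feed this from host performance counters and page someone when it fires.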

Network Isolation

It’s extremely important to fully plan out your virtual and physical network layouts and the access lists governing control between the two.  It’s been my experience that the team that manages the virtual switches and the team that manages the physical network are two separate teams.  I personally think this is a mistake.

I have experience as both a network engineer and a server administrator, and I have a strong understanding of routing, switching and access control.  This is a critical skill set when dealing with an extremely large virtual environment.  I find that when I wear both hats I have conflicting agendas: the network engineer in me wants to think security first, but the server administrator wants the path of least resistance.

This leads to shortcuts, like poking holes in VLAN configurations by using static routes between virtual machines on different network segments.  These shortcuts are normally undocumented and come back to bite us when we least expect it.  Worst case, hopefully it’s internal audit doing a review of controls and not some bad guy taking advantage of our laziness.
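One cheap defense against those undocumented shortcuts is to diff what’s actually configured against what’s documented.  This sketch just compares two plain lists of static routes; the route strings are hypothetical, and in practice the running list would be pulled from each host or switch:

```python
def undocumented_routes(running, documented):
    """Return static routes present on the device but absent
    from the documentation -- candidates for audit findings."""
    return sorted(set(running) - set(documented))

# Hypothetical example data:
documented = ["10.0.1.0/24 via 10.0.0.1"]
running = ["10.0.1.0/24 via 10.0.0.1",
           "10.0.9.0/24 via 10.0.0.254"]  # the forgotten shortcut
```

Run on a schedule, even something this crude turns "we’ll document it later" into a finding before the auditors do.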

Virtual Center Clients

This is an area we may not give much thought to because the list of people allowed to access the console is limited.  But it’s exactly this area that needs a great deal of attention.  I’m very reluctant to give Virtual Center console access to junior-level administrators.  Even when it’s configured correctly, with rights to virtual machines restricted through directory services, it’s important to realize how big a security risk it is to give access to someone who doesn’t have the appropriate training in virtualization or even security.

This is an area that can lead to a great deal of damage if an administrator is lax about securing their desktop.  This is why it’s also important to have the appropriate level of logging configured to reinforce the security policy with accountability.
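Accountability is mostly a matter of actually reading those logs.  As a toy illustration (the log format here is invented, not vCenter’s), a few lines of Python can summarize who did what:

```python
from collections import Counter

def actions_by_admin(log_lines):
    """Count administrative actions per user from lines shaped like
    'user action target' -- an invented format for illustration."""
    counts = Counter()
    for line in log_lines:
        user = line.split()[0]
        counts[user] += 1
    return counts

# Hypothetical audit trail:
log = ["alice poweron vm01",
       "bob snapshot vm02",
       "alice delete vm03"]
```

A weekly report like this, even a trivial one, makes junior admins aware that their console activity is visible.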

There are plenty of other areas to look at, like iSCSI security, the storage network and device-level challenges.  I’ve provided a few links at the end where you can get much more detail on securing your virtual environment.

Useful Links

I found these links useful; they give more detail on securing your virtual environment.

VMware VI3 Security Hardening

http://www.vmware.com/files/pdf/vi35_security_hardening_wp.pdf

VMware vSphere Hardening Guide

http://communities.vmware.com/docs/DOC-12306

Keith Townsend

P2V Migration Thoughts

I had a project where I had to do a decent-sized data center conversion without a lot of formal planning.  We had a SaaS product offering built on a VMware infrastructure that was bigger than the demand for the product, so we decided to leverage the extra capacity by doing a data center consolidation.

We had two additional data centers with about 70 Windows servers.  It was pretty much a no-brainer: our new hosted DC facilities were more secure, had more bandwidth and had more redundancy with the SAN and the VMware server cluster.  We would end up saving the company a good deal of money on hosting services and building leases.

The legacy DCs weren’t being actively managed, so the inventory wasn’t that great and a lot of the institutional knowledge had been lost over the years.  Since we didn’t originally plan the VMware infrastructure for these specific services, we had a bit of retrofitting to do.

We had some of the following tasks to consider or re-architect prior to conversion, just to name a few:

  1. Network Design
  2. New Disk Layout and Service Levels
  3. Backup Schemes
  4. vCenter Security
  5. Migration Schedule
  6. Application Inventory

You can imagine the complexity of each task.  The migration schedule alone had several logistical concerns, ranging from user notification to the physical migration of data.
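Even the back-of-the-envelope math for the physical data move is worth doing early.  A sketch, where the link speed, data sizes and efficiency factor are made-up illustration numbers, not figures from the project:

```python
def transfer_hours(gigabytes, mbps, efficiency=0.7):
    """Estimate hours to copy `gigabytes` of data over a link of
    `mbps` megabits/sec, derated by an efficiency factor for
    protocol overhead and contention (assumed, not measured)."""
    effective_mbps = mbps * efficiency
    seconds = (gigabytes * 8 * 1000) / effective_mbps  # GB -> megabits
    return seconds / 3600

# Hypothetical: 70 servers averaging 40 GB each over a 100 Mbps link.
total_gb = 70 * 40
```

Numbers like these are what turn "we’ll just copy it over the weekend" into a realistic, staged schedule.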

What’s interesting is some of the stuff we discovered as the project got underway.  One of the biggest discoveries was that a server that behaves badly on physical hardware will not behave any better on virtual hardware.  That wasn’t a surprise.  What was a surprise was the amount of resources we had to dedicate to fixing the existing environment before we migrated services.

Some of this stuff was really old.  I had one box running Windows NT 4.0 with a whopping 128MB of RAM.  It was by far the hardest box to migrate: the system partition was only 4GB, with 10MB (that’s right, MB) of free disk space.  I remember those days.  It was a pain 10 years ago and it’s a pain for stuff still in production today.

One great advantage was that it was relatively easy to fix some of these performance issues.  If a box didn’t have enough physical memory, no problem: just add more in the virtual environment.  The same goes for disk and processor.

I made the mistake of estimating in terms of how long it would take to migrate physical machines with no performance problems.  Since we didn’t have a good sense of the complexity of the applications and the challenges of the existing environment, we underestimated the project by two months.

One thing I’m happy we did was not bite off more than we could chew in the project scope.  We were tempted to do some Active Directory consolidation and other application streamlining, but we decided to make them completely separate projects, which saved us a ton of grief.

What are some of your post-migration thoughts?

Cloud vs. Virtualization

I had a pretty interesting conversation on LinkedIn about cloud computing.  A member of one of the groups I’m in posed the question: what’s the difference between app virtualization à la Citrix and cloud computing?  I stated that Citrix can be used to provide a cloud platform or services.  Two other members suggested that it was more of a fringe type of cloud computing.

One member suggested that one of the requirements of cloud computing is that it’s more scalable and resilient than a Citrix-type solution or anything that can be done in the traditional enterprise.  This got me thinking about some of the offerings that are labeled cloud.  The services that came to mind are the big players in the group: Amazon’s S3, Google Apps and Salesforce.com.

But then I thought of some of the other services that are marketed as “Cloud” offerings.  I see hosted Exchange offered all over the place, and I’m certain that plenty of those providers don’t have multiple data centers and the resiliency of Gmail.  I also brought ADP into the discussion, as they offer a “Cloud” product based on Citrix.  So it got me asking: what is the definition of “Cloud”?

I found an InfoWorld article that explores the question very well.  The author establishes two basic categories of cloud services: there’s the computing-on-demand model, such as Amazon’s S3, and then there’s everything else.  The article then breaks it down into seven different types of cloud offerings.

I tend to define cloud computing as anything that provides services to the enterprise via the Internet or a private connection and is supported and maintained by a third party.  This could be a SaaS offering like Salesforce.com or virtual servers provided by Rackspace.  The basic need is met, which is to extend or expand enterprise services without expanding the infrastructure.  This is one of the many advantages of cloud services.

So, I believe that a Citrix-based offering can be defined as a cloud service.

I’d love to hear feedback.

Why No True Network Virtualization

So, I want to talk about network virtualization from another angle.  We know that with VMware you can create virtual switches and even outsource the process to the Cisco Nexus product line.  I think this should actually go further, out to chassis virtualization.

I worked for a pretty big hosting provider for a very short period of time, and one of the issues we ran into was multi-tenancy.  For a smaller enterprise data center, multi-tenancy isn’t so big an issue that VRFs and the like, or even multiple chassis, wouldn’t solve it.  But for larger data centers it becomes a real problem, with issues ranging from physical space considerations to management and cable plant concerns.

There are many instances where both internal and external customers would like the peace of mind that comes with virtualized hardware on the network side of the equation.  A good example would be a customized solution for a single customer, or a set of customers on a shared cabling plant.

Today, if you want to create this type of environment in the Cisco IOS world, you’d do it via ACLs, route reflectors, etc.  Why not just create a virtualized switch inside of the chassis?  A completely separate instance of IOS would simplify the whole configuration and allow you to assign separate security settings for each instance.  I don’t know, something like what Extreme has been doing for the past few years: http://tinyurl.com/vojus.

I figured if Cisco can create a server with 512GB of RAM, they should be able to virtualize their core offering: IOS.

I don’t think this is too far-fetched a request.  I like to play around with GNS3, located at www.gns3.net.  It’s a great little tool that is essentially a hypervisor for Cisco IOS on Wintel platforms.  It’s not meant for production, but technically there’s nothing stopping you from using it to do some really cool stuff in a lab.  You can map physical or virtual interfaces (think VMware Workstation) to the logical Ethernet ports of the virtual routers.  You could, in theory, create a virtual DC of VMware servers on a single workstation running a virtual MPLS end node, connect that to another workstation running another virtual DC and MPLS node, and have a nice MPLS cloud running between the two.  If you have a beefy enough machine, it could all run on one workstation.  If Cisco sends me one of those blade deals, I’d be more than happy to let you know how well it works.

My biggest complaint about the product is that you can’t virtualize Cisco switches.  You can do router-on-a-stick because you can still associate a physical NIC on your workstation with one connected to a Cisco switch.  I’ve found it an invaluable tool for creating lab and test scenarios.

What is Application Virtualization

This is a pretty good article explaining application virtualization here.  I think all the virtualization terms can get rather confusing.  I was first introduced to application virtualization through Altiris a few years ago.  I thought it was a good platform for us IT folks who commonly install and uninstall tools for testing on our own workstations.  It basically creates a layer between the virtualized application and the OS.  You install the application within this wrapper, and the application then makes its requests to the OS through Altiris.  You can completely uninstall the app by clicking a button, and all traces are gone.  The problem I encountered in the early form of the product was the lack of management and delivery tools for a widespread enterprise deployment.
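The “wrapper” idea can be illustrated in a few lines: intercept the application’s file paths and redirect them into a private sandbox, so that deleting the sandbox removes every trace.  This is a conceptual sketch only, nothing like Altiris’s actual implementation:

```python
import os
import shutil
import tempfile

class Sandbox:
    """Redirects an app's writes into a private directory, mimicking
    the filesystem-redirection layer of application virtualization."""

    def __init__(self):
        self.root = tempfile.mkdtemp(prefix="appvirt_")

    def redirect(self, path):
        """Map a requested path into the sandbox."""
        return os.path.join(self.root, path.lstrip("/\\"))

    def write(self, path, data):
        """'Install' a file: it lands in the sandbox, not the real OS."""
        real = self.redirect(path)
        os.makedirs(os.path.dirname(real), exist_ok=True)
        with open(real, "w") as f:
            f.write(data)

    def uninstall(self):
        """Remove the sandbox -- the one-click clean uninstall."""
        shutil.rmtree(self.root)
```

The real products also virtualize the registry and COM, but the filesystem layer captures the core trick.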

I didn’t follow it much after I started using full OS virtualization for my test environments.  However over the past couple of years companies have started to make application virtualization part of their product stacks.

Now VMware offers ThinApp, and Microsoft has the solution it acquired with SoftGrid.  In both cases, instead of requiring the underlying application virtualizer to be installed, everything is packaged in a single executable.  This allows the application to be delivered through network shares or other systems management applications.  With the VMware solution you can basically put the application on a “stick,” carry it around with you and, in theory, use it on any PC.

This isn’t to be confused with application streaming, which is the traditional desktop virtualization brought to you by Microsoft and/or Citrix in the form of the terminal server.  This is your traditional remote-display technology that relies on RDP/HDX/ICA.  In general it requires at least a constant 56 Kbps connection from the client to the terminal server to deliver the application or desktop, and most if not all of the processing is done at the server.
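That 56 Kbps floor makes the capacity math straightforward.  For example (the per-session figure is the rough floor mentioned above; real ICA/RDP usage varies with screen activity):

```python
def max_sessions(link_kbps, per_session_kbps=56):
    """How many remote-display sessions fit on a link, assuming each
    needs a steady per-session bandwidth (56 Kbps here is a rough
    floor; real usage varies with screen activity)."""
    return link_kbps // per_session_kbps

# A T1 (1544 Kbps) vs. a 10 Mbps Ethernet link:
t1_sessions = max_sessions(1544)       # 27
lan_sessions = max_sessions(10_000)    # 178
```

In practice you’d derate those numbers further for printing, audio and protocol overhead.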

This is just something to remember when you think about desktop virtualization vs. application virtualization.

ESX inside of VMware Workstation

I was talking with a VMware ISV Health Evangelist the other day, and he mentioned that with VMware Workstation 7 you can now run vSphere 4 inside of VMware Workstation and have nested virtual machines.  I thought that was curious, as I have 6.5 and had heard you could already do it.

Why in the world would you want to do this at all?  The basic answer is the ability to lab vSphere without having a dedicated box.  This makes for all kinds of interesting scenarios.  You could take an open source iSCSI server, virtualize it, and then lab vMotion and HA, all without the underlying physical requirements for ESX like SCSI or SAS hard drives.

Well, I thought I remembered seeing that you could already do this, and lo and behold, it has been done.  I purchased the Trainsignal vSphere training package a while back, and David Davis, the instructor for the video series, walks you through the process.  You can find that portion of the video here.

In short, you need an Intel processor that supports VT or an AMD processor that supports AMD-V.  Of course, as with any virtualization, you need as much RAM as you can get.  I was able to create a virtual instance of vSphere with a nested instance of Fedora running inside.  In addition, I had a Windows 2003 server running my Virtual Center, and everything ran smoothly with the exception of Fedora, which ran a bit slow, as expected.
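On a Linux host you can check for those CPU features before you even install Workstation; the flags are `vmx` for Intel VT and `svm` for AMD-V.  A minimal sketch (my lab above was on Windows, where Workstation itself reports this):

```python
def hw_virt_flags(cpuinfo_text):
    """Return which hardware-virtualization flags appear in the text
    of /proc/cpuinfo: 'vmx' for Intel VT, 'svm' for AMD-V."""
    flags = set()
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            flags.update(line.split(":", 1)[1].split())
    return {"vmx", "svm"} & flags

# On a real Linux host you would feed it the actual file:
# with open("/proc/cpuinfo") as f:
#     print(hw_virt_flags(f.read()) or "no hardware virtualization")
```

Remember the feature may also need to be enabled in the BIOS even when the CPU supports it.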

My system is a Dell XPS 410 with 6GB of RAM and a quad-core Intel Q9300 running at 2.5 GHz.  I was able to keep all of this running for a couple of days in the background without really noticing any performance issues during my day-to-day web browsing and word processing.
Next up is creating an iSCSI SAN and implementing vMotion.
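For anyone sizing a similar lab, the memory budget is the main constraint.  A quick sketch; the per-VM allocations and host reserve below are guesses for illustration, not the actual settings from my setup:

```python
def remaining_ram(host_gb, vm_allocations_gb, host_reserve_gb=1.5):
    """RAM left over after carving out lab VMs; the reserve is an
    assumed cushion for Workstation and the host OS itself."""
    return host_gb - sum(vm_allocations_gb) - host_reserve_gb

# Hypothetical split of a 6 GB host: an ESX VM and a
# Windows 2003 vCenter VM.
lab = [2.5, 1.0]
```

If the result goes negative, the host starts swapping and the whole nested stack crawls.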

Hello world!

Over the past year I’ve tried to label my core professional brand.  I cut my teeth as a Novell NetWare server administrator over a decade ago.  Since then, I’ve had job responsibilities that have included Windows server administration, network design and implementation, desktop engineering, e-mail implementation, project management, and the list goes on and on.  I have an old MCSE and have held my CCNA since 2001.  I don’t really have enough passion for any one of these individually for it to be labeled my “brand.”

I do have a passion for the overall responsibilities and duties when you combine networks, servers and applications.  What do you get?  IT infrastructure: the plumbing that makes IT tick.  So this has become my brand.  I’m an IT Infrastructure Leader/Engineer.  I rather like the title; feel free to use it, as I’ve yet to patent it 🙂 If you want a full look, check out my LinkedIn profile: www.linkedin/in/kltownsend.

This brings us, in a long roundabout way, to virtualization.  My career has gravitated to virtualization because infrastructure is where IT shops realize all the benefits of virtualization.  It allows me to utilize all the cool (and not-so-cool) skills I’ve amassed over my career.  It gives me a great platform to talk about everything from security to storage and server hardware, because it all can be virtualized.

I expect to write a great many posts, because I have a lot to say and to learn about the topic.  Stay tuned.
