I’ve owned both a laptop and a desktop since Windows 95, and I’ve had a home network since those days. And I’ve always had the same problem: syncing my documents between the two machines has been a constant challenge. I tried the Windows Briefcase, which was a joke.
To the cloud, you say? I anxiously awaited the GDrive from Google that never came. Amazon, Box.net and a ton of other companies offer basic cloud-based storage, but none of these solutions has ever really appealed to me. I wanted a solution that didn’t force me to change the way I work with my data. I didn’t want my primary storage target to be “The Cloud.”
I read about Windows Live Mesh and SkyDrive a few years ago. I used SkyDrive, pre-Google Docs, to collaborate with classmates and business partners. It worked, but I discovered most of my peers weren’t too interested in opening a Live account if they didn’t already have one, so it went mostly unused for a couple of years.
Windows Mesh caught my attention again after Amazon announced their Cloud Drive the other day. I checked out the new Amazon service and it was more of the same. I’m not terribly interested in storing my music in the cloud; I use Rhapsody and it suits my needs fine. So I remembered Windows Mesh and installed the agent on my desktop and laptop.
It’s almost as magical as an Apple product. It’s exactly what I was looking for. It syncs my primary documents folders between the two PCs and lets me access the data from a web interface. To boot, files under 50 MB are editable using Office online. It also includes a remote control app; I use LogMeIn, but it’s not a bad free throw-in. Not bad at all.
My biggest complaint is that I’m limited to 5 GB of data with no way to upgrade to more storage. Dropbox has a very similar service with a free 2 GB option and a 50 GB option for $9.99 a month. Box.net also has a solution at $15.99 per month. I’m tempted, since that way I could sync my photo collection as well, but for the most part I trust MS for cloud solutions.
Hopefully they don’t ever kill Windows Mesh. “To the Cloud” indeed.
In my current role, I’m challenged with building a private cloud for a fairly large government organization. Most organizations have embraced the concept of virtualization and have at least started to virtualize workloads that make sense, such as web servers, file, and print. Forward-thinking organizations have even embraced virtualizing high-performance workloads such as Exchange and database servers.
What I’ve found is that moving to the next phase of cloud computing has been a bigger challenge. For the right organization, a private cloud can have some great advantages. Specifically, if an organization has dynamic workloads that need to scale up and down based on use, a private cloud can provide cost and operational benefits.
The most challenging part of the design process has nothing to do with virtualization. It has been finding the right Cloud Manager. A Cloud Manager pulls together all of the underlying components of your virtual data center (disk, hypervisor(s), CPU, and memory), allows you to establish business rules around those components so users can self-provision services, and provides a chargeback vehicle. I’m especially interested in Cloud Managers that support Infrastructure as a Service.
There are plenty of Cloud Managers out there. VMware has vCloud Director, Novell has Cloud Manager, which is built on their Orchestrator product, Dell has a self-service solution, and then there are open source projects with closed source commercial editions, such as Eucalyptus and Abiquo. Rackspace sponsors a purely open source project called OpenStack. I don’t even want to get into Cloud-Manager-as-a-service offerings and Microsoft’s hybrid approach.
These products are all very young relative to virtualization, but some have great features. Both the Novell and Dell offerings are built on data center orchestration products, which gives an incredible amount of flexibility for both virtual and physical servers. The Dell solution lets you move workloads from virtual to physical hardware, which is a great feature for a private cloud. Novell also has a suite of products purchased from PlateSpin that helps strengthen its data center automation.
The “pure” hypervisor-focused solutions have great features as well. VMware’s vCloud Director of course has a very high level of integration with VMware, but it doesn’t work with other hypervisors. So, if you have SLAs that don’t require ESX, it’s overpriced and you’re locked into VMware.
Abiquo allows you to move from one hypervisor platform to another, supports a wide range of hypervisors, and even supports Amazon. The challenge is that they are a small, new company, and for a large organization that is no small risk. I do like their solution, however.
And all of these solutions are highly complex, requiring a minimum of a week of professional services, and in some cases six weeks for the orchestration-based solutions.
This has really been a learning experience. Do you have a Cloud Manager solution that you’ve found to be great or any lessons learned that you can share? Or did your organization decide to build your own Cloud Manager?
This perplexed me for a few hours while I was trying to install XenDesktop 5.0 using the quick deployment. When it was time to connect to vCenter to add my VMware hosts, I kept receiving this error: “The hypervisor was not contactable at the supplied address.”
I was able to find plenty of advice on the web about importing the certificate for the vCenter SDK website. As it turns out, the problem wasn’t the certificate itself but the name of the website within the certificate.
If I had paid attention to the actual message in the certificate notification, I would have noticed that the name I was entering didn’t match the certificate. This is security certificates 101. I assumed the DNS name of the server was the name of the Tomcat website. What I didn’t realize is that the name of the server had been changed after vCenter was installed.
In short, if you run into this problem, make sure the URL you are entering for the vCenter SDK matches the name on the website’s security certificate.
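The check that trips you up here can be sketched in a few lines of Python. This is just an illustration of the name-matching logic, not Citrix’s actual code; the certificate dict mimics the shape returned by Python’s `ssl.SSLSocket.getpeercert()`, and the hostnames are hypothetical.

```python
def names_in_cert(cert):
    """Collect the subject CN and any DNS subjectAltName entries."""
    names = set()
    for rdn in cert.get("subject", ()):
        for key, value in rdn:
            if key == "commonName":
                names.add(value.lower())
    for kind, value in cert.get("subjectAltName", ()):
        if kind == "DNS":
            names.add(value.lower())
    return names

def hostname_matches(cert, hostname):
    """A simplistic exact-match check of the entered name against the cert."""
    return hostname.lower() in names_in_cert(cert)

# The cert was issued under the server's original (hypothetical) name...
cert = {
    "subject": ((("commonName", "vcenter-old.corp.local"),),),
    "subjectAltName": (("DNS", "vcenter-old.corp.local"),),
}

# ...so entering the server's new DNS name fails, while the name
# actually embedded in the certificate succeeds.
print(hostname_matches(cert, "vcenter-new.corp.local"))  # False
print(hostname_matches(cert, "vcenter-old.corp.local"))  # True
```

The moral is the same as above: the name you type must be one of the names inside the certificate, not whatever the server currently answers to in DNS.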
This is a follow-on to my post on running ESX inside of VMware Workstation 6.5. I’ve finally taken the plunge, spent the $100.00 (yeah, these are tough times), and upgraded to VMware Workstation 7.0. The best feature to me has been the ability to run vSphere natively within VMware Workstation. The install takes minutes and I haven’t run into any major issues.
My biggest issue is lack of memory on my workstation. My desktop has 6 GB of memory and an older Intel quad-core processor. My laptop is an i3 with only 4 GB of memory. The new Dell 9100s can be expanded to a whopping 24 GB of RAM. That’s pretty nice and would make for a great virtualization home lab.
The only problem is that the 24 GB of memory is a $660 upgrade option, and Dell forces you to upgrade to Windows 7 Professional at another $130.00. That’s $790 to get the memory I’m looking for in my lab. If I had a difficult time justifying the $100 upgrade to Workstation 7.0, just guess how I feel about paying $2K for my new home rig.
Well, while I’m fantasizing about a new desktop, I’ve gone ahead and ordered two 4 GB DDR3 modules for my laptop that will take me up to 8 GB of RAM. I’m not too excited about it, because I do very little lab work on my laptop; it’s just inconvenient.
I had a meeting the other day that emphasized the importance of communicating the concept of the virtualized data center. I had assumed that everyone who has been working with virtualization has come up to speed on the concept of the virtualized data center. That actually isn’t the case.
I was speaking with a group about segregating network traffic coming from virtual machines hosted on the same physical hosts. I mentioned, more in passing, to just enable 802.1Q VLAN trunking on the physical host. Problem solved; let’s move on to the more complicated stuff, right?
Not so fast there, speedy. Everyone in the room looked at me like I was speaking a foreign language. I had the luxury of being introduced to virtualization from the approach of virtualizing the data center. We were using a virtualized storage solution, virtual networking, and hosting multiple clients on the same physical hosts. So my first hypervisor install was within the virtual data center. I also didn’t mention that the data center was hosted.
I’m working in an industry where technology decision makers move slowly to new technologies. Virtualization is no longer a “new” technology from the hypervisor perspective, but the virtual data center hasn’t really caught on with the mainstream non-server crowd yet.
The concepts of networks, servers, and storage don’t change much when you move to a virtual platform. What’s difficult is understanding how these concepts apply to the data center. It’s helpful to walk through some of the challenges cloud providers face to help solidify these concepts.
The server is a good place to start. Think through the first problem: how do I host not only multiple applications but also multiple clients on the same machine? What problems does this present? How about we tackle the big one: isolation.
CPU, network, and disk need to be isolated in each security zone. How would you do this in the physical world? You’d create separate hosts, networks, and disk infrastructures to support each individual client. How is virtualization any different? It isn’t. You would still create the same infrastructure but use the tools of your favorite virtualization product(s) to achieve the same goals.
I don’t want to get into the technical approach to this problem because I think this is where creativity can happen and I’d like to see how others would start.
Have you tried to sell the concept of a virtual data center or private cloud? What has been your experience in selling it to the old guard in the form of management and even security?
Wow!!! It’s been since August since my last blog post. It makes sense, because I relocated to Maryland, where I started with Lockheed Martin supporting their Washington Data Center in September. I’ve been extremely busy working on virtualization projects, e-discovery, and disaster recovery, not to mention I’m still working on my MS in Project Management. But I have decided to take more time and make sure I blog about virtualization.
I’ve been spending a bit of time coming up with different ways to approach a virtualized environment, at least differently than I have in the past. For my past couple of large projects, HP blade systems fit nicely into my needs for larger efforts.
Now that the 12-core AMD Opteron 6100s have been out for some time, I’ve been looking at using Dell’s R815 platform to get denser deployments. I’ve found that one of the problems that arises from using a platform that can have 48 cores in one chassis is that I/O becomes a greater consideration.
In theory you could get 192 (48 × 4) single-vCPU VMs in a single physical server. If you have a rack of 20 of these beasts, that’s potentially 3,840 VMs in a single 42U rack. That’s a lot of I/O. Storage and network become huge considerations for your infrastructure if you are looking at hosting at this level.
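The back-of-the-envelope math above is worth writing out, because the I/O line item is what sneaks up on you. The 4-VMs-per-core ratio and the 10 IOPS per VM figure below are assumptions for illustration, not measurements:

```python
# Density math for a 48-core host, using the numbers from the post.
cores_per_host = 48    # e.g. four 12-core AMD Opteron 6100s in an R815
vms_per_core = 4       # assumed vCPU-to-core consolidation ratio
hosts_per_rack = 20    # a 42U rack full of these hosts

vms_per_host = cores_per_host * vms_per_core
vms_per_rack = vms_per_host * hosts_per_rack
print(vms_per_host, vms_per_rack)  # 192 3840

# Even a modest, assumed 10 IOPS per VM puts the whole rack near
# 40,000 IOPS, which is why storage and network design start to
# dominate the architecture at this density.
iops_per_vm = 10
print(vms_per_rack * iops_per_vm)  # 38400
```

Scaling the per-VM IOPS assumption up or down moves the total linearly, but the point stands: the rack’s aggregate I/O, not its CPU, becomes the design constraint.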
The great thing is that I’m learning more about storage than I have in the past, and I get to brush off my networking skills and deal with some of the complexities of a virtualized infrastructure. I’m looking at solutions like Arista’s 7100 series 1U 10GbE switches. I’m also looking at smaller SAN storage solutions like NetApp and EMC’s Celerra line.
I’m looking forward to continuing to share my experiences and look forward to an active year of blogging.
Remember the late ’90s trend of “server consolidation”? Back at the turn of the century, server consolidation was the other hot topic next to Y2K.
Organizations were suffering from server sprawl. To alleviate the issue, a movement to consolidate servers was undertaken across the industry. The concept was very simple: combine applications and services on servers that had complementary load schedules. An example was to take your Clarity application server and have it share hardware and an OS with your backup server.
Backups run at night and Clarity is used during the day; a perfect marriage, right? Well, the reality was very different from the concept. What we discovered was that the backup guys always wanted to reboot the server and perform maintenance during the times the CRM couldn’t take an outage. In addition, the Clarity guys wouldn’t support the application because it shared an OS with ARCserve. The result: you could never achieve the lofty consolidation numbers thought possible at the onset of the project.
Why do I bring this up, you ask? The same lessons apply to P2V virtualization initiatives. When presenting the idea of virtualization to senior management, make sure you do your research when it comes to the nature of your applications. Today’s hardware and virtualization software are light years ahead of the tools I used 10 years ago, but that doesn’t mean you can consolidate everything in your data center onto a virtualized platform.
The main problem isn’t the obvious performance concerns; there are other hidden considerations. I’ve found the biggest to be application vendor support. A lot of software vendors don’t understand virtualization. They don’t have the internal skill to support their applications in a virtualized environment, so most just opt not to support it at all.
So, just because that application is running on an old beige box doesn’t automatically mean it is a slam-dunk virtualization candidate. When doing your due diligence for a presentation on your virtualization project, make sure to contact all of your application vendors and confirm not just that the application is certified to run on a virtual platform, but that the specific platform you are considering is supported by your vendor.
Lance Ulanoff of PCMag.com posted an extremely interesting theory about where he believes Windows 8 should go architecturally. PCMag.com is more of a consumer/enthusiast site vs. a place I’d normally go for hardcore virtualization discussion or insight, but I have to say he intrigued me with his idea of where Windows 8 should go.
He suggested something I’ve desired since being introduced to ESX: a completely hypervisor-based client OS. I know VMware has promised such a thing in the past, but I’d have to guess drivers are probably one of the main obstacles to delivering a shipping product. ESX has traditionally had a tight list of approved hardware configurations. I can’t begin to imagine the challenges of deploying a client hypervisor distribution that could run on the millions of combinations of PC-class hardware available. But if any company is experienced and able to provide the type of hardware vendor support and buy-in needed to pull off this new paradigm in computing, it has to be Microsoft.
It’s quite overwhelming, the number of possibilities one would have if the client OS and services were built on virtualization. Every application would/could be a sandbox or VM instance. Just off the top of my head I can think of a couple of interesting uses for this feature alone. First, the idea that you could take snapshots of an individual application’s state and roll that application back to a date and time vs. the whole system. You could also take images of an application and have that same application VM loaded on whatever device you’d prefer. Just think of the implications or opportunities for cloud computing.
This could open a can of worms for MS as well. People could become less dependent on the Windows ecosystem. In theory, applications could be built in Linux virtual machines and deployed on the Windows 8 client just as easily as any other Windows-based app. From a developer’s perspective this would be great: you could build your application on one platform and run it on any, kind of the promise of C++, Java, Flash, and the like.
Seeing that Windows 8 is at least a few years away, I’m very interested in whether or not VMware’s client hypervisor will ever be released.
I’ve learned over the years to appreciate formal training courses. The foundation of my IT knowledge is self-study. I achieved my MCSE, CNE, CCNA, etc. through a combination of job experience, a home lab, and self-study materials. However, I’ve taken courses in school and with vendors that have helped me understand technology in a way that self-study couldn’t.
So, I’ve been debating which virtualization course to take. All of my knowledge has been gained from having to implement this technology out of need. I’ve hired VARs to assist with the architecture and some implementation, but the rest has been trial and error.
One of the biggest challenges in selecting training is the cost. The VMware Install and Configure course is $2,500, a cost that comes straight out of my pocket. The advantage is that I would qualify to become a VCP, which is always a good career move. But this course is taught through a VMware vSphere-only lens.
My other option is a virtualization boot camp of the kind many non-VMware-partnered training outfits offer. The advantage of this course is that it covers a broader range of products; there’s some Hyper-V and XenServer in the mix. Albeit not a lot, it’s nice to get formal training on the non-VMware stuff I will see in the real world. The major disadvantage is that it doesn’t buy you much in the area of credentials. Even if it covers all the material of the VMware course and more, you don’t qualify for the VCP.
As my project migrating 450 servers to vSphere starts to wind down, I’ll look more into class schedules and options. I’ll make sure to post a follow-up.
I remember a disturbing cover from around 1987, when I was in high school, of either PC Magazine or PC World. The title was “Is DOS Dead?” I even remember the cover art: a C:\ prompt in flames in a lake of fire, or something like that.
That was my first experience with being a fanboy of a specific vendor or product. I remember thinking, “Oh no! What will I use if DOS is gone?” There was OS/2 and a bunch of other hot new technologies coming down the pipeline that my 8088-based Tandy 1000 wasn’t going to be able to support. My parents had dropped a good deal of cheddar on that thing and it had to last me through high school. In addition, I had invested a lot of time learning DOS and had no desire to learn a new OS.
That experience started a trend for me. I’ve noticed over the years how I tend to become endeared to certain technologies. It’s gone from DOS to Windows 3.1 to NetWare/NDS to Windows 2Kx. Even now, with exciting technologies such as iPhone OS, Linux, and non-“mainstream” virtualization technologies, I find myself having a soft spot in my heart for a particular vendor or solution.
Apple’s eclipse of Microsoft in stock value brought up these memories. I really enjoy Microsoft products. But as a person who has to make technology decisions for organizations and live with those decisions afterward, I don’t have the luxury of being a diehard fanboy of any one vendor or technology. I’ve had to select plenty of solutions over MS products, ranging from VMware to LANDesk and, early on, NetWare.
However, when certain vendors like Palm and MS aren’t competitive in markets they created, I tend to have a somewhat emotional reaction. I really want to like the Palm Pre, just like I really want a Windows Phone 7 device or a Windows 7 slate over an iPad. But I have challenges that need technology solutions that exist and work today. So, I guess, iPad here I come.
What companies or products do you wish you could really push or get behind, but they just haven’t cut the mustard?