Docker – Linux Application Environment Virtualization


One of the blog's readers, Lennie, pointed out this YouTube video of an eBay engineer explaining how they are leveraging Docker to build portable applications in Linux environments.

Docker tackles one of the challenges of building applications in Linux environments.  Ensuring that the libraries and services an application depends on are available on the target system can be a difficult nut to crack.  This isn't as big of an issue in, say, Windows (did anyone say .NET Framework version X required?) as it is in Linux environments.  The challenge is pretty simple.  A developer builds an application on an Ubuntu workstation, for example, and then needs to deploy it to a cloud-based VM running some other flavor of Linux with different default services and libraries.  To further complicate the use case, the application may need to be spun up on another cloud provider using yet another Linux distro with different base services installed.


Solutions like Docker hope to solve this challenge by creating application containers that have all the dependencies virtually available.  Those of us who are Windows veterans have seen similar solutions for delivering desktop applications across different Windows platforms.  A great example is VMware's ThinApp.  ThinApp allows an administrator to package an application like Office 2013 into a virtual container.  That container can then be deployed onto any Windows platform regardless of the software running on the target system.

The major difference between the solutions is that Docker is positioned to help app developers build applications for cloud environments, where they cannot depend on a consistent platform with all the required dependencies.
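As a rough sketch of how this looks in practice (the base image, packages, and paths below are illustrative assumptions, not taken from the post), a developer describes the application's dependencies in a Dockerfile so the container carries them to any host:

```dockerfile
# Illustrative only: image name, packages, and paths are assumptions.
FROM ubuntu:12.04

# Bake the application's library/service dependencies into the image,
# so the target system only needs Docker itself.
RUN apt-get update && apt-get install -y python python-flask

# Copy the application into the container's filesystem.
ADD . /opt/myapp

# The container starts the same way on any distro underneath.
CMD ["python", "/opt/myapp/app.py"]
```

Because the dependencies travel inside the image, the Ubuntu-vs-other-distro mismatch described above stops being the developer's problem.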

 

Published by Keith Townsend

Now I'm @CTOAdvisor

5 thoughts on “Docker – Linux Application Environment Virtualization”

  1. Container technology has been available for Linux for more than 8 years.

    FreeBSD has jails, Solaris has zones.

    It has been the basis of the original ‘VPS’ (Virtual Private Server) market that has existed all that time. You can get a container VPS for less than 2 dollars a month:

    http://lowendbox.com/blog/vpscorner-1-75month-512mb-openvz-vps-in-chisinau-moldova/

    Obviously that is only a single vCPU with 512 MB of memory and 10 GB of disk space, and it is probably a bit oversubscribed. But it shows that the smallest containers are a lot smaller and cheaper than VMs in the cloud.

    Even though that VPS market has largely shifted to VMs, they can still compete with AWS on price:

    https://www.digitalocean.com/pricing

    What Docker is trying to do is package applications up in a standardized way. This is somewhat similar to what OpenShift (https://www.openshift.com/products/origin) has been trying to do by deploying standard applications based on container technology as well.

    Docker is doing what Google already did and Twitter tried to emulate:
    http://www.wired.com/wiredenterprise/2013/09/docker/all/
    http://www.wired.com/wiredenterprise/2013/03/google-borg-twitter-mesos/all/

    The CoreOS guys are trying to deliver a stripped-down, fully automated system to run it on:
    http://www.wired.com/wiredenterprise/2013/08/coreos-the-new-linux/

    Docker is not so much about different Linux environments (like the distribution and version). Their point is that VMs aren’t standard: you cannot pick them up, drop them into another environment, and run them. VMs are not as portable as people would like them to be. There are obviously different hypervisors, different network environments and different drivers (especially for Windows), all tied to the environment. The VM is more a part of the environment than you’d want it to be. So VMs are part of the infrastructure, not the application, and they are large and heavy in comparison.

    Applications have become more modular and more service-oriented, and sometimes that means running certain services on different versions of operating systems. All of this combined makes a separation of concerns very useful. You might even be able to run more containers than VMs in production or test, because containers can share memory better (maximizing memory utilization versus dynamically scaling VM memory).

    Application developers just have to deal with their application and exposing port numbers. Docker allows you to run your application anywhere: on dedicated servers, private virtualization infrastructure, private cloud, public cloud, or a development laptop/desktop running Linux, or inside a Linux VM on a Windows or Mac OS X laptop/desktop.

    All in a small package (the standard Linux parts can be downloaded at installation if they have not been downloaded yet), with just a few general Linux parts as a dependency (and one small binary, less than 250k if I remember correctly). A description file tells the system what to do with it.
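    As a sketch of the workflow this enables (the image tag and port numbers here are made-up examples), building from that description file and running the result comes down to two commands:

    ```shell
    # Build an image from the Dockerfile in the current directory
    # ("myapp" is an arbitrary example tag, not from the post).
    docker build -t myapp .

    # Run it anywhere Docker is installed, mapping container port 80
    # to host port 8080 -- the exported port is the only contract
    # between the application and its environment.
    docker run -d -p 8080:80 myapp
    ```

    The same two commands work whether the host is a developer laptop, a dedicated server, or a cloud VM.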

    There is even a project built on top of that which uses a single set of description files for all the services of an application, for deploying multiple containers:
    https://github.com/toscanini/maestro

    Someone else made a web UI around Docker:

    Not bad for a project less than a year old. The smart thing about Docker is that it only tries to do one thing and do it well.

    In a test or production environment there might be many copies of the same container, for example web or application servers serving static or dynamic content. You might not need them all in your development environment, but having several of them makes it a lot more like the real environment.

    Instead of running several VMs on your development laptop/desktop, you can run them all in the same VM, or even without a VM on Linux, which means it will run a lot more smoothly. VMs are heavy in comparison, as I mentioned above.

    There is also no boot time for a container; you only start the application parts. You can start ten times as many containers in a tenth of the time it takes to boot a single VM.

    OpenStack already supports the container features in the Linux kernel, and it will also support Docker in the next version.

    So that will make it easier to run your own OpenStack installation in the public cloud if you want. OpenStack also has Heat, like AWS CloudFormation, for orchestration.

    I have mentioned this before: I think there is a trend where application developers will include a description file with their application describing how it should be deployed and auto-scaled.

    When standards emerge, things become more efficient, they become a commodity, and prices go down. It will be interesting to see how this influences the market.

    When I think about it, it seems “webscale” is not only about scaling out but now also about scaling down.

  2. Maybe I should have explained what a container on Linux is.

    A VM boots a new operating system from its filesystem stored on an image/’block device’. A container is just the operating system files, without the kernel or drivers, copied to a directory, where the host system’s kernel starts the programs in a separate namespace.

    That is a separate network, process, and filesystem namespace, so it can’t talk directly to the other parts of the system.
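    You can see this namespace mechanism directly with the util-linux `unshare` tool (a rough sketch of what a container runtime does under the hood, not Docker itself; it requires root):

    ```shell
    # Start a shell in new PID, mount, and network namespaces.
    sudo unshare --pid --fork --mount-proc --net bash

    # Inside that shell, 'ps' shows only this shell as PID 1, and
    # 'ip link' shows only an (inactive) loopback device -- the
    # process is cut off from the host's process and network views.
    ```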

    I hope that makes it clearer.

      1. An important part of these applications is to not have state (they don’t contain data).

        In the way cloudy apps are built:

        After seeing the last slide in the first few minutes, you probably want to skip to minute 8:44.
