A Leap into Learning Kubernetes

So, it’s been a while, right? I opened my big mouth and put it out there that I’d do a video series installing Kubernetes (K8s) and OpenFaaS on a 3-node Intel NUC cluster. Of course, I’ve never installed K8s, and other than knowing Alex Ellis, the leader of the open source OpenFaaS project, I don’t know the first thing about running functions on containers. Oh, and did I mention that, other than installing Linux, I really shouldn’t be left to my own devices with a Linux console? On top of all that, I’m pretty rusty touching deep technology.

With all that said, why in the world did I commit to doing such an ambitious project before investigating the level of challenge?

Because if I had looked before I leaped, I would never have committed to doing it. I do these types of naive things to force myself outside of my comfort zone. At the end of the project, I’ll have a better perspective not only on the challenges of modernizing an IT infrastructure but also on the effort a team needs to undertake to understand such a drastic shift in technology.

How is it going?

It’s near the end of day one. I’ve watched Alex’s YouTube video on installing K8s, which is based on his super popular blog post on the same topic. I also watched a couple of hours of Anthony Nocentino’s Pluralsight course, Kubernetes Installation and Configuration Fundamentals. So far, it’s an excellent course.

What I’m having trouble with is getting my bearings on what I don’t know. I have over 20 years of experience in IT. I like to anchor learning new things to other topics I’ve mastered over the years. The best comparison I have is VMware vSphere. Installing vSphere is super easy. Managing vSphere is something completely different. With most technologies, I get a sense of what needs to be done to add value.

K8s is a different beast for me. Unlike vSphere, K8s is focused on the developer experience. My years of training helped me understand what I wanted out of vSphere and therefore what I needed to learn. K8s isn’t geared toward my day-to-day IT experience. I, therefore, have a difficult time grasping the vastness of the platform.

Conceptually, I get it. It’s not very difficult to understand the high-level value, use cases, or integrations. I can speak to executives and help them understand directionally how to approach it. It’s at the operator level that this is all new learning. As the saying goes, the devil is in the details.

I am having fun stepping out of my comfort zone and in a sense going back into the comfort zone of learning and touching and teaching new technology.

Make sure to subscribe to the YouTube channel as I share the resulting content. The videos are geared toward people just like me. I’ll show all the things I had to learn along the way.

The Sphere – Mission Accomplished w/Rebecca Fitzhugh

Rubrik Technical Marketing Engineer Rebecca Fitzhugh joins the Sphere. Rebecca shares her experience starting her career as a 17-year-old in the military, then running her own consultancy, and eventually joining Rubrik. Rebecca has a unique perspective as someone willing to mop the floor or design a data center to accomplish the mission while finding enjoyment along the way.

Running a Business as a Couple – John & Kat Troyer

John and Kat Troyer share their experience of running a business as both business partners and a married couple. In this raw conversation, Keith and Mark probe for input on topics ranging from money and stress to trust. Thanks to John and Kat for a very honest conversation. You can find more about their businesses at https://techreckoning.com

Quietly Judging with Amy Lewis – VirtualizedGeek Podcast

Welcome back to podcasting on the VirtualizedGeek. We’ve rebooted the channel as a career-focused podcast. In this first return episode, Amy Lewis of the Geek Whisperers joins the podcast to talk about her transformation from working in publishing to serving as a director at a large enterprise IT software company.

Show Notes

04:00 – Yes and? Amy’s first transformation

05:00 – Poster child for women in tech (CloudCast Podcast Episode)

09:00 – Getting that full-time or getting fired

13:30 – Why pivot a 2nd time?

17:00 – The value of mentorship

22:30 – The importance of doing what drives you




Subscribe: Apple Podcasts | Android | RSS

Introduction to Serverless & Webinar

First off, there are servers in serverless computing. However, the construct of servers is a throwback to a time when the only way to develop distributed applications was to treat the server as the base component of the infrastructure. In the server-centric model of distributed systems, developers needed to understand their CPU, memory, and OS requirements. An argument could be made that coders should only need to concern themselves with code. That’s the point of serverless.

Public Cloud Market

The goal of serverless services such as AWS Lambda, Azure Functions, and Google Cloud Functions is to abstract away the infrastructure. Application developers write code that resides in repositories within these services. An event, such as writing a file to an object store or running a SQL query, initiates the stored function.
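To make the event-to-function flow concrete, here is a minimal sketch of what such a stored function might look like, in the style of AWS Lambda’s Python runtime. The event shape mirrors the notification S3 sends when an object is created; the bucket and object names are hypothetical.

```python
# Minimal sketch of an event-driven function. The handler signature follows
# Lambda's Python runtime convention (event, context); the platform invokes
# it only when an event -- here, an S3 object upload -- is delivered.

def handler(event, context):
    """Process each S3 record in the incoming event notification."""
    results = []
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        # Real work (resize an image, parse a file, etc.) would go here.
        results.append(f"processing s3://{bucket}/{key}")
    return results
```

Invoking it locally with a sample S3-style notification shows the wiring: the function never cares which server it runs on, only which event arrived.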


Outside of offering a simplified abstraction, serverless provides the ability to reduce cost by increasing the efficiency of running code. Some processes only run as a result of an event. For this reason, serverless is sometimes referred to as event-driven computing or Functions as a Service (FaaS).

Register to learn more

An example of a different type of computing is data services. Take a database as an example. Databases must serve as permanent storage for services such as small business payroll processing. While the payroll process may only run once a week, the database hosting the data must run continuously to serve other dependent systems, such as time tracking.

Reduce Costs

In server-centric computing, organizations may dedicate entire server instances to waiting on an event to trigger a function. In the era of cloud, that wait time translates to wasted cost. Serverless code only consumes computing resources once called upon. Therefore, services such as Lambda optimize spend. Some customers may eliminate entire EC2 instances and effectively receive compute for free for up to 1 million requests a month.

There are drawbacks to this service. You can’t build an entire enterprise application from only event-driven functions. You must have persistent storage, for example. Another consideration is security. Since there is no network address associated with a Lambda instance, there’s no endpoint to filter on a firewall. Lambda and the other services noted above are only available in each cloud provider’s own infrastructure, and the services are not interoperable.

Learn More

If you want to learn more about serverless and the options for on-premises solutions that integrate with your organization’s security and development strategy, join me for a webinar on the topic. The webinar will be October 31st, 2017, at 11:00 CT or 12:00 ET. Register here.

AWS S3 Encryption Options

I’ve been playing around with AWS security, and as an output, I’ve gotten up to speed with the S3 encryption options. I thought I’d do a quick hit to share what I learned. AWS offers two high-level options for encryption: Server-Side Encryption (SSE) and Client-Side Encryption. SSE is encryption handled by AWS. As the name hints, client-side encryption is handled by the consumer of S3 storage. It’s important to note that “client” means the consumer of S3 storage, so an EC2 instance performing encryption is considered client-side encryption.

Key Management
Most of the options around SSE concern key management. For SSE, there are two major components: the Customer Master Key (CMK) and the object encryption key. In a traditional encryption scheme, the CMK is comparable to a private key. The big difference is that the CMK can only encrypt up to 4 kilobytes of data. So, the CMK is used to encrypt data keys, which in turn are used to encrypt data. AWS offers three models for CMK management.

SSE-S3 – This is the simple option. AWS manages the CMK. The customer doesn’t know the CMK and doesn’t control access to it. S3 ACLs determine who can decrypt data upon access. SSE-S3 is appropriate for those needing to check the box of encryption of data at rest. I could see limited use cases for SSE-S3 beyond that.

SSE-KMS – The KMS option gives the customer more control over the CMK. Customers create one or more CMKs via AWS IAM (Identity and Access Management) and control which users or AWS roles can use each CMK. Rules are granular: administrators can determine which users can encrypt or decrypt data using the CMK. The CMK can also be used for more than S3 encryption; it’s a private key, so any application that leverages key exchanges can use KMS. The disadvantage is that you are still tied to AWS.

SSE-C – This option is for customers who desire to control key management directly but leverage server-side encryption. The concept is that you want AWS to perform the encryption, but you want complete control over key management.
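The three modes map to different request parameters on S3’s PutObject call. The sketch below builds the extra keyword arguments you would pass to a call like boto3’s `s3.put_object(...)`; the parameter names follow S3’s server-side-encryption options, but the key ID and the SSE-C key bytes are placeholders, and the exact encoding the SDK expects may differ from the raw REST headers shown here.

```python
import base64
import hashlib

# Sketch of the extra PutObject parameters each SSE mode adds. The KMS key
# ID and SSE-C key below are placeholders; parameter names mirror S3's
# server-side encryption options, hedged as an illustration rather than a
# definitive SDK reference.

def sse_params(mode, kms_key_id=None, customer_key=None):
    if mode == "SSE-S3":
        # AWS-managed keys: just request AES-256 server-side encryption.
        return {"ServerSideEncryption": "AES256"}
    if mode == "SSE-KMS":
        # Customer-controlled CMK in KMS, referenced by its key ID.
        return {"ServerSideEncryption": "aws:kms", "SSEKMSKeyId": kms_key_id}
    if mode == "SSE-C":
        # Customer supplies the key on every request; AWS encrypts server-side
        # but never stores the key. The REST API wants the key and its MD5,
        # base64-encoded.
        return {
            "SSECustomerAlgorithm": "AES256",
            "SSECustomerKey": base64.b64encode(customer_key).decode(),
            "SSECustomerKeyMD5": base64.b64encode(
                hashlib.md5(customer_key).digest()).decode(),
        }
    raise ValueError(f"unknown mode: {mode}")
```

Laid out side by side like this, the trade-off is visible in the parameters themselves: SSE-S3 needs nothing from you, SSE-KMS needs a key ID you govern, and SSE-C makes you ship the key material with every single request.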

Client-side Encryption
Client-side encryption has two options. The first is to leverage AWS KMS to generate and manage keys for encryption on the client before upload to AWS. This is useful if you want to use an encryption algorithm other than the AES-256 used by S3. I’m sure there are other use cases I haven’t considered.

The second option is customer-managed key management and encryption. AWS is completely out of the picture. It’s important to highlight that with SSE-C and client-side encryption with customer-managed keys, AWS can’t recover your keys or data if you lose your keys. Customers take complete responsibility for key management. So, you get both the complete liability and the complete capability associated with key management.

Understanding Ubuntu (BASH) on Windows 10

Microsoft has more than warmed up to Linux. The company that was once at war with open source has created a whole new subsystem for running Ubuntu’s ELF (Executable and Linkable Format) binaries. Microsoft’s Channel 9 has a detailed video on the details of the beta project. I created the video below to help introduce the concept and clarify the difference between Ubuntu on Windows 10 and a VM. Some interesting questions have been around why this is only available on Windows 10 vs. Windows Server 2016 and other flavors of Windows. The tool is aimed at developers, to provide a wide range of tools and development environments. Ubuntu on Windows allows developers to use both Windows- and Linux-based compilers and command-line tools. I’m sure that as it matures, Microsoft will make it available for Windows Server 2016. As for other desktop flavors of Windows, I’m sure Microsoft is using this as a carrot to get more developers to upgrade to Windows 10.

CTO Advisor 027 – VSAN 6.2 What you need to know

Wikibon has predicted that Server SAN will overtake traditional storage arrays. The latest version, VSAN 6.2, moves a step closer to displacing legacy storage arrays. I’m joined by VMware Principal Architect Rawlinson Rivera (@PunchingClouds). We hit some critical questions about the latest version. Here’s a short list of topics we touched upon.

Click Here to listen 

Subscribe iTunes | RSS

My failed attempt at installing Azure Stack in vCloud Air

Failed labs are a cost of doing business when learning new stuff. Microsoft’s tech preview of Azure Stack created a new opportunity for my lab efforts. It’s rare that I have a lab that’s too big to run in either my home VMware Workstation setup with 32GB of RAM or the 1,000 vCPU hours I get monthly with Ravello Systems. Azure Stack is a big lab. It requires 96GB of RAM and 16 CPU cores, larger than any single host I’ve needed to create. In this post, I document how I came to try vCloud Air and how I eventually gave up.

My first thought was to go with a bare metal cloud service. The obvious choice was Baremetalcloud.com, which I’ve used before. My hesitation was that its biggest single node has 96GB, and I’d already read that the hardware requirements for Azure Stack were strict. Jon Hildebrand had already had some frustrating experiences with bare metal in his lab. I didn’t want to waste the money.

My next option was Ravello Systems. I already have free hours, but Ravello Systems doesn’t support Hyper-V, which is obviously required for Azure Stack. Then I remembered that I have $300 in vCloud Air credit that I’ve never used. The current version of vCloud Air, based on vSphere 5.5, is technically capable of running Hyper-V.

Related post: Read my thoughts on Azure Stack

The Install

My initial challenges were around Windows Server 2016. I initially tried to start with a Windows 2012 image. I quickly discovered that pre-built OS images weren’t going to work. The primary challenge is the size of the system drive, which needs to be a minimum of 200GB. If I were going to get the lab to work, I’d need to create an OS image from scratch.

My first thought was to leverage the latest versions of VMware Fusion or VMware Workstation. Both products support direct upload to vCloud Air. After building the initial OS image, I attempted to upload it via each platform. Both just flat out failed. I didn’t want to attempt to export to OVF, so I set my sights on uploading the Windows Server 2016 ISO to my vCloud catalog. I had challenges, but I eventually figured out the problem, which I documented.

Once I had the ISO uploaded, I had no problem creating a custom image. VMware Fusion even came in handy: the mouse doesn’t work in the console for Windows Server 2016 until after you install VMware Tools, so alternatively, I was able to use Fusion to remote into my installation.

Fast-forwarding to the Azure Stack install, I discovered that Hyper-V wasn’t starting. This wasn’t obvious; my installation script didn’t error out. The script just remained on task 8 of 124. Thanks to help from Jon, I was able to determine Hyper-V wasn’t starting. I knew exactly what the problem was, as I had encountered it way back in VMware Workstation 8 when I ran a nested Windows 8 VM.

The problem is that vCloud Air doesn’t expose the .VMX file in any obvious way. My solution? Go back to my VMware Workstation image, export it to an OVF, and upload that. Based on my previous experience with VMware Workstation uploads and the OVFTOOL, I dreaded the task. I swallowed my pride and gave VMware Workstation one more try. It didn’t work.

So, I went back to the OVFTOOL. The upload would start and fail between 1 and 5 percent completion. William Lam had directions for logging errors, but the tool didn’t generate a log file. After attempting to upload the OVF about ten times, I just finally gave up.

Like every lab, I learned a ton. I received more exposure to vCloud Air than ever before. I now have about as much hands-on-keyboard time with vCloud as I do with AWS. There’s some stuff to like about both.

I don’t know what else to say. I’m a beaten man.
