Jetlag can be brutal. Last week I spoke on a Containers and Virtualization industry panel at Cloud Expo Europe in London and was quoted in Tech Week Europe.
Clearly I ramble a bit when I haven’t had enough sleep!
The point of this (brief) blog post is to clarify the quote and let those of you who’ve reached out to me regarding the quote know a bit more about my thoughts on this.
“What I think is the case in a VM is that the operating systems themselves which are within the VMs tend to have been in place for a while, they’re big and heavy and take space, but part of that heaviness is some experience with security, and it takes place on that OS level.”
I’ve been interested in security since the early 1990s, when I was part of the engineering team working on the 3DES systems built into Windows. From that time and my subsequent experience, I think there are two macro-level observations that hold universally for these technologies:
- Maturity matters. Products that have been hardened by exposure to attack are uniformly better than those which are unproven. Security is often a game of whac-a-mole with improvements coming incrementally as attacks and vulnerabilities evolve over time. If your product/system/toolset hasn’t been hardened by time and exposure, it’s probably got a hole you don’t know about. Engineers are the same way – if they’re not experienced thinking about attack vectors and vulnerabilities, they’re probably not going to do a great job, no matter how smart they might be.
- Security means constant change. That same ever-changing threat environment requires technologies that can change and morph quickly to meet new attack vectors. Rapid patching and repairs are key. Products (and engineering teams) that don’t evolve will eventually lose the security race.
It’s the first of these two items that causes me concern when it comes to containers. As an emerging technology, containers haven’t had enough exposure, and aren’t mature enough, for us to know how secure they really are. I’m not claiming they’re insecure, just that I don’t trust a technology’s security when there’s no historical record to consult in evaluating it.
By the way, you can read more on this topic if you’re interested at:
- Tech Target: Linux container virtualization is on the evolutionary fast track
- NCC: Understanding and hardening Linux containers
When developing in a more traditional environment, with a “thick runtime” OS, engineers can often leverage the OS’s security sophistication and the experience of years of exposure. But containers don’t necessarily inherit the security capabilities of a mature OS. If an engineer doesn’t keep this aspect top-of-mind (and doesn’t have experience with security in general) then problems are likely to arise.
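To make that concrete: a containerized process typically gets none of this hardening for free, and the engineer has to opt in explicitly. The sketch below uses standard Docker options (the image name `myapp` is hypothetical, and the specific values are illustrative, not a definitive hardening recipe):

```shell
# Containers don't inherit a mature OS's hardened defaults; the
# engineer must request restrictions explicitly at launch time.
docker run \
  --read-only \
  --cap-drop ALL \
  --security-opt no-new-privileges \
  --user 1000:1000 \
  --pids-limit 100 \
  myapp
# --read-only         : mount the container's root filesystem read-only
# --cap-drop ALL      : drop all Linux capabilities the process would
#                       otherwise retain
# no-new-privileges   : block setuid-style privilege escalation
# --user 1000:1000    : run as an unprivileged UID:GID, not root
# --pids-limit 100    : cap the process count (fork-bomb mitigation)
```

The point isn’t these particular flags; it’s that each restriction is something the engineer must know to ask for, which is exactly where inexperience with security shows.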
But containers have the potential to be more secure, and to mature faster, than a heavy OS environment because of aspect #2 – they can evolve more quickly. Monolithic operating systems take months or years to revise, while container tech can iterate much faster. A solid, mature security micro-service model might also deliver considerable benefits. Developing for a container world will, however, require engineers to be familiar with security in general, with the container-specific aspects of security, and with how to include them in their projects. Not every engineer will rise to the occasion.
So, I’m actually cautiously optimistic about container security. I think our world will include containers, other slim runtime environments, and heavy OS’s side by side for the foreseeable future. And I think that’s good for everyone – and good for security in the long run.