15 years later: the NAB conference

Over the past few months I’ve been reacquainting myself* with the industry and ecosystem of media production and distribution in preparation for the NAB (National Association of Broadcasters) show in Las Vegas this week. A lot has changed since I was working in video and television technologies. Here are a few relevant observations on what I’ve seen during this (big) show.

  • Media companies have huge archives of assets. One company I was speaking with has a namespace of media files that exceeds 38 MILLION objects. Media constitutes a staggering amount of data and that amount is growing very rapidly, particularly as media companies build out content in an ever-widening set of formats.
  • Video has become really big. 4K and 8K codecs are making individual files very large and very cumbersome to work with. These large data sets stress out every part of the production environment. Data has “gravity”, making it hard to move, hard to store, hard to recover in the event of a failure. A giant 4K feature-length movie file doesn’t make that any easier!
  • Production environments aren’t really virtualized, but that doesn’t mean they’re in the stone age. Basically, the demands of media/video production are such that purpose-built hardware (think GPUs) and software (think AVID) are the norm. To an enterprise data center guy (like me), a non-virtualized server environment seems bizarre at first, but once you realize the stresses and demands on these systems the non-virtual approach does make sense. Still, I believe there is room for virtualizing a lot of the systems in a production workflow and some significant benefits to be realized in doing so.
  • Media is more like HPC than an enterprise data center. If you had to name a use case with hundreds of small files that are managed through distributed systems and storage in communal namespaces on physical systems, you’d probably say HPC and you’d be right. HPC is about distributed processing, massively parallel work, and fundamentally approaches infrastructure differently than most enterprises. Media is the same way and has many of the same problems. My observation is that there are a lot of parallels to work in things like genomics, EDA, and some big data applications.  Cool stuff.

There are still a few days of NAB left here in Vegas so if you’re around, come by and visit the Pure Storage stand and say hi. If you want to read more about why Pure is at NAB, check out these media and entertainment use cases for Pure Storage:

http://www.purestorage.com/solutions/industries/media.html

 


 

* Full disclosure:  I used to manage a team of video, RF, and cable television TMEs at Cisco. I’m a lapsed member of SMPTE and SCTE. It’s been a pretty long while, though.


More on Security in Linux Containers

Jetlag can be brutal. Last week I spoke on a Containers and Virtualization industry panel at Cloud Expo Europe in London and was quoted in Tech Week Europe.

http://www.techweekeurope.co.uk/cloud/containers-virtualisation-adoption-cloud-189833

Clearly I ramble a bit when I haven’t had enough sleep!

The point of this (brief) blog post is to clarify the quote and let those of you who’ve reached out to me regarding it know a bit more about my thoughts on this:
“What I think is the case in a VM is that the operating systems themselves which are within the VMs tend to have been in place for a while, they’re big and heavy and take space, but part of that heaviness is some experience with security, and it takes place on that OS level.”


I’ve been interested in security since the early 90’s when I was part of the engineering team working on the 3DES systems built into Windows. From that time and my subsequent experience I think there are two macro-level observations that are universally true about these technologies:

  • Maturity matters. Products that have been hardened by exposure to attack are uniformly better than those which are unproven. Security is often a game of whac-a-mole with improvements coming incrementally as attacks and vulnerabilities evolve over time. If your product/system/toolset hasn’t been hardened by time and exposure, it’s probably got a hole you don’t know about. Engineers are the same way – if they’re not experienced thinking about attack vectors and vulnerabilities, they’re probably not going to do a great job, no matter how smart they might be.
  • Security means constant change. The same ever-changing security environment requires technologies that can change and morph to meet new attack vectors, quickly. Rapid patching and repairs are key. Products (and engineering teams) that don’t evolve will eventually lose the security race.

It’s the first of these two items that causes me concern when it comes to containers. As an emerging technology, containers haven’t had enough exposure for us to know how secure they are. I’m not claiming they’re insecure, just that I don’t trust security claims when there’s no historical record to consult.


When developing in a more traditional environment, with a “thick runtime” OS, engineers can often leverage the OS’s security sophistication and its years of exposure. But containers don’t necessarily inherit the security capabilities of a mature OS. If an engineer doesn’t keep this top-of-mind (and doesn’t have experience with security in general), problems are likely to arise.

But containers have the potential to be more secure and more mature than a heavy OS environment because of item #2 – they can evolve more quickly. Monolithic operating systems take months or years to revise, while container technology can evolve much faster. A solid, mature security micro-service model might also deliver considerable benefits. Developing for a container world will, however, require engineers to be familiar with security in general, with the container-specific aspects of it, and with how to include both in their projects. Not every engineer will rise to the occasion.

So, I’m actually cautiously optimistic about container security. I think containers, other slim runtime environments, and heavy OSes will all coexist for the foreseeable future. And I think that’s good for everyone – and good for security in the long run.

 

Thoughts on OpenStack Summit Austin

Headed to Austin for the OpenStack Summit? Well, I won’t be there.

Unfortunately, I’ve been called away to an important meeting in Tokyo only a few km from where the last summit was held (oh, the irony!). But I was lucky enough to have been asked to present an “Intro to OpenStack” session in Europe last week, so I’ve been out spreading the gospel. OpenStack is top-of-mind for me this week, too, even though I’m not attending.

Here are some of the topics I’ll be thinking about while you’re all spending time together:

  • Fibre Channel Zone Manager. First introduced in Icehouse with support only for Brocade fabrics, the FCZM has really matured: it has supported Cisco fabrics and VSANs since Juno, and Mitaka adds virtual fabric support within the Brocade driver. Fibre Channel is nowhere near as popular as iSCSI with OpenStack, but these developments make it a lot more usable, and I think FC+OpenStack adoption will increase. As a former Cisco guy who now works in storage, I’m looking forward to hearing more about these types of developments.
  • Glance Image Cache. “Cloning” a bunch of instances has, historically, been resource intensive, and the OpenStack community had to rely on proprietary, vendor-specific solutions to accelerate instance creation – until Liberty. Since that release there’s been an image caching feature that can leverage the speed of underlying storage systems to “clone” instances from a cached image. I want to know where this is going and how much value we can extract from purpose-built storage in OpenStack environments.
  • Cinder developments. There’s a lot going on, even in the more “staid” areas of OpenStack, around simplicity, efficiency, and manageability. I am very excited to see the direction OpenStack is going and how Cinder can contribute. Cinder is a vital part of making OpenStack more consumable by large enterprise organizations, and evolution here helps the project as a whole. Patrick East, a Pure Storage guy who recently joined the Cinder core team, tells me there’s a lot of work to be done and I can’t wait to see it!
  • Support for array-native replication. Now in Mitaka. Works great with my array and I’d love to see what people think of it.
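For anyone who hasn’t set up the Fibre Channel Zone Manager, here’s a minimal `cinder.conf` sketch. The driver path is the Brocade zone driver of this era; the fabric name, address, and credentials are placeholders you’d replace for your own environment.

```ini
# Sketch only: fabric_a and its credentials are illustrative placeholders.
[DEFAULT]
zoning_mode = fabric

[fc-zone-manager]
zone_driver = cinder.zonemanager.drivers.brocade.brcd_fc_zone_driver.BrcdFCZoneDriver
fc_fabric_names = fabric_a
zoning_policy = initiator-target

[fabric_a]
fc_fabric_address = 10.0.0.10
fc_fabric_user = admin
fc_fabric_password = password
```

With `zoning_mode = fabric` set, Cinder asks the zone manager to create and remove initiator-target zones automatically as volumes are attached and detached.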
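The image caching feature mentioned above surfaced in Cinder as the image-volume cache. A minimal `cinder.conf` sketch follows; the backend section name and the size/count limits are illustrative assumptions, not recommendations.

```ini
# Sketch only: the backend name and limits below are placeholders.
[DEFAULT]
# The cache stores its image-volumes under an internal tenant:
cinder_internal_tenant_project_id = PROJECT_ID
cinder_internal_tenant_user_id = USER_ID

[mybackend]
image_volume_cache_enabled = True
image_volume_cache_max_size_gb = 200
image_volume_cache_max_count = 50
```

Once enabled, the first boot from a given image populates the cache; subsequent boots clone from the cached volume, letting the array’s fast cloning do the heavy lifting.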

So if you’re at the Austin OpenStack Summit, swing by the Pure Storage stand and say hi to the Pure Storage team for me. They’d be happy to help you understand how we continue to support the foundation and talk about our approach.

And if you’re in Austin, have fun. I’m jealous and wish I were there.

Want to learn more about the Fibre Channel Zone Manager and Glance Image Cache? You can log into the Pure Storage community and read a couple of good papers by Simon Dodsley.

VMware best practices and all-flash storage

Click to register for a vExpert 1:1 at VMworld

It’s a simple fact — storage is one of the primary bottlenecks in any virtualized data center.  It’s slow disk and inconsistent hybrid systems that have been holding back VI performance.  Storage is often the reason why resource-intensive (think large DBs) or aggressive (think VDI) workloads remain either unvirtualized or siloed.

Flash-based shared storage removes the storage bottleneck and helps you get the most out of your virtualization investments.  But to get the absolute most out of ESX, you’ll want to follow some new best practices and configure VMware just a little bit differently.

VMware vSphere Best Practices Guide

Here at Pure Storage we’ve developed and published a vSphere best practices guide that’s the best place to go to learn how to maximize your return on your storage and virtualization investments.  You should definitely download and read it if you’re going to deploy vSphere on any flash storage, particularly the FlashArray.

We have also collected a number of videos of Pure Storage virtualization gurus and vExperts discussing best practices topics — if you’re in a hurry and want a quick primer, check them out by clicking the thumbnails below.  You can also meet 1:1 with these virtualization experts at VMworld 2015 — if you’re attending, just register for a meeting and you can ask them your questions directly.

Space reclamation (VAAI T10 UNMAP) — Cody Hosterman

End-to-end T10 UNMAP in vSphere 6 — Vaughn Stewart

Storage multipathing in vSphere — Craig Waters

Virtual Machine Disk Types — “Chappy” Chapman

Storage IO Control in vSphere — Vaughn Stewart

Common VDI Questions — Kyle Grossmiller

Hardware locking (VAAI ATS) — “Chappy” Chapman

VAAI XCOPY — Ravi Venkat

Don’t be a bottleneck — Scaling VDI and the benefits of Flash

Scaling VDI deployments can be a messy business. I’ve worked with IT departments with angry users, productivity issues, and outages — all as a result of poor performance in systems that have been scaled past their ability to perform.  In nearly every case, these are smart teams, staffed with VDI experts, and yet they’re still stuck with crippling scalability issues. What gives?

VDI bottlenecks have changed

Issues with scalability change as technology evolves. Since the dawn of the modern age of VDI (we’re talking VMware Horizon View and Citrix XenDesktop here), the biggest challenge with large-scale VDI has been RAM. Many VDI VMs running simultaneously on only a few hosts result in RAM contention and poor VDI performance. In severe cases, swapping to slow disk storage results. The compounded issues of low RAM and slow storage grind large-scale VDI to a halt. Luckily, companies like Cisco (with UCS) and others built mechanisms for adding large amounts of RAM, cheaply, to physical hosts. The cost to provide RAM has dropped, the amount of available RAM has increased, and this bottleneck is, for the most part, no longer an insurmountable limitation.
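To make the RAM math concrete, here’s a back-of-envelope sketch. The host sizes, hypervisor overhead, and per-desktop RAM figures are illustrative assumptions, not measurements from any real deployment.

```python
# Rough VDI density estimate: how many desktops fit in a host's RAM?
# All numbers below are illustrative assumptions.

def desktops_per_host(host_ram_gb, hypervisor_overhead_gb, ram_per_desktop_gb):
    """Desktops that fit without RAM overcommit (integer floor)."""
    usable = host_ram_gb - hypervisor_overhead_gb
    return usable // ram_per_desktop_gb

# An older 96 GB host vs. a dense modern blade with 768 GB:
print(desktops_per_host(96, 8, 2))    # modest density on the old host
print(desktops_per_host(768, 8, 2))   # far higher density with cheap, plentiful RAM
```

The point of the sketch is the ratio: an order-of-magnitude jump in affordable host RAM is what moved the bottleneck off memory and onto the next weakest link.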

The next bottleneck: Slow Storage

Of course, nobody can remove ALL bottlenecks from a system. When you resolve the worst bottleneck, it simply exposes the next worst. In the case of VDI, the next bottleneck is usually slow shared storage. Large physical hosts (with lots of RAM and CPU) connected to slow spinning-disk arrays across a low-bandwidth SAN quickly become the limiting factor in VDI scalability. Even the most powerful of spinning-disk arrays fails to keep up with boot storms, VM recompose cycles, and performance-hungry VDI users. The biggest symptoms of slow storage are long wait times, slow boot times, and general desktop latency. Users hate these! Imagine waiting 10 minutes to boot your desktop, then waiting seconds or minutes after clicking an icon for the application to load – not just frustrating, but also costly in terms of person-hours and productivity.
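A similarly rough sketch shows why boot storms crush spinning disk. The per-desktop boot IOPS and per-spindle IOPS figures are illustrative assumptions, not benchmark results.

```python
# Boot-storm arithmetic: aggregate demand vs. spinning-disk supply.
# All numbers below are illustrative assumptions.

def boot_storm_iops(desktops, iops_per_boot):
    """Aggregate IOPS if every desktop boots at once."""
    return desktops * iops_per_boot

def spindles_needed(total_iops, iops_per_spindle):
    """Rough count of disks to satisfy that load (ceiling division)."""
    return -(-total_iops // iops_per_spindle)

demand = boot_storm_iops(desktops=1000, iops_per_boot=50)
print(demand)                                   # aggregate boot-storm IOPS
print(spindles_needed(demand, iops_per_spindle=180))  # disks a 15K-RPM array would need
```

Even with generous per-spindle numbers, a modest boot storm demands hundreds of spindles, which is exactly the kind of burst flash absorbs without breaking a sweat.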

Things Get Easier with…  the Power of Flash!

With high-performance, low-latency, high-bandwidth all-flash Pure Storage systems, the storage bottleneck to VDI scalability is removed. The problem of slow, unresponsive SAN storage is simply eliminated. With all-flash storage, boot storms and other surges in storage consumption over the course of a day or week are easily handled without performance degradation or other user impacts. And common day-to-day tasks (boot-up, application access, file saving, graphics rendering) become faster and smoother, often providing a better experience than a physical desktop thanks to the extensive data center resources available to users. All that flash performance enables scalability to previously unheard-of levels on a single infrastructure instance. In fact, the existing best practice of scaling a single instance to no more than 5,000 desktops is, in my opinion, no longer necessary. The power of modern all-flash storage enables a much higher consolidation ratio, even with a conservative approach to desktop failure domains and high availability.

Not All Flash is Created Equal

But not every all-flash storage system is equal in the context of VDI. Consistent, low-latency, high performance is everything to VDI users – it defines their experience – but not every all-flash array can provide it. In particular, outages due to disruptive maintenance are a concern, and disruptive scaling is another: many existing all-flash systems require downtime to add either capacity or performance (simply unacceptable!). Not so with Pure Storage. Our systems maintain 100% performance under load, during failure conditions, and during upgrades. So you can confidently add VMs up to the limits of the system you have, knowing that getting more storage for those VMs means a simple, non-disruptive capacity upgrade.

To learn more about Pure Storage and Virtual Desktop Infrastructure, please visit the following:

Learning more about Virtualization and All-Flash Storage

Earlier this year VMware announced the release of Horizon 6, the new software suite that includes VMware View. Since that time I’ve been doing a number of webinars for VMware and Pure customers that explain Horizon View and the benefits that Pure provides in virtual server and virtual desktop environments.

Check out my webinar with Sachin Sharma of VMware discussing Horizon 6 on Pure

The immediate benefits of Pure Storage for VDI are clear and easy to understand: all-flash storage removes the biggest performance bottleneck in VDI systems. Users actually love VDI desktops when the experience is consistent and high-performing. Pure delivers the storage that makes great VDI possible.

Of course, there are now lots of vendors beyond Pure that promise great VDI performance. Some of them (definitely not all!) can actually deliver that performance today.

But speed isn’t all you need for VDI. You also need reliability, scalability, and a reasonable price point. To date, only Pure really delivers on all these features while providing the excellent experience that comes from all-flash arrays.

If you want to learn more, definitely visit Pure’s webinar page and sign up for one of my upcoming sessions.

Virtualization Field Day 3 — Pure Storage

Here at Pure Storage we had the privilege of hosting the Virtualization Field Day 3 team last week.  It was a great event, despite the vicious Nerf battle that occurred, and the VFD3 guys were good enough to film and stream it.  Here’s the link to the videos, which include a great discussion by Vaughn Stewart and an appearance by me as the demo guy.

http://techfieldday.com/appearance/pure-storage-presents-at-virtualization-field-day-3/


Pure Storage and the people who work there

Those of you who follow me via social media or other means may have noticed that I’ve “gone orange”.

 

Yes, as of January 2014 I’ve joined Pure Storage to work on Virtualization and Cloud solutions. It’s an exciting role at an exciting company.  I’m looking forward to the ride!

Check out this video we filmed a few weeks ago.  In it, Neil Vachharajani (@nvachhar) explains to me the magic of Pure Storage and flash.