I just wrote a quick blog post about the Cisco UCS M5 announcement over at the Pure Storage corporate blog… check it out to learn more about the next generation of UCS.
At Pure //Accelerate in San Francisco I was lucky enough to be joined by Craig Waters to discuss how FlashStack enables massive consolidation of workloads — through density, reliability, performance, and granular scale.
That session is now available for you to review, alongside a bunch of other awesome sessions you might enjoy. So if you missed //Accelerate or just want to learn more about FlashStack in consolidated environments, check it out!
For me, //Accelerate provides the opportunity to hear from industry experts and customers as they share their experience and knowledge. Here are some of my favorites from this year:
If you’re attending Pure’s Accelerate conference next week in San Francisco, and you’re interested in learning more about converged, hyper-converged, and integrated systems… well, you’re in luck. There are a ton of sessions, talks, and discussions planned on the topic. Here I’ll just list a few, focused on the FlashStack solution from Cisco and Pure Storage:
- Keynote address by Liz Centoni of Cisco. Tuesday, 9:30-11:00am. Liz is the Senior VP and General Manager of the Computing Systems Product Group at Cisco, the group that designs and builds Cisco UCS, an integral part of every FlashStack and the most commonly used compute platform for converged infrastructure today. She’ll be providing some insights into the future of the data center and cloud.
- Breakout session “IT Transformation: FlashStack enabling Next generation Data Center for Agility and Versatility”. Tuesday at 11:00, Room 1. You’ll hear experts from Cisco and Pure Storage talk about FlashStack and what sets it apart from other converged infrastructure solutions.
- Breakout session “Application Consolidation with FlashStack Converged Infrastructure”. Tuesday at 2:00, Room 1. This session discusses necessary aspects of infrastructure for massive consolidation — and some new features from Pure that will help.
- Breakout session “Converged Infrastructure: Private Cloud for the IT Generalist”. Tuesday at 11:00, Room 3 and Wednesday at 3:00, Room 3. Hear Vaughn Stewart and Howard Marks discuss CI and HCI and how they compare financially and technically. Very useful if you’re considering converged infrastructure and want to know which approach is best for you.
- Breakout session “Maximizing FlashStack Data Management with Oracle and VMware”. Wednesday at 12:00, Room 1. Experts from Commvault will share insights into data management on FlashStack and the recent Cisco Validated Design with Commvault, Cisco, and Pure Storage.
- Breakout session “Large Scale VDI with FlashStack”. Wednesday at 4:00, Room 4. Learn how FlashStack scales to accommodate 5000 or more desktops in a single configuration — with industry leading simplicity and efficiency.
Visit us also in the Cloud pavilion, the Basecamp pavilion, and elsewhere for shorter sessions on FlashStack — including how to automate and orchestrate a private cloud on FlashStack with Cisco UCS-Director, Virtual Desktops on FlashStack, and a whole lot more.
In addition to physical and virtual FlashStack systems scattered around the event, Cisco will also have a large stand in our Partner area – stop by and ask them about FlashStack’s integration with Cisco CloudCenter!
You can also tweet at me during the event (@joelmckelvey); with all the excitement around the show, there’s an unusually high chance I’ll respond quickly. I can help you find FlashStack experts if you need questions answered. Experts will be standing by the whole time, so use the hashtag #ASKPURE if you need anything!
I’m very pleased to announce that FlashStack from Cisco and Pure Storage was a CRN “Best of Show” finalist for the recent Citrix Synergy event in Orlando. Our friends at Cisco won the final prize for their ACI+NetScaler solution, but I think I can speak for Cisco when I say we’re all very proud that a young solution like FlashStack has managed to make such an impact on the world of VDI.
The solution that won the finalist position was the Cisco Validated Design (CVD) “FlashStack with Cisco UCS and Pure Storage FlashArray//M for 5000 Citrix Xen Desktop Users” and details how to configure a comprehensive, modern, flexible data center design for Citrix XenDesktop and XenApp at scale. This CVD documents FlashStack with VMware ESXi 6.0 running Citrix XenDesktop 7.9 with 5000 Windows 10 users – with both persistent and non-persistent desktops running together under load. If you want to read more about the testing and results, check it out directly on the Cisco website here: 5000 user Citrix XenDesktop CVD. My colleague Kyle blogged about it when it came out.
There are a bunch of solutions for Citrix XenDesktop out there, but what’s different about this CVD is the scale. It details the use of VDI for 5000 or more simultaneous users, and discusses how to scale beyond 5000 to even larger VDI deployments. Scale is key, because VDI and virtualized apps are increasingly found in companies of considerable size as they struggle to deal with the costs, compliance concerns, backup and recovery, and security issues associated with large numbers of physical devices.
One key feature of FlashStack is its ability to scale granularly, in various independent dimensions, without disruption.
- Granular scale: With FlashStack you can add hosts two at a time, up to 160 total hosts. You can grow your storage a few TB at a time, too, and expand storage performance in small increments.
- Independent scale: Scale compute power, network bandwidth, storage capacity, or storage performance independently. No need to scale all at once, you can choose where to grow your FlashStack based on your needs and keep everything simple.
- Non-disruptive scale: When you need more hosts, just plug them into the UCS chassis and UCS Manager will recognize them. When you need more storage, you can simply add modules to the Pure Storage FlashArray. There’s no need to take existing desktops offline just to add more!
FlashStack gives you more than just scale; it’s also highly reliable and provides consistently high performance. With FlashStack your VDI infrastructure can easily handle not just boot storms, but also the recompose or virus scan “storms” that come hand-in-hand with keeping your desktops updated and compliant. And doing this at scale is a major feat.
Sidenote: I’ve been lucky enough to work on a couple of amusing videos for the Citrix Spotlight video competition… I think they’re worth a watch!
In a few weeks I’ll be presenting a session on workload consolidation on FlashStack infrastructure at Pure Storage’s annual //Accelerate user conference. I realize that consolidation isn’t the most exciting topic to discuss… ever since VMware became robust enough for true production deployment the consolidation of workloads has been commonplace. But there remain a HUGE number of workloads, even virtual workloads, that are running in silos or isolated islands of infrastructure. I believe there is very little reason for any workload not to be consolidated and that technology has evolved enough to make fully consolidated data centers a reality.
If you haven’t thought about it in a while, here are three new reasons to think again about consolidation — of every workload:
The least exciting change in consolidated data centers is also one of the most significant: the dramatic improvement in scalability. Case in point: storage capacity and density. While spinning disk dominated the enterprise storage scene, improvements in capacity and density were gradual. With all-flash storage we have entered a new era in which a petabyte or more can fit in 3 rack units. And the price of all-flash storage is lower than ever, too.
What this means for data centers everywhere is that any benefits of running more than one storage platform are now dramatically outweighed by the benefits (in both purchasing and administration) of having a single data platform.
Recent work I did with ESG (Enterprise Strategy Group) on workload consolidation on Pure Storage focused on simulating an active 5000-employee company with virtual desktops, SQL Server, Exchange, SharePoint, etc. In our test scenarios, all these workloads performed flawlessly on a single array. And while flawless operation was the point of our testing, I was struck by how small a storage system was needed to run this whole company. And storage is only getting better.
Nobody wants to manage a million different devices in the data center. Consolidation onto smaller numbers of physical or logical devices results in some simplicity gains… but how the workloads interact can add considerable complexity if not handled correctly. In the storage world we try to minimize the additive complexity of consolidation by providing a simple, easy interface for management and troubleshooting. Vendors like Pure Storage continue to set a high standard for usability.
A good example of a tool to support consolidated workloads is storage QoS. Any given workload on an array without QoS runs the risk of monopolizing array resources and interfering with the operation of other workloads. These “noisy neighbor” workloads have historically been handled using an extremely complicated group of QoS settings that needed regular readjustment. The result is a steep learning curve and a lot of wasted administration time.
With Purity always-on QoS a “noisy neighbor” workload is automatically dampened when array resources become scarce. There’s no direct management required and no settings to adjust. And, because it’s so simple and always-on, consolidated workloads just work – without fuss or tuning. This type of implementation, where key features are totally effortless to use, is the new standard for storage and other data center infrastructure.
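To make the “noisy neighbor” idea concrete, here is a minimal token-bucket throttle in Python. To be clear, this is NOT Purity’s implementation (Pure’s QoS is automatic and internal to the array, with nothing to configure); it is just a generic sketch of the underlying idea of dampening a workload that exceeds its fair share of IOs. The class name and rates are illustrative.

```python
# A minimal token-bucket throttle, as a generic illustration of how a
# "noisy neighbor" workload can be dampened when it exceeds its fair share.
# NOT Purity's implementation; Purity QoS is automatic and always-on.

class TokenBucket:
    def __init__(self, rate_iops: float, burst: float):
        self.rate = rate_iops      # tokens (IOs) replenished per second
        self.capacity = burst      # maximum burst size
        self.tokens = burst        # start with a full bucket
        self.last = 0.0            # timestamp of the last refill

    def allow(self, now: float, io_count: int = 1) -> bool:
        """Refill based on elapsed time, then admit or defer the IO."""
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= io_count:
            self.tokens -= io_count
            return True
        return False   # IO is deferred: the noisy workload has to wait

# A workload limited to 100 IOPS with a burst allowance of 10 IOs:
bucket = TokenBucket(rate_iops=100, burst=10)
admitted = sum(bucket.allow(now=0.0) for _ in range(15))
print(admitted)  # only the 10-IO burst is admitted at t=0; the rest wait
```

The appeal of an always-on design is precisely that none of these knobs (rate, burst) are exposed to the administrator; the array chooses them dynamically.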
Look, when you consolidate workloads you also consolidate risk. With all your workloads in one place, an outage has a major impact. In fact, the impact of an outage is directly proportional to the level of consolidation!
Mitigating this risk in most storage systems involves tiers of storage, each with its own backup policy, RAID configuration, etc. All these configurations take time and effort to set, maintain, and update. If you are managing every array as though it were 20 smaller arrays each with its own settings, you’re not making things easier and you’re probably going to see some confusion and downtime. Making things complicated on the inside doesn’t improve availability…
If you choose, your Pure Storage FlashArray will regularly “phone home” to Pure with telemetry that helps identify and resolve problems before you even see them (around 80% of all trouble tickets are opened by Pure before the array owners notice anything). The arrays also report their uptime. By analyzing this telemetry, we were able to determine that the Pure Storage FlashArray has achieved 99.9999% availability (six nines!) in real production environments. This figure includes not just unplanned downtime (something went wrong) but also planned downtime (virtually eliminated in the FlashArray). This is a system you can trust to run consolidated workloads and to STAY ONLINE so your workloads do, too.
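To put six nines in perspective, here’s a quick back-of-the-envelope calculation of what an availability percentage means in downtime per year. This is illustrative arithmetic only; the 99.9999% figure itself comes from Pure’s fleet telemetry, not from this formula.

```python
# Annual downtime implied by an availability percentage (365-day year).

def annual_downtime_seconds(availability_pct: float) -> float:
    """Return the seconds of downtime per year allowed by a given availability."""
    seconds_per_year = 365 * 24 * 60 * 60
    return seconds_per_year * (1 - availability_pct / 100)

for label, pct in [("three nines", 99.9), ("five nines", 99.999), ("six nines", 99.9999)]:
    print(f"{label} ({pct}%): {annual_downtime_seconds(pct):.1f} seconds/year")
```

Six nines works out to roughly half a minute of downtime per year, planned and unplanned combined.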
Now is the time for consolidation
So there’s never been a better time for workload consolidation, at least from a technological standpoint. The technology has evolved to provide the scalability, stability, and simplicity you need to make consolidation a reality. Seriously, take a look at those workloads you haven’t consolidated yet and ask yourself: why?
I hope you’ll join me in San Francisco on June 12th for the //Accelerate conference. I’d be really excited if you had some consolidation questions you’d like to ask me during my session! You can always reach me on Twitter if you want questions answered: @joelmckelvey
This week I’m in Boston at OpenStack Summit talking with the increasingly large number of IT departments who are looking to open source approaches to building their clouds. This includes a bunch of large service providers, some telcos, and a surprising number of smaller players. Pure Storage is a Corporate Sponsor of the OpenStack Foundation (and has been since 2014) and I’ve been attending these summits since 2013 (Hong Kong). Things have really changed since that time out on Lantau island…
The OpenStack community has evolved. It’s increasingly results-oriented, by which I mean business results, not technical results. This is a good thing. It means OpenStack users are increasingly looking for production results for production deployments. Also, it’s not just large companies that use OpenStack. Indeed, the April OpenStack users survey shows a nice mix of small and large companies:
OpenStack doesn’t exist alone, of course, and there’s increased interest in technologies like containers, IoT, and hybrid solutions.
These charts are cribbed from the OpenStack user survey, by the way. If you want to read more of the user survey, which is very interesting, check it out here:
If you believe the OpenStack community (and they’re pretty reliable on these things) the future is full of diverse architectures and is focused around multi-cloud strategies. I tend to agree. It’s not hard to think of a company that has a private cloud environment for production systems, a second private cloud for development resources, and a public cloud implementation to drive SaaS-like services to customers. I believe many more companies will adopt this multi-cloud approach going forward and that multi-cloud adoption rates will ramp up quickly.
One last thing — Congratulations to Paddy Power Betfair and UK Cloud – winners of the OpenStack Superuser Award at the Boston summit. I was struck by an architecture slide they shared for one of their newer deployments. See anything familiar down towards the bottom?
Small and Mighty! Check out the Paddy Power Betfair and UK Cloud Superuser win details at:
Oh, and if you haven’t seen what Pure Storage has to offer when it comes to OpenStack – including details of our contributions and sponsorship – check out
Over the past few months I’ve been reacquainting myself* with the industry and ecosystem of media production and distribution in preparation for this week’s NAB (National Association of Broadcasters) show in Las Vegas. A lot has changed since I was working in video and television technologies. Here are a few relevant observations from this (big) show.
- Media companies have huge archives of assets. One company I was speaking with has a namespace of media files that exceeds 38 MILLION objects. Media constitutes a staggering amount of data and that amount is growing very rapidly, particularly as media companies build out content in an ever-widening set of formats.
- Video has become really big. 4K and 8K codecs are making individual files very large and very cumbersome to work with. These large data sets stress out every part of the production environment. Data has “gravity”, making it hard to move, hard to store, hard to recover in the event of a failure. A giant 4K feature-length movie file doesn’t make that any easier!
- Production environments aren’t really virtualized, but that doesn’t mean they’re in the stone age. Basically, the demands of media/video production are such that purpose-built hardware (think GPUs) and software (think AVID) are the norm. To an enterprise data center guy (like me), a non-virtualized server environment seems bizarre at first, but once you realize the stresses and demands on these systems the non-virtual approach does make sense. Still, I believe there is room for virtualizing a lot of the systems in a production workflow and some significant benefits to be realized in doing so.
- Media is more like HPC than an enterprise data center. If you had to name a use case with vast numbers of files managed through distributed systems, with storage in communal namespaces on physical systems, you’d probably say HPC, and you’d be right. HPC is about distributed processing and massively parallel work, and it fundamentally approaches infrastructure differently than most enterprises do. Media is the same way and has many of the same problems. My observation is that there are a lot of parallels to work in things like genomics, EDA, and some big data applications. Cool stuff.
There are still a few days of NAB left here in Vegas so if you’re around, come by and visit the Pure Storage stand and say hi. If you want to read more about why Pure is at NAB, check out these media and entertainment use cases for Pure Storage:
* Full disclosure: I used to manage a team of video, RF, and cable television TMEs at Cisco. I’m a lapsed member of SMPTE and SCTE. It’s been a pretty long while, though.
Jetlag can be brutal. Last week I spoke on a Containers and Virtualization industry panel at Cloud Expo Europe in London and was quoted in Tech Week Europe.
Clearly I ramble a bit when I haven’t had enough sleep!
The point of this (brief) blog post is to clarify the quote and let those of you who’ve reached out to me regarding the quote know a bit more about my thoughts on this.
“What I think is the case in a VM is that the operating systems themselves which are within the VMs tend to have been in place for a while, they’re big and heavy and take space, but part of that heaviness is some experience with security, and it takes place on that OS level.”
I’ve been interested in security since the early ’90s, when I was part of the engineering team working on the 3DES systems built into Windows. From that time and my subsequent experience, I think there are two macro-level observations that are universally true about these technologies:
- Maturity matters. Products that have been hardened by exposure to attack are uniformly better than those which are unproven. Security is often a game of whac-a-mole with improvements coming incrementally as attacks and vulnerabilities evolve over time. If your product/system/toolset hasn’t been hardened by time and exposure, it’s probably got a hole you don’t know about. Engineers are the same way – if they’re not experienced thinking about attack vectors and vulnerabilities, they’re probably not going to do a great job, no matter how smart they might be.
- Security means constant change. The same ever-changing security environment requires technologies that can change and morph to meet new attack vectors, quickly. Rapid patching and repairs are key. Products (and engineering teams) that don’t evolve will eventually lose the security race.
It’s the first of these two items that causes me concern when it comes to containers. As an emerging technology, containers haven’t had enough exposure, and aren’t mature enough, for us to know how secure they are. I’m not claiming they’re insecure, just that I don’t trust security claims when there’s no historical record to consult.
By the way, you can read more on this topic if you’re interested at:
- Tech Target: Linux container virtualization is on the evolutionary fast track
- NCC: Understanding and hardening Linux containers
When developing in a more traditional environment, with a “thick runtime” OS, engineers can often leverage the OS’s security sophistication and the experience of years of exposure. But containers don’t necessarily inherit the security capabilities of a mature OS. If an engineer doesn’t keep this aspect top-of-mind (and doesn’t have experience with security in general) then problems are likely to arise.
But containers have the potential to be more secure and more mature than a heavy OS environment because of point #2: they can evolve more quickly. Monolithic operating systems take months or years to revise, while container tech can change much faster. A solid, mature security micro-service model might also deliver considerable benefits. Developing for a container world will, however, require engineers to be familiar with security, the container-specific aspects of security, and how to include them in their projects. Not every engineer will rise to the occasion.
So, I’m actually cautiously optimistic about container security. I think the future of our world includes containers, other slim runtime environments, and heavy OS’s all together for the foreseeable future. And I think that’s good for everyone – and good for security in the long run.
Headed to Austin for the OpenStack Summit? Well, I won’t be there.
Unfortunately, I’ve been called away to an important meeting in Tokyo only a few km from where the last summit was held (oh, the irony!) But, I was lucky enough to have been asked to present an “Intro to OpenStack” session in Europe last week so I’ve been out spreading the gospel. OpenStack is top-of-mind for me this week, too, even though I’m not attending.
Here are some of the topics I’ll be thinking about while you’re all spending time together:
- Fibre Channel Zone Manager. First introduced in Icehouse, initially supporting only Brocade fabrics, the FCZM has really matured. It has supported Cisco fabrics and VSANs since Juno, and Mitaka will include virtual fabric support within the Brocade driver. Fibre Channel is nowhere near as popular as iSCSI with OpenStack, but these developments make it a lot more usable, and I think FC+OpenStack adoption will increase. As a former Cisco guy who now works in storage, I’m looking forward to hearing more about these types of developments.
- Glance Image Cache. “Cloning” a bunch of instances has, historically, been resource intensive. Until Liberty, the OpenStack community had to rely on proprietary, vendor-specific solutions to accelerate instance creation. Since that release there’s been an image caching feature that can leverage the speed of underlying storage systems to “clone” instances from a cached image. I want to know where this is going and how much value we can extract from purpose-built storage in OpenStack environments.
- Cinder developments. There’s a lot going on, even in more “staid” areas of OpenStack around simplicity, efficiency, and manageability. I am very excited to see the direction that OpenStack is going and how Cinder can contribute. Cinder is a vital part of making OpenStack more consumable by large enterprise organizations, and evolution here helps the project as a whole. Patrick East, a Pure Storage guy who recently joined the Cinder Core team tells me there’s a lot of work to be done and I can’t wait to see it!
- Support for array-native replication. Now in Mitaka. Works great with my array and I’d love to see what people think of it.
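If you want to experiment with the image cache mentioned above, here is a hedged sketch of the relevant cinder.conf settings. The option names (`image_volume_cache_enabled` and friends) are real Cinder backend options from the Liberty-era image-volume cache; the backend section name and the size limits are placeholders you’d adapt to your own deployment, and depending on your release you may also need to configure Cinder’s internal tenant IDs.

```ini
[DEFAULT]
enabled_backends = pure-1   # placeholder backend name

[pure-1]
# Enable the image-volume cache introduced in Liberty: the first boot from a
# Glance image creates a cached volume, and later boots clone it on the array.
image_volume_cache_enabled = True
image_volume_cache_max_size_gb = 200   # placeholder cache size limit
image_volume_cache_max_count = 50      # placeholder cached-image count limit
```

On an all-flash backend the clone operation is nearly instantaneous, which is exactly the “leverage the speed of underlying storage” point above.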
So if you’re at the Austin OpenStack Summit, swing by the Pure Storage stand and say hi to the Pure Storage team for me. They’d be happy to help you understand how we continue to support the foundation and talk about our approach.
And if you’re in Austin, have fun. I’m jealous and wish I were there.
Want to learn more about the Fibre Channel Zone Manager and Glance Image Cache? You can log into the Pure Storage community and read a couple of good papers by Simon Dodsley.
It’s a simple fact: storage is one of the primary bottlenecks in any virtualized data center. Slow disk and inconsistent hybrid systems have been holding back virtual infrastructure performance. Storage is often the reason why resource-intensive (think large DBs) or aggressive (think VDI) workloads remain either unvirtualized or siloed.
Flash-based shared storage removes the storage bottleneck and helps you get the most out of your virtualization investments. But to get the absolute most out of ESX, you’ll want to follow some new best practices and configure VMware just a little bit differently.
Here at Pure Storage we’ve developed and published a vSphere best practices guide that’s the best place to go to learn how to maximize your return on your storage and virtualization investments. You should definitely download and read it if you’re going to deploy vSphere on any flash storage, particularly the FlashArray.
We have also collected a number of videos of Pure Storage virtualization gurus and vExperts discussing best-practices topics; if you’re in a hurry and want a quick primer, check them out by clicking the thumbnails. You can also meet 1:1 with these virtualization experts at VMworld 2015: if you’re attending, just register for a meeting and you can ask them your questions directly.