Joining us this week is Sarbjeet Johal, Principal Advisor, The Batchery.
About The Batchery: Founded in 2015, The Batchery is a Berkeley-based global incubator for seed-stage entrepreneurs ready to take their startup to the next level. We are a community of veteran investors and advisers ready to provide you with ideas, insights, and networks. Our partnerships with law firms, technology providers, and other startup services mean that you can start building your company the minute you join us.
Highlights:
• Latest in Data Centers and Hybrid Clouds
• Amazon's Outposts Announcement and re:Invent Thoughts
• Design Approaches of Cloud and Future Technology
Welcome to the final L8istSh9y Podcast for 2017 with a recap of Rob Hirschfeld’s predictions for 2017 (2016 Infrastructure Revolt makes 2017 the “year of the IT Escape Clause”) as well as a look ahead into 2018. Key topics covered in the podcast:
Hybrid is Reality; How do I Cope with it?
Site Reliability Engineering; People are Just Doing it
Bare Metal to Immutable Images
Virtualization Decline with Bare Metal Growth
2018 is not the Year of Serverless
Edge Computing Still Not Ready for Prime Time
OpenStack Foundation as Open Infrastructure Group
I’m a regular participant on BWG Roundtable calls and often extend those discussions one-on-one. This post collects questions from one of those follow-up meetings, where we explored how data center markets are changing based on new capacity, and also the impact of cloud.
We both believe in the simple answer: “it’s going to be hybrid.” But we also feel that this answer does not capture the real challenges that customers are facing.
Rob: I know that we’re building a lot of data center capacity. So far, it’s been really hard to move operations to new infrastructure and mobility is a challenge. Do you see this too?
Haynes: Yes. Creating a data center network that is both efficient and affordable is challenging. A couple of key data center interconnection providers offer this model, but few companies are in a position to truly leverage the node-cloud-node model, where a company leverages many small data center locations (colo) that all connect to a cloud option for the bulk of their computing requirements. This works well for smaller companies with a spread-out workforce, or brand new companies with no legacy infrastructure, but the Fortune 2000 still have the majority of their compute sitting in-house in owned facilities that weren’t originally designed to serve as data centers. Moving these legacy systems is nearly impossible.
Rob: I see many companies feeling trapped by these facilities and looking to the cloud as an alternative. You are describing a lot of inertia in that migration. Is there something that can help improve mobility?
Haynes: Data centers are physical presences to hold virtual environments. The physical aspect can only be optimized when a company truly understands its virtual footprint. IT capacity planning is key to this. System monitoring and usage analytics are critical to making growth and consolidation decisions. Why isn’t this being adopted more quickly? Is it cost? Is it the difficulty of implementing it in complex IT environments? Is it fear of the unknown?
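The usage-analytics point above can be made concrete with a toy example. The sketch below is purely illustrative (the hosts, readings, and the 20% threshold are invented assumptions, and no specific monitoring product is implied): given per-host utilization samples, it flags hosts whose peak utilization is low enough that their workloads could be consolidation candidates.

```python
# Toy capacity-planning sketch: flag hosts whose peak CPU utilization
# stays low enough that their workloads are consolidation candidates.
# The 20% threshold and the sample data are illustrative assumptions.

def consolidation_candidates(samples, peak_threshold=0.20):
    """samples: dict mapping host name -> list of utilization readings (0..1).
    Returns hosts whose peak utilization never exceeds the threshold."""
    return sorted(
        host for host, readings in samples.items()
        if readings and max(readings) <= peak_threshold
    )

samples = {
    "legacy-db-01": [0.55, 0.61, 0.48],   # busy: keep where it is
    "file-srv-02":  [0.05, 0.08, 0.11],   # idle: consolidation candidate
    "app-srv-03":   [0.12, 0.09, 0.15],   # idle: consolidation candidate
}
print(consolidation_candidates(samples))  # → ['app-srv-03', 'file-srv-02']
```

In practice the readings would come from a monitoring system rather than a hard-coded dict, and peak utilization is only one signal alongside memory, storage, and network, but even this simple filter shows how analytics turn raw usage data into a consolidation shortlist.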
Rob: I think that it’s technical debt that makes it hard (and scary) to change. These systems were built manually or assuming that IT could maintain complete control. That’s really not how cloud-focused operations work. Is there a middle step between full cloud and legacy?
Haynes: Creating an environment where a company maximizes the use of its owned assets (leveraging sale-leasebacks and forward-thinking financing), rather than waiting until end of life and attempting to dispose of them, leads to opportunities to get capital injections early on and move to an OPEX model. This makes the transition to colo much easier, and avoids the large write-down that comes along with most IT transformations. Colocation is an excellent tool if it is properly negotiated because it can provide a flexible environment that can grow or shrink based on your utilization of other services. Sophisticated colo users know when it makes sense to pay top dollar for an environment that requires hyperconnectivity and when to save money for storage and day-to-day compute. They know when to leverage providers for services and when to manage IT tasks in-house. It is a daunting process, but the initial approach is key to getting to that place in the long term.
Rob: So I’m back to thinking that the challenge for accessing all these colo opportunities is that it’s still way too hard to move operations between facilities and also between facilities and the cloud. Until we improve mobility, choosing a provider can be a high stakes decision. What factors do you recommend reviewing?
Haynes: There is an overwhelming number of factors in picking new colos:
Cloud Connectivity Options
Quality of Services
Hazard Risk Mitigation
Comfort with services/provider
Flexibility of spend/portability (this is becoming ever-more important)
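The factors above naturally lend themselves to a weighted-scoring comparison. The sketch below is a hypothetical illustration only (the weights, provider names, and ratings are invented assumptions, not recommendations from the discussion):

```python
# Toy weighted-scoring sketch for comparing colo providers against the
# selection factors above. Weights and ratings are illustrative assumptions.

FACTORS = {
    "cloud_connectivity":     0.30,
    "quality_of_services":    0.25,
    "hazard_risk_mitigation": 0.15,
    "comfort_with_provider":  0.10,
    "spend_flexibility":      0.20,  # "ever-more important" per the discussion
}

def score(ratings):
    """ratings: dict mapping factor -> rating on a 1-5 scale."""
    return sum(FACTORS[f] * ratings[f] for f in FACTORS)

colo_a = {"cloud_connectivity": 5, "quality_of_services": 4,
          "hazard_risk_mitigation": 3, "comfort_with_provider": 4,
          "spend_flexibility": 2}
colo_b = {"cloud_connectivity": 3, "quality_of_services": 4,
          "hazard_risk_mitigation": 4, "comfort_with_provider": 3,
          "spend_flexibility": 5}

print(f"colo_a: {score(colo_a):.2f}, colo_b: {score(colo_b):.2f}")
# → colo_a: 3.75, colo_b: 3.80
```

The value of a scorecard like this is less the final number than forcing the organization to agree on weights up front, before vendor conversations start.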
Rob: Yikes! Are there minor operational differences between colos that are causing breaking changes in operations?
Haynes: We run into this with our clients occasionally, but it is usually because they created two very different environments with different providers. This is a big reason to use a broker. Creating identical terms, pricing models, SLAs, and workflows allows clients to have a lot of leverage when they go to market. A select few of the top cloud providers do a really good job of this. They dominate the markets that they enter because they have a consistent, reliable process that is replicated globally. They also regularly achieve some of the most attractive pricing and terms in the marketplace.
Rob: That makes sense. Process matters for the operators and consistent practices make it easier to work with a partner. Even so, moving can save a lot of money. Is that savings justified against the risk and interruption?
Haynes: This is the biggest hurdle that our enterprise clients face. The risk of moving is risking an IT leader’s job. How do we do this with minimal risk and maximum upside? Long-term strategic planning is one answer, but in today’s world, IT leadership changes often and strategies go along with that. We don’t have a silver bullet for this one – but are always looking to partner with IT leaders that want to give it a shot and hopefully save a lot of money.
Rob: So is migration practical?
Haynes: Migration makes our clients cringe, but the ones that really try to take it on and make it happen strategically (not once it is too late) regularly reap the benefits of saving their company money and making them heroes to the organization.
Rob: I guess that brings us back to mixing infrastructures. I know that public clouds have interconnect with colos that make it possible to avoid picking a single vendor. Are you seeing this too?
Haynes: Hybrid, hybrid, hybrid. No one is the best one-stop shop. We all love 7-Eleven, and it provides a lot of great solutions on the run, but I’m not grocery shopping there. For the same reason, I don’t run into a Kroger every time I need a bottle of water. Pick the right solution for the right application and workload.
Rob: That makes sense to me, but I see something different in practice. Teams are too busy keeping the lights on to take advantage of longer-term thinking. They seem so busy fighting fires that it’s hard to improve.
Haynes: I TOTALLY agree. I don’t know how to change this. I get it, though. The CEO says, “We need to be in the cloud, yesterday,” and the CIO jumps. Suddenly everyone’s strategic planning is out the window and it is off to the races to find a quick-fix. Like most things, time and planning often reap more productive results.
Thanks for sharing our discussion!
We’d love to hear your opinions about it. We both agree that creating multi-site management abstractions could make life easier for IT and more relatable to real estate and finance teams. With all of these organizations working in sync, the world would be a better place. The challenge is figuring out how to get there!