We discuss the recent news of Docker looking for more revenue and explore why Docker has money challenges by understanding Kubernetes and comparing it to SQL.
Joining us this week is Michael DeHaan from Vespene.io, a modern, streamlined build and self-service automation platform.
- Vespene Introduction ~ only a 3-month-old project
- Open Source Licensing ~ is there a crisis?
- How best to run an Open Source Project
Sheng Liang, Founder and CEO of Rancher joins the podcast this week.
• Develop on Docker Deploy on Kubernetes
• Edge IT Infrastructure Use Cases and Kubernetes
• Kubernetes as a Platform and Future Enhancements
• Rancher OS Discussion
Rob Hirschfeld talks with David Linthicum, SVP Cloud Technology Partners, on a variety of Cloud related topics including DevOps, Containers, Edge Computing, etc.
CaaPuccino: A frothy mix of containers and platforms.
Check out Krish Subramanian’s (@krishnan) Modern Enterprise podcast (audio here) today for a surprisingly deep and thoughtful discussion about how frothy new technologies are impacting Modern Enterprise IT. Of course, we also take some time to throw some fire bombs at the end. You can use my notes below to jump to your favorite topics.
The key takeaways are that portability is hard and we’re still working out the impact of container architecture.
The benefit of the longer interview is that we really dig into the reasons why portability is hard and discuss ways to improve it. My personal SRE posts and those on the RackN blog describe operational processes that improve portability. These are real concerns for all IT organizations because mixed and hybrid models are a fact of life.
If you are not actively making automation that works against multiple infrastructures then you are building technical debt.
Of course, if you just want the snark, then jump forward to 24:00 minutes in where we talk future of Kubernetes, OpenStack and the inverted intersection of the projects.
Krish, thanks for the great discussion!
Rob’s Podcast Notes (39 minutes)
2:37: Rob intros about Digital Rebar & RackN
4:50: Why our Kubernetes is JUST UPSTREAM
5:35: Where are we going in 5 years? Why Rob believes in Hybrid
- Should not be 1 vendor who owns everything
- That’s why we work for portability
- Public cloud vision: you should stop caring about infrastructure
- Coming to an age when infrastructure can be completely automated
- Developer rebellion against infrastructure
8:36: Krish believes that Public cloud will be more decentralized
- Public cloud should be part of everyone’s IT plan
- It should not be the ONLY thing
9:25: Docker helps create portability; what else creates portability? Will there be a standard?
- Containers are a huge change, but it’s not just packaging
- Smaller units of work is important for portability
- Container schedulers & PaaS are very opinionated, that’s what creates portability
- Deeper into infrastructure loses portability (RackN helps)
- Rob predicts that Lambda and Serverless creates portability too
11:38: Are new standards emerging?
- Some APIs become dominant and create de facto standards
- Embedded assumptions break portability – that’s what makes automation fragile
- Rob explains why we inject configuration to abstract infrastructure
- RackN works to inject attributes instead of allowing scripts to assume settings
- For example, networking assumptions break portability
- Platforms force people to give up configuration in ways that break portability
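The attribute-injection idea in the notes above can be sketched in a few lines: instead of a script hard-coding settings like an interface name, the automation reads attributes injected by the provisioning layer. This is a minimal illustration of the concept; the attribute names and JSON shape below are invented for the example, not Digital Rebar's actual schema.

```python
import json

# Hypothetical injected attributes; a provisioner like Digital Rebar would
# render this per-machine instead of letting scripts assume values.
INJECTED = json.loads("""
{
  "network": {"interface": "eno1", "gateway": "10.0.0.1"},
  "disk": {"boot_device": "/dev/sda"}
}
""")

def configure_network(attrs):
    """Build an interface config from injected attributes rather than
    assuming 'eth0' and a fixed gateway -- the kind of embedded
    assumption that breaks portability across infrastructures."""
    net = attrs["network"]
    return f"iface {net['interface']} inet static\n  gateway {net['gateway']}"

print(configure_network(INJECTED))
```

The same script then runs unmodified on hardware where the NIC is `enp3s0` or the gateway differs, because those facts arrive as data instead of being baked into the automation.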
14:50: Why did Platform as a Service not take off?
- Rob defends PaaS – thinks that it has accomplished a lot
- Challenge of PaaS is that it’s very restrictive by design
- Calls out Andrew Clay Shafer’s “don’t call it a PaaS” position
- Containers provide a less restrictive approach with more options.
17:00: What’s the impact on Enterprise? How are developers being impacted?
- Service Orientation is a very important thing to consider
- Encapsulation from services is very valuable
- Companies don’t own all their IT services any more – it’s not monolithic
- IT Service Orientation aligns with Business Processes
- Rob says the API economy is a big deal
- In machine learning, a business’ data may be more valuable than their product
19:30: Services impact?
- Services have a business imperative
- We’re not ready for all the impacts of a service orientation
- Challenge is to mix configuration and services
- Magic of Digital Rebar is that it can mix orchestration of both
22:00: We are having issues with simple, how are we going to scale up?
- Barriers are very low right now
22:30: Will Kubernetes help us solve governance issues?
- Kubernetes is doing a good job building an ecosystem
- Smart to focus on just being Kubernetes
- It will be chaotic as the core is worked out
24:00: Do you think Kubernetes is going in the right direction?
- Rob is bullish for Kubernetes to be the dominant platform because it’s narrow and specific
- Google has the right balance of control
- Kubernetes really is not that complex for what it does
- Mesos is also good but harder to understand for users
- Swarm is simple but harder to extend for an ecosystem
- Kubernetes is a threat to Amazon because it creates portability and ecosystem outside of their platform
- Rob thinks that Kubernetes could create platform services that compete with AWS services like RDS.
- It’s likely to level the field, not create a Google advantage
27:00: How does Kubernetes fit into the Digital Rebar picture?
- We think of Kubernetes as a great infrastructure abstraction that creates portability
- We believe there’s a missing underlay layer to abstract the infrastructure – that’s what we do.
- OpenStack deployments break because every data center is custom and different – vendors create a lot of consulting work without solving the problem
- RackN is creating composability UNDER Kubernetes so that those infrastructure differences do not break operational automation
- Kubernetes does not have the constructs in the abstraction to solve the infrastructure problem, that’s a different problem that should not be added into the APIs
- Digital Rebar can also then use the Kubernetes abstractions?
30:20: Can OpenStack really be managed/run on top of Kubernetes? That seems complex!
- There is a MESS in the messaging of Kubernetes under OpenStack because it sends the message that Kubernetes is better at managing applications than OpenStack is
- Since OpenStack is just an application and Kubernetes is a good way to manage applications
- Once OpenStack is packaged in containers, Kubernetes can manage it in a logical way
- “I’m super impressed with how it’s working” using OpenStack Helm charts (still needs work)
- Physical environment still has to be injected into the OpenStack on Kubernetes environment
35:05 Does OpenStack have a future?
- Yes! But it’s not the big “data center operating system” future that we expected in 2010. Rob thinks it’s a good VM management platform.
- Rob provides the same caution for Kubernetes. It will work where the abstractions add value but data centers are complex hybrid beasts
- Don’t “square peg a data center round hole” – find the best fit
- OpenStack should have focused on the things it does well – it has a huge appetite for solving too many problems.
For additional conference notes, check out Rob Hirschfeld’s Dockercon retro blog post.
Three Concerns with Immutable O/S on Physical
With a mix of excitement and apprehension, the RackN team has been watching physical deployment of immutable operating systems like CoreOS Container Linux and RancherOS. Overall, we like the idea of a small locked (aka immutable) in-memory image for servers; however, the concept does not map perfectly to hardware.
Note: if you want to provision these operating systems in a production way, we can help you!
These operating systems work on a “less is more” approach that strips everything out of the images to make them small and secure.
This is great for cloud-first approaches where VM size has a material impact in cost. It’s particularly matched for container platforms where VMs are constantly being created and destroyed. In these cases, the immutable image is easy to update and saves money.
So, why does that not work as well on physical?
First: HA DHCP?! The model is not as great a map for physical systems, where operating system overhead is already minimal. It requires orchestrated rebooting of your hardware. It also means that you need a highly available (HA) PXE Provisioning infrastructure (like we’re building with Digital Rebar).
Second: Configuration. These images rely on having cloud-init injected configuration. In a physical environment, there is no way to create cloud-init-like injections without integrating with the kickstart systems (a feature of Digital Rebar Provision). Further, hardware has a lot more configuration options (like hard drives and network interfaces) than VMs. That means that we need a robust, system-by-system way to manage these configurations.
Third: No SSH. Yet another problem with these minimal images is that they are supposed to eliminate SSH. Ideally, their image and configuration provide everything required to run the image without additional administration. Unfortunately, many applications assume post-boot configuration. That means that people often re-enable SSH to use tools like Ansible. If it did not conflict with the very nature of the “do-not-configure-the-server” immutable model, I would suggest that SSH is a perfectly reasonable requirement for operators running physical infrastructure.
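The second concern, injecting per-machine configuration at kickstart time, essentially amounts to template rendering keyed by hardware identity. Here is a minimal sketch of the idea; the template, MAC addresses, and attribute names are illustrative inventions, not Digital Rebar Provision's actual template schema.

```python
from string import Template

# One template, many machines: the provisioner substitutes per-machine
# attributes (discovered from the hardware) at PXE/kickstart time, playing
# the role that cloud-init plays for VMs.
CLOUD_CONFIG = Template("""\
#cloud-config
hostname: $hostname
write_files:
  - path: /etc/network/interfaces.d/boot
    content: "iface $nic inet dhcp"
""")

# Hypothetical inventory keyed by MAC address, the identity a PXE server sees.
machines = {
    "aa:bb:cc:00:00:01": {"hostname": "node01", "nic": "eno1"},
    "aa:bb:cc:00:00:02": {"hostname": "node02", "nic": "enp3s0"},
}

def render(mac):
    """Render the boot-time config for the machine with this MAC."""
    return CLOUD_CONFIG.substitute(machines[mac])

print(render("aa:bb:cc:00:00:01"))
```

The point is that hardware variation (different NIC names, disks, hostnames) lives in the per-machine data, while the immutable image itself stays identical everywhere.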
In summary, even with those issues, we are excited about the positive impact this immutable approach can have on data center operations.
With tooling like Digital Rebar, it’s possible to manage the issues above. If this appeals to you, let us know!
Welcome to the weekly post of the RackN blog recap of all things SRE. If you have any ideas for this recap or would like to include content please contact us at email@example.com or tweet Rob (@zehicle) or RackN (@rackngo)
SRE Items of the Week
DigitalRebar Provision deploys Docker’s LinuxKit Kubernetes
Install Digital Rebar PXE Provision on a Mac OSX System and Test Boot using Virtual Box
Packet Pushers 333 Automation & Orchestration in Networking
While the discussion is all about NETWORK DevOps, they do a good job of decrying WHY the current state of system orchestration is so sad – in a word: heterogeneity. It’s not going away because the alternative is lock-in. They also do a good job of describing the difference between automation and orchestration; however, I think there’s a middle tier of resource “scheduling” that better describes OpenStack and Kubernetes.
Around 5:00 minutes into the podcast, they effectively describe the composable design of Digital Rebar and the rationale for the way that we’ve abstracted interfaces for automation. If you guys really do want to cash in by consulting with it (at 10 minutes), just contact Rob H.
Digital Magazine Launch: Increment On-Call
Increment is dedicated to covering how teams build and operate software systems at scale, one issue at a time. In this, our inaugural issue, we focus on industry best practices around on-call and incident response.
Need PXE? Try out this Cobbler Replacement
We wanted to make open, basic provisioning API-driven, secure, scalable and fast. So we carved out the Provision & DHCP services as a stand-alone unit from the larger open Digital Rebar project. While this Golang service lacks orchestration, this complete service is part of the Digital Rebar infrastructure and supports the discovery boot process, templating, security and an extensive image library (Linux, ESX, Windows, …) from the main project.
TL;DR: FIVE MINUTES TO REPLACE COBBLER? YES.
The project APIs and CLIs are complete for all provisioning functions with good Swagger definitions and docs. After all, it’s third generation capability from the Digital Rebar project. The integrated UX is still evolving.
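To give a flavor of the API-driven model, here is a rough sketch of building a request to register a machine with a provisioning service. The host, endpoint path, and payload fields below are simplified assumptions for illustration, not the exact Digital Rebar Provision schema; the project's Swagger definitions are the authoritative reference.

```python
# Sketch of driving a provisioning API: register a machine record and point
# it at a boot environment. Pure request-building, so it runs without a
# live server.
BASE = "https://provisioner.example.com:8092/api/v3"  # hypothetical host

def create_machine_request(name, mac, bootenv):
    """Return (method, url, payload) for registering a machine that should
    PXE-boot into the named boot environment."""
    payload = {"Name": name, "HardwareAddrs": [mac], "BootEnv": bootenv}
    return ("POST", f"{BASE}/machines", payload)

method, url, body = create_machine_request(
    "node01", "aa:bb:cc:00:00:01", "ubuntu-install")
print(method, url)
print(body["BootEnv"])
```

Because everything is an API call rather than a hand-edited config file, the same registration step can be scripted across hundreds of machines or wired into a CI pipeline.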
Rob Hirschfeld and Greg Althaus are preparing for a series of upcoming events where they are speaking or just attending. If you are interested in meeting with them at these events please email firstname.lastname@example.org.
DevOpsDays Austin : May 4-5, 2017 in Austin TX
- CloudNative vs SRE vs DevOps: The Ultimate Server Cage Match
- Not Actually a DevOps Talk with Michael Cote (May 4 at 4:50pm)
OpenStack Summit : May 8 – 11, 2017 in Boston, MA
- OpenStack and Kubernetes. Combining the best of both worlds – Kubernetes Day
Interop ITX : May 15 – 19, 2017 in Las Vegas, NV
- Open Source IT Summit – Tuesday, May 16, 9:00 – 5:00pm : Rob Hirschfeld to speak
Gluecon : May 24 – 25, 2017 in Denver, CO
- Surviving Day 2 in Open Source Hybrid Automation – May 23, 2017 : Rob Hirschfeld and Greg Althaus
TL;DR: The days of using open software passively from vendors are past, users need to have a voice and opinion about project governance. This post is a joint effort with Rob Hirschfeld, RackN, and Chris Ferris, IBM, based on their IBM Interconnect 2017 “Open Cloud Architecture: Think You Can Out-Innovate the Best of the Rest?” presentation.
It’s a common misconception that open source collaboration means saying YES to all ideas; however, the reality of successful projects is the opposite.
Permissive open source licenses drive a delicate balance for projects. On one hand, projects that adopt permissive licenses should be accepting of contributions to build community and user base. On the other, maintainers need to adopt a narrow focus to ensure project utility and simplicity. If the project’s maintainers are too permissive, the project bloats and wanders without a clear purpose. If they are too restrictive then the project fails to build community.
It is human nature to say yes to all collaborators, but that can frustrate core developers and users.
For that reason, stronger open source projects have a clear, focused, shared vision. Historically, that vision was enforced by a benevolent dictator for life (BDFL); however, recent large projects have used a consensus of project elders to make the task more sustainable. These roles serve a critical need: they say “no” to work that does not align with the project’s mission and vision. The challenge of defining that vision can be a big one, but without a clear vision, it’s impossible for the community to sustain growth because new contributors can dilute the utility of projects. [author’s note: This is especially true of celebrity projects like OpenStack or Kubernetes that attract “shared glory” contributors]
There is tremendous social and commercial pressure driving this vision vs. implementation balance.
The most critical one is the threat of “forking.” Forking is what happens when the code/collaborator base of a project splits into multiple factions and stops working together on a single deliverable. The result is incompatible products with a shared history. While small forks are required to support releases and foster development, diverging community forks can have unpredictable impacts for a project.
Forks are not always bad: they provide a control mechanism for communities.
The fundamental nature of open source projects that adopt a permissive license is what allows forks to become the primary governance tool. The nature of permissive licenses allows anyone to create a new line of development that’s different than the original line. Forks can allow special interests in a code base to focus on their needs. That could be new features or simply stabilization. Many times, a major release version of a project evolves into forks where both old and newer versions have independent communities because of deployment inertia. It can also allow new leadership or governance without having to directly displace an entrenched “owner”.
But forking is expensive because it makes it harder for communities to collaborate.
To us, the antidote for forking is not simply vision but a strong focus on interoperability. Interoperability (or interop) means ensuring that different implementations remain compatible for users. A simplified example would be having automation that works on one OpenStack cloud also work on all the others without modification. Strong interop creates an ecosystem for a project by making users confident that their downstream efforts will not be disrupted by implementation variance or version changes.
Good Interop relieves the pressure of forking.
Interop can only work when a project defines what is expected behavior and creates tests that enforce those standards. That activity forces project contributors to agree on project priorities and scope. Projects that refuse to define interop expectations end up disrupting their user and collaborator base in frustrating ways that lead to forking (Rob’s commentary on the potential Docker fork of 2016).
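The idea that interop only works when expected behavior is defined and enforced by tests can be sketched as a shared conformance suite run against every implementation. The tiny "cloud" interface below is invented purely for illustration; real interop programs (e.g., OpenStack's) are far larger, but the shape is the same.

```python
# A shared conformance suite: the same assertions run against every
# implementation, so divergence shows up as a failing test rather than
# as a broken user deployment.

class CloudA:
    """One hypothetical implementation of the shared interface."""
    def boot(self, image):
        return {"image": image, "state": "running"}

class CloudB:
    """A second, independent implementation."""
    def boot(self, image):
        return {"image": image, "state": "running"}

def conformance_suite(cloud):
    """Expected behavior every implementation must honor."""
    server = cloud.boot("ubuntu-20.04")
    assert server["state"] == "running", "boot must yield a running server"
    assert server["image"] == "ubuntu-20.04", "image choice must be honored"

for impl in (CloudA(), CloudB()):
    conformance_suite(impl)
    print(f"{type(impl).__name__}: conformant")
```

An implementation that changed the return shape or ignored the requested image would fail the suite immediately, which is exactly the early warning that keeps downstream automation portable.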
Unfortunately, interop is not generally a developer priority.
In the end, interoperability is a user feature that competes with other features. Sadly, it is often seen as hurting feature development because new features must work to maintain existing interop standards. For that reason, new contributors may see interop demands as an impediment to forward progress; however, it’s a strong driver for user adoption and growth.
The challenge is that those users are typically more focused on their own implementation and less visible to the project leadership. Vendors have similar disincentives to do work that benefits other vendors in the community. These tensions will undermine the health of communities that do not have strong BDFL or Elders leadership. So, who then provides the adult supervision?
Ultimately, users must demand interop and provide commercial preference for vendors that invest in interop.
Open source has definitely had an enormous impact on the software industry – generally, a change for the better. But that change comes at a cost: it demands involvement, not just from vendors and individual developers, but ultimately from consumers/users as well.
Interop isn’t naturally a vendor priority because it levels the playing field for all vendors; however, vendors do prioritize what their customers want.
Ideally, customer needs translate into new features that have a broad base of consumer interest. Interop ensures that features can be used broadly. Thus interop is an important attribute not only to vendors, but also to the open source communities building the software. This alignment then serves as the foundation upon which (increasingly) that vendor software is based.
Customers should be actively and publicly supportive of interop efforts of projects on which their vendor’s offerings depend. If there isn’t such an initiative in those projects, then they should demand one be started through their vendor partners and in the public forums for the project.
Further, if consumers of an open source project sense that it lacks a strong, focused vision and is wandering off course, they need to get involved and say so, either directly or through their vendor partners.
While open source has changed the IT industry, it also has a cost. The days of using software passively from vendors are past; users need to have a voice and opinion. They need to ensure that their chosen vendors are also supporting the health of the community.
What do you think? Reach out to Rob (@zehicle) and Chris (@christo4ferris) and let us know!
Gene Kim (@RealGeneKim) posted an exclusive Q&A with Rob Hirschfeld (@zehicle) today on IT Technology: Rob Hirschfeld on Containers, Private Clouds, GIFEE, and the Remaining “Underlay Problem.”
Questions from the post:
- Gene Kim: Tell me about the landscape of docker, OpenStack, Kubernetes, etc. How do they all relate, what’s changed, and who’s winning?
- GK: I recently saw a tweet that I thought was super funny, saying something along the lines “friends don’t let friends build private clouds” — obviously, given all your involvement in the OpenStack community for so many years, I know you disagree with that statement. What is it that you think everyone should know about private clouds that tell the other side of the story?
- GK: We talked about how much you loved the book Site Reliability Engineering: How Google Runs Production Systems by Betsy Beyer, which I also loved. What resonated with you, and how do you think it relates to how we do Ops work in the next decade?
- GK: Tell me what about the work you did with Crowbar, and how that informs the work you’re currently doing with Digital Rebar?
Read the full Q&A here.
Container workloads have the potential to redefine how we think about scale and hosted infrastructure.
Last Fall, Ubiquity Hosting and RackN announced a 200 node Docker Swarm cluster as phase one of our collaboration. Unlike cloud-based container workload demonstrations, we chose to run this cluster directly on bare metal.
Why bare metal instead of virtualized? We believe that metal offers additional performance, availability and control.
With the cluster automation ready, we’re looking for customers to help us prove those assumptions. While we could simply build on many VMs, our analysis is that a lot of smaller nodes will distribute work more efficiently. Since there is no virtualization overhead, lower RAM systems can still give great performance.
The collaboration with RackN allows us to offer customers a rapid, repeatable cluster capability. Their Digital Rebar automation works on a broad spectrum of infrastructure, allowing our users to rehearse deployments in the cloud, quickly change components and iteratively tune the cluster.
We’re finding that these dedicated metal nodes have much better performance than similar VMs in AWS. Don’t believe us? You can use Digital Rebar to spin up both and compare. Since Digital Rebar is an open source platform, you can explore and expand on it.
The Docker Swarm deployment is just a starting point for us. We want to hear your provisioning ideas and work to turn them into reality.