Do we have a Right To Repair for Data & IP?

Right to Repair is the idea that when you buy a product, you’re able to fix it. We’ve been building products lately that don’t have that inherent part of the contract.

In this episode, we really took Right to Repair to another level, talking about Intellectual Property (IP) and ownership of that IP in software components.

This topic impacts every single business and every single consumer!

Transcript: otter.ai/u/7EVT0C9T0KDCcUIWBsGYKHdiT6Y
Photo by Blue Bird from Pexels [ID 7218008]

That’s Not Terraform Orchestration!

This episode is about Terraform orchestration, what some people might call a TACO, in which we actually tried to do cloud provisioning in an orchestrated way. But this is a really challenging thing to do!

Orchestration is really hard, so our discussion kept coming back to the point that this isn’t orchestration at all: it’s Infrastructure as Code and management.

We need to find a consistent way to run a workflow or a control plane. We’re not even getting to the point where we’re coordinating or orchestrating aspects of different systems and using remote or API-driven infrastructure.

Even if you use Terraform, you will get a lot out of this discussion!

Transcript: otter.ai/u/Ohbfr0Uprm95WYYI4357IdUodOU
Photo by Gabriel Santos Fotografia from Pexels [ID 2102568]

Distributed Infrastructure

With Distributed Infrastructure and the Edge, we cover the challenges of managing applications that are, by definition, spread across heterogeneous infrastructure.

Distributed Control is designed to control systems that are not in cloud data centers but instead have localized compute and storage. But then how do we manage it?

We discussed details about how these systems get built, and kept coming back to “do we need to have localized processing?” If we do, how do we manage it?

Transcript: otter.ai/u/BkxvOrQMmmQiYQpxa-OogrMyNNw
Photo by KEHN HERMANO from Pexels [ID 3881034]

Edge Impact of Digital Twins

We talk about Digital Twins and the Edge with Simon Crosby from Swim.AI. They are literally building digital twins in edge locations so he has a lot to share.

We work to expand and understand how Simon’s experience translates into general cases and what we’re seeing at the edge. The systems that we’re trying to build are at the intersection of models and “connectedness” of all the components for the edge.

These designs don’t fit traditional models, and that is what makes the edge unique. The edge is not a single application but a connected system that is going to have to emerge to make all this work together.

Transcript: otter.ai/u/-uFSclONwRhhc4QlFywiSJAIF10
Photo by Dmitriy Ganin from Pexels [ID 7538096]

Topics for a Security Training Course

This DevOps Lunch and Learn was about security practices. Specifically, we built an outline of topics in security that we think are necessary for developers and operators to build secure applications.

We basically built a week long course curriculum!

As we go through the course curriculum, we walk through who needs to know this information and why.

If you want to see all of the detail here, please see: docs.google.com/document/d/1x5QLP…ng=h.c2phqte5q4pl

Transcript: otter.ai/u/UyMAmiHi-rRAreMa0FjxaVNomhQ
Photo by PhotoMIX Company from Pexels [ID 226746]

Smaller Nodes? Just the Right Size for Docker!

Container workloads have the potential to redefine how we think about scale and hosted infrastructure.

Last Fall, Ubiquity Hosting and RackN announced a 200 node Docker Swarm cluster as phase one of our collaboration. Unlike cloud-based container workload demonstrations, we chose to run this cluster directly on bare metal.

Why bare metal instead of virtualized? We believe that metal offers additional performance, availability and control.  

With the cluster automation ready, we’re looking for customers to help us prove those assumptions. While we could simply build on many VMs, our analysis is that a lot of smaller nodes will distribute work more efficiently. Since there is no virtualization overhead, lower RAM systems can still give great performance.

The collaboration with RackN allows us to offer customers a rapid, repeatable cluster capability. Their Digital Rebar automation works on a broad spectrum of infrastructure, allowing our users to rehearse deployments on cloud, quickly change components, and iteratively tune the cluster.

We’re finding that these dedicated metal nodes have much better performance than similar VMs in AWS. Don’t believe us? You can use Digital Rebar to spin up both and compare. Since Digital Rebar is an open source platform, you can explore and expand on it.

The Docker Swarm deployment is just a starting point for us. We want to hear your provisioning ideas and work to turn them into reality.

2015 Container Review

It’s been a banner year for container awareness and adoption, so we wanted to recap 2015. For RackN, container acceleration is near to our heart because we both enable and use containers in fundamental ways. Look for Rob’s 2016 predictions on his blog.

The RackN team has truly deep and broad experience with containers in practical use.  In the summer, we delivered multiple container orchestration workloads including Docker Swarm, Kubernetes, Cloud Foundry, StackEngine and others.  In the fall, we refactored Digital Rebar to use Docker Compose with dramatic results.  And we’ve been using Docker since 2013 (yes, “way back”) for ops provisioning and development.

To make it easier to review that experience, we are consolidating a list of our container related posts for 2015.

General Container Commentary

RackN & Digital Rebar Related

From Start to Scale: learn faster with heterogeneous deployments

Why mix VMs and Physical? Having a consistent deploy approach can dramatically speed learning cycles that result in better scale ops. I would never deploy production OpenStack on VMs but I strongly recommend rehearsing that deployment on VMs hundreds of times before I touch metal.

Over the last two months, the RackN team redefined “heterogeneous” infrastructure in Digital Rebar from being “just” multi-vendor hardware to include any server resource from containers and Vagrant/Virtualbox to clouds like AWS or Packet. To support this truly diverse range, there were both technical and operational challenges to overcome.

The technical challenge rises from the fundamental control differences between cloud and physical infrastructure. In cloud, infrastructure is much more prescribed – you cannot change most aspects of your system and especially not your network interfaces or IPs. To provision hardware efficiently, we had to establish control over the very things that Cloud systems manage for you. 

That management diversity exercised the full extent of the Digital Rebar “functional ops” architecture.

Over the last year, we’ve been unwinding baked-in control assumptions from earlier versions of Digital Rebar. That added flexibility allows Digital Rebar to mix control APIs for infrastructure ranging from Cobbler to Docker, Vagrant and AWS. Since we could already cope with heterogeneous control APIs using Digital Rebar’s unique functional ops design, we retained the ability to mix and match container, virtual and physical infrastructure.
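To make that “functional ops” flexibility a little more concrete, here is a minimal sketch in Go of how heterogeneous control APIs can sit behind a single provisioning interface. The provider names and fields are hypothetical illustrations of the pattern, not actual Digital Rebar code:

```go
package main

import "fmt"

// Node is a minimal description of a provisioned resource,
// whether it lives on metal, in a VM, or in a container.
type Node struct {
	Name    string
	Address string
}

// Provider abstracts a control API: AWS, Vagrant, Docker,
// Cobbler-managed metal, and so on. Each implementation hides
// how that infrastructure is actually driven.
type Provider interface {
	Allocate(name string) (Node, error)
	Release(n Node) error
}

// dockerProvider is a stand-in for a container-backed provider.
type dockerProvider struct{}

func (dockerProvider) Allocate(name string) (Node, error) {
	// A real implementation would call the Docker API here.
	return Node{Name: name, Address: "172.17.0.2"}, nil
}

func (dockerProvider) Release(n Node) error { return nil }

// awsProvider is a stand-in for a cloud-backed provider.
type awsProvider struct{}

func (awsProvider) Allocate(name string) (Node, error) {
	// A real implementation would call EC2 here.
	return Node{Name: name, Address: "10.0.1.15"}, nil
}

func (awsProvider) Release(n Node) error { return nil }

// deploy runs the same workflow regardless of which provider
// supplies the nodes -- the point of a heterogeneous design.
func deploy(p Provider, names []string) ([]Node, error) {
	var nodes []Node
	for _, name := range names {
		n, err := p.Allocate(name)
		if err != nil {
			return nil, err
		}
		nodes = append(nodes, n)
	}
	return nodes, nil
}

func main() {
	for _, p := range []Provider{dockerProvider{}, awsProvider{}} {
		nodes, _ := deploy(p, []string{"node-1", "node-2"})
		fmt.Printf("%T -> %v\n", p, nodes)
	}
}
```

The workflow code stays the same whether the nodes come from containers, VMs or metal; only the provider behind the interface changes.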

The operational challenge was more subtle. We were motivated to make this change by first-hand observations of the fidelity gap. I am a strong believer that container platforms will directly target metal in the next two years. The challenge is how we get there from our current virtualization-focused infrastructure.

It’s easy to look at the completed work as an obvious step forward. Looking over my shoulder, I know that it took years of learning and perseverance to create a platform that was flexible enough to handle both extremes of control. Even more important was understanding why it was so important for a physical scale deployment platform to provide ops fidelity for developers too.

With the infrastructure work behind us, we’re seeing Digital Rebar deliver real operational transformation. We want to help IT embrace containers and immutable infrastructure without having to discard the hard won battles installing cloud and traditional infrastructure. Most critically, we hope that you’ll join our open community and share your operational journey with us.

Faster, Simpler AND Smaller – Immutable Provisioning with Docker Compose!

Nearly 10 TIMES faster system resets – that’s the result of fully enabling a multi-container immutable deployment on Digital Rebar.

I’ve been having a “containers all the way down” month since we launched Digital Rebar deployment using Docker Compose. I don’t want to imply that we rubbed Docker on the platform and magic happened. The RackN team spent nearly a year building up the Consul integration and service wrappers for our platform before we were ready to fully migrate.

During the Digital Rebar migration, we took our already service-oriented code base and broke it into microservices. Specifically, the Digital Rebar parts (the API and engine) now run in their own container, and each service (DNS, DHCP, Provisioning, Logging, NTP, etc.) also has a dedicated container. Likewise, supporting items like Consul and PostgreSQL are, surprise, managed in dedicated containers too. Altogether, that’s over nine containers, and we continue to partition out services.

We use Docker Compose to coordinate the start-up and Consul to wire everything together. Both play a role, but Consul is the critical glue that allows Digital Rebar components to find each other. These were not random choices. We’ve been using a Docker package for over two years and using Consul service registration as an architectural choice for over a year.

Service registration plays a major role in the functional ops design because we’ve been wrapping datacenter services like DNS with APIs. Consul provides the separation between providing and consuming a service. Our previous design required us to track the running service. This worked until customers asked for pluggable services (and every customer needs pluggable services as they scale).
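To illustrate that separation, here is a minimal sketch in Go of the general pattern: a service container registers itself with a local Consul agent (assumed to be listening on the default 127.0.0.1:8500), and a consumer asks Consul where the service runs instead of tracking it directly. The “dns” service name and port are hypothetical examples, not the actual Digital Rebar wrapper code:

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

const consul = "http://127.0.0.1:8500" // assumes a local Consul agent

// register tells the local Consul agent that this container
// provides a named service on the given port.
func register(name string, port int) error {
	payload, _ := json.Marshal(map[string]interface{}{
		"Name": name,
		"Port": port,
	})
	req, err := http.NewRequest(http.MethodPut,
		consul+"/v1/agent/service/register", bytes.NewReader(payload))
	if err != nil {
		return err
	}
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("register failed: %s", resp.Status)
	}
	return nil
}

// discover asks Consul where a service currently runs, so the
// consumer never needs to know which container provides it.
func discover(name string) ([]string, error) {
	resp, err := http.Get(consul + "/v1/catalog/service/" + name)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()

	var entries []struct {
		Address     string
		ServicePort int
	}
	if err := json.NewDecoder(resp.Body).Decode(&entries); err != nil {
		return nil, err
	}
	var addrs []string
	for _, e := range entries {
		addrs = append(addrs, fmt.Sprintf("%s:%d", e.Address, e.ServicePort))
	}
	return addrs, nil
}

func main() {
	if err := register("dns", 8053); err != nil { // hypothetical service and port
		fmt.Println("register:", err)
		return
	}
	addrs, err := discover("dns")
	if err != nil {
		fmt.Println("discover:", err)
		return
	}
	fmt.Println("dns service available at:", addrs)
}
```

Because consumers only ever ask Consul by name, the implementation behind a service can be replaced or made redundant without touching anything that depends on it.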

Besides making the environment faster to reset, there are several additional wins:

  1. more transparent in how it operates – it’s obvious which containers provide each service and easy to monitor them as individuals.
  2. easier to distribute services in the environment – we can find where the service runs because of the Consul registration, so we don’t have to manage it.
  3. possible to have redundant services – it’s easy to spin up new services even on the same system
  4. make services pluggable – as long as the service registers and there’s an API, we can replace the implementation.
  5. no concern about which distribution is used – all our containers are Ubuntu user space but the host can be anything.
  6. changes to components are more isolated – changing one service does not require a lot of downloading.

Docker and microservices are not magic but the benefits are real. Be prepared to make architectural investments to realize the gains.