Eric Fouarge on Open Source Tools in Cloud, Business Needs and Microservices, and Reality of Serverless

Joining us this week is Eric Fouarge, CTO at Root Level Technology.

About Root Level

Root Level Technology is a cloud strategy partner. We are the seamless extension of your development and programming teams. We provide a concierge-style support experience for every client, no matter the size. We are an agile shop at the core, with a focus on Continuous Integration and Continuous Deployment. We are the hold-your-hand, wake-us-up-at-midnight, 5-star, real-deal support clients have always wanted.

Highlights:

  • Discussion on Tools for Cloud Native
  • Business Needs and Microservices
  • Issues with New, Rapidly Developing Tools
  • Impact of Cloud Native on Operations/Development
  • Serverless Alternatives to Lambda, etc.

Time-Line

  • 0 min 49 sec: Introduction of Guest
  • 2 min 32 sec: Suite of Tools Being Used
    • Open Source Tools vs Proprietary Tools
  • 6 min 34 sec: Tradeoffs in Complexity and Tools
    • Time to Production
  • 7 min 43 sec: Do Customers bring their own Devs to Project?
  • 8 min 49 sec: Business Needs Driving Architectural Decisions at Microservice Level
    • Warning Signs in this Process
  • 12 min 04 sec: Is Cloud Native/DevOps Tooling Different?
    • Change Rate on New Tooling – e.g. Istio
    • What are some Best Practices in this Space?
    • How do you sell the Capabilities?
  • 18 min 34 sec: Return of Process Development in the Enterprise via Cloud Native
  • 19 min 52 sec: Serverless Alternatives to Lambda, etc
    • Kubernetes Options vs Vendor Options
    • Serverless does not eliminate the basics
    • Value of Service Mesh
  • 27 min 26 sec: How to Learn Service Mesh
    • Not Trivial Technology to Learn and Use
    • Rise of Kubernetes being Foundational
  • 31 min 30 sec: Orchestration of Building Cloud Apps
    • Terraform Issues in Production
  • 33 min 21 sec: Wrap-Up

Oliver Gould on Service Mesh, Containers, and Edge

Joining us this week is Oliver Gould, CTO of Buoyant, which provides a service mesh abstraction over microservices and Kubernetes. Oliver and Rob also take a look at how applications are managed at the edge and highlight the future roadmap for Conduit.

Highlights
• Defining microservices and Kubernetes from Buoyant viewpoint
• Service mesh abstractions at a request level (load balance, get, put, …)
• Conduit overview – client-side load balancing
• Service mesh tool comparisons
• Edge Computing discussion from service mesh view

Haseeb Budhani on App Development for Edge and Cloud Best Fit

Joining us this week is Haseeb Budhani, Co-Founder and CEO, Rafay Systems.

Highlights
• Building an application deployment platform as close to the Edge as possible
• Supporting containers, microservices (moving latency-sensitive parts of the app to the Edge) and availability of infrastructure
• Definition of Edge to Rafay Systems
• Issues of handling the massive amount of data at the Edge – use cases
• Will Edge suffer from device specific infrastructure needs?
• Application bottlenecks and impact of cloud locations and end user
• Placement control of services is still an open issue based on user requirements
• IT infrastructure ownership and performance issues (IT vs. Operations teams)
• Cloud and Edge are not competitive; they work together to offer applications best fit

Bugs Bunny, Prince and Enabling True Hybrid Infrastructure Consumption

OK – stay with me on this. I’m drawing parallels again. 🙂

Like many from my generation, my initial exposure to classical music and opera came from Bugs Bunny on Saturday mornings (culturally deprived, I know). One of the cartoons I remember well has Bugs trying to get even with the heavy-set opera singer who disrupts Bugs’ banjo playing. To exact his revenge, Bugs infiltrates the opera singer’s concert by impersonating the famous long-hared (hared…get it?) conductor, Leopold Stokowski. He proceeds to force the tenor to hit octaves that structurally compromise the amphitheater, which crumbles and leaves him bruised and battered. Bugs is, as always, victorious.

In examining Bugs’ strategy (let’s assume he actually had one), Bugs took over operations of the orchestra’s musical program to achieve his goal of getting the tenor “in line,” so to speak. As I prepare to head down to the OpenStack Conference in Austin, TX next week, I’m seeing similar, very “Bugs/Leopold-like” patterns develop in the cloud and data center infrastructure space. As organizations decide how to consolidate data centers, containerize apps and move to the cloud, vendors and open source technologies offer value; however, true operational, infrastructure and platform independence is not what it appears to be. For example, once you move your apps off the data center to AWS or VMware and later determine you are paying too much or the workload is no longer appropriate for the infrastructure, good luck replicating the configuration work done in CloudFormation on another cloud or back in the data center. The same rationale applies to other technologies such as converged infrastructure and proprietary private cloud platforms. As the customer, to achieve scale and remove operational pain you must fall in line. That in itself is a big commitment to make in a still-evolving and maturing technology industry and a dynamic business climate.

On an unrelated topic, I was saddened to learn of the passing of Prince this past week. While not a die-hard fan, I liked his music. He was a great composer of songs and had a style all his own. Beyond his music and sheer talent, I admired his business beliefs and deep desire to maintain creative ownership and control of his music and his brand.

Despite his fortune and fame, there was a period in the middle of Prince’s career in which he felt creatively and financially locked in by the big record companies. Once Prince (and the unpronounceable symbol) broke away from Warner Music, he was able to produce music under his own label. This enabled him to create music without a major record label dictating when he needed to produce a new album and what it needed to sound like. In addition, he was now able to market his new recordings to the distribution platform that supported his artistic and financial goals. While still having ties to Warner Music, he was no longer bound by their business practices. Along with starting his own music subscription service, Prince cut deals with Arista, Columbia, iTunes and Sony. Prince’s music production had operational portability, business agility and choice (seven Grammy awards and 100 million record sales also help create that kind of leverage).

While open APIs and containers offer some portability, at RackN we believe they do not offer a completely free-market experience to the cloud and infrastructure consumer. If a business decides it is paying too much for AWS, operational underlay and configuration complexity should not lock it to that infrastructure provider; it should be able to transfer its business to Google, Azure, Rackspace or Dreamhost with ease. We believe technologies that create portable, composable operational workflows drive true infrastructure and platform independence and, as a benefit, reduce business risk. Choosing a platform and being forced to use it are two very different things.

In conclusion, when considering moving workloads to the cloud, adopting converged infrastructure platforms or using DevOps automation tools, consider how you can achieve programmable operational portability and agility. Think about how you can best absorb new technologies without causing operational disruption in your infrastructure. Furthermore, ensure you can accomplish this in a repeatable, automated fashion. Analyze how you can abstract away complex configurations for security, networking and container orchestration technologies and make them adaptable from one infrastructure platform to another. Eliminate configuration versioning as much as possible and make upgrades simple and automated so your DevOps staff does not have to be experts (they are stressed out enough).

If you are attending the OpenStack Conference this week, look me up. While I am far from a music expert, I’ll be happy to share my insights on how to spot a technology vendor that likes to play a purple guitar as opposed to one that eats carrots and plays the banjo.

-Dan Choquette: Co-Founder, RackN

Faster, Simpler AND Smaller – Immutable Provisioning with Docker Compose!

Nearly 10 TIMES faster system resets – that’s the result of fully enabling a multi-container immutable deployment on Digital Rebar.

I’ve been having a “containers all the way down” month since we launched Digital Rebar deployment using Docker Compose. I don’t want to imply that we rubbed Docker on the platform and magic happened. The RackN team spent nearly a year building up the Consul integration and service wrappers for our platform before we were ready to fully migrate.

During the Digital Rebar migration, we took our already service-oriented code base and broke it into microservices. Specifically, the Digital Rebar parts (the API and engine) now run in their own container, and each service (DNS, DHCP, Provisioning, Logging, NTP, etc.) also has a dedicated container. Likewise, supporting items like Consul and PostgreSQL are, surprise, managed in dedicated containers too. Altogether, that’s over nine containers, and we continue to partition out services.

We use Docker Compose to coordinate the start-up and Consul to wire everything together. Both play a role, but Consul is the critical glue that allows Digital Rebar components to find each other. These were not random choices. We’ve been using a Docker package for over two years and using Consul service registration as an architectural choice for over a year.
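
To make that concrete, here is a minimal Docker Compose sketch of the kind of layout described above. It is illustrative only: the service names, image names and ports below are placeholders I made up, not the actual Digital Rebar images.

```yaml
# docker-compose.yml – illustrative sketch; image names and ports are placeholders
version: "2"

services:
  consul:                         # the registry that wires everything together
    image: consul
    command: agent -server -bootstrap -client=0.0.0.0
    ports:
      - "8500:8500"               # Consul HTTP API / UI

  postgres:                       # backing database in its own container
    image: postgres:9.5
    environment:
      POSTGRES_PASSWORD: example

  rebar-api:                      # the Digital Rebar API and engine (placeholder image)
    image: example/rebar-api
    depends_on: [consul, postgres]

  dns:                            # each datacenter service gets a dedicated container
    image: example/rebar-dns
    depends_on: [consul]

  dhcp:
    image: example/rebar-dhcp
    depends_on: [consul]

  provisioner:
    image: example/rebar-provisioner
    depends_on: [consul]
```

With a layout like this, docker-compose up -d brings the whole stack up and docker-compose down tears it back down, which is what makes full environment resets cheap.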

Service registration plays a major role in the functional ops design because we’ve been wrapping datacenter services like DNS with APIs. Consul provides the separation between providing and consuming a service. Our previous design required us to track the running service. This worked until customers asked for pluggable services (and every customer needs pluggable services as they scale).
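
As a rough sketch of that provider/consumer split, a service can announce itself to the local Consul agent over Consul’s standard HTTP API, and consumers can look it up through the catalog or Consul’s DNS interface. The service name, port and health check below are hypothetical examples, not our actual registration payloads.

```sh
# Register a hypothetical DNS service with the local Consul agent
# (PUT /v1/agent/service/register is Consul's standard registration endpoint).
curl -X PUT http://localhost:8500/v1/agent/service/register -d '{
  "Name": "rebar-dns",
  "Port": 53,
  "Check": { "TCP": "localhost:53", "Interval": "10s" }
}'

# Consumers never need to know which container provides the service;
# they ask Consul instead – either over the HTTP catalog API...
curl http://localhost:8500/v1/catalog/service/rebar-dns

# ...or through Consul's DNS interface (port 8600 by default).
dig @localhost -p 8600 rebar-dns.service.consul
```

Swapping in a different DNS implementation then only means registering the replacement under the same name – the consumers never change.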

Besides making the environment faster to reset, there are several additional wins:

  1. More transparent operation – it’s obvious which containers provide each service, and it’s easy to monitor them individually.
  2. Easier to distribute services in the environment – Consul registration tells us where each service runs, so we don’t have to track it ourselves.
  3. Possible to run redundant services – it’s easy to spin up additional instances, even on the same system.
  4. Services are pluggable – as long as a service registers and exposes the API, we can replace the implementation.
  5. No concern about which distribution is used – all our containers run an Ubuntu user space, but the host can be anything.
  6. Changes to components are more isolated – changing one service does not require a lot of downloading.

Docker and microservices are not magic, but the benefits are real. Be prepared to make architectural investments to realize the gains.