Friday, July 17, 2020

GitOps - Another Marketing Word Mashup?

In recent years, technology marketing has taken a “word mashup” approach to many new terms. We got MicroServices, DevOps, DevSecOps, MLOps, AIOps, ChatOps, and even GitOps. Combining words into marketing terms doesn’t really tell us much about what these new concepts mean. Often the concepts are abstract and not yet fully defined. Yet, much like Agile, they often require a paradigm shift in how people work together to accomplish the same goals they have always strived for.

So as technologists, what are we to make of all these terms? Some are marketing hype, while others have more meaningful concepts underlying them. Often we ignore new terms, but sometimes they warrant a look to see if they really mean something or are just rebranding old concepts. For this article, I want to take a quick (not in-depth) look at GitOps.

Like many other marketing terms, it can be rather abstract. Some consider it to simply mean that you use git as a part of your development and deployment process, while others apply a slightly more granular meaning to the term. Here are some definitions:

GitOps is a paradigm or a set of practices that empowers developers to perform tasks which typically fall under the purview of IT operations. GitOps requires us to describe and observe systems with declarative specifications that eventually form the basis of continuous everything.

— CloudBees

GitOps is a way of implementing Continuous Deployment for cloud native applications. It focuses on a developer-centric experience when operating infrastructure, by using tools developers are already familiar with, including Git and Continuous Deployment tools.

— gitops.tech

GitOps: versioned CI/CD on top of declarative infrastructure. Stop scripting and start shipping.

— Kelsey Hightower

GitOps in short is a set of practices to use Git pull requests to manage infrastructure and application configurations. Git repository in GitOps is considered the only source of truth and contains the entire state of the system so that the trail of changes to the system state are visible and auditable.

— Mario Vázquez, Red Hat OpenShift

Git as a source of truth for desired state of whole system: yes, really, the whole system.

The goal is to describe everything: policies, code, configuration, and even monitored events and version control it all. Keeping everything under version control enforces convergence where changes can be reapplied if at first they didn't succeed.

— Alexis Richardson, Weaveworks

The basic concept of GitOps is that you use git and regular git practices (such as merge/pull requests) to manage and approve changes to the live system. This is the same practice developers already know from managing their source code. For this to be possible, you must be able to describe your “desired” state for the system and then have a mechanism that applies that state to the live system and ensures it doesn’t diverge from that “desired” state (a desired/declarative state engine).

Kubernetes Only?

One of the goals of GitOps is to ensure the desired state is always represented in the live system. Kubernetes provides, out of the box, a way to describe our workloads/services/configurations along with a control plane that works to keep the running state matching the described state. This makes it possible to realize the idea of ensuring no divergence from the state described in git.
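
As a quick illustration of that declarative model, here is a minimal sketch of a Kubernetes Deployment manifest of the kind you would commit to git; the names, image, and port below are hypothetical placeholders:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: example-app              # hypothetical application name
    spec:
      replicas: 3                    # desired state: three running pods
      selector:
        matchLabels:
          app: example-app
      template:
        metadata:
          labels:
            app: example-app
        spec:
          containers:
            - name: example-app
              image: registry.example.com/example-app:1.0.0   # illustrative image reference
              ports:
                - containerPort: 8080

The control plane continually compares this declared state with the running cluster and corrects any drift, for example recreating a pod when one dies so that three replicas are always running.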

So is Kubernetes required for implementing GitOps? No, but it greatly facilitates the implementation, since the desired/declarative state engine is already built in as a core concept. Kubernetes can even be used to control the state of workloads that are not running in Kubernetes at all. Kelsey Hightower presented a demo at DevOps Days 2020 in which he created a Kubernetes controller for a serverless runtime outside the cluster. His controller then managed the state of the target serverless platform, ensuring it matched the state described in his git repo.

Other projects are building on the same concepts, such as Crossplane, which provides Kubernetes controllers for managing cloud infrastructure outside of Kubernetes. This allows you to describe MySQL instances or cloud storage like GCS or S3 using the same kind of domain-specific language (DSL) you use for managing application deployments.
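
To give a feel for it, here is a rough sketch of describing a cloud storage bucket through a Crossplane-style resource. The apiVersion, kind, and field names vary across Crossplane versions and providers, so everything below is an assumption meant to illustrate the idea rather than a copy-paste example:

    apiVersion: storage.gcp.crossplane.io/v1alpha3   # assumed API group/version; check your Crossplane release
    kind: Bucket
    metadata:
      name: example-bucket           # hypothetical bucket name
    spec:
      location: US                   # illustrative GCS settings
      storageClass: STANDARD
      providerRef:
        name: gcp-provider           # hypothetical reference to provider credentials

The point is that the bucket is declared in git just like a Deployment, and a controller in the cluster reconciles the real cloud resource against that declaration.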

So the target resource doesn’t need to be in Kubernetes, but the Kubernetes control plane can be used as a way of managing your application workloads and possibly your infrastructure.

Benefits of GitOps

Well, the benefits lie primarily in how GitOps is implemented, so first we will look at the basic implementation.

The first part of the implementation is having a way to describe your “desired” state (what you want to make a reality in your target environment): the DSL. This is primarily accomplished with YAML and JSON, which declare the state of, or describe, the resources you want, and which are committed to a git repository. Next, the state must be applied to the environment. This is done with a combination of Kubernetes Custom Resource Definitions (CRDs) and controllers and/or operators. The operator is the actor that applies the described state to the runtime environment by pulling the DSL from git and “deploying” it to the environment.
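
As a hedged sketch of what that looks like in practice, here is roughly how an Argo CD Application resource points the in-cluster operator at a git repository; the repository URL, paths, and names are hypothetical:

    apiVersion: argoproj.io/v1alpha1
    kind: Application
    metadata:
      name: example-app
      namespace: argocd
    spec:
      project: default
      source:
        repoURL: https://git.example.com/team/example-app-config.git   # hypothetical config repo
        targetRevision: main         # git branch to track
        path: manifests              # directory holding the YAML DSL
      destination:
        server: https://kubernetes.default.svc   # the cluster the operator runs in
        namespace: example-app
      syncPolicy:
        automated:
          prune: true                # remove resources deleted from git
          selfHeal: true             # revert manual changes that diverge from git

With automated sync enabled, the operator continuously pulls from git and reconciles, so anything that drifts from the repository is reverted; this is the “no divergence” guarantee described earlier.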

This process provides a single source of truth for the desired state of all infrastructure and apps, along with a controlled process that continually reconciles that desired state with the actual running state. It also provides:

  • What can be described can be validated and automated.

  • Observability:

    • A single source of truth, allowing you to see in git what should be in place.

    • Collaboration via merge requests and reviews on all changes.

    • Using MRs to control an approval process for changes to an environment.

  • Auditable compliance for all changes to the cluster:

    • Seeing in git what should be live, plus the history of changes.

  • A consistent rollback process.

  • Disaster recovery: with a newly created cluster, the application state can simply be restored by redeploying from git.

  • A developer-centric process.

  • A fit for the Kubernetes declarative manifest model and “desired state” engine.

  • Secure deployment of workloads to the cluster using a “pull” model rather than a “push” model, meaning that git and CI don’t need access to the cluster; instead, the cluster pulls its state from git.

Wrap-up

GitOps has a set of principles that require people, processes, and tools to fully implement. The core principles of GitOps are:

  • Build declarative configurations for defining all workloads.

  • Require all modifications to use the git review process - i.e. “kubectl” should never be used directly

  • An operator in the cluster should drive the observed cluster state to the desired state declared in git.

Can you rely 100% on just this form of GitOps? I would have to say no. When you start from scratch, some bootstrapping of the environment is necessary before GitOps can take over. For Kubernetes, you need the CRDs applied and the controller/operator installed into a cluster that itself had to be created. This minimal bootstrapping is necessary before GitOps application pulls can be implemented. Tools like Terraform fulfill portions of the GitOps principles by keeping the declared configuration in git and by managing state. I believe this is where Terraform and the various GitOps solutions can be a great match.

This, however, opens up an entirely different topic, so I will end here. I may produce follow-ups if there is interest in further exploring what GitOps is and how to implement it. So please provide comments, questions, and feedback.

Thursday, July 2, 2020

A change in technology and direction

At one point I was very focused on eCommerce development and the IBM WebSphere Commerce platform. Over the last few years, my focus and direction have turned toward open source solutions and technology. I am now deep into projects related to cloud-native architecture and development, as well as DevOps. The landscape has been changing drastically of late, so I am doing all I can to stay in touch with and informed about what is happening around us in the world of software development as software continues to "eat" the world.

There has been a lot of focus on Kubernetes, service meshes, continuous delivery, and the big clouds (Google Cloud Platform, Azure, and AWS), in addition to Agile and DevOps practices. As a result, my posts going forward will shift toward more general development practices and new technology.

Private Open Source Software?

Many “open source” projects benefit in ways that are more community-driven than technical or revenue-based. This is largely due to the practices these projects embrace, such as transparency and inclusiveness, which foster creativity and participation from developers with differing points of view and skill sets. Anyone who needs a bug fixed or a new feature can submit a patch to the code and documentation for review and acceptance by the project’s maintainers. This can mean getting the extras you need into the base product faster than waiting for the regular committers to schedule the task and get to it sometime in the future.

I look at that and see no reason why an organization shouldn’t be able to have those same benefits for the in-house code it does not want to “open source”. I have found that there is a community dedicated to this very concept: inner source. Inner source is a culture and process movement to facilitate collaboration among development teams, similar to open source, except that it pertains only to private repositories within the organization. As we see open source becoming more prevalent in our industry, many organizations are taking notice of the collaborative process involved. Not all software can be open source for every organization, but why shouldn’t those organizations and private projects benefit from open source concepts? Potential benefits include:
  • Facilitating transparency among all development teams in the organization.
  • Reducing dependencies on other teams by submitting MRs for review to the dependent projects as needed.
  • Breaking down silos.
  • Accelerating cross-project learning.
  • Improving documentation.
  • Standardizing processes.
  • Fostering innovation.
  • Releasing passion and creativity.
By encouraging developers to provide code submissions for any of the repositories in the company, challenges could arise:
  • Conflict with the “day” job, as developers may decide to spend more time on other projects than on those their team owns.
  • Silos may not fully break down
A lot has been written on this topic, including a site dedicated to the concepts: https://innersourcecommons.org/. Many of the examples, however, target very large organizations with large, global development staffs. As a result, many of the writings and talks focus on the processes needed to govern an inner source initiative as one would an “open source” project. I wonder, though, whether this is needed for organizations of all sizes. Many open source projects begin and grow organically with an assumed policy. So why can’t there be a policy of inner source, along with some standards for all projects?
From the reading and talks, it seems there is a set of basic needs a project must fulfill to facilitate inner source:
  • Defined ownership within projects
  • A good review process
  • Good documentation
  • Automated testing
  • Coding standards
  • Code quality checks
The list of needs isn’t too large, and arguably all projects should meet it regardless of inner source. So it seems to me that a company could begin to benefit from inner source concepts with a small set of additional requirements (on top of the needs above):
  • A policy of allowing developers to submit MRs to other projects (especially those their products rely on).
  • A manner of introducing the company’s different projects to its developers.
  • A way to foster participation.
It is possible that, through the process of fostering inner source within the company, projects will emerge that may be of value to others. This would begin a conversation about open source, based on a few questions:
  • Do we have something of value to share?
  • Is there a strategic decision policy in place?
  • Are we ready to do open source (capability, culture, governance, etc.)?
In this situation, “inner source” may be a step toward open source by providing an opportunity to work out the overall governing policies that “open source” may require.
I would like to hear your thoughts on this topic in the comments.

Thursday, June 2, 2016

IBM WebSphere Commerce Amplify Notes

Having recently attended the 2016 Amplify conference for Commerce, I thought I would post a couple of notes of interest from conversations and sessions.


  • WebSphere Commerce V8
  • Headless Commerce
  • Commerce On the Cloud

Wednesday, May 18, 2016

IBM Commerce Conference (Amplify) 2016

I am here at the IBM WebSphere Commerce conference this year. It seems the main theme of this year’s announcements is the new marketing-on-the-cloud initiatives.
A couple of notes:

  • The tooling for the new marketing and merchandising products and the new WebSphere Commerce Management Center has had a facelift in an attempt to provide a similar experience across the tools. The tools are still not fully integrated, which would give the business a single tool for completing its tasks, but at least they are attempting to reduce the learning curve.
  • The somewhat recently released WebSphere Commerce v8.
More to come...

Thursday, April 30, 2015

WC Attribute Dictionary Model vs Classic Attribute Model

I was recently asked which attribute model to adopt, so I thought I would put out some thoughts on the subject.
The classic attribute model requires that attributes be defined separately for every catalog entry. A better approach is to define a common set of attributes that can be reused by multiple products: the attribute dictionary. The WC Attribute Dictionary provides:
  • Centralized management of attributes, providing controlled consistency in attribute names and values across catalog entries.
  • Reduced data management by eliminating duplicate attribute data.
  • In later FEPs (5-8), additional attribute functionality such as facetable attributes and merchandising attributes. These allow for enhanced search result management, additional options in marketing activity catalog entry recommendations, and enhanced category browsing through the use of facets.

So my recommendation is to move toward the WC Attribute Dictionary.

Thursday, June 5, 2014

Use Intern for JavaScript unit testing

With so much JavaScript in use on and off the web, and a growing library where I work, there is a need for good unit testing. I have spent time looking at many of the different tools, though they have not been quite adequate for all of our needs. I was recently pointed toward Intern, a new project by SitePen Labs.