Stelligent

Value Stream Mapping with Mock Pipeline

Value stream mapping (VSM) is a technique for modeling process workflows. In software development, one of the key reasons for creating a VSM is finding the bottlenecks that slow down the delivery of value to end users. While VSM is used in many different industries (mostly related to physical goods), the topic of this post is how to create a VSM, assuming familiarity with software delivery but not with value stream mapping.

Some organizations skip the step of mapping their current state. The most common reason is that they believe they clearly understand where their bottlenecks exist (e.g. “I already know it’s bad so why do we need to go through an exercise telling us what we already know”). Another reason is that – while often not admitted – they feel they don’t fully grasp how to create a value stream map or feel like it’s considerable effort without the commensurate return on investment. Another common complaint is that while the problems exist in other systems or organizations, their team might be working on a greenfield system so it’s unnecessary – in their opinion – to know the current state of the processes for other teams.

The common thread in this reluctance usually comes down to a cognitive bias: the belief that one’s own view accurately depicts the entire value stream. What’s more, when going through a transformative effort that requires some initial investment, you’ll need to provide a consistent, validated depiction of the value stream before and after the improvements are applied in order to demonstrate the impact of the transformation.

By using VSM across your teams, you can reduce the time spent arguing over “facts” (i.e. others’ perspectives on the value stream). You don’t need to be an expert in value stream mapping to be effective; following Pareto’s 80/20 principle is an effective guide for focusing on the 20% that matters. Moreover, a mock pipeline models software delivery value streams better than a generic VSM does.

In this post, you’ll learn the steps in creating a mock deployment pipeline using AWS CodePipeline and inline AWS CloudFormation. This mock deployment pipeline will represent a VSM using an open source tool we have (creatively!) called mock-pipeline. By utilizing CloudFormation, your VSM is defined as versioned code making it easy to iterate rapidly on changes to your VSM based upon feedback from other team members.

DevOps

Broadly speaking, the idea of DevOps is about getting different functions (often, these are different teams at first) to work together toward common goals of accelerating speed while increasing stability (i.e. faster with higher quality). These accelerants typically get implemented through organizational, process, culture, and tooling improvements. In order to improve, you must know where you currently are. Otherwise, you might be improving the wrong things. It’s like trying to improve your health without basic metrics like your blood pressure, blood cholesterol, fasting blood glucose, or body mass index. The purpose of value stream mapping is to get basic knowledge of the current state so you know what and how to fix it. Moreover, if you can get a real-time view into your value stream and its key metrics (deployment frequency, lead time for changes, MTTR, and change failure rate), you’re in a much better position to effect change.

There are two primary approaches for measuring the lead time – either from origination of an idea until it gets delivered to users or from the time an engineer commits code to version control until it’s delivered to end users. Since it’s more consistent to measure from code commit to production, we’re choosing this approach.
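Measured this way, lead time reduces to simple arithmetic over two timestamps. Below is a minimal sketch using hypothetical epoch-second values; in practice, the commit time would come from `git log -1 --format=%ct` and the deploy time from your delivery tooling:

```shell
# Hypothetical timestamps (epoch seconds); real values would come from
# `git log -1 --format=%ct <sha>` and your deployment tool's history.
COMMIT_TIME=1700000000
DEPLOY_TIME=1700172800    # exactly 48 hours after the commit

# Lead time for this change, commit to production, in hours.
LEAD_TIME_HOURS=$(( (DEPLOY_TIME - COMMIT_TIME) / 3600 ))
echo "lead time: ${LEAD_TIME_HOURS} hours"   # prints "lead time: 48 hours"
```

Tracking this number per change, rather than per release, keeps the measurement consistent across teams.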

Value Stream Mapping Terms

There’s some conflict among industry experts on the definitions of basic Lean terms so, unless otherwise noted, I’m using the definitions from the excellent book: Value Stream Mapping: How to Visualize Work and Align Leadership for Organizational Transformation. The most important thing is to use consistent terminology among team members.

In my post, Measuring DevOps Success with Four Key Metrics, I summarized the four software delivery metrics as described in the book, Accelerate:

Deployment frequency

Lead time for changes

Mean time to restore (MTTR)

Change failure rate

The act of value stream mapping while considering the four key DevOps metrics will help focus the effort on measuring and then improving speed and stability. You can think of value stream mapping as the technique used to determine the four DevOps metrics.
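As a small sketch of how a value stream’s raw history yields these metrics, the fragment below derives a deployment count and change failure rate from a made-up deployment log; the file name and data are assumptions for illustration, and real data would come from your pipeline’s execution history:

```shell
# Made-up deployment history: one deployment per line with its outcome.
cat > /tmp/deploys.log <<'EOF'
2023-01-02 success
2023-01-04 failure
2023-01-05 success
2023-01-09 success
EOF

TOTAL=$(grep -c '' /tmp/deploys.log)          # number of deployments (lines)
FAILURES=$(grep -c failure /tmp/deploys.log)  # deployments needing remediation

echo "deployments: ${TOTAL}"                               # prints "deployments: 4"
echo "change failure rate: $(( 100 * FAILURES / TOTAL ))%" # prints "change failure rate: 25%"
```

Deployment frequency follows from dividing the deployment count by the time window the log covers.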

Mock Pipeline

Mock Pipeline is an open source tool for modeling value stream maps regardless of your tech stack, cloud provider, or data center. With Mock Pipeline, you can define your value stream map as code in order to visualize all the steps in your commit to production lifecycle. While it uses AWS services/tools such as AWS CloudFormation and AWS CodePipeline, it can model any technology platform.

Fork the Mock Pipeline Repo

These instructions assume you’re using AWS Cloud9. Adapt the instructions if you’re using a different IDE.

If you don’t have a GitHub account, create a free one by going to GitHub Signup. Make a note of the user ID you created (referred to below as YOURGITHUBUSERID).

Log in to your GitHub account.

Go to the mock-pipeline GitHub repository.

Click the Fork button. A message will display: “Where do you want to fork this to?”

Click on the button that displays Fork to YOURGITHUBUSERID.

From your Cloud9 terminal, clone the newly forked repo (replacing YOURGITHUBUSERID in the example):

git clone https://github.com/YOURGITHUBUSERID/mock-pipeline.git
cd mock-pipeline
curl -s https://getmu.io/install.sh | sudo sh

Note: The mock-pipeline tool uses an open source framework called mu, which generates CloudFormation templates that provision AWS resources.
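To make the note above concrete, here is an abbreviated, hypothetical sketch of the kind of CloudFormation resource mu generates; the resource names, role, and artifact bucket are assumptions, and the real generated template contains roles, artifact stores, and many more properties:

```yaml
# Hypothetical, abbreviated sketch -- not mu's actual output.
Resources:
  MockPipeline:
    Type: AWS::CodePipeline::Pipeline
    Properties:
      RoleArn: !GetAtt PipelineRole.Arn   # role assumed to be defined elsewhere
      ArtifactStore:
        Type: S3
        Location: !Ref ArtifactBucket     # bucket assumed to be defined elsewhere
      Stages:
        - Name: Source
          Actions:
            - Name: GitHubSource
              ActionTypeId:
                Category: Source
                Owner: ThirdParty
                Provider: GitHub
                Version: "1"
              OutputArtifacts:
                - Name: SourceOutput
```

Because the pipeline definition is generated from versioned code, every change to the value stream model is reviewable and repeatable.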

Deploy Value Stream as a Pipeline

Make modifications to your local mu.yml to change the CodePipeline action names. For example, precede several of the action names with your initials or first name. You’re doing this to ensure the changes get deployed.

Save the changes locally, then commit and push them to your remote repository:

git commit -am "initial value stream" && git push

Run the mu pipeline upsert:

mu pipeline up -t GITHUBTOKEN

Your GITHUBTOKEN will look something like this: 2bdg4jdreaacc7gh7809543d4hg90EXAMPLE. To get or generate a token go to GitHub’s Token Settings.

After a few of the CloudFormation stacks have launched, go to the CodePipeline console and look for a pipeline with something like mock-pipeline in its name. Select this pipeline and ensure the local changes you made are visible in the pipeline.

Redeploy Changes

In this section, you will modify the action names and their order. In particular, you will alter the model to change the order and names of the InfrastructureAnalysis and ProvisionEnvironment actions so that the static analysis runs prior to provisioning the environments. When two actions are shown running side by side, they are running in parallel. To do this, you first need to terminate the current pipeline. Get a list of the service pipelines managed by mu by running this command:

mu pipeline list

Then, use the service_name obtained from the list command in the following command to terminate the pipeline:

mu pipeline terminate [<service_name>]

Wait several minutes for the CloudFormation stacks to be deleted.

Now, you can make modifications to your local mu.yml to change the CodePipeline action order and names. An example is shown in the image below.
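As a hedged illustration of the parallel layout: CodePipeline expresses ordering within a stage through each action’s RunOrder property, and actions that share a RunOrder value run in parallel. The stage and action names below mirror this post’s example, while DeployAcceptance is a hypothetical follow-on action:

```yaml
# Hypothetical stage fragment: same RunOrder => parallel execution.
- Name: Acceptance
  Actions:
    - Name: InfrastructureAnalysis
      RunOrder: 1
    - Name: ProvisionEnvironment
      RunOrder: 1   # same RunOrder: runs alongside InfrastructureAnalysis
    - Name: DeployAcceptance
      RunOrder: 2   # runs only after both parallel actions complete
```

In the VSM rendering, this is how you model steps of the value stream that happen concurrently rather than sequentially.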

Once you’ve made the changes, commit them to your remote repository:

git commit -am "modify action order in acceptance stage" && git push

Run the mu pipeline upsert again:

mu pipeline up -t GITHUBTOKEN

After a few of the CloudFormation stacks have launched, once again, go to the CodePipeline console and look for a pipeline with something like mock-pipeline in its name. Select this pipeline and ensure the local changes you made are visible in the pipeline.

Capabilities

You can use value stream mapping to obtain the four key software delivery metrics, but just as with your health, knowing these metrics is only part of the battle. The other, crucial part is improving them by incorporating capabilities into daily practices. In the Accelerate book, the authors describe 24 capabilities, grouped into the categories below, on which to focus improvements based on the metrics.

Continuous Delivery Capabilities

Architecture Capabilities

Product and Process Capabilities

Lean Management and Monitoring Capabilities

Cultural Capabilities

For example, continuous delivery predicts lower change failure rates and less time spent on rework or unplanned work (including break/fix work, emergency software deployments, and patches). Moreover, keeping system and application configuration in version control was more highly correlated with software delivery performance than keeping application code in version control. Teams using short-lived branches (integrated in less than a day), combined with short merging and integration periods (also less than a day), perform better in terms of software delivery than teams using longer-lived branches. [Source] In other words, by incorporating or improving one or more of these capabilities, you will likely improve one or more of the four metrics, which the data analysis correlates with better outcomes.

Summary

In this post, we covered how to use a managed deployment pipeline workflow service (i.e. CodePipeline) to efficiently model a value stream map in order to assess the current state and accelerate speed and confidence in delivering software to end users in production.
