
Value stream mapping (VSM) is a technique for modeling process workflows. In software development, one of the key reasons for creating a VSM is determining the bottlenecks slowing down the delivery of value to end users. While VSM is used in many different industries (mostly related to physical goods), the topic of this post is how to create a VSM for readers who are familiar with software delivery but not with value stream mapping.

Some organizations skip the step of mapping their current state. The most common reason is that they believe they already clearly understand where their bottlenecks exist (e.g. “I already know it’s bad, so why do we need to go through an exercise telling us what we already know?”). Another reason – while often not admitted – is that they don’t fully grasp how to create a value stream map, or they feel it requires considerable effort without a commensurate return on investment. Another common complaint is that while the problems exist in other systems or organizations, their team might be working on a greenfield system, so it’s unnecessary – in their opinion – to know the current state of other teams’ processes.

The common thread in this reluctance usually comes down to a cognitive bias: the belief that one’s own view accurately depicts the entire value stream. What’s more, when going through a transformative effort that requires some initial investment, you’ll need to provide a consistent, validated depiction of the process before and after the improvements are applied in order to demonstrate the impact of the transformation.

By using VSM across your teams, you can reduce the time spent arguing over “facts” (i.e. others’ perspectives on the value stream). You don’t need to be an expert in value stream mapping to be effective. Pareto’s 80/20 principle is an effective guide for focusing on the 20% that matters. Moreover, creating a mock pipeline better models software delivery value streams than a generic VSM does.

In this post, you’ll learn the steps in creating a mock deployment pipeline using AWS CodePipeline and inline AWS CloudFormation. This mock deployment pipeline will represent a VSM using an open source tool we have (creatively!) called mock-pipeline. By utilizing CloudFormation, your VSM is defined as versioned code making it easy to iterate rapidly on changes to your VSM based upon feedback from other team members.


Broadly speaking, the idea of DevOps is about getting different functions (often, these are different teams at first) to work together toward common goals of accelerating speed while increasing stability (i.e. faster with higher quality). These accelerants typically get implemented through organizational, process, culture, and tooling improvements. In order to improve, you must know where you currently are. Otherwise, you might be improving the wrong things. It’s like trying to improve your health without basic metrics like your blood pressure, blood cholesterol, fasting blood glucose, or body mass index. The purpose of value stream mapping is to get basic knowledge of the current state so you know what and how to fix it. Moreover, if you can get a real-time view into your value stream and its key metrics (deployment frequency, lead time for changes, MTTR, and change failure rate), you’re in a much better position to effect change.

There are two primary approaches for measuring the lead time – either from origination of an idea until it gets delivered to users or from the time an engineer commits code to version control until it’s delivered to end users. Since it’s more consistent to measure from code commit to production, we’re choosing this approach.
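As a minimal sketch of the commit-to-production measurement, consider a single change (the timestamps here are hypothetical, for illustration only):

```python
from datetime import datetime

# Hypothetical timestamps for a single change (illustration only)
commit_time = datetime(2019, 3, 4, 9, 15)   # engineer commits to version control
deploy_time = datetime(2019, 3, 6, 14, 45)  # change is running in production

# Lead time for changes, measured from code commit to production
lead_time = deploy_time - commit_time
print(lead_time)  # 2 days, 5:30:00
```

Measuring from code commit avoids the ambiguity of pinpointing when an idea “originated,” which is why it is the more consistent of the two approaches.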

Value Stream Mapping Terms

There’s some conflict among industry experts on the definitions of basic Lean terms so, unless otherwise noted, I’m using the definitions from the excellent book:Value Stream Mapping: How to Visualize Work and Align Leadership for Organizational Transformation. The most important thing is to use consistent terminology among team members.

  • Process Time – “Typically expressed in minutes or hours, process time represents the hands-on “touch time” to do the work. It also includes “talk time” that may be regularly required to clarify or obtain additional information related to a task (including meetings), as well as “read and think time” if the process involves review or analysis [Source].”
  • Lead time (LT) – “also referred to as throughput time, response time, and turnaround time—is the elapsed time from the moment work is made available to an individual, work team, or department until it has been completed and made available to the next person or team in the value stream. Lead time is often expressed in hours, days, or even weeks or months [Source].” There are metrics within lead time (such as: work in process (WIP), batch size, queue time, and wait time) that help diagnose the source of bottlenecks in the process. Note that queue time (the time it takes for a person, signal, or thing to be attended to – which includes the time before work that adds value to a product is performed) takes about 90 percent of total lead time in most production organizations [1]
  • Percent Complete and Accurate (%C&A) – “obtained by asking downstream customers what percentage of the time they receive work that’s “usable as is,” meaning that they can do their work without having to correct the information that was provided, add missing information that should have been supplied, or clarify information that should have and could have been clearer” [Source].
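To make the relationship between these terms concrete, here is a small sketch using hypothetical numbers (chosen so that queue time is roughly 90 percent of lead time, in line with the figure cited above):

```python
# Hypothetical measurements for a single process block in a value stream.
# The numbers are invented for illustration.
process_time_hours = 4    # hands-on "touch time" doing the work
lead_time_hours = 40      # elapsed time from "work available" to "work completed"

# Queue/wait time: the portion of lead time spent waiting, not working
queue_time_hours = lead_time_hours - process_time_hours

# Activity ratio: fraction of lead time spent on value-adding work
activity_ratio = process_time_hours / lead_time_hours

# %C&A: share of received work the downstream step can use "as is"
usable_as_is = 8
total_received = 10
percent_complete_accurate = 100 * usable_as_is / total_received

print(queue_time_hours)           # 36 (90% of lead time)
print(activity_ratio)             # 0.1
print(percent_complete_accurate)  # 80.0
```

A low activity ratio like the 10% above is typical of the bottleneck-heavy current states that value stream mapping is meant to expose.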

In my post, Measuring DevOps Success with Four Key Metrics, I summarized the four software delivery metrics as described in the book, Accelerate:

  • Deployment frequency – the number of times software is deployed to production or to an app store. This also provides a proxy for batch size.
  • Lead time for changes – “the time it takes to go from code committed to code successfully running in production”. This is a key number you can obtain by VSM.
  • Time to restore service – the average time it takes to restore service.
  • Change failure rate – how often deployment failures occur in production that require immediate remedy (particularly, rollbacks). This measure has a strong correlation to the percentage complete and accurate (i.e. “rework”).
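As a rough illustration of how these four metrics could be derived from deployment records, here is a sketch over a handful of hypothetical deployments (the records and their field layout are invented for illustration):

```python
from datetime import datetime, timedelta

# Hypothetical deployment records for one observed week:
# (commit time, production deploy time, deploy failed?, time to restore)
deployments = [
    (datetime(2019, 3, 1, 9, 0),  datetime(2019, 3, 1, 17, 0), False, None),
    (datetime(2019, 3, 2, 10, 0), datetime(2019, 3, 3, 10, 0), True,  timedelta(hours=2)),
    (datetime(2019, 3, 4, 8, 0),  datetime(2019, 3, 4, 20, 0), False, None),
    (datetime(2019, 3, 5, 9, 0),  datetime(2019, 3, 6, 9, 0),  True,  timedelta(hours=4)),
]

# Deployment frequency: deploys within the observed window
deployment_frequency = len(deployments)

# Lead time for changes: commit-to-production elapsed time
lead_times = [deploy - commit for commit, deploy, _, _ in deployments]
mean_lead_time = sum(lead_times, timedelta()) / len(lead_times)

# Change failure rate: fraction of deployments requiring immediate remedy
failures = [d for d in deployments if d[2]]
change_failure_rate = len(failures) / len(deployments)

# Time to restore service: average restore time across failed deployments
mean_time_to_restore = sum((d[3] for d in failures), timedelta()) / len(failures)

print(deployment_frequency)   # 4
print(mean_lead_time)         # 17:00:00
print(change_failure_rate)    # 0.5
print(mean_time_to_restore)   # 3:00:00
```

In practice these records would come from your pipeline’s event history rather than a hand-written list, but the arithmetic is the same.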

The act of value stream mapping while considering the four key DevOps metrics will help focus the effort on measuring and then improving speed and stability. You can think of value stream mapping as the technique used to determine the four DevOps metrics.

Mock Pipeline

Mock Pipeline is an open source tool for modeling value stream maps regardless of your tech stack, cloud provider, or data center. With Mock Pipeline, you can define your value stream map as code in order to visualize all the steps in your commit to production lifecycle. While it uses AWS services/tools such as AWS CloudFormation and AWS CodePipeline, it can model any technology platform.

Fork the Mock Pipeline Repo

These instructions assume you’re using AWS Cloud9. Adapt the instructions if you’re using a different IDE.

If you don’t have a GitHub account, create a free one by going to GitHub Signup. Make a note of the userid you created (will refer to as YOURGITHUBUSERID)

Log in to your GitHub account.

Go to the mock-pipeline GitHub repository.

Click the Fork button. A message will display “Where do you want to fork this to?”.

Click on the button that displays Fork to YOURGITHUBUSERID.

From your Cloud 9 terminal, clone the newly forked repo (replacing YOURGITHUBUSERID in the example):

git clone https://github.com/YOURGITHUBUSERID/mock-pipeline.git
cd mock-pipeline
sudo su
curl -s https://getmu.io/install.sh | sh

Note: The mock pipeline tool uses an open source framework called mu which generates CloudFormation templates that provision AWS resources.

Deploy Value Stream as a Pipeline

Make modifications to your local mu.yml to change the CodePipeline action names. For example, precede several of the action names with your initials or first name. You’re doing this to ensure the changes get deployed.

Save the changes locally and commit them to your remote repository.

git commit -am "initial value stream" && git push

Run the mu pipeline upsert:

mu pipeline up -t GITHUBTOKEN

Your GITHUBTOKEN will look something like this: 2bdg4jdreaacc7gh7809543d4hg90EXAMPLE. To get or generate a token go to GitHub’s Token Settings.

After a few of the CloudFormation stacks have launched, go to the CodePipeline console and look for a pipeline with something like mock-pipeline in its name. Select this pipeline and ensure the local changes you made are visible in the pipeline.

Redeploy Changes

In this section, you will modify the action names and their order. In particular, you want to alter the model to change the order and names of the InfrastructureAnalysis and ProvisionEnvironment actions so that the static analysis runs prior to provisioning the environments. When the two are shown running side by side, it represents actions running in parallel. To do this, you need to terminate the current pipeline. First, get a list of service pipelines managed by mu by running this command:

mu pipeline list

Then, include the proper service_name obtained from the list command in the following command to terminate the pipeline.

mu pipeline terminate [<service_name>]

Wait several minutes until the CloudFormation stacks have terminated.

Now, you can make modifications to your local mu.yml to change the CodePipeline action order and names. An example is shown in the image below.

Once you’ve made changes, commit then to your remote repository.

git commit -am "modify action order in acceptance stage" && git push

Run the mu pipeline upsert again.

mu pipeline up -t GITHUBTOKEN

After a few of the CloudFormation stacks have launched, once again, go to the CodePipeline console and look for a pipeline with something like mock-pipeline in its name. Select this pipeline and ensure the local changes you made are visible in the pipeline.


You can use value stream mapping to obtain the four key software delivery metrics, but just like with your health, knowing these metrics is only part of the battle. The other, crucial part is improving them by incorporating capabilities into daily practices. In the Accelerate book, the authors describe the 24 capabilities, listed below, on which to focus improvements based on the metrics.

Continuous Delivery Capabilities

  • Version control
  • Deployment automation
  • Continuous integration
  • Trunk-based development
  • Test automation
  • Test data management
  • Shift left on security (DevSecOps)
  • Continuous delivery (CD)

Architecture Capabilities

  • Loosely coupled architecture
  • Empowered teams

Product and Process Capabilities

  • Customer feedback
  • Value stream
  • Working in small batches
  • Team experimentation

Lean Management and Monitoring Capabilities

  • Change approval processes
  • Monitoring
  • Proactive notification
  • WIP limits
  • Visualizing work

Cultural Capabilities

  • Westrum organizational culture
  • Supporting learning
  • Collaboration among teams
  • Job satisfaction
  • Transformational leadership

For example, continuous delivery predicts lower change failure rates and less time spent on rework or unplanned work, including break/fix work, emergency software deployments, patches, and the like. Moreover, keeping system and application configuration in version control is more highly correlated with software delivery performance than keeping application code in version control, and teams using short-lived branches (integration times of less than a day) combined with short merging and integration periods (less than a day) do better in terms of software delivery performance than teams using longer-lived branches. [Source] In other words, by incorporating or improving one or more of these capabilities, you will likely improve one or more of the four metrics, which is correlated with better outcomes based on the data analysis.



In this post, we covered how to use a managed deployment pipeline workflow service (i.e. CodePipeline) to efficiently model a value stream map in order to assess the current state and accelerate speed and confidence in delivering software to end users in production.