The Business Value of Short Branches

Context: I recently wrote this for an internal email. I thought I would share it with the rest of my friends. :)

In the world of software development, we can’t say it’s done until the code is in production, working, and providing the company with value. It doesn’t matter how pretty my code is if it’s sitting in a branch.

Show me the Money

One way to understand why shipping early and often provides more value to the company is to take a look back at Finance 101. If you managed to avoid this class, you may have missed out on a very cool concept, the “Time Value of Money”. This concept states, among other things, that $100 today is worth more than $100 tomorrow.

In the book ‘Software by Numbers’ the authors discuss just this in some detail. If you can get hold of a copy, even reading chapter 2 alone is quite eye-opening.

But let’s go over a simple example to see what we are talking about. One of the equations that came out of the Time Value of Money is “Present Value”. The present value (PV) of a sum of money is equal to the future value (FV) multiplied by one plus the interest rate (i) raised to the power of negative n, where n is the number of periods: PV = FV × (1 + i)^-n

For example, let’s say we have the awesome new feature X. Feature X is going to bring the company a value of $10,000 once it ships, and, if you will humor me, the interest rate of this investment is 10% (in practice this number would be based on another financial term, IRR, but let’s leave that alone for now). The last part is the ‘n’, and this is the one we are going to play with. In terms of shipping software, ‘n’ is the number of release periods that pass before the value arrives.

So suppose shipping feature X would give the company a business value of $10,000, and we shipped it all at the end. The PV of that feature would be PV = $10,000 × (1 + 0.1)^-2, or $8,264.46. The ‘n’ is 2 in this case to represent the cost of waiting two releases (the number the next example is going to use). But what if we broke this up into two releases, each one providing some value but not the whole value? Furthermore, let’s say the first release doesn’t provide a lot of value, so $3,000 and then $7,000. Using those numbers, the first release has a PV of PV = $3,000 × (1 + 0.1)^-1, or $2,727.27, and the second release has a PV of PV = $7,000 × (1 + 0.1)^-2, or $5,785.12. Add that up: $2,727.27 + $5,785.12 = $8,512.39. That’s a gain in business value of $247.93, just because we shipped multiple releases.
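If you want to check the arithmetic yourself, here is a short Python sketch of the two scenarios using the same numbers:

```python
# Present value: PV = FV * (1 + i) ** -n
def present_value(fv, rate, n):
    """Discount a future value (fv) back n periods at the given rate."""
    return fv * (1 + rate) ** -n

# Ship it all after two release periods: the full $10,000 discounted twice.
single = present_value(10_000, 0.10, 2)   # ~ 8264.46

# Split into two releases: $3,000 after one period, $7,000 after two.
split = present_value(3_000, 0.10, 1) + present_value(7_000, 0.10, 2)

print(round(single, 2), round(split, 2), round(split - single, 2))
```

Rounding each release to cents first gives the $2,727.27 + $5,785.12 = $8,512.39 above; the unrounded sum comes out to $8,512.40. Either way, the gain from splitting the work is about $248.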

Not only that, but there is another part of this story that we haven’t discussed. Let’s say we ship the first part and learn that if we make a few tweaks to the project we could net more than $7,000! If so, the return for the company goes up even higher, further increasing the business value of the project over the original $8,264.46. We also get the benefit of the other side of the coin: if after shipping the first iteration we notice that the business has changed and we need to move in another direction, then we have only cost the company the first iteration, shipped it, got some value, and can now redirect onto another, more lucrative, project.

So, enough with the fancy math. How can we actually take what we have learned and apply it to our day to day jobs? One of the simplest checks is to look at how old your branch is relative to master. Have you been working on a branch for a week, two weeks, a month? That might be too long, or you may be trying to push too much into one branch. Can you break the work up so that you are shipping code sooner? Teams that practice Continuous Deployment will often ship the database changes well before the application changes. Then they might deploy the new code behind a feature toggle so that stakeholders can review it, and finally they will make it active to the user base. Each one of those is a push towards a specific feature, and each one provides value to the business in terms of feedback on the progress, on how the systems will react to the changes, and so on.

So think about how you can make smaller branches that will push into master sooner. As a team member, once a PR hits GitHub, take a moment to help your teammate out and give it a code review; the quicker we can give feedback to our colleagues, the faster we can all learn and improve the way we work. In addition, we don’t want WIP to hang around too long either, but that’s a topic for another post, as this one is quite long already.

Posted in Uncategorized

Chocolatey Newsletter

Chocolatey has some big changes coming in the next few months, so we’ve started a newsletter to keep everyone informed of what’s coming. The folks who are signed up for the newsletter will hear about the latest and greatest changes coming for Chocolatey first, plus they will know when the Kickstarter (Yes! Big changes are coming!) kicks off before anyone else. Sign up for the newsletter now to learn about all the exciting things coming down the pipe for Chocolatey!

Posted in chocolatey

Our new book and my personal journey toward REST and Hypermedia

It’s hard to believe, but it has been 6 months since our new book, “Designing Evolvable Web APIs with ASP.NET” shipped! So far the feedback that we’ve received has been really positive and we’re excited to see the momentum.


In the first part of the book we focus on the fundamentals of web architecture, HTTP, and APIs. Next we focus on designing a hypermedia API, and a TDD/BDD-driven implementation of a CollectionJson API using ASP.NET Web API. We also give you several coding patterns that you can use to get started building hypermedia APIs.

In the second part of the book, we focus on ASP.NET Web API itself and the underlying architecture. We do this in the context of the first part, with the explicit goal of helping you further toward building out an evolvable API. We delve deeply into the nuts and bolts of Web API including hosting, routing and model binding. We cover comprehensively how to secure your API, and the security internals. And we cover other topics like TDD and using Inversion of Control.

You can buy it online now here. We also have a Github repo with all the code, which includes a working hypermedia API.

The people that made this book a reality.

This book has been a collaborative effort with four other amazing folks that I’ve had the privilege to work with. It combines the knowledge of the Web API team itself as well as key advisors who were with us every step of the way as we built it. It was a collaborative effort across 4 time zones and countries:

  • Howard Dierking, my co-conspirator on the Web API team. An amazing individual who is helping educate the world on REST and hypermedia through his Pluralsight work.
  • Darrel Miller, a long-time proponent of REST and hypermedia in the .NET world before many of us at Microsoft had a clue what that meant. Darrel has been building real hypermedia systems for a long time, and he was one of our first handful of advisors on Web API.
  • Pedro Felix, an expert on Web API security and the ASP.NET Web API architecture. His comprehensive contribution in the book on the security aspects of Web API is unparalleled in the .NET world.
  • Pablo Cibraro, former CTO of Tellago and a consultant who has implemented many API solutions. Pablo is an expert in Agile and TDD, and he wrote some deep chapters on testability and IoC.

We also had a fantastic set of reviewers and advisors.

Why write this book?

This book is part of a very personal journey, a journey that started several years ago when I joined the WCF team at Microsoft. At that time I had the pop-culture understanding of REST: a lightweight API exposed over HTTP that doesn’t use SOAP. I joined the team to help build a better experience in the platform for these types of APIs. Little did I know the journey that awaited me, as I would delve deeper and deeper into the REST community.

It was a journey of learning. I was fortunate to have wonderful teachers who had already been treading the path, chief of which (ordered last name first) were: Jan Algermissen, Mike Amundsen, Alan Dean, Mike Kelly, Sebastien Lambla, Darrel Miller, Henrik Nielsen, Ian Robinson, Aaron Skonnard, and Jim Webber. I am deeply indebted to them for their help. This book really was built on the shoulders of these giants!

During that period, I learned about HTTP and REST, that REST is much more than building an RPC JSON API, and about how to make APIs more robust. This included learning about fundamental aspects of the web architecture like caching, ETags, and content negotiation. The learning also included the hidden jewel of REST APIs, hypermedia, and new media types like HAL, CollectionJson, and Siren that were designed specifically for hypermedia-based systems.

It was clear looking at our existing Microsoft frameworks that they did not meet the requirements for building APIs that really leveraged HTTP and took advantage of the web architecture. Within the .NET OSS community, however, solutions like OpenRasta were explicitly designed with this in mind. There were also a ton of other options outside the .NET world in Java, Ruby, Python, and PHP, and more recently in Node.js.

After soaking this all in, my team at Microsoft, and a team of fantastic advisors from the community, worked together to create a new framework that was designed from the get-go to fully embrace HTTP, to enable, but not force building a RESTful system.

As part of this we had an explicit goal from day one to ensure this framework would also enable building a fully hypermedia based system. Another explicit goal was to make the framework easier to test, and to allow it to work well with agile development. This framework ultimately became what you now know as ASP.NET Web API.

Although ASP.NET Web API had the foundations in place to enable such systems, you have work to do. I am not going to say that it naturally leads you to build a hypermedia system. This was not an accident: we deliberately did not want to force people down a specific path, but we did want to make sure that if you wanted to build such a system, the framework wouldn’t fight you.

We saw a lot of folks jump on this, even internally; folks like the Office Lync team built probably one of the largest pure hypermedia APIs in existence using Web API. Many of these early adopters had the advantage of working directly with our team. They worked with me and Henrik Nielsen (my Web API architect, and one of the creators of HTTP) to help guide them down the path. We did a lot of educating on HTTP, API fundamentals, and what hypermedia is and how to build systems that support it.

On the outside we also saw a lot of energy. Folks like our advisory board jumped in and started doing the same in the wild. They built real systems, and they guided customers.

All of this work (including shipping Web API) definitely succeeded in helping raise the bar around modern API development and hypermedia in the industry in general. More and more folks, including some of the largest companies, started coming out of the woodwork and saying they want to build these types of systems.

However, many folks, in particular in the .NET community, have not crossed this chasm yet and do not understand how or why to build such systems. Building a hypermedia-based system is a particular sticking point. Many are scared by the term and don’t understand what it is. Others dismiss it as purely academic. Even those that do grasp the concepts often do not know where to start implementing them. This last point makes total sense, considering the point I mentioned earlier: ASP.NET Web API does not lead you toward building such APIs, it enables them.

With this book we wanted to change that. We wanted to bring these API techniques to the masses. We wanted to show you why and how to actually build these APIs with ASP.NET Web API. We wanted to show you how to make the underlying architecture of the framework work with you in achieving that goal.

I believe we’ve done that. We look forward to your experiences with the book as you embark on API development with ASP.NET.

Posted in ASP.NET, ASP.NET Web API, Hypermedia, patterns

Quick easy steps to grab your build artifacts from Visual Studio Online

If you haven’t checked out Visual Studio Online yet, you are missing a lot! I’m in the midst of producing a video series on how to use Visual Studio Online for WintellectNOW. Don’t have a WintellectNOW account? No problem. Enter my promo code PETERSEN-14 and get 2 weeks of unlimited free access.

If you are using Visual Studio Online and have begun to use the hosted build controller, you may be wondering how to grab your build artifacts. Deploying to targets such as an Azure Website is simple. However, there are times when all you want are the raw build artifacts (DLLs, EXEs, HTML, CSS, and JS files, etc.). How do you get those?

As it turns out, it is pretty simple!

One of the nice new features of Visual Studio Online is the new REST API.

I’ll leave it to you to sort through the API reference. The one I want to key in on is the API call to get the list of builds:

GET: https://{account}

The following is a trimmed excerpt of an example response (most of the payload is omitted):

    "sourceGetVersion": "LG:(no branch):b792b3c303982b6be3e6105c3e587307fd35381a",
    "displayName": "Elastic Build ({account})",
    "uniqueName": "LOCAL AUTHORITY\\Elastic Build ({account})",
    "name": "Hosted Build Controller",
    "displayName": "John V. Petersen",
    ...

The item you want to pay attention to is the drop node’s downloadUrl property:


If you are already authenticated, entering the downloadUrl into a browser will result in a downloaded zip file that contains your deployment artifacts.
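If you would rather script this than paste the URL into a browser, the response is just JSON, so pulling out each drop’s downloadUrl is a small parsing job. Here is a minimal Python sketch; the response body below is a trimmed, hypothetical stand-in (the real payload has many more fields, and the name of the list that holds the builds is an assumption here):

```python
import json

# Hypothetical, trimmed stand-in for the build-list response body;
# the real API returns far more fields per build.
body = """
{
  "builds": [
    {
      "buildNumber": "20140901.1",
      "drop": {
        "downloadUrl": "https://example.visualstudio.com/_apis/resources/drop.zip"
      }
    }
  ]
}
"""

response = json.loads(body)

# Collect the drop downloadUrl for every build that has a drop node.
urls = [b["drop"]["downloadUrl"] for b in response["builds"] if "drop" in b]
print(urls)
```

From there you can fetch each URL with whatever authenticated HTTP client you are already using.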

There are alternative ways to authorize – which you can find documented here.

That’s it – super easy to grab your build artifacts!

Posted in Uncategorized, Visual Studio Online

Puppet: Getting Started On Windows

Now that we’ve talked a little about Puppet, let’s see how easy it is to get started.

Install Puppet

Let’s get Puppet installed. There are two ways to do that:

  1. With Chocolatey: Open an administrative/elevated command shell and type:
    choco install puppet
  2. Download and install Puppet manually –

Run Puppet

  • Let’s make pasting into a console window work with Control + V (like it should):
    choco install wincommandpaste
  • If you have a cmd.exe command shell open (and Chocolatey installed), type:
    RefreshEnv
  • The previous command will refresh your environment variables, a la Chocolatey v0.9.8.24+. If you were running PowerShell, there isn’t yet a refreshenv for you (one is coming though!).
  • If you are not able to use RefreshEnv (or ‘where puppet’ evaluates to not found), you need to restart your CLI (command line interface) session or open an administrative/elevated command prompt (because you installed manually).
  • Now let’s find out about the users on the system. Type:
    puppet resource user
  • Output should look similar to a few of these:
    user { 'Administrator':
      ensure  => 'present',
      comment => 'Built-in account for administering the computer/domain',
      groups  => ['Administrators'],
      uid     => 'S-1-5-21-some-numbers-yo-500',
    }
  • Let’s create a user:
    puppet apply -e "user {'bobbytables_123': ensure => present, groups => ['Users'], }"
  • Relevant output should look like:
    Notice: /Stage[main]/Main/User[bobbytables_123]/ensure: created
  • Run the ‘puppet resource user’ command again. Note the user we created is there!
  • Let’s clean up after ourselves and remove that user we just created:
    puppet apply -e "user {'bobbytables_123': ensure => absent, }"
  • Relevant output should look like:
    Notice: /Stage[main]/Main/User[bobbytables_123]/ensure: removed
  • Run the ‘puppet resource user’ command one last time. Note we just removed a user!
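As a tiny peek at the manifests we’ll cover later, the same resource we applied inline with -e could live in a file (the name user.pp here is just a placeholder) and be applied from disk:

```puppet
# user.pp: the same resource as the inline -e example above
user { 'bobbytables_123':
  ensure => present,
  groups => ['Users'],
}
```

Running ‘puppet apply user.pp’ should produce the same Notice output as the -e form.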


You just did some configuration management / system administration. Welcome to the new world of awesome! Puppet is super easy to get started with. This is just a taste so you can start seeing the power of automation and where you can go with it. We haven’t talked about resources, manifests (scripts), best practices and all of that yet.

Next we are going to get into more extensive things with Puppet. We’ll walk through getting a Vagrant environment up and running, so that we can do some crazier stuff and, when we are done, just clean it up quickly.

Posted in chocolatey, howto, puppet