A short note on build tasks

Surely, I’m behind the curve here but I’ve been thinking about the typical build process here at work. For a long time I’ve been operating off of the classic model from back in my NAnt days where it was all about Build -> Unit Test -> Integration Test. Maybe if you are feeling fancy you have some different steps for running database migrations or running other code analysis tools.

However, I’ve been working a lot with the concept of microservices in my side project. Trying to figure out where they break down and where they shine. Also, just trying to figure out the practical issues with running them. There is so much going on in our domain right now that trying to keep up can seem like a tidal wave. So if you are new to some of these concepts, well – so am I. :) The following is a brain dump of ideas for managing all of this crazy.


First, I’ve started adding a few new steps to my build process. The first is a reminder of how important a ‘packaging’ step is. Rob and I built this into UppercuT, but it’s really coming back to me just how important it is. I think it’s critical to realize that your default build output may not be packaged well enough. In Visual Studio (well, for me anyways) there is a very common behavior to just grab the ‘bin’ output and run. Because I need to package up my output better, I am now running my build step, then moving/copying all of the deployable content to something like a ‘build_output’ folder.
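For concreteness, here is a minimal bash sketch of that copy-to-build_output step. The folder names are my own placeholders, not a convention from any tool:

```shell
#!/usr/bin/env bash
set -euo pipefail

# package_output: copy a bin folder into a clean build_output folder.
# Hypothetical layout; adjust paths to your own project.
package_output() {
  local bin_dir="$1" out_dir="$2"
  rm -rf "$out_dir"            # start clean so stale files never ship
  mkdir -p "$out_dir"
  cp -R "$bin_dir"/. "$out_dir"/
}

# Demo with a throwaway folder standing in for Visual Studio's 'bin' output.
mkdir -p demo_bin
echo "binary" > demo_bin/app.dll
package_output demo_bin build_output
ls build_output              # prints: app.dll
```

The point of the wipe-then-copy is that ‘build_output’ always reflects exactly one build, never leftovers from a previous one.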

For those of you doing classic Visual Studio / C# / .Net development, I strongly recommend that you break out of your IDE for this. I would invite you to look at the power contained in your command line tools; even CMD can be put to good use. From there, look at PowerShell. For me it’s bash, but I really need to look at ZSH.

Now that everything is in the ‘build_output’ folder, my next step has been to run various HTML/ASPX/CSS/JS minification programs (and the litany of lilliputian tools that entails) to compress and optimize my application. The step after that has been to package all of this build_output content into a deployable unit. For my .Net apps this is a NuGet package via OctoPack, and for my side project it has been Debian packages.


This leads me to my newest build task: push. Push, for me, takes all of my nice new build output and makes it available to a larger audience. Note that this step could be run by me or by my automated build tooling. What it does is simply take the assets and, for my .Net projects, throw them into Octo’s NuGet repo, or, for my side project, use fpm/deb-s3 to generate my own Debian repository. I can then pull these assets down and deploy them in testing / staging / production and have a consistent experience.
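As a sketch of the Debian flavor of that push step: fpm and deb-s3 are real tools (installable via `gem install fpm deb-s3`), but every name below (app name, version, bucket) is a made-up placeholder, so this script only prints the commands it would run rather than executing them:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Hypothetical package metadata; replace with your own.
APP=myapp
VERSION=1.0.0

# Dry-run helper: prints each command instead of executing it.
# Swap the body for: "$@"  to actually run the tools.
run() { echo "$*"; }

# Package the build_output folder as a .deb installing under /opt/$APP.
run fpm -s dir -t deb -n "$APP" -v "$VERSION" build_output/=/opt/"$APP"

# Publish the resulting package to an S3-backed apt repository.
run deb-s3 upload --bucket my-apt-repo "${APP}_${VERSION}_amd64.deb"
```

Once the package lands in the repo, every environment installs the exact same artifact, which is what gives you that consistent testing / staging / production experience.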

So, nothing really earth-shattering here, but I wanted to share my thoughts. Also, I find these kinds of topics hard to find out there on the interwebs. If you have any great articles on the topic, please do share in the comments.

Posted in Uncategorized | 1 Comment

Changing roles and focus – but not good bye

I’ve been presented with a great opportunity to work on the NeuronESB Team (www.neuronesb.com). I’ve always wanted to work on a product team – which is a big change from my work over the past 23+ years where my focus was on custom application development and related consulting. I’ve been on the Neuron team now since 1/5 and it has been a great experience.

So…what does this mean as far as my community activities? Candidly, I’ll be cutting back quite a bit. I will try to get to 1 or 2 shows this year. If you follow me on Twitter, you’ve begun to see my travel schedule pick up. We’re not through February yet and I’ve already racked up 26K miles. Given that trend, it doesn’t leave time for conferences – which is difficult because it means I won’t get to see good friends throughout the year. This is also a good thing, as I’ve been speaking at shows non-stop for 6 years. Even without this job change, I was likely to cut back on the shows. It’s time for others to step up and share their knowledge with the community.

I’ll still continue to write for CODE Magazine and to produce videos for WintellectNOW. I’ll do my best to support the Code Camps near me (Philly, NYC, Central Penn). With all of this travel, I may still be in town for a user group; if possible, and if you would like to have me, I’d love to speak at your group. I’ll post my travel plans on Twitter as they become known. Most of my contributions are going to be in the legal arena – specifically on open source and intellectual property as they relate to technologists, contracts, etc.

Posted in Uncategorized | 2 Comments

Typescript Support in Atom Editor for Windows

Recently I was trying to get TypeScript support working inside the Atom editor on Windows.

In my attempt to get things working I went to the Atom site and found the TypeScript package. Per the documentation, I ran ‘apm install typescript’. After about 15 seconds it appeared that I was good to go. Sadly, this was not the case. When I opened Atom (by typing atom at the cmd prompt) I would receive this error.

Because I like to follow directions I restarted Atom (again via the CMD prompt). Sadly I received the same error again… WTF.

Well, a quick Google search for ‘These are now installed. Best you restart atom just this once.’ yielded one result. However, when I clicked on the link I was taken to the GitHub 404 page; it seems that link is dead. What to do now? Lucky for me, there was a cached version of the page I could look at (thank you, Google).

Looking through the source file I was able to find the block of code which was throwing this message (seen below).

It appears that both linter and autocomplete-plus are required in order for TypeScript support to work. I assumed these would have been installed by default, but I guess not.

I thought I would simply try to install these Atom packages in hopes the error would go away. To accomplish this I ran the following two commands:

  • apm install linter
  • apm install autocomplete-plus

Once I had both of these packages installed I tried to reopen Atom. To my excitement, the TypeScript message was no longer present. To ensure my fix worked I decided to edit a .ts file and, yup, my stuff recompiled down to JS…

Hope this helps,

Till next time,

Posted in Uncategorized | Leave a comment

JavaScript Code Coverage using Karma-Coverage w/ Grunt

As part of our ongoing effort at my client to setup a testing environment for our JavaScript code I wanted to also setup the ability to do code coverage on our files.

To accomplish this I am going to integrate Istanbul coverage reporting w/ our Karma test runner via the karma-coverage plugin.

** I am going to assume you already have JS tests running w/ Karma and Grunt **

To accomplish this we first need to install the following NPM packages:

  • npm install istanbul --save-dev
  • npm install karma-coverage --save-dev

The next thing we need to do is open our karma.conf.js file and make some changes to it.
1) Update the reporters configuration:

reporters: ['progress', 'coverage'],

2) Add a preprocessors section to the configuration.

    preprocessors: {
        // source files that you want to generate coverage for;
        // do not include tests or libraries
        // (these files will be instrumented by Istanbul)
        '**/js/page/**/*.js': ['coverage']
    },

3) Setup the coverage reporter. This is the output format of the results.

    coverageReporter: {
        dir: '../../../grunt/js.coverage/',
        reporters: [
            { type: 'html', subdir: 'report-html' },
            { type: 'teamcity', subdir: '.', file: 'teamcity.txt' }
        ]
    },

In my setup I am doing 2 things.

  • I am placing my coverage files inside my grunt working directory. This is why the dir path navigates back up the folder tree.
  • I am outputting to both HTML and TeamCity formats. You do not need to specify more than one format if you do not want to.

4) I added the karma-coverage plugin to the plugins section. When I left this out I would get a missing-plugin error; adding it resolved that.

    plugins: [
        // ...your existing plugins, plus:
        'karma-coverage'
    ],

After you have made these changes you should be able to run Karma via Grunt as you normally would and, boom, you have code coverage for your Jasmine JavaScript files.

Till next time,

Posted in Grunt, Jasmine, Testing | Leave a comment

Forcing MVC Model State to invalid for Unit Tests

Unit testing ASP.Net MVC applications is easier than ever today. But how do you force ModelState.IsValid to be false in a unit test?

The simple thought would be to create an invalid object and pass it into your action method, but this will not work. Why? Because the validation is done by the MVC pipeline prior to reaching your actual method, and you do not have direct access to that pipeline in a unit test.

However, we can fake it by manually adding model errors, thus getting IsValid to return false.

Imagine you have a controller action which does something like the below:

public ActionResult CustomerFeedback(CustomerFeedbackData model)
{
    if (!ModelState.IsValid)
    {
        return Json(new ResponseModel<CustomerFeedbackData>());
    }

    return Json(new ResponseModel<CustomerFeedbackData>(model));
}

And you wanted to create a unit test that would exercise the failure path of the .IsValid check.

To accomplish this you could manually force a model error like below.

public void CustomerFeedback_When_Model_Is_Not_Valid_Will_Return_Error_State()
{
    var controller = GetController();  // helper method to construct an instance of the controller
    var customerFeedbackData = new CustomerFeedbackData();

    // force validation error --> this is the magic sauce
    controller.ModelState.AddModelError("FirstName", "First Name is Required");

    var result = (JsonResult) controller.CustomerFeedback(customerFeedbackData);

    var asModel = (ResponseModel<CustomerFeedbackData>) result.Data;

    // assert on asModel here (e.g. that it represents the error state)
}


Long story short, you can force errors by adding them to the ModelState instance on the controller.

Till next time,

Posted in MVC | 3 Comments