Fix your code, don’t disable static analysis

Maybe it is my OCD, maybe it is that I would like to think I try to always write clean code, maybe it is something else entirely. But I always cringe when I see people turn off or disable static analysis in their code.

The reason I cringe is that I have to assume the authors of the static analysis tools (be it ReSharper, Visual Studio, or another product) are more knowledgeable in these areas than I am and better understand why it is bad to do something.

Today I came across a ‘// ReSharper disable once PossibleMultipleEnumeration’ comment inside a method, and not just once, but twice.

Take a look at the code below.

private ReturnValuesForDateJson[] GetReturnValues(IEnumerable<ReferenceNumberAndReturnTypeRecordModel> recordModels)
{
    // ReSharper disable once PossibleMultipleEnumeration
    var dates = recordModels.First()
        .ReturnRecords.Select(x => x.ReturnDate).Distinct();

    return dates.Select(aDate => new ReturnValuesForDateJson
    {
        ReturnDate = new MonthJson(aDate.Year, aDate.Month),
        // ReSharper disable once PossibleMultipleEnumeration
        ReturnValues = GetReturnValueForSeriesIdentifier(aDate, recordModels)
    }).ToArray();
}


Notice how the ReSharper warning for PossibleMultipleEnumeration has been disabled twice. This is because the method argument is an IEnumerable. If we change the argument type to either IList or ICollection, the warnings go away; alternatively, we can leave the argument as an IEnumerable and materialize it once by calling .ToList() on it.

Now why is iterating over an enumerable multiple times an issue? Because an enumerable is re-evaluated each time you go over it, the underlying results could change between enumerations. Imagine that you pass a LINQ query into the method: deferred execution allows the results to be different each time, possibly causing bugs or errors.
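To make that concrete, here is a minimal, self-contained sketch (the names are mine, not from the code above) showing the same deferred LINQ query producing different results on each enumeration:

```csharp
using System.Collections.Generic;
using System.Linq;

static class DeferredDemo
{
    // Counts the elements of one deferred query twice: once before and
    // once after the underlying list changes.
    public static (int before, int after) CountTwice()
    {
        var source = new List<int> { 1, 2, 3 };

        // Deferred LINQ query: it re-reads 'source' on every enumeration.
        IEnumerable<int> evens = source.Where(n => n % 2 == 0);

        int before = evens.Count(); // first enumeration sees { 2 }

        source.Add(4);              // the underlying data changes

        int after = evens.Count();  // second enumeration sees { 2, 4 }
        return (before, after);
    }
}
```

Calling DeferredDemo.CountTwice() returns (1, 2): the same query, two different answers. Calling .ToList() up front would pin the results to a snapshot and make both counts equal.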

Working code, no more static analysis errors

private ReturnValuesForDateJson[] GetReturnValues(ICollection<ReferenceNumberAndReturnTypeRecordModel> recordModels)
{
    var dates = recordModels.First()
        .ReturnRecords.Select(x => x.ReturnDate).Distinct();

    return dates.Select(aDate => new ReturnValuesForDateJson
    {
        ReturnDate = new MonthJson(aDate.Year, aDate.Month),
        ReturnValues = GetReturnValueForSeriesIdentifier(aDate, recordModels)
    }).ToArray();
}


Remember, if the tool is telling you that your code is less than optimal, give it a look and try to fix it. Now, there may be legitimate reasons to ignore the warnings, and that is cool. But when that is the case, do your friends a favor and add a comment explaining the intent so future developers understand the line of thinking.

Till next time,

Posted in Clean Code | Tagged | Leave a comment

Merge Headache — Don’t Re-Purpose a class, create a new one

Merging code, if done right, does not have to be the frustrating process that many people experience. I have learned during my career that if I pull from my master branch daily, if not more often, my merges are almost always pain free. Now, I am not saying that merging is always pain free. There will be times when significant or simultaneous changes to a file introduce pain, and there will be times when a codebase is undergoing structural changes and you will experience issues. But honestly, if the changes to the code are standard, I maintain that merging should not be too painful.

However, there are things developers can do which directly introduce pain and frustration into the merge process. One of these is re-purposing a file with an almost entirely new body of code. When I say re-purposing, I mean NOT changing the name, but changing 80% of the logic inside the file.

Imagine you have a file called Baz.cs and inside of this file you have a class named Baz. Now imagine you made changes to the Baz class, inside the Baz.cs file. At the very same time you are making changes to Baz.cs, someone else (in another branch, mind you) is also making changes to Baz.cs. However, their changes not only change the contents of the Baz class, they rename the Baz class to Bar. While renaming Baz to Bar they also introduce significant changes to the body of the class.

The issue here is that when you attempt to do a merge or rebase, your SCM system is going to flag almost every line with a conflict, though not all lines (depending on the changes, of course). This leaves the person doing the merge having to decide what to do. Do you take the changes as is? Are your changes still needed? Do you need to create a new class/file so that you can have both sets of changes? What should you do? How do you complete the merge without blowing the prior changes out of the water?

To avoid this type of merge headache, what should you do? In my opinion, if you are going to change the intent of a class or a file, you should create a NEW file and delete the old one. If this process had been followed in the scenario above, the Baz.cs file would have been deleted and the Bar.cs file would have taken its place. This would have allowed me either to skip dealing with the merge issues entirely, knowing the file was no longer needed, or to undo the delete if my original changes were still needed.

Keeping the merge process pain free is not too hard, but it does require a bit of forethought and planning.

Till next time,

Posted in Git, Uncategorized | Tagged | 3 Comments

CQRS recap, or “How to resuscitate”

I’m fighting a bit with my ego at the moment, which is telling me I need to provide at least four paragraphs of update on what I’ve been doing in the three years since I last posted. The fight is with my more practical side, which is saying, “Name three people that have noticed.” I’ll compromise with a bulleted list because some of it does have a little bearing on the rest of this post:

  • I’m no longer with BookedIN although they are still going strong.
  • I’ve started recently with Clear Measure who has graciously relaxed their “no hillbilly” hiring policy. Guardedly.
  • For those interested in my extra-curriculars, I’m also blogging at where you’ll find a recent follow-up to an older post on Life in the Bahamas, which I keep getting emails about for some reason…

This past weekend, Clear Measure hosted a meetup/coding-thingy on CQRS with your host, Gabriel Schenker. Initially intended as an event by and for Clear Measurians, it was opened to the public as a means to garner feedback for future events. It was a 7-hour affair where Gabriel set out the task to perform, then left us to our own devices to build as much as we could while he provided guidance and answered questions.

The event itself ran as well as I expected, which, me being an optimistic sort, was awesome! And Gabriel, if you’re reading, I did manage to get to the beach today so don’t feel bad about taking that time away from me. I won’t go into logistics but wanted to get my thoughts on CQRS on the table for posterity.

By way of background, my knowledge of CQRS was, up until I started at Clear Measure, pretty vague, limited mostly to what I knew about the definition of each of the words in the acronym. Gabriel has, in meetings and recently on his blog, increased my awareness of it to the point where it made sense as an architectural pattern, but it was still abstract enough that if someone asked me to implement it in a project, I would have first consulted with a local voodoo doctor (i.e. speed dial #4).

The good part

So the major benefit I got from the event is how much CQRS was demystified. It really *is* just segregation of the responsibilities of the commands and the queries. Commands must logically be separated from queries to the point where they don’t even share the same domain model. Even the term “domain model” is misleading since the model for queries is just DTOs, and not even glorified ones at that.

Taking one example, we created ourselves a swanky new TaskService for saving a new task. It takes in a ScheduleTaskDto which contains the basics from the UI: a task name, a due date, some instructions, and a list of assignees. The TaskService uses that info to create a fully-formed Task domain object, setting not only the properties passed in but also maybe the CreateDate, the Status, and the ID. Then maybe it validates the thing, saves it to the repository, and notifies the assignees of the new task. All like a good, well-behaved domain object.

Now we want a list of tasks to show on the dashboard. Here are two things we actively had to *stop* ourselves from doing:

  • Returning a list of Task objects
  • Putting the logic to retrieve the tasks in the TaskService

Instead, we created a DashboardTask DTO containing the TaskId, Name, DueDate, and Status: all the items needed to display a list of tasks and nothing else. We also created a view in the database that retrieves exactly that information. The code to retrieve it lives in a separate class that goes directly to the database, not through the TaskService.
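As a rough in-memory sketch of that separation (the class and property names follow the description above, but the bodies are illustrative assumptions of mine, with a List standing in for the database view and repository we actually used):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Command-side domain object: fully formed, owned by the TaskService.
public class Task
{
    public int Id { get; set; }
    public string Name { get; set; }
    public DateTime DueDate { get; set; }
    public string Status { get; set; }
    public DateTime CreateDate { get; set; }
}

// Query-side DTO: only the fields the dashboard screen needs, nothing else.
public class DashboardTask
{
    public int TaskId { get; set; }
    public string Name { get; set; }
    public DateTime DueDate { get; set; }
    public string Status { get; set; }
}

// Input from the UI (the real one also carried instructions and assignees).
public class ScheduleTaskDto
{
    public string Name;
    public DateTime DueDate;
}

// Commands only: builds the full domain object, filling in what the UI didn't send.
public class TaskService
{
    private readonly List<Task> _store;
    private int _nextId = 1;
    public TaskService(List<Task> store) => _store = store;

    public Task Schedule(ScheduleTaskDto dto)
    {
        var task = new Task
        {
            Id = _nextId++,
            Name = dto.Name,
            DueDate = dto.DueDate,
            Status = "Open",
            CreateDate = DateTime.UtcNow
        };
        _store.Add(task);
        return task;
    }
}

// Queries bypass the TaskService and the domain model entirely.
public class DashboardTaskQuery
{
    private readonly List<Task> _store; // stands in for the database view
    public DashboardTaskQuery(List<Task> store) => _store = store;

    public List<DashboardTask> Execute() =>
        _store.Select(t => new DashboardTask
        {
            TaskId = t.Id,
            Name = t.Name,
            DueDate = t.DueDate,
            Status = t.Status
        }).ToList();
}
```

The point of the sketch is the shape, not the storage: the command path goes through a service and a rich domain object, while the query path is a flat projection with no behavior at all.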

Given more time, I can see how the command/query separation would play out more easily. For the commands, we might have used NHibernate, which gives us the lazy loading, relationship handling, property mappings, and everything else it does well. For the queries, we would probably stick with views and Dapper, which let us query exactly the information we want.

My sense is that we’d have a much bigger set of classes in the query model than in the command model (which would be a full-fledged domain), because the query model requires at least one class for almost each and every screen in the app. Dashboard listing of tasks for supervisors: SupervisorDashboardTask. List of tasks for a dropdown list: TaskListItem. Retrieving a task for printing on a report: OverdueTask. All separate and all very specific.

Wrap up

My partner-in-crime for the day was Alper Sunar who is hosting our day’s efforts, such as they are. The big hurdle I had to jump early on was to stop myself from going infrastructure crazy. Early discussions touched on: Bootstrap, RavenDB, IoC, and Angular, all of which would have kept me from my goal: learning CQRS.

I’ve forked the code with the intent of continuing the journey and perhaps looking into something like RavenDB. I have to admit, all the talk around the virtual water cooler about elastic search has me thinking. And not just about finding new sister-wives…

Kyle the Returned

Posted in BookedIN, Clear Measure, CQRS | Tagged , , , | 2 Comments


This week I had the pleasure of getting to attend the DotNetFringe conference. This was hosted in Portland, OR (a place I had never been) and it was an outstanding conference. Thank you to all of the organizers for putting on such a great event.


I have been a long-time fan of OSS and I’m always looking to learn about new ways of doing things. One of the things I have been thinking about lately is what the next big changes in software development will be, specifically for .NET. What is the next IoC container for .NET applications? After watching a talk on Reactive Extensions by Amy, I continue to be convinced that those who really understand functional programming and state management will be making the systems that I want to work on in the future.

I’ve been playing with Clojure for the last year or so and it has really changed the way I see software. Early this year I started porting all of my build scripts to FAKE (F#), and I can see how powerful F# is and the beauty it brings. I’m also watching with rapt attention the developments on the client side in terms of React.js and the various ClojureScript projects that take this model to the next level. These projects seem to take something that is normally quite messy and make it simpler. My ability to reason about my F# and Clojure projects is an order of magnitude better than my OO projects from 3 years ago (because my current OO projects have larger doses of a functional aspect to them).

I think this all dovetails nicely with the discussion around different forms of state management on disk as well: looking at tools outside of the traditional RDBMS, such as graph and document databases.

State brings with it a lot of difficulty; it’s one of the reasons why I like working on websites more than native applications. But with these new tools coming about, maybe I should give it a second look.

Thanks again to the entire team running DotNetFringe!

Posted in Uncategorized | 2 Comments

Using nHibernate EventListeners to validate and audit data

In an application you would like maximum control over interaction with the database. In the ideal situation there is one single point where all data can be checked and monitored before it is sent to, or read from, the database. In nHibernate, EventListeners are an ideal way to do that: every entity read from or sent to the db, whether explicitly flushed or part of a graph, passes through there. nHibernate has a long list of events you can listen to. At first sight, the documentation on picking the right listeners and how to implement them points to an article by Ayende. Alas, there are some severe issues with taking that direction. There is a better way.

The problem

Entities in our domain can have quite complex validation. My base DomainObject class has a string-list of possible Issues, and a Validate method. An object with an empty issue list is considered valid. What makes an object invalid is described in the issue-list. Implementing this validation in the nHibernate OnPreUpdate event listener would seem a solid way to trap all validation errors.
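Before looking at the listener, here is a minimal sketch of the base class described above, assuming an implementation (the Invoice subclass and its validation rule are purely illustrative, not from the real domain):

```csharp
using System.Collections.Generic;

// Base class: an entity is valid exactly when its issue list is empty.
public abstract class DomainObject
{
    public IList<string> Issues { get; } = new List<string>();

    // Each entity describes what makes it invalid by filling the issue list.
    public abstract void Validate();

    public bool IsValid => Issues.Count == 0;
}

// Hypothetical entity, only to show the pattern in action.
public class Invoice : DomainObject
{
    public decimal Amount { get; set; }

    public override void Validate()
    {
        Issues.Clear();
        if (Amount <= 0)
            Issues.Add("Amount must be positive.");
    }
}
```

With this shape, a listener only needs to call Validate() and check Issues; it never needs to know entity-specific rules.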

Code Snippet
public bool OnPreUpdate(PreUpdateEvent preUpdateEvent)
{
    var domainObject = preUpdateEvent.Entity as DomainObject;
    if (domainObject == null)
        return false;
    domainObject.Validate();
    if (domainObject.Issues.Any())
        throw new InvalidDomainObjectException(domainObject);
    return false;
}


Pretty straightforward. The Validate method performs the validation; in case this results in any issues, an exception is thrown and the update is canceled. But there is a huge problem with this approach. As the validation code can, and will, do almost anything, there is a chance it will touch a lazy collection, resulting in an nHibernate exception: the session is in the middle of a flush, the lazy collection triggers a read, and you get the dreaded “collection was not processed by flush()” exception.

The way out

There are loads and loads of events your code can listen to. The OnFlushEntity event is fired before the OnPreUpdate event. It fires at the right occasion, and the best part is that you can touch anything while inside it.

Code Snippet
public void OnFlushEntity(FlushEntityEvent flushEntityEvent)
{
    if (HasDirtyProperties(flushEntityEvent))
    {
        var domainObject = flushEntityEvent.Entity as DomainObject;
        if (domainObject != null)
            domainObject.Validate();
    }
}


Here only the validation is performed; I leave the rejection of invalid entities to the OnPreUpdate event.

Code Snippet
public bool OnPreUpdate(PreUpdateEvent preUpdateEvent)
{
    var domainObject = preUpdateEvent.Entity as DomainObject;
    if (domainObject == null)
        return false;
    if (domainObject.Issues.Any())
        throw new InvalidDomainObjectException(domainObject);
    return false;
}


Crucial in the OnFlushEntity event is the HasDirtyProperties method. It was found in a GitHub contribution by Filip Kinsky, just about the only documentation on the event.

Code Snippet
private bool HasDirtyProperties(FlushEntityEvent flushEntityEvent)
{
    ISessionImplementor session = flushEntityEvent.Session;
    EntityEntry entry = flushEntityEvent.EntityEntry;
    var entity = flushEntityEvent.Entity;
    if (!entry.RequiresDirtyCheck(entity) || !entry.ExistsInDatabase || entry.LoadedState == null)
    {
        return false;
    }
    IEntityPersister persister = entry.Persister;
    object[] currentState = persister.GetPropertyValues(entity, session.EntityMode);
    object[] loadedState = entry.LoadedState;
    return persister.EntityMetamodel.Properties
        .Where((property, i) => !LazyPropertyInitializer.UnfetchedProperty.Equals(currentState[i]) && property.Type.IsDirty(loadedState[i], currentState[i], session))
        .Any();
}


As stated, you can do almost anything in the OnFlushEntity event, including modifying data in entities. So this is an ideal place to set auditing properties, or even to add items to collections of the entity. All these modifications will be persisted to the database.

The original post by Ayende was about setting auditing properties, not about validation. Modifying an entity inside the OnPreUpdate event can be done, but it takes some fiddling. Having discovered OnFlushEntity, we moved not only the validation but also the auditing code there.

Setting up the event handlers

So far I have described the event handlers but have not yet shown how to hook them up. The event handlers are defined in interfaces, to be implemented in a class. The snippets above are all members of one class.

Code Snippet
public class RechtenValidatieEnLogListener : IPreUpdateEventListener, IPreInsertEventListener, IPreDeleteEventListener, IPostLoadEventListener, IFlushEntityEventListener
{
    // ……
}


An instance of this class is used when creating the session factory.

Code Snippet
private static ISessionFactory GetFactory()
{
    var listener = new RechtenValidatieEnLogListener();
    return Fluently.Configure().
        Database(CurrentConfiguration).
        Mappings(m => m.FluentMappings.AddFromAssembly(Assembly.GetExecutingAssembly())).
        ExposeConfiguration(c => listener.Register(c)).
        CurrentSessionContext<HybridWebSessionContext>().
        BuildSessionFactory();
}


Registering the handlers is done by the Register method, which uses the configuration.

Code Snippet
public void Register(Configuration cfg)
{
    cfg.EventListeners.FlushEntityEventListeners = new[] { this }
        .Concat(cfg.EventListeners.FlushEntityEventListeners)
        .ToArray();
    cfg.EventListeners.PreUpdateEventListeners = new[] { this }
        .Concat(cfg.EventListeners.PreUpdateEventListeners)
        .ToArray();
}


Most examples of hooking in handlers use the cfg.SetListener method. The problem with that is that it knocks out any handlers already hooked in. For the OnPreUpdate event that’s no problem, but knocking out the default OnFlushEntity listener is fatal. Using this code, your custom listener is combined with any handlers already set.

Winding down

That’s all there is to it. I have left out the parts on validating reads, inserts, and deletes; they follow the same pattern, and implementing them is up to you. EventListeners are very powerful, but it is a pity the documentation is so sparse. All of this was found by scraping the web and a lot of trial and error. But now we have a very solid system for validation and auditing, without any limitations.

Posted in Uncategorized | 6 Comments