Using nHibernate EventListeners to validate and audit data

In an application you would like to have maximum control over interactions with the database. Ideally there is one single point where all data can be checked and monitored before it is sent to, or read from, the database. In nHibernate, EventListeners are an ideal way to do that. Every entity read from or sent to the db, whether explicitly flushed or part of a graph, passes through them. nHibernate has a long list of events you can listen to. At first sight, the documentation on picking the right listeners and how to implement them points to an article by Ayende. Alas, there are some severe issues with taking that direction. There is a better way.

The problem

Entities in our domain can have quite complex validation. My base DomainObject class has a string list of possible Issues, and a Validate method. An object with an empty issue list is considered valid; what makes an object invalid is described in the issue list. Implementing this validation in the nHibernate OnPreUpdate event listener would seem a solid way to trap all validation errors.
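As a minimal sketch of that shape (illustrative only; the post's actual base class is not shown, and the member names simply follow the description above):

```csharp
using System.Collections.Generic;

// Illustrative sketch of the base class described above; the real
// implementation is the author's. Each broken rule adds an entry to Issues.
public abstract class DomainObject
{
    // An object with an empty issue list is considered valid.
    public IList<string> Issues { get; } = new List<string>();

    // Derived entities clear and refill Issues here.
    public abstract void Validate();
}
```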

Code Snippet
    public bool OnPreUpdate(PreUpdateEvent preUpdateEvent)
    {
        var domainObject = preUpdateEvent.Entity as DomainObject;
        if (domainObject == null)
            return false;
        domainObject.Validate();
        if (domainObject.Issues.Any())
            throw new InvalidDomainObjectException(domainObject);
        return false; // false means: do not veto the update
    }


Pretty straightforward. The Validate method performs the validation; in case this results in any issues, an exception is thrown and the update is canceled. But there is a huge problem with this approach. As the validation code can, and will, do almost anything, there is a chance it will touch a lazy collection, resulting in an nHibernate exception: a flush is in progress, and the lazy collection triggers a read. This results in the dreaded “Collection was not processed in flush” exception.

The way out

There are loads and loads of events your code can listen to. The OnFlushEntity event is fired before the OnPreUpdate event. It fires at the right occasion, and the best part is that you can touch anything while inside it.

Code Snippet
    public void OnFlushEntity(FlushEntityEvent flushEntityEvent)
    {
        if (HasDirtyProperties(flushEntityEvent))
        {
            var domainObject = flushEntityEvent.Entity as DomainObject;
            if (domainObject != null)
                domainObject.Validate();
        }
    }


Here only the validation is performed; I leave the rejection of invalid entities in the OnPreUpdate event.

Code Snippet
    public bool OnPreUpdate(PreUpdateEvent preUpdateEvent)
    {
        var domainObject = preUpdateEvent.Entity as DomainObject;
        if (domainObject == null)
            return false;
        if (domainObject.Issues.Any())
            throw new InvalidDomainObjectException(domainObject);
        return false;
    }


Crucial in the OnFlushEntity event is the HasDirtyProperties method. I found this method in a GitHub contribution by Filip Kinsky, just about the only documentation on the event.

Code Snippet
    private bool HasDirtyProperties(FlushEntityEvent flushEntityEvent)
    {
        ISessionImplementor session = flushEntityEvent.Session;
        EntityEntry entry = flushEntityEvent.EntityEntry;
        var entity = flushEntityEvent.Entity;
        if (!entry.RequiresDirtyCheck(entity) || !entry.ExistsInDatabase || entry.LoadedState == null)
        {
            return false;
        }
        IEntityPersister persister = entry.Persister;
        object[] currentState = persister.GetPropertyValues(entity, session.EntityMode);
        object[] loadedState = entry.LoadedState;
        return persister.EntityMetamodel.Properties
            .Where((property, i) => !LazyPropertyInitializer.UnfetchedProperty.Equals(currentState[i]) && property.Type.IsDirty(loadedState[i], currentState[i], session))
            .Any();
    }


As stated, you can do almost anything in the OnFlushEntity event, including modifying data in entities. So this is an ideal place to set auditing properties, or even add items to collections of the entity. All these modifications will be persisted to the database.

The original post by Ayende was about setting auditing properties, not about validation. Modifying an entity inside the OnPreUpdate event can be done, but it takes some fiddling. Having discovered OnFlushEntity, we moved not only the validation but also the auditing code there.

Setting event handlers

So far I have described the event handlers but have not yet shown how to hook them up. The event handlers are defined in interfaces, to be implemented in a class. The snippets above are all members of one class.

Code Snippet
    public class RechtenValidatieEnLogListener : IPreUpdateEventListener, IPreInsertEventListener, IPreDeleteEventListener, IPostLoadEventListener, IFlushEntityEventListener
    {
        // ……
    }


This class is used when creating the session factory.

Code Snippet
    private static ISessionFactory GetFactory()
    {
        var listener = new RechtenValidatieEnLogListener();
        return Fluently.Configure().
            Database(CurrentConfiguration).
            Mappings(m => m.FluentMappings.AddFromAssembly(Assembly.GetExecutingAssembly())).
            ExposeConfiguration(c => listener.Register(c)).
            CurrentSessionContext<HybridWebSessionContext>().
            BuildSessionFactory();
    }


Registering the handlers is done by the Register method, which uses the configuration:

Code Snippet
    public void Register(Configuration cfg)
    {
        cfg.EventListeners.FlushEntityEventListeners = new[] { this }
            .Concat(cfg.EventListeners.FlushEntityEventListeners)
            .ToArray();
        cfg.EventListeners.PreUpdateEventListeners = new[] { this }
            .Concat(cfg.EventListeners.PreUpdateEventListeners)
            .ToArray();
    }


Most examples of hooking up handlers use the cfg.SetListener method. The problem with that is that it knocks out any handlers already hooked in. For the OnPreUpdate event that's no problem, but knocking out the default OnFlushEntity listener is fatal. Using the code above, your custom listener is combined with any handlers already set.

Winding down

That’s all there is to it. I have left out the parts on validating reads, inserts, and deletes; they follow the same pattern, and it's up to you to implement them. EventListeners are very powerful, but it's a pity the documentation is so sparse. All of this was found by scraping the web and a lot of trial and error. But now we have a very solid system for validation and auditing, without any limitations.
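As a hedged sketch of what the insert case could look like, following the same pattern as the OnPreUpdate snippet earlier (DomainObject and InvalidDomainObjectException are the types from this post; the exact body is up to you):

```csharp
// Sketch only: a pre-insert listener following the same pattern
// as the OnPreUpdate listener shown earlier in this post.
public bool OnPreInsert(PreInsertEvent preInsertEvent)
{
    var domainObject = preInsertEvent.Entity as DomainObject;
    if (domainObject == null)
        return false;
    if (domainObject.Issues.Any())
        throw new InvalidDomainObjectException(domainObject);
    return false; // false = do not veto the insert
}
```

It would need to be registered in Register as well, by prepending the listener to cfg.EventListeners.PreInsertEventListeners in the same concat style.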


.NET Fringe, defining the future


There’s a long history in OSS communities of projects emerging from the community that start on the fringes. The people working on these projects are motivated by a desire for change, for doing things in a way that is not necessarily the norm. The fact that they exist is a sign of a healthy community.

As a very salient example, take jQuery. At one point it was a small JavaScript library that a few passionate developers worked on; it then grew to be the de facto library for developers everywhere. Thanks to a rich ecosystem around consuming and contributing to open source, what was once fringe became the mainstream.

Until recently, this ecosystem has not really existed in the .NET world. However, times they are a-changin'! In the past 5 to 10 years there have been major positive changes. One really important one was the groundwork laid by ALT.NET, which made a very loud call for many of the changes we're seeing. Another is the bold steps Microsoft has taken to level the playing field for OSS libraries and tools. Regardless, the important point is that the ecosystem I described is here in .NET and growing.

There are too many examples to name them all, but I'll list a few recent projects that illustrate this: jQuery, NuGet, GitHub, JSON.NET, AutoMapper, Xamarin, NancyFX, and .NET vNext. This is just a sampling that does not do them justice, as there are many, many other examples.

This change is important. This is just the beginning, but it’s a great beginning. A group of us think this is so important, that we’re putting together an event focused on this topic, .NET Fringe.

We’re bringing together members of the .NET OSS community who have been working to define the future. They are going to share their work, their learnings, and their passion. And it's happening in Portland, a place rich in OSS culture.

Be part of something amazing, come to .NET Fringe!


A short note on build tasks

Surely, I’m behind the curve here but I’ve been thinking about the typical build process here at work. For a long time I’ve been operating off of the classic model from back in my NAnt days where it was all about Build -> Unit Test -> Integration Test. Maybe if you are feeling fancy you have some different steps for running database migrations or running other code analysis tools.

However, I’ve been working a lot with the concept of microservices in my side project. Trying to figure out where they break down and where they shine. Also, just trying to figure out the practical issues with running them. There is so much going on in our domain right now that trying to keep up can seem like a tidal wave. So if you are new to some of these concepts, well – so am I. :) The following is a brain dump of ideas for managing all of this crazy.


First, I’ve started adding a few new steps to my build process. First off is a reminder of how important a ‘packaging’ step is. Rob and I built this into UppercuT, but it's really coming back to me how important this is. I think it's critical to realize that your default build output may not be packaged well enough. In Visual Studio (well, for me anyways) there is a very common behavior to just grab the ‘bin’ output and run. Because I need to package up my source better, I am now running my build step, then moving/copying all of the deployable content to something like a ‘build_output’ folder.

For those of you doing classic Visual Studio / C# / .NET development, I strongly recommend that you break out of your IDE for this. I would invite you to look at the power contained in your command line tools; even CMD can be put to good use. From there, look at PowerShell. For me it's bash, but I really need to look at ZSH.

Now that everything is in ‘build_output’, my next step has been to run various HTML/ASPX/CSS/JS minification programs (and the litany of lilliputian tools that come with them) to compress and optimize my application. The next step has been to package all of this build_output content into a deployable unit. For my .NET apps this is a NuGet package via OctoPack, and for my side project it has been Debian files.


This leads me to my newest build task: push. Push takes all of my nice new build output and makes it available to a larger audience. Note that this step could be run by me or by my automated build tooling. What it does is simply take the assets and, for my .NET projects, throw them into Octo's NuGet repo, or, for my side project, use fpm/deb-s3 to generate my own Debian repository. That way I can pull these assets down, deploy them in testing / staging / production, and have a consistent experience.
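The build -> package -> push flow above can be sketched on the command line like this. Folder and artifact names here are made up for illustration, and the actual upload (NuGet feed, deb-s3 bucket) is left out:

```shell
#!/bin/sh
# Sketch of the package-and-push flow; names are illustrative only.
set -e

# 1. Build: pretend the compiler dropped its output in bin/
mkdir -p bin
echo "compiled app" > bin/app

# 2. Package: copy only the deployable content into build_output/
rm -rf build_output
mkdir -p build_output
cp bin/app build_output/

# (minification / optimization of build_output would run here)

# 3. Bundle build_output into a single deployable unit
tar -czf app-artifact.tar.gz -C build_output .

# 4. Push: upload app-artifact.tar.gz to a package repo -- omitted here.
echo "artifact ready: app-artifact.tar.gz"
```

The point of the explicit build_output folder is that whatever lands there, and only that, ends up in the artifact you deploy everywhere.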

So, nothing really earth shattering here, but I wanted to share my thoughts. Also, I find these kinds of topics hard to find out there in the interwebs. If you have any great articles around the topic please do share in the comments.


Changing roles and focus – but not good bye

I’ve been presented with a great opportunity to work on the NeuronESB Team (I've always wanted to work on a product team), which is a big change from my work over the past 23+ years, where my focus was on custom application development and related consulting. I've been on the Neuron team now since 1/5 and it has been a great experience.

So…what does this mean as far as my community activities? Candidly, I'll be cutting them back quite a bit. I will try to get to 1 or 2 shows this year. If you follow me on Twitter, you've seen my travel schedule pick up. We're not through February yet and I've already racked up 26K miles. Given that trend, it doesn't leave time for conferences – which is difficult because it means I won't get to see good friends throughout the year. This is also a good thing, as I've been speaking at shows non-stop for 6 years. Even without this job change, I was likely to cut back on the shows. It's time for others to step up and share their knowledge with the community.

I’ll still continue to write for CODE Magazine and to produce videos for WintellectNOW. I'll do my best to support the Code Camps near me (Philly, NYC, Central Penn). With all of this travel I may occasionally be in town for a user group; if possible, and if you would like to have me, I'd love to speak at your group. I'll post my travel plans on Twitter as they become known. Most of my contributions are going to be in the legal arena – specifically on open source and intellectual property as they relate to technologists, contracts, etc.


Typescript Support in Atom Editor for Windows

Recently I was trying to get TypeScript support working inside the Atom editor on Windows.

In my attempt to get things working I went to the Atom site and found the TypeScript package. Per the documentation I ran ‘apm install typescript’. After about 15 seconds it appeared that I was good to go. Sadly, this was not the case. When I opened Atom (by typing atom at the cmd prompt) I received an error telling me to restart Atom.

Because I like to follow directions, I restarted Atom (again via the cmd prompt). Sadly, I received the same error again… WTF.

Well, a quick Google search for ‘These are now installed. Best you restart atom just this once.’ yielded one result. However, when I clicked on the link I was taken to the GitHub 404 page; it seems that link is dead. What to do now? Lucky for me, there was a cached version of the page I could look at (thank you, Google).

Looking through the source file, I was able to find the block of code which was throwing this message.

It appears that both linter and autocomplete-plus are required in order for TypeScript support to work. I assumed these would have been installed by default, but I guess not.

I thought I would simply try to install these Atom packages in hopes the error would go away. To accomplish this I ran the following two commands:

  • apm install linter
  • apm install autocomplete-plus
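Put together, the full install sequence from this post (package names as of the time of writing; apm must be on your PATH) is:

```shell
# Install the TypeScript package plus its two undocumented prerequisites.
apm install typescript
apm install linter
apm install autocomplete-plus
```

Then restart Atom once more.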

Once I had both of these packages installed, I tried to reopen Atom. To my excitement, the TypeScript message was no longer present. To ensure my fix worked, I decided to edit a .ts file and yup, my stuff recompiled down to JS…

Hope this helps,

Till next time,
