Sharks!

So, in my area of the US, we’ve had somewhat of an unusual summer when it comes to shark attacks.  It seems like every other day, there’s an attack at a local beach, and in fact, yesterday they were falling from the sky here in Virginia Beach – I kid you not.

So, what if there was an app where you could report, and confirm reports of, shark sightings and other beach conditions – good or bad? Wouldn’t this make going to the beach just a tiny, tiny bit safer? Or, asked another way: if I saw a big bull shark 400 yards from where your kids were swimming, wouldn’t it be neat if I could let you know somehow?

Enter http://shark.report

[Screenshot of the shark.report app]

In a nutshell, it’s Waze for the beach.

Here’s where you come in.

I did this as a project for the local Code for America brigade, Code for Hampton Roads.  It’s being hosted by my good friends over at Hatch, a local tech accelerator here.  The app is a fork of the popular mbta.ninja app that was made by another Code for America Brigade – Code for Boston.

Anyhow, we need people to work on this!  People are getting bitten!  Summer’s half over! Shark week’s almost here! :)

If anyone has any interest, all you have to do is work on the app – I’ll update the site with any pull requests I get.

Things we need:

  • More beaches!
  • Better beach configuration
  • Updates to the latest mbta.ninja source
  • Twilio integration!

 


On UI Testing

I’m part of a secret society of developers that has evolved over the years into something that has had a pretty significant impact on my career. We ask each other for advice, hang out at conferences, discuss trends in the field, etc, etc, and so on and so forth. Pretension is low; content and entertainment value are high. It’s essentially everything I had hoped alt.NET would have been.

A short while ago, we had a chat. It was the latest in a series, depending on how you define “series”, where we gather together to discuss some topic, be it JavaScript frameworks, OO practices, or smoked meat. On this particular day, it was UI testing.

I don’t recall all the participants but it was a good number of the people on this list. Here, I’m going to attempt to summarize the salient points but given my memory, it’ll more likely be a dissertation of my own thoughts. Which is just as well as I recall doing more talking than I should have.

Should you UI test?

This was a common thread throughout. Anyone who has done a significant amount of UI testing has asked a variant of this question. Usually in the form, “Why the &*%$ am I doing this?”

Let it not be said that UI testing is a “set it and forget it” affair. Computers are finicky things, UIs seemingly more so. Sometimes things can take just that one extra second to render and all of a sudden your test starts acting out a Woody Allen scene: Where’s the button? There’s supposed to be a button. YOU TOLD ME THERE WOULD BE A BUTTON!!!

Eventually, we more or less agreed that they are probably worth the pain. From my own experience, working on a small team with no QA department, they saved us on several occasions. Yes, there are the obvious cases where they catch a potential bug. But there was also a time when we had to re-write a large section of functionality with no change to the UI. I felt really good about having the tests then.

One counter-argument was whether you could just have a comprehensive suite of integration tests. But there’s something to be said for having a test that:

  1. Searches for a product
  2. Adds it to the shopping cart
  3. Browses more products
  4. Checks out
  5. Goes to PayPal and pays
  6. Verifies that you got an email

This kind of integration test is hard to do, especially when you want to verify all the little UI things in between, like whether a success message showed up or whether the number of items in the shopping cart incremented by 1.
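To make “the little UI things in between” concrete, here’s a rough sketch of one slice of that flow. This is an illustration only – the shop, URLs, and element IDs are all invented – assuming Selenium WebDriver with NUnit in C#:

using NUnit.Framework;
using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;

[TestFixture]
public class CheckoutFlowTests
{
    [Test]
    public void Adding_a_product_increments_the_cart_and_shows_a_message()
    {
        using (IWebDriver driver = new ChromeDriver())
        {
            driver.Navigate().GoToUrl("https://shop.example.com"); // hypothetical site

            // Search for a product
            driver.FindElement(By.Id("search-box")).SendKeys("rubber duck");
            driver.FindElement(By.Id("search-button")).Click();

            // Note the cart count before adding anything
            var before = int.Parse(driver.FindElement(By.Id("cart-count")).Text);

            // Add the first result to the cart
            driver.FindElement(By.CssSelector(".search-result .add-to-cart")).Click();

            // The "little things in between": success message and cart count
            Assert.That(driver.FindElement(By.CssSelector(".alert-success")).Displayed);
            var after = int.Parse(driver.FindElement(By.Id("cart-count")).Text);
            Assert.That(after, Is.EqualTo(before + 1));
        }
    }
}

(This cheerfully ignores the timing issues discussed below, which is exactly where the maintenance pain starts.)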

We also had the opposite debate: If you have a comprehensive suite of UI tests and are practicing BDD, do you still need TDD and unit tests? That was an interesting side discussion that warrants a separate post.

Maintenance

…is ongoing. There’s no getting around that. No matter how bullet-proof you make your tests, the real world will always get in the way. Especially if you integrate with third-party services (<cough>PayPal<cough>). If you plan to introduce UI tests, know that your tests will be needy at times. They’ll fail for reasons unknown for several consecutive runs, then mysteriously pass again. They’ll fail only at certain times of the day, when Daylight Savings Time kicks in, or only on days when Taylor Swift is playing an outdoor venue in the western hemisphere. There will be no rhyme or reason to the failures and you will never, ever be able to reproduce them locally.

You’ll add sleep calls out of frustration and check in with only a vague hope that it will work. Your pull requests will be riddled with variations of “I swear I wouldn’t normally do this” and “I HAVE NO IDEA WHAT’S GOING ON”. You’ll replace elegant CSS selectors with XPath so grotesque that Alan Turing will rise from his grave only to have his rotting eyeballs burst into flames at the sight of it.

This doesn’t really jibe with the “probably worth it” statement earlier. It depends on how often you have to revisit them and how much effort goes into it. From my experience, early on the answer is: often and a lot. As you learn the tricks, it dwindles significantly.
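As an aside (mine, not the group’s): the blind sleep calls described above usually give way to explicit waits that poll for a condition and give up only after a timeout. A minimal sketch, assuming Selenium WebDriver’s .NET support library; the element ID in the usage comment is invented:

using System;
using OpenQA.Selenium;
using OpenQA.Selenium.Support.UI;

public static class WaitHelpers
{
    // Instead of Thread.Sleep(5000), poll until the element is actually visible,
    // failing only after the timeout expires.
    public static IWebElement WaitForVisible(IWebDriver driver, By locator, int timeoutSeconds = 10)
    {
        var wait = new WebDriverWait(driver, TimeSpan.FromSeconds(timeoutSeconds));
        return wait.Until(d =>
        {
            var element = d.FindElement(locator);
            return element.Displayed ? element : null; // null means "keep polling"
        });
    }
}

// Usage (hypothetical button):
// WaitHelpers.WaitForVisible(driver, By.Id("submit-order")).Click();

It won’t cure the failures that only happen when Taylor Swift is playing an outdoor venue, but it removes a whole class of them.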

One of those tricks is the PageObject pattern. There was universal agreement that it is required when dealing with UI tests. I’ll admit I hadn’t heard of the pattern before the discussion but at the risk of sounding condescending, it sounds more like common sense than an actual pattern. It’s something that, even if you don’t implement it right away, you’ll move toward naturally as you work with your UI tests.
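For anyone in the same boat I was: the gist of the pattern is just to wrap each page (or widget) in a class that exposes intent-level methods, so the tests never see a selector. A bare-bones sketch – the URL and selectors are invented, not prescriptive:

using OpenQA.Selenium;

// A page object hides the selectors and exposes what a user can *do* on the page.
public class LoginPage
{
    private readonly IWebDriver _driver;

    public LoginPage(IWebDriver driver)
    {
        _driver = driver;
    }

    public LoginPage Open()
    {
        _driver.Navigate().GoToUrl("https://shop.example.com/login"); // hypothetical URL
        return this;
    }

    public DashboardPage LogInAs(string email, string password)
    {
        _driver.FindElement(By.Name("email")).SendKeys(email);
        _driver.FindElement(By.Name("password")).SendKeys(password);
        _driver.FindElement(By.CssSelector("button[type=submit]")).Click();
        return new DashboardPage(_driver); // navigation hands back the next page object
    }
}

public class DashboardPage
{
    private readonly IWebDriver _driver;
    public DashboardPage(IWebDriver driver) { _driver = driver; }

    public string WelcomeMessage => _driver.FindElement(By.CssSelector(".welcome")).Text;
}

// In a test: new LoginPage(driver).Open().LogInAs("hill@billy.edu", "secret");

When the markup changes, only the page object changes and the tests keep reading the same way – which is most of the “common sense” being formalized.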

Data setup

…is hard, too. At least in the .NET world. Tools like Tarantino can help by creating scripts to prime and tear down a database. You can also create an endpoint (on a web app) that will clear and reset your database with known data.

The issue with these approaches is that the “known” data has to actually be known when you’re writing your tests. If you change anything in it, Odin knows what ramifications that will have.

You can mitigate this a little depending on your technology. If you use SpecFlow, then you may have direct access to the code necessary to prime your database. Otherwise, maybe you can create a utility or API endpoints that allow you to populate your data in a more transparent manner. This is the sort of thing that a REST endpoint can probably do pretty well.
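To make that last idea concrete – purely a hypothetical sketch with invented names, assuming ASP.NET Web API with attribute routing – a test-only endpoint could let each scenario push in exactly the data it needs:

using System.Collections.Generic;
using System.Web.Http;

// DTO describing the data a test wants to exist. Everything here is illustrative.
public class TestCustomer
{
    public string Name { get; set; }
    public string Email { get; set; }
}

// Only wired up in test environments, never in production.
public class TestDataController : ApiController
{
    private readonly ITestDatabase _db; // hypothetical abstraction over the data store

    public TestDataController(ITestDatabase db)
    {
        _db = db;
    }

    [HttpPost]
    [Route("api/testdata/reset")]
    public IHttpActionResult Reset(List<TestCustomer> customers)
    {
        _db.Clear(); // wipe whatever the last test left behind
        foreach (var customer in customers)
            _db.AddCustomer(customer.Name, customer.Email);
        return Ok();
    }
}

public interface ITestDatabase
{
    void Clear();
    void AddCustomer(string name, string email);
}

The UI test then POSTs its data at the start of each scenario, so the “known” data lives next to the test that depends on it instead of in a database image nobody remembers the contents of.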

Mobile

Consensus for UI testing on mobile devices is that it sucks more than that time after the family dinner when our cousin, Toothless Maggie, cornered—…umm… we’ll leave it at: it’s pretty bad…

We would love to be proven wrong but to our collective knowledge, there are no decent ways to test a mobile UI in an automated fashion. From what I gather, ain’t no picnic doing it in a manual fashion. Emulators are laughably bad. And there are more than a few different types and versions of mobile device so you have to use these laughably bad options about a dozen different ways.

Outsourcing

What about companies that will run through all your test scripts on multiple browsers and multiple devices? You could save some development pain that way. But I personally wouldn’t feel comfortable unless the test scripts were extremely prescriptive. And if you’re going to that length, you could argue that it’s not a large effort to take those prescriptive steps and automate them.

That said, you might get some quick bang for your buck going this route. I’ve talked to a couple of them and they are always eager to help you. Some of them will even record their test sessions which I would consider a must-have if you decide to use a company for this.

Tooling

I ain’t gonna lie. I like Cucumber and Capybara. I’ve tried SpecFlow and it’s probably as good as you can get in C#, which is decent enough. But it’s hard to beat fill_in 'Email', :with => 'hill@billy.edu' for conciseness and readability. That said, do not underestimate the effort it takes to introduce Ruby to a .NET shop. There is a certain discipline required to maintain your tests and if everyone is scared to dive into your rakefile, you’re already mixing stripes with plaid.
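For comparison, the SpecFlow equivalent ends up as a Gherkin step plus a C# binding – perfectly workable, just wordier. A sketch, where the step wording, the selector convention, and the driver injection are all assumptions on my part:

using OpenQA.Selenium;
using TechTalk.SpecFlow;

[Binding]
public class FormSteps
{
    private readonly IWebDriver _driver;

    // Assumes the driver has been registered with SpecFlow's context injection elsewhere.
    public FormSteps(IWebDriver driver)
    {
        _driver = driver;
    }

    // Matches: When I fill in "Email" with "hill@billy.edu"
    [When(@"I fill in ""(.*)"" with ""(.*)""")]
    public void WhenIFillInWith(string field, string value)
    {
        _driver.FindElement(By.Name(field.ToLowerInvariant())).SendKeys(value);
    }
}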

We also discussed Canopy and there was a general appreciation for how it looks though Amir is the only one who has actually used it. Seems to balance the readability of Capybara with the “it’s still .NET” aspect of companies that fear anything non-Microsoft. It’ll be high on my list of things to try the next time I’m given the option.

Of course, there’s Selenium both the IDE and the driver. We mentioned it mostly because you’re supposed to.

Some versions of Visual Studio also provide support for UI tests, both recorded and coded. The Coded UI tests are supposed to have a pretty nice fluent interface and we generally agreed that coded tests are the way to go instead of recorded ones (as if that were ever in doubt).

Ed. note: Shout out to Protractor as well. We didn’t discuss it directly but as Dave Paquette pointed out later, it helps avoid random Sleep calls in your tests because it knows how to wait until binding is done. Downside is that it’s specific to Angular.

Also: Jasmine and PhantomJS got passing mentions, both favorable.

Continuous Integration

This is about as close as we got to disagreement. There was a claim that UI tests shouldn’t be included in CI due to the length of time it takes to run them. Or, if they are included, they should run on a schedule (e.g. once a night) rather than on every “check-in” (by which we mean, every feature).

To me, this is a question of money. If you have a single server and a single build agent, this is probably a valid argument. But if you want to get full value from your UI tests, get a second agent (preferably more) and run only the UI tests on it. If it’s not interfering with your main build, it can run as often as you like. Yes, you may not get the feedback right away but you get it sooner than if you run the UI tests on a schedule.


The main takeaway we drew from the discussion, which you may have gleaned from this summary, is: damn, we should have recorded this. That’s a mistake we hope to rectify for future discussions.


Fix your code, don’t disable static analysis

Maybe it is my OCD, maybe it is that I would like to think I try to always write clean code, maybe it is something else entirely. But I always cringe when I see people turn off or disable static analysis in their code.

The reason I cringe is that I have to assume the authors of the static analysis tools (be it ReSharper or Visual Studio or another product) are more knowledgeable in these areas than I am and better understand why it is bad to do something.

Today I came across this ‘// ReSharper disable once PossibleMultipleEnumeration’ inside a method and not just once, but twice.

Take a look at the code below.

private ReturnValuesForDateJson[] GetReturnValues(IEnumerable<ReferenceNumberAndReturnTypeRecordModel> recordModels)
{
    // ReSharper disable once PossibleMultipleEnumeration
    var dates = recordModels.First()
        .GetCalculationResult(CalculationProjectionValueType.UnannualizedRange)
        .ReturnRecords.Select(x => x.ReturnDate).Distinct();

    return dates.Select(aDate => new ReturnValuesForDateJson
    {
        ReturnDate = new MonthJson(aDate.Year, aDate.Month),
        // ReSharper disable once PossibleMultipleEnumeration
        ReturnValues = GetReturnValueForSeriesIdentifier(aDate, recordModels)
    }).ToArray();
}



Notice how the ReSharper warning for PossibleMultipleEnumeration has been disabled twice. This is because the method argument is an IEnumerable. If we change the argument from IEnumerable to either IList or ICollection, the warnings go away; alternatively, we can leave the argument as IEnumerable and materialize it once by calling .ToList() on it.

Now, why is iterating over an enumerable multiple times an issue? Because an enumerable is evaluated each time you go over it, the underlying results could change between enumerations. Imagine passing a LINQ statement into the method: the deferred nature of LINQ means the results could be different each time the query is enumerated, possibly causing bugs or errors (and even when the data doesn’t change, the work of producing it is repeated).
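Here’s a tiny, self-contained illustration of that deferred behaviour (unrelated to the code above beyond the principle):

using System;
using System.Linq;

class MultipleEnumerationDemo
{
    static void Main()
    {
        var source = new[] { 1, 2, 3 }.ToList();

        // A LINQ query is a description of work, not a result.
        var bigNumbers = source.Where(n => n > 1);

        Console.WriteLine(bigNumbers.Count()); // 2 – the first enumeration runs the query

        source.Add(4);

        Console.WriteLine(bigNumbers.Count()); // 3 – the second enumeration re-runs it over changed data
    }
}

If the enumerable were backed by a database query instead of an in-memory list, each enumeration would also be another round trip.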

Working code, no more static analysis errors

private ReturnValuesForDateJson[] GetReturnValues(ICollection<ReferenceNumberAndReturnTypeRecordModel> recordModels)
{
    // ICollection is already materialized, so enumerating it more than once is safe.
    var dates = recordModels.First()
        .GetCalculationResult(CalculationProjectionValueType.UnannualizedRange)
        .ReturnRecords.Select(x => x.ReturnDate).Distinct();

    return dates.Select(aDate => new ReturnValuesForDateJson
    {
        ReturnDate = new MonthJson(aDate.Year, aDate.Month),
        ReturnValues = GetReturnValueForSeriesIdentifier(aDate, recordModels)
    }).ToArray();
}



Remember, if the tool is telling you that your code is less than optimal, give it a look and try to fix it. There may be legitimate reasons to ignore a warning, and that is fine. But when that is the case, do your friends a favor and add a comment explaining the intent so future developers understand the line of thinking.

Till next time,


Merge Headache — Don’t Re-Purpose a class, create a new one

Merging code does not have to be the frustrating process that many people experience – if it is done right. I have learned during my career that if I pull from my master branch daily, if not more often, my merges are almost always pain free. I am not saying that merging is always pain free: there will be times when significant or simultaneous changes to a file introduce pain, and times when a codebase undergoing structural changes will cause issues. But honestly, if the changes to the code are routine, merging should not be too painful.

However, there are things developers can do which directly introduce pain and frustration into the merge process. One of these is re-purposing a file with almost entirely new code. When I say re-purposing, I mean NOT changing the name, but changing 80% of the logic inside the file.

Imagine you have a file called Baz.cs, and inside this file you have a class named Baz. Now imagine you made changes to the Baz class inside the Baz.cs file. At the very same time you are making changes to Baz.cs, someone else (in another branch, mind you) is also making changes to Baz.cs. However, their changes not only change the contents of the Baz class, they rename the Baz class to Bar. While renaming Baz to Bar, they also introduce significant changes to the body of the class.

The issue here is that when you attempt to do a merge or rebase, your SCM system is going to flag almost every line with a conflict – but not all lines (depending on the changes, of course). This leaves the person doing the merge having to decide what to do. Do you take the changes as is? Are your changes still needed? Do you need to create a new class/file so that you can have both sets of changes? What should you do? How do you complete the merge without blowing the prior changes out of the water?

So what should you do to avoid this type of merge headache? In my opinion, if you are going to change the intent of a class or a file, you should create a NEW file and delete the old one. If this process had been followed in this scenario, the Baz.cs file would have been deleted and the Bar.cs file would have taken its place. This would have allowed me to skip the merge issues entirely if I knew the file was no longer needed, or possibly undo the delete if I knew my original changes were still needed.

Keeping the merge process pain free is not too hard, but it does require a bit of forethought and planning.

Till next time,


CQRS recap, or “How to resuscitate”

I’m fighting a bit with my ego at the moment, which is telling me I need to provide at least four paragraphs of update on what I’ve been doing in the three years since I last posted. The fight is with my more practical side, which is saying, “Name three people that have noticed.” I’ll compromise with a bulleted list because some of it does have a little bearing on the rest of this post:

  • I’m no longer with BookedIN although they are still going strong.
  • I’ve started recently with Clear Measure who has graciously relaxed their “no hillbilly” hiring policy. Guardedly.
  • For those interested in my extra-curriculars, I’m also blogging at http://kyle.baley.org where you’ll find a recent follow-up to an older post on Life in the Bahamas, which I keep getting emails about for some reason…

This past weekend, Clear Measure hosted a meetup/coding-thingy on CQRS with your host, Gabriel Schenker. Initially intended as an event by and for Clear Measurians, it was opened to the public as a means to garner feedback for future events. It was a 7-hour affair where Gabriel set out the task to perform, then left us to our own devices to build as much as we could while he provided guidance and answered questions.

The event itself ran as well as I expected, which, me being an optimistic sort, was awesome! And Gabriel, if you’re reading, I did manage to get to the beach today so don’t feel bad about taking that time away from me. I won’t go into logistics but wanted to get my thoughts on CQRS on the table for posterity.

By way of background, my knowledge of CQRS was, up until I started at Clear Measure, pretty vague. Limited mostly to what I knew about the definition of each of the words in the acronym. Gabriel has, in meetings and recently in his blog, increased my awareness of it to some degree to the point where it made sense as an architectural pattern but was still abstract enough that if someone asked me to implement it in a project, I would have first consulted with a local voodoo doctor (i.e. speed dial #4).

The good part

So the major benefit I got from the event is how much CQRS was demystified. It really *is* just segregation of the responsibilities of the commands and the queries. Commands must logically be separated from queries to the point where they don’t even share the same domain model. Even the term “domain model” is misleading since the model for queries is just DTOs, and not even glorified ones at that.

Taking one example, we created ourselves a swanky new TaskService for saving a new task. It takes in a ScheduleTaskDto which contains the basics from the UI: a task name, a due date, some instructions, and a list of assignees. The TaskService uses that info to create a fully-formed Task domain object, setting not only the properties passed in but also maybe the CreateDate, the Status, and the ID. Then maybe it validates the thing, saves it to the repository, and notifies the assignees of the new task. All like a good, well-behaved domain object.
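Reconstructed from memory as a sketch – the names and the repository/notifier abstractions are approximations, not the actual workshop code:

using System;
using System.Collections.Generic;

public enum TaskStatus { Open, Done }

public class Task
{
    public Guid Id { get; set; }
    public string Name { get; set; }
    public DateTime DueDate { get; set; }
    public string Instructions { get; set; }
    public List<string> Assignees { get; set; } = new List<string>();
    public DateTime CreateDate { get; set; }
    public TaskStatus Status { get; set; }

    public void Validate()
    {
        if (string.IsNullOrWhiteSpace(Name))
            throw new InvalidOperationException("A task needs a name.");
    }
}

public interface ITaskRepository { void Save(Task task); }
public interface INotifier { void NotifyAssignees(Task task); }

// The DTO carries only what the UI knows.
public class ScheduleTaskDto
{
    public string Name { get; set; }
    public DateTime DueDate { get; set; }
    public string Instructions { get; set; }
    public List<string> Assignees { get; set; } = new List<string>();
}

// The command side: turns the bare DTO into a fully-formed domain object.
public class TaskService
{
    private readonly ITaskRepository _repository;
    private readonly INotifier _notifier;

    public TaskService(ITaskRepository repository, INotifier notifier)
    {
        _repository = repository;
        _notifier = notifier;
    }

    public void ScheduleTask(ScheduleTaskDto dto)
    {
        var task = new Task
        {
            Id = Guid.NewGuid(),
            Name = dto.Name,
            DueDate = dto.DueDate,
            Instructions = dto.Instructions,
            Assignees = dto.Assignees,
            CreateDate = DateTime.UtcNow,
            Status = TaskStatus.Open
        };

        task.Validate();
        _repository.Save(task);
        _notifier.NotifyAssignees(task);
    }
}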

Now we want a list of tasks to show on the dashboard. Here are two things we actively had to *stop* ourselves from doing:

  • Returning a list of Task objects
  • Putting the logic to retrieve the tasks in the TaskService

Instead, we created a DashboardTask DTO containing the TaskId, Name, DueDate, and Status. All the items needed to display a list of tasks and nothing else. We also created a view in the database that retrieves exactly that information. The code to retrieve that info was in a separate class that goes directly to the database, not through the TaskService.

Given more time, I can see how the command/query separation would play out more easily. For the commands, we may have used NHibernate, which gives us all the lazy loading and relationship-handling and property mappings and everything else it does well. For the queries, probably stick with views and Dapper, which allow us to query exactly the information we want.
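A sketch of what that query side could look like with Dapper against a view – the view name, column types, and connection handling are my assumptions, not what we actually wrote:

using System;
using System.Collections.Generic;
using System.Data.SqlClient;
using Dapper;

// Query side: a flat DTO shaped exactly like the dashboard screen, nothing more.
public class DashboardTask
{
    public Guid TaskId { get; set; }
    public string Name { get; set; }
    public DateTime DueDate { get; set; }
    public string Status { get; set; }
}

// Goes straight to the database via a view, bypassing TaskService and the domain model.
public class DashboardTaskQuery
{
    private readonly string _connectionString;

    public DashboardTaskQuery(string connectionString)
    {
        _connectionString = connectionString;
    }

    public IEnumerable<DashboardTask> Execute()
    {
        using (var connection = new SqlConnection(_connectionString))
        {
            return connection.Query<DashboardTask>(
                "SELECT TaskId, Name, DueDate, Status FROM vw_DashboardTasks"); // hypothetical view
        }
    }
}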

My sense is that we’d have a much bigger set of classes in the query model than in the command model (which would be a full-fledged domain), because the query model requires at least one class for almost each and every screen in the app. Dashboard listing of tasks for supervisors: SupervisorDashboardTask. List of tasks for a dropdown list: TaskListItem. Retrieve a task for printing on a report: OverdueTask. All separate and all very specific.

Wrap up

My partner-in-crime for the day was Alper Sunar, who is hosting our day’s efforts, such as they are. The big hurdle I had to jump early on was to stop myself from going infrastructure crazy. Early discussions touched on Bootstrap, RavenDB, IoC, and Angular, all of which would have kept me from my goal: learning CQRS.

I’ve forked the code with the intent of continuing the journey and perhaps looking into something like RavenDB. I have to admit, all the talk around the virtual water cooler about Elasticsearch has me thinking. And not just about finding new sister-wives…

Kyle the Returned
