Missing NuGet Packages and Visual Studio

Today I was trying to update my solution to use the newest versions of some of its NuGet packages; a few items were a bit out of date. I did the update via the UI in Visual Studio, and everything appeared to be working as expected until I went to build. The build failed with the message below.

[Screenshot: the build error about missing NuGet packages]

If you look at my packages.config file (below) you will notice I am only referencing two packages, MVVM Light and Newtonsoft.Json.
[Screenshot: my packages.config]
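In text form it looked something like this (the version numbers here are illustrative, not necessarily the exact ones I had installed):

    <?xml version="1.0" encoding="utf-8"?>
    <packages>
      <package id="MvvmLight" version="4.2.30.0" targetFramework="net45" />
      <package id="Newtonsoft.Json" version="6.0.3" targetFramework="net45" />
    </packages>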

What is really odd is that I have NuGet package restore turned on, so why am I getting this message? I wanted to ensure that my packages were in fact being downloaded, so I went to my packages folder to check… and yup, they were in fact downloaded.

[Screenshot: the packages folder, with the restored packages highlighted]

Now I was confused. I did what every good developer would do: I deleted my packages folder, did a clean in VS, then did a rebuild… I still received the same message :(

Next I decided to see what my .csproj file was telling me, so I opened it up and searched for “packages”, because clearly the project was looking for something I did not know about. When I did this I found something that looked fishy: the WRONG version of the Microsoft.Bcl.Build package was being referenced, but it was NOT in my packages.config… WTF. (Other projects in my solution were referencing the correct version of the package.)

[Screenshot: the .csproj referencing the wrong Microsoft.Bcl.Build version]

I decided that I would simply install the version of the package I wanted and the world would be happy… NOPE

Now when I looked at my .csproj file I had two entries for the Microsoft.Bcl.Build package, the old one AND the new one, but my packages.config was correct… WTF
[Screenshot: the .csproj with two Microsoft.Bcl.Build entries]
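The offending section looked roughly like this (version numbers and paths are illustrative, not copied from my actual file):

    <!-- stale import left over from the old package version -->
    <Import Project="..\packages\Microsoft.Bcl.Build.1.0.13\tools\Microsoft.Bcl.Build.targets"
            Condition="Exists('..\packages\Microsoft.Bcl.Build.1.0.13\tools\Microsoft.Bcl.Build.targets')" />
    <!-- the import added by the fresh install -->
    <Import Project="..\packages\Microsoft.Bcl.Build.1.0.21\tools\Microsoft.Bcl.Build.targets"
            Condition="Exists('..\packages\Microsoft.Bcl.Build.1.0.21\tools\Microsoft.Bcl.Build.targets')" />

With the old version’s folder absent from disk, that stale entry is what keeps tripping the missing-package check at build time.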

I decided to do a build anyway just to see what would happen, and as you may have guessed, it failed.

I finally decided I was going to manually remove the entry for the old version.

Once I removed the OLD version’s entry my project built just fine.

The moral of the story is this: if you get odd missing-package errors from NuGet, manually check your project file to see if it is actually configured correctly.

Till next time,

Posted in NuGet, Visual Studio

Fluent 2014, Splunk for JavaScript developers, and a dash of Hypermedia

Recently a bunch of my fellow Splunkers and I had the pleasure of attending Fluent 2014 in San Francisco where we held down our Splunk booth. It was a great event!

While at the booth I had the privilege of being interviewed on what Splunk offers for JavaScript developers. It turns out we offer quite a lot. Check out the video to learn more:

[Embedded video]

It was great to talk to customers who are using or interested in Splunk. Anyone who used it loved it, point blank! Working the booth was also a chance to educate developers on what you can do with our developer platform, which our CTO Todd Papaioannou did a great job of in his talk on Big Data and Splunk.

As for the talks, I didn’t get to see many, but I did catch a bunch of the keynotes. Here are my favorites (with links where appropriate):

  • Scott Hanselman’s talk entitled “Virtual Machines, JavaScript and Assembler”. He did a fantastic and highly entertaining talk (in a way that only Scott can) on how web (in particular JavaScript) and cloud development have radically changed the way we build software, and how they have deeply impacted the Microsoft platform.
  • Lea Verou’s keynote on the “Humble Border-Radius”. I had seen this talk online, but there was something magical about watching it live. It was kind of a comedy show through code, and it had some great information about the design of border-radius. Lea is an excellent and engaging speaker.

After the event I managed to sneak in a last-minute treat before jumping on a plane: I went to see Mike Amundsen’s (co-author of RESTful Web APIs) talk on building hypermedia systems at the API-Craft meetup at Heroku. Mike speaks with authority and experience. He discussed the problems hypermedia is trying to solve and then walked through how to build a hypermedia client and server, all in JavaScript. There was great conversation after the talk, a combination of folks sharing their experiences as well as their skepticism ;-)

And as a bonus I got to see some great friends in the community like Ward Bell (author of Breeze.js), Chris Patterson (author of MassTransit), and Steve Klabnik (author of Designing Hypermedia APIs). I also got to meet some people in person for the first time, namely Mark Foster, who has been very active in the ALPS and Blueprint space, and Emmanuel Parasakis, the organizer of the meetup.

Shot of Chris, Mike, Chromatic Ward, Steve, Mark, and Emmanuel:

[Photo]

A great way to end a great trip :-)

Posted in splunkdev, Uncategorized

Sublime is sublime: Closing

Well, it’s an early morning; I can blame the travel from London for that. I managed to struggle through to the end of the second period watching the Canadiens game last night. I was a bit worried entering the third, but was quite happy to see they had won when I woke up :)

In this post I just want to sum up the other posts from the Sublime series as well as add a few tidbits. Over the series we have learned how to set up Sublime for .NET development: how to set up project/solution support, how to get IntelliSense and some basic refactoring, and even how to get automated builds and tests running (all on Linux).

We have also looked at a lot of other things built on top of Sublime that are fairly useful if you are doing other types of development, such as JavaScript or HTML5. Many of these tools far outclass the Visual Studio equivalents and are usable with many other environments (such as a Ruby backend).

I have personally given up on using Visual Studio as a whole. I will however keep a VM with it for some very specific tasks that it does well (such as line-by-line debugging). These are not things I use in my daily workflow, but they are nice to have when you absolutely need them.

Some other changes have come about from using Sublime as my primary editor. A big one is that when I am writing one-off code (which I do a lot) I no longer bother creating project or solution files. I instead just create C# files, then either invoke the compiler directly from the command line or create a small makefile. It sounds odd, but it’s actually much simpler than creating project/solution files overall.
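For example, something like this is all it takes (a sketch assuming Mono’s mcs compiler, since this is all on Linux; the file names are just placeholders, and recipe lines are tab-indented):

    # makefile for one-off C# scratch code
    scratch.exe: scratch.cs
    	mcs -out:scratch.exe scratch.cs

    run: scratch.exe
    	mono scratch.exe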

There is also much more coming in this space. As of now the Sublime plugin supports maybe 20% of what OmniSharp is capable of, and quite a bit of further support is on the way. As an example, I was looking the other day at supporting running tests in context from inside Sublime (on a test -> run test, on a fixture -> run fixture). There is also much coming for refactoring support, and my guess is that you will see even more arriving here due to NRefactory moving to Roslyn. I think within a year you will find most of this tooling built in.

Another thing I have added to Sublime, though there isn’t really an official plugin for it yet, is SublimeREPL + scriptcs. I find it quite common to grab a function, work it out in the REPL first, and then move it back into the code. A perfect example of this happened to me while in London: I was trying to combine two Uris and was getting some odd behaviour. Three minutes in the REPL showed exactly what the issue was.
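I won’t swear this was the exact issue, but the classic Uri-combining gotcha in .NET, which a REPL exposes in seconds, looks like this:

    var v1 = new Uri("http://example.com/api/v1");
    // No trailing slash on the base: the last segment gets replaced.
    Console.WriteLine(new Uri(v1, "widgets"));  // http://example.com/api/widgets

    var v1Slash = new Uri("http://example.com/api/v1/");
    // Trailing slash on the base: the segments combine as you'd expect.
    Console.WriteLine(new Uri(v1Slash, "widgets"));  // http://example.com/api/v1/widgets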

Moving to Sublime will change the way you work a bit, but it is definitely worth trying. Remember that a primary benefit of working this way is that everything you are doing is a composition of pieces that will also apply to any other code you happen to be working on (whether it’s C, Ruby, Erlang, or even F#).

Posted in Uncategorized

.NET Developer Tooling: The Roslyn Revolution

Unless you’ve been on holiday for the last week on an island without internet, you’ve probably heard that Microsoft announced that Roslyn is now available as open source.

Roslyn is the next C# and VB.NET compiler, itself developed in these languages. It is not just a compiler but also provides an extensive set of APIs to do plenty of interesting stuff with source code, including IntelliSense, refactoring, static analysis, and semantic analysis… This paradigm is called compiler as a service.
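To give a taste of what “compiler as a service” means in practice, here is a minimal sketch against the Microsoft.CodeAnalysis.CSharp package (my own example, not from the announcement):

    using System;
    using Microsoft.CodeAnalysis.CSharp;

    class Program
    {
        static void Main()
        {
            // Ask the compiler for a full syntax tree of some source text.
            var tree = CSharpSyntaxTree.ParseText("class C { void M() { } }");

            // The same tree the compiler compiles from is now available
            // to tooling: walk it, inspect it, rewrite it.
            foreach (var node in tree.GetRoot().DescendantNodes())
                Console.WriteLine(node.GetType().Name);
        }
    }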

Until now (I mean, until Roslyn is integrated into Visual Studio), what Visual Studio offered in these areas was pretty limited. If you need more than the basics (in other words, if you are a seasoned .NET developer who values your time), you are certainly using JetBrains ReSharper or DevExpress CodeRush daily. But with Roslyn finally becoming a reality, the teams behind these two well-established tools face a dilemma: to use Roslyn or not to use it??!

A risky decision must be made

  • Don’t use Roslyn, and take the risk that the Roslyn revolution will challenge their leading position. Another risk, more subtle but very noticeable, is that Visual Studio will host the Roslyn infrastructure in memory, so not using Roslyn means asking users to host two compiler/analysis infrastructures in their VS process. This will have a crucial impact on both memory consumption and performance. Not using Roslyn is the choice of the R# team, and I am a bit worried since I am a big fan of R#.
  • Do use Roslyn, and take the risk of spending a massive refactoring effort for at least one release cycle, effort that could otherwise be spent on new features to increase competitiveness. Also, while Roslyn has been developed by some of the smartest brains on earth, it might not be suitable for everything: R# developers claim that their solution-wide analysis is not really doable on top of Roslyn. But investing in such a huge refactoring is the choice of the CodeRush team. One important advantage of relying on Roslyn is that CodeRush will get future language features almost for free, which can pay off over the long term.

While my preference today goes to R#, it is likely that five years from now there will be one leading tool built upon Roslyn to deliver all these goodies (refactoring, analysis…). Will it be:

  • A Visual Studio extension developed by Microsoft. I have no idea if such a project exists, but it would make sense; I’d guess at least a partial product is actually in development. A partial product would not try to implement everything CodeRush or R# can do, but would at least substantially improve on the basics VS provides out of the box today.
  • CodeRush
  • A Visual Studio extension developed by a third company we haven’t heard about yet.

We must also take into account that CodeRush and R# are actually super-mature products. Both teams are taking big risks with the future, and it is not clear that another product (developed by the VS team or another team) will one day reach that level of maturity.

Concerning NDepend

NDepend has never been meant to compete with CodeRush or R#. A few features overlap, but many NDepend users also use CodeRush or R# (like me, actually).

NDepend is super-optimized for macro-analysis. It can check hundreds of solution-wide code rules per second, while CodeRush and R# can do solution-wide checks in more like a matter of minutes on a real-world, large code base.

CodeRush and R# are brilliant at micro-analysis, like showing the user, within the code editor, that something in a method body can be improved. NDepend touches some of these areas, but a tool like CodeRush or R# proves invaluable for getting smart advice in all situations. And they come with the wonder of refactoring.

NDepend offers the necessary ten-thousand-foot perspective for making the most relevant refactoring decisions, like what to do to make a component more cohesive, more reusable, less coupled, and less spaghetti-like. CodeRush and R# are excellent at actually achieving the refactoring once a decision has been made.

Also, NDepend lets you write custom code rules through LINQ queries. I believe our API, for that matter, is close to optimal. Ruling that a base class should not use its derivatives is as easy as writing:
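(a sketch of such a CQLinq rule from memory, so the exact helper names may differ slightly)

    // <Name>Base class should not use derivatives</Name>
    warnif count > 0
    from baseClass in Application.Types
    where baseClass.IsClass && baseClass.NbChildren > 0
    let derivatives = baseClass.DerivedTypes.Where(t => baseClass.IsUsing(t))
    where derivatives.Any()
    select new { baseClass, derivatives }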

No need to create a VS project. No need to click a compile or run button. The result is well formatted, easily browsable, and live-updated if the rule gets edited. Such a rule executes in a few milliseconds, even against a very large code base. And could the syntax be more concise? (Thanks, LINQ; thanks, fluent API.)

In addition, NDepend has many other useful features out of the box, including code visualization (matrix, graph, treemap), code diff/baseline, code trending, reporting, code metrics including code coverage, and tons of smaller-scale facilities for performing all sorts of convenient actions… and many other features are in the development pipeline. None of them really overlaps with what the other tools offer.

I went through all this description to say: NDepend lives in its own niche area. So far it cohabits gently with CodeRush and ReSharper, and it doesn’t consume much extra memory or performance. My opinion is that NDepend will also cohabit well with Roslyn, without needing to be rebuilt upon it. If you don’t really agree with this, we’d be glad to hear your stance.

Posted in CodeRush, Compiler Service, LINQ, NDepend, Resharper, Roslyn

Extending Splunk’s search language with custom search commands.

I recently blogged about a feature I worked on: custom search commands for Splunk in our Python SDK. Custom search commands allow you to extend Splunk’s search language with new commands that can do things like apply custom filtering, perform complex mathematical calculations not available out of the box, or generate events dynamically from an external data source such as an external API. Currently we only support them in Python, but we’ll be adding more languages in the future.

In the first post of the series I talk about how to build a simple generating command.

[Screenshot from the blog post]

http://blogs.splunk.com/2014/04/14/building-custom-search-commands-in-python-part-i-a-simple-generating-command/
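To give you a flavor, a minimal generating command built on the SDK’s searchcommands module looks roughly like this (a sketch: the command name, the count option, and the output fields are my own illustration, not taken from the post):

    import sys
    import time

    from splunklib.searchcommands import \
        dispatch, GeneratingCommand, Configuration, Option, validators

    @Configuration()
    class GenerateHelloCommand(GeneratingCommand):
        # Usage from the search bar: | generatehello count=5
        count = Option(require=True, validate=validators.Integer())

        def generate(self):
            for i in range(1, self.count + 1):
                # Each yielded dict becomes one event in the search pipeline.
                yield {'_time': time.time(), '_raw': 'Hello world! %d' % i}

    dispatch(GenerateHelloCommand, sys.argv, sys.stdin, sys.stdout, __name__)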

Posted in splunkdev