CaubleStone Ink

.Net development and other geeky stuff

nAnt: Getting your machine ready

Posted on January 27th, 2009


In this post we will get your PC ready to use nAnt.  The examples and links are geared towards VS2008; however, they will also work for VS2005.  VS2003 requires a bit more work but can be done in very similar ways.

  1. If you have not already done so, go get nAnt.  This example and all future ones will use version 0.85.
  2. Extract the file to a location on your C: drive like c:\tools\nant.
  3. Add the location of the bin folder to your PATH.  In this example it would be c:\tools\nant\bin or c:\tools\nant\0.85\bin, depending on how you extracted your files.
  4. Copy the file nant.xsd from the c:\tools\nant\schemas folder to the following location(s):
    • C:\Program Files\Microsoft Visual Studio 8\Xml\Schemas — for Visual Studio 2005
    • C:\Program Files\Microsoft Visual Studio 9.0\Xml\Schemas — for Visual Studio 2008

Now that you have done all that, we are going to open Visual Studio.  This next bit works in either 2005 or 2008, so the instructions are the same.  Once you have Visual Studio open:

  1. Go to the Menu:  Tools -> Options
  2. Navigate in the left-hand tree to the Text Editor section
  3. Select the File Extensions item.  In the Extension box, enter the word: build

    • Note: I use build as the extension for my nAnt build files, rather than xml, so I know what they are out of the box.
    • For this dialog you do not put any periods in the Extension box.
  4. Next select XML Editor from the list and then hit the Add button.  Select OK and off we go.

That’s it for the setup.  If you want to test this, add a new XML file to a project in Visual Studio and call it something like default.build.  You should then get IntelliSense for your build files; the root-level element will be project.  As you will notice, because we added the file extension you also get the auto-complete benefits of the XML editor: things like adding the closing tag, quotes, etc.
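
For reference, here is a minimal default.build you could use for that test.  The project name, target name, and namespace URI are placeholders on my part; check the targetNamespace attribute at the top of the nant.xsd you copied and use that value so IntelliSense picks up the right schema.

<?xml version="1.0" encoding="utf-8"?>
<!-- default.build: minimal example; the name, default, and xmlns values are illustrative -->
<project name="MyProject" default="build" xmlns="http://nant.sf.net/release/0.85/nant.xsd">
  <target name="build" description="Placeholder target for testing IntelliSense">
    <echo message="Hello from nAnt" />
  </target>
</project>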

That takes care of the first part of our nAnt setup.  In the next article I’ll walk you through adding item and project templates that you can use and customize to easily add build files, or projects that already have build files, to your solution.  After that we will get into using nAnt and setting up a common build file that you can reuse from project to project.

Building your code or CI and you

Posted on January 27th, 2009


I’ve been seeing a lot of stuff on the web lately about continuous integration (CI), automated builds, build tools, unit testing, etc., so I figured maybe it’s time I start to post about some of this stuff.  I’ve been using CI in various shapes and sizes for many years, from custom-rolled solutions to full commercial packages.  As such I will be posting many articles around CI, builds, unit testing, etc. to help people who may never have seen it before, or, if I’m lucky, to answer a problem you have been having.

First let me say that I don’t care if you are a single-person shop, a team, a department, or a whole company: you NEED to be using some form of CI.  There are free options out there like CruiseControl.NET and TeamCity (which is now free for limited installations).  I personally have set up a TeamCity installation on my big developer desktop, and I’m just a one-person show for the stuff I do at home.

So, what is CI and why should you use it?  CI is a means by which you have an autonomous process running somewhere (your machine, a server, a cloud computing platform, a server farm, you name it) that takes your code and compiles it.  Big whoop-dee-do, you say, I can do that by just building from my desktop in my IDE.  Ok, that’s great if you are a one-person shop.  But what happens if you don’t get the latest from source control?  Sure, it builds with the code you have, but your buddy in the cube across from you just changed everything.  Now your build won’t work, and you don’t find out until later when somebody says something.

This is where CI comes into play.  It does not remove the build check from your box; you should always do that.  Where CI comes in is as a sanity check and a means to automate tedious tasks.  Using the scenario above, what happens when two people check in code at the same time, such that you both think everything is working, but then in the morning you get latest and bam, nothing compiles?  Wouldn’t it have been nice to have something email you telling you that it broke, and maybe even why?  What about unit tests?  Do you use them, run them, all the time, some of the time?  You could have all of this automated for you up front.

Now, setting up a CI system is an upfront task: yes, it takes some time; yes, there could be integration issues with your code base; yes, you may need to change the way you build code.  But in the end it’s all worth it.  Once you get onto a CI system and everything is up and running, you will start to get a sense of peace.  Not only that, but you will quickly come to rely upon it.  It becomes that great little tool that you wish you had found sooner.

Now the catch.  If you don’t use a source control provider, at the minimum CI won’t do much for you.  You really should be using source control.  This comes back into the you MUST do this category.  Again, I don’t care if you are just one person or a whole company.  You NEED source control.

Why is this important if you are a single person?  Well, what happens when you inadvertently delete a folder with code and you had no backup?  Come on, how many tech people do you know who actively back up their stuff?  If it’s not automated, we don’t usually do it.  Let’s even say that you are working on some project that you might want to sell.  How do you know you have everything?  Just because your folder is there doesn’t count.  And what happens if you need somebody to help you code the app, so you just went from being a one-person show to a small team?  If you have source control, you’re golden: just give them access and away you go.  It is an important process and does not need to be used strictly for source code.  I’ve used it for Word docs so I can go back and pull, say, version 1 of a requirements spec to show the business unit / partner how things have changed over the course of a year or even a month.  You just never know.

CI needs source control.  It provides a means by which you can have a third party verify your code base.  In a single-person shop it can tell you whether you have really checked in all your code.  So let’s say you, the one-man team, are working on two or more computers.  If you have source control, it’s easy to share all your code across the wire and know that you have everything, because your CI build works from what is in the repository.

Another good use of CI is that most tools let you set up scheduled builds, say a nightly build.  What if you have a project where you want to provide a nightly build to people?  You could set up your build script to take all your code, build it, and zip it, and the CI server can do this for you.  So no late nights, no waiting for people to finish.  You can even have the CI server send you emails when the build succeeds or fails.
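
As a concrete (if made-up) example, an nAnt target along these lines could do the packaging; the target names, paths, and the depends value here are mine, not from any real project:

<!-- Hypothetical packaging target; names and paths are examples only -->
<target name="package" depends="build" description="Zip the build output for the nightly drop">
  <zip zipfile="artifacts/nightly.zip">
    <fileset basedir="build/output">
      <include name="**/*" />
    </fileset>
  </zip>
</target>

The CI server then just runs this target on its nightly schedule and mails out the result.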

The advantages to running CI and source control are too many to number.  If you haven’t ever used them, I suggest you do so.  Go get Subversion and TeamCity at the minimum and install them.  Work through them, play with them, and use them.  They will save you time, money, and effort in the end.

In the next posts I hope to show you how to get a Subversion installation up and running on Windows, along with configuring TeamCity to use the same setup.  I’ll even start posting some topics on using nAnt and unit testing.

The more we can test and automate our builds and deployments, the more time we can spend actually coding our solutions.

Application Helpers: Protect your method calls

Posted on January 26th, 2009


First thing: refactor, refactor, refactor.  Most ideas come from refactoring your code, and it’s amazing how much you find when you start refactoring things.  The big benefit, of course, is less code.  Whether you have a couple hundred thousand lines of code or only ten thousand, refactoring even a small thing can cut down your code base and improve readability.  This article is geared towards refactoring.

In the course of working on multiple applications over the years, you find you tend to do the same things over and over again.  One of those things is constantly checking objects for null, data type compatibility, etc.  How many times have you written the following lines of code?

if (obj == null)
{
   throw new ArgumentNullException();
}

or

if (obj.GetType() == typeof(SomeType))

The amount of code like this just adds up.  Not only that, but you probably have lots of null checks, maybe even in the same method.  So what if we wrote something like this instead:

Protect.IsNotNull(obj, "The object you are checking is null");

So if you had multiple objects going into a method, you would have something like this:

Protect.IsNotNull(obj1, "Object 1 is null");
Protect.IsNotNull(obj2, "Object 2 is null");

Now, since we are talking about refactoring, what if we refactor that code to make use of the newer features of the framework?  The Protect methods throw an exception if the object is null; what if we wanted to expand that to allow for any kind of check?  We could do something like this:

Protect.Against<ArgumentNullException>(obj1 == null, "Object 1 is null");
Protect.Against<ArgumentException>(String.IsNullOrEmpty(stringData), "The string is empty.");

As you can see, we can now handle many different checks in the same way.  So let’s look at what is inside the Protect class.

using System;

public static class Protect
{
  public static void Against<T>(bool condition, string message) where T : Exception
  {
    // Create an instance of the requested exception type with the supplied message and throw it.
    if (condition)
      throw (T)Activator.CreateInstance(typeof(T), message);
  }
}

As you can see, it’s not much, but it makes your code easier to read and adds a nice bit of conformity while reducing the total lines you need to maintain and worry about.  The T parameter is the type of exception you want thrown based on your condition.  As long as you have a boolean condition you’re good: anything that evaluates to true will throw the exception, and the message will be set on the exception that is thrown.  Based on this pattern we can also add things like inheritance checks, type checks, enum checks, etc.  It’s really easy for it to morph into what you need as you need it.  You can even extend it with delegates, Func<> methods, set it up for a LINQ-style syntax, etc.; the sky is the limit.
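
To give a flavor of that, here is a rough sketch of what a couple of those extensions might look like.  The class and helper names (ProtectSketch, IsNotNull, IsOfType, IsDefinedEnum) and the exception types they throw are my own illustrative choices, not part of the class above.

using System;

// Illustrative only: shows how the Against<T> core could grow extra checks.
public static class ProtectSketch
{
  public static void Against<T>(bool condition, string message) where T : Exception
  {
    if (condition)
      throw (T)Activator.CreateInstance(typeof(T), message);
  }

  // Null check built on top of Against<T>.
  public static void IsNotNull(object value, string message)
  {
    Against<ArgumentNullException>(value == null, message);
  }

  // Type check: throws if value is not an instance of TExpected.
  public static void IsOfType<TExpected>(object value, string message)
  {
    Against<ArgumentException>(!(value is TExpected), message);
  }

  // Enum check: throws if value is not a defined member of the enum type.
  public static void IsDefinedEnum<TEnum>(TEnum value, string message) where TEnum : struct
  {
    Against<ArgumentOutOfRangeException>(!Enum.IsDefined(typeof(TEnum), value), message);
  }
}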

Here is what it would look like in use inside of a method:

public void DoSomething(List<int> data, Dictionary<int, string> other)
{
    Protect.Against<ArgumentNullException>(data == null, "Data is null");
    Protect.Against<ArgumentNullException>(other == null, "Other is null");

    foreach (int i in data)
    {
        Console.WriteLine(other[i]);
    }
}