There are still some parts of test driven development that I'm not 100% comfortable with. Sometimes, for example, I’ll be writing a unit test for a method in class A that, it turns out, needs to call a method in class B to do its thing. Only problem is that class B’s method doesn’t exist yet. So I stub it and my test fails. Hurrah.
Now, however, I’ve got methods in two classes that aren’t fully implemented. Really, the method in class B should be covered by its own unit test before I start calling it from elsewhere. So I write a failing test for that method too. I now have two failing tests at the same time. This is something that, from my understanding of the literature, should be avoided. So, what should I do to avoid it?
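To make the situation concrete, here’s roughly the shape of it in C#. The class names (OrderProcessor for class A, TaxCalculator for class B) and the five-percent tax rule are just made up for the example:

    using System;
    using NUnit.Framework;

    public class TaxCalculator
    {
        // Class B's method, stubbed so that class A compiles.
        public decimal TaxFor(decimal amount)
        {
            throw new NotImplementedException();
        }
    }

    public class OrderProcessor
    {
        private readonly TaxCalculator calculator = new TaxCalculator();

        // Class A's method: it turns out to need class B to do its thing.
        public decimal TotalWithTax(decimal amount)
        {
            return amount + calculator.TaxFor(amount);
        }
    }

    [TestFixture]
    public class OrderProcessorTests
    {
        [Test]
        public void TotalWithTax_AddsTax()   // failing test number one
        {
            Assert.AreEqual(105m, new OrderProcessor().TotalWithTax(100m));
        }
    }

    [TestFixture]
    public class TaxCalculatorTests
    {
        [Test]
        public void TaxFor_ReturnsFivePercent()   // failing test number two
        {
            Assert.AreEqual(5m, new TaxCalculator().TaxFor(100m));
        }
    }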
One solution might be to write class B and its methods before I even touch class A. However, if I do that, I’m risking creating an interface that isn’t suitable for use by class A. I’d be trying to predict future use of the class. Instead, shouldn’t I be driving the development of class B’s interface by developing class A and uncovering its interface requirements?
Of course, if I do develop class A first, we get to the situation described above: multiple failing tests. In fact, it could easily cascade further if class B’s method required a call to a method in class C. Et cetera, ad infinitum. So, what do I actually do?
At the moment, I’m taking the approach described above: I develop class A first and live with multiple failing tests. I find this preferable because it avoids trying to predict required interfaces, something I’ve already had bad experiences with. However, as soon as I realise I’ve got to write a second unit test before getting the first to pass, I give the first an Ignore attribute in NUnit. The text I put in the Ignore attribute is something like "Added to Ignore stack: 28-Sep-05 11:30". Once I get the second test to pass, I look back for any ignored tests in the NUnit GUI and continue work on the test with the most recent Ignore stack date.
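In code, the ignored test ends up looking something like this (NUnit 2.x syntax; the test is the made-up one from earlier):

    [Test, Ignore("Added to Ignore stack: 28-Sep-05 11:30")]
    public void TotalWithTax_AddsTax()
    {
        Assert.AreEqual(105m, new OrderProcessor().TotalWithTax(100m));
    }

The date in the reason string is what lets me treat the ignored tests as a stack: the most recently pushed test is the next one to pop back to.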
One thing that this approach requires, however, is that you don’t have any other tests ignored in your test suite. Otherwise, it’s too easy to forget the Ignore stack exists. If you’re currently ignoring a set of tests because they take a long time to run, think about using the Category attribute instead.
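For the slow tests, something along these lines, where "LongRunning" is just an arbitrary category name:

    [Test, Category("LongRunning")]
    public void ImportsTheFullDataSet()
    {
        // slow test body elided
    }

The NUnit GUI can then include or exclude that category for a run, so the slow tests stay out of the quick cycle without ever showing up as ignored.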