2005-09-28

Stacking unit tests

There are still some parts of test-driven development that I'm not 100% comfortable with. Sometimes, for example, I'll be writing a unit test for a method in class A that, it turns out, needs to call a method in class B to do its thing. The only problem is that class B's method doesn't exist yet. So I stub it and my test fails. Hurrah.

Now, however, I've got methods in two classes that aren't fully implemented. Really, the method in class B should be covered by its own unit test before I start calling it from elsewhere. So I write a failing test for that method too. I now have two failing tests at the same time. This is something that, from my understanding of the literature, should be avoided. So what should I do to avoid it?
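To make that concrete, here's a rough sketch of the situation in NUnit 2.x style. The class and method names (ProcessOrder, CalculateTax, the 10% tax rate) are invented purely for illustration:

    using System;
    using NUnit.Framework;

    public class B
    {
        // Stubbed so that A compiles; throwing makes any caller's test fail.
        public decimal CalculateTax(decimal amount)
        {
            throw new NotImplementedException();
        }
    }

    public class A
    {
        private B b = new B();

        // A's behaviour depends on B's not-yet-implemented method.
        public decimal ProcessOrder(decimal amount)
        {
            return amount + b.CalculateTax(amount);
        }
    }

    [TestFixture]
    public class ATests
    {
        [Test]
        public void ProcessOrderAddsTax()
        {
            // Fails: B.CalculateTax is only a stub.
            Assert.AreEqual(110m, new A().ProcessOrder(100m));
        }
    }

    [TestFixture]
    public class BTests
    {
        [Test]
        public void CalculateTaxReturnsTenPercent()
        {
            // The second failing test, written before the first has passed.
            Assert.AreEqual(10m, new B().CalculateTax(100m));
        }
    }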

One solution might be to write class B and its methods before I even touch class A. However, if I do that, I risk creating an interface that isn't suitable for use by class A. I'd be trying to predict future use of the class. Instead, shouldn't I be driving the development of class B's interface by developing class A and uncovering its interface requirements?

Of course, if I do develop class A first, we get to the situation described above: multiple failing tests. In fact, it could easily cascade further if class B’s method required a call to a method in class C. Et cetera, ad infinitum. So, what do I actually do?

The Ignore stack in action

At the moment, the approach I'm taking is the latter: develop class A first and accept multiple failing tests. I find this preferable as it avoids trying to predict required interfaces, something I've already had bad experiences with. However, as soon as I realise I've got to write a second unit test before getting the first to pass, I give the first test an Ignore attribute in NUnit. The text I put in the Ignore attribute is something like "Added to Ignore stack: 28-Sep-05 11:30". Once I get the second test to pass, I look back for any ignored tests in the NUnit GUI and continue work on the test with the most recent Ignore stack date.
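In code, a parked test ends up looking something like this (a sketch again, with the names carried over from the example above):

    [TestFixture]
    public class ATests
    {
        // Parked while I drive out B's interface; the Ignore reason
        // doubles as the stack timestamp.
        [Test]
        [Ignore("Added to Ignore stack: 28-Sep-05 11:30")]
        public void ProcessOrderAddsTax()
        {
            Assert.AreEqual(110m, new A().ProcessOrder(100m));
        }
    }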

One thing this approach requires, however, is that you don't have any other tests ignored in your test suite. Otherwise, it's too easy to forget the Ignore stack exists. If you're currently ignoring a set of tests because they take a long time to run, think about using the Category attribute instead.
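For example (the category name here is just an illustration), the slow tests can be excluded via the Categories tab in the NUnit GUI rather than cluttering the Ignore list:

    [TestFixture]
    public class DatabaseTests
    {
        // Excluded from the quick run by category, not by Ignore.
        [Test]
        [Category("LongRunning")]
        public void FullRoundTripAgainstRealDatabase()
        {
            // ... slow test body ...
        }
    }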


Comments:
Test-driven development isn't about having tests for every method - it's about writing code only when there's a failing test. The new method in class B will be covered by your existing test (assuming you write only the minimum code required). So I would just rely on the one test, and try very hard to ensure I don't code ahead.

That thought, in turn, opens another. How do you /know/ you need the new method on class B? Is it impossible to write it inline in class A? Are you really taking the very simplest route to a passing test?

Imagine you could write all of the necessary code in class A. In this case, the one test is sufficient. Then you decide to refactor, and move part of the code into class B. Would you add a new test at this stage? Even though you have complete test coverage?
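As a sketch (with invented names), that path might look like:

    // Before the refactoring, the logic lives inline in A, covered by
    // A's existing test:
    //
    //     public decimal ProcessOrder(decimal amount)
    //     {
    //         return amount + (amount * 0.1m);
    //     }
    //
    // After extracting it into B, behaviour (and coverage) is unchanged:
    public class A
    {
        private B b = new B();

        public decimal ProcessOrder(decimal amount)
        {
            return amount + b.CalculateTax(amount);
        }
    }

    public class B
    {
        public decimal CalculateTax(decimal amount)
        {
            return amount * 0.1m;
        }
    }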
 
Hmmm... you're right to point out that I may be skipping a spot of refactoring by not writing the code inline. However, assuming the code did eventually make its way to a method in class B, I'd still feel pretty uncomfortable not having B's interface documented directly with tests of its own. If B were to be reused in another project, that project's documentation would be lacking.

Or are you saying that I'd only reuse B elsewhere by way of refactoring and that I would therefore already have tests covering the code it was replacing?
 
I'd say that documenting B's interface is a separate story.

(And yes, any other TDD'd code that ends up using B will already be tested.)

Don't think of TDD as a testing technique: it's a /design/ technique. You're using "tests" to design the interface of A (and refactoring to design the interface of B and its relationship with A). Have you looked at BDD?
 
Have you looked at BDD?

No, I haven't. But I will now - thanks. Is Dave Astels' stuff a good place to start? Or do you know of better intros/references?

Oh, and thanks for taking the time to check back here again.
 
Gah! How rude of me. Think I'd better check your site too. :)
 
Dave Astels is one of the very few people writing about BDD just now.

But his recent article did start a flurry of threads on extremeprogramming and agile-testing over at Yahoo. They aren't overly enlightening just yet, however...
 