Brendan Enrick

Daily Software Development

Test Driven Development is Very Approachable

Growing up, I always enjoyed a good game of Othello. The rules of the game are very simple, but it takes a lot of work to become really good at Othello. There is a great tagline used to describe this game: “A Minute to Learn...A Lifetime to Master.”

The same can be said about Test Driven Development. In fact, when NimblePros hires interns in the summer, we go to recruiting events and set up laptops for people to write some code using TDD. When there are no students in the room, we’re the ones working on the code. Even our seasoned developers need to practice using TDD or we’ll never become better at using it.

The great part about this is that you don’t need any experience with TDD in order to sit down and start. You don’t want to sit down and start using TDD on legacy code, but simpler problems are a great way to get started.

We like doing these exercises as it allows us to not only see who can write code, but also who is willing to take the leap and try new things. This is one of our self-selecting techniques of evaluating people. Would you hire an employee not willing to go out on a limb and try something new? I wouldn’t.

I have two points to make: it’s easy to get started with TDD, and if you don’t practice it, you’ll never master it.

Sure, I can grab students from a college who may have only heard of unit testing in a class and never really implemented it, but will those students ever get good at TDD? I think they will if they work hard and actually use TDD. You have to practice it, though. TDD will slow you down at first, which is why you need the practice.

Time-Tested Testing Tips – Part 7

At last night’s Cleveland .NET Special Interest Group I met Russ Nemhauser, who gave a great beginners’ talk on testing, TDD, and mocking. One thing he did while showing how to write unit tests was kind of interesting. I admit I usually take tests one at a time: when I get the idea for the test I want to write, I just go ahead and start writing it. When he was demoing, he started by brainstorming a handful of tests and wrote a commented line for each test name.

Normally what I would do is sit for a second and think of the few cases of each type I want to test. So I might sit and think, “OK, which edge cases exist that I need to watch out for?” Then I would write just the ones I deem likely to ever come up. I’m obviously not going to test every case out there.

So I tried combining his technique with mine to see what kind of result I would get, and it worked out pretty well. Even though I haven’t been using it long, I’ll post the technique here.

Brainstorm and Prune

Start by coming up with some functionality you want to add. Brainstorm a bunch of tests you could write to check a lot of different cases. Make a bunch, because it doesn’t take much time. Write the names of these brainstormed tests as commented lines as you come up with them.

Make sure when brainstorming that you’re getting some important types of tests: some testing the happy path, some testing the sad path, and some testing edge cases. This will give you a good mix to choose from. Obviously, if you came up with only a few things to test, then write them all! If you came up with a lot of tests, then pick the ones most likely to cause problems if they misbehave, along with the ones most likely to occur, and write those.
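As a rough sketch of the brainstorm-and-prune flow (in Python's unittest for brevity; the blog's own stack is .NET, and `shipping_cost` plus every test name here is hypothetical), write the whole storm as comments first, then implement only the survivors:

```python
import unittest

def shipping_cost(weight_kg):
    """Hypothetical function under test: flat rate up to 5 kg, per-kilo above."""
    if weight_kg < 0:
        raise ValueError("weight cannot be negative")
    if weight_kg <= 5:
        return 10
    return 10 + (weight_kg - 5) * 2

class ShippingCostTests(unittest.TestCase):
    # Brainstormed test names, written as comments first, then pruned:
    # test_typical_weight_uses_flat_rate       (happy path - keep)
    # test_heavy_weight_adds_per_kilo_charge   (happy path - keep)
    # test_zero_weight_is_flat_rate            (edge case - keep)
    # test_negative_weight_raises              (sad path - keep)
    # test_huge_weight                         (pruned - no distinct behavior)

    def test_typical_weight_uses_flat_rate(self):
        self.assertEqual(shipping_cost(3), 10)

    def test_heavy_weight_adds_per_kilo_charge(self):
        self.assertEqual(shipping_cost(7), 14)

    def test_zero_weight_is_flat_rate(self):
        self.assertEqual(shipping_cost(0), 10)

    def test_negative_weight_raises(self):
        with self.assertRaises(ValueError):
            shipping_cost(-1)
```

The pruned names stay behind as comments, so if one of those cases starts mattering later, the reminder is already sitting in the test file.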

Don’t Write Too Many Tests For One Method

I need to be careful how I say this. I want people to write a lot of tests; however, “a lot” just means you should be testing pretty much everything you can. You need to be careful not to write too many. This is why I say to prune in the previous suggestion. Certainly it takes very little time to add a few extra test cases, but you’ll be unhappy if the method signature changes. Or if the business logic changes. Remember that one of the big reasons we test is to mitigate the cost of change. Testing should make it easier to change the code, but if you write too many tests, it will make things harder.

If you wrote a dozen tests for a method and then you add a new parameter, then congratulations: you now have a dozen tests which you need to modify. That is kind of painful. Perhaps if you had written only six tests and left off some of the redundant ones, you could have saved yourself a lot of extra work.

As a general rule, I think that if you ever need more than about a handful of tests, then your code is probably too complex. It is time to think about the Single Responsibility Principle and figure out how you can break things apart. Perhaps you were correct that a dozen tests were needed; it just happened that those dozen tests should have been testing three different methods, with four tests each.
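To make that split concrete, here is a minimal, hypothetical sketch (in Python for brevity) of one do-everything method broken into three single-responsibility functions, each of which needs only a few tests instead of the original needing a dozen:

```python
# Hypothetical example: an order-processing method that needed ~12 tests
# because it validated, priced, and formatted all at once. Split apart,
# each piece is trivially testable on its own.

def validate_quantity(qty):
    """Business rule: quantities must be between 1 and 100."""
    return 0 < qty <= 100

def price_for(qty, unit_price):
    """Pure pricing calculation, independent of validation and display."""
    return qty * unit_price

def format_total(total):
    """Display concern only: render a total as a dollar string."""
    return f"${total:.2f}"

# A handful of tests per function now covers what one tangled method needed
# a dozen for - and a signature change touches only one function's tests.
assert validate_quantity(5) and not validate_quantity(0)
assert price_for(3, 2.5) == 7.5
assert format_total(7.5) == "$7.50"
```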

Let Pain Guide Your Learning

When I was learning to test, I did a lot of stuff wrong. Testing was painful. I knew that it was beneficial, but I didn’t usually like doing it. Why? Because I was not doing it correctly. When I first started testing, I was writing unit and integration tests without really realizing that I was writing too many integration tests. Why? Because I didn’t know how to do things correctly. After a while I learned how to write more unit tests, and it was great for a while, but then I made a mistake that I have since seen many others make: I shifted my dependency onto something else. After a lot of work I now think I know how to test reasonably well, and I avoid making mistakes by remembering how painful a similar approach has been in the past.

I think that if you haven’t felt the pain of testing badly, you’ll never understand how to do it right. When testing is painful, you’re probably doing something wrong. This is the sign that you should try something else on your own or ask someone how to better handle your current situation. Someone can sit here like I am now and tell you how to do something, but until you see why you need to do it that way, it is hard to agree. I hope someone uses my testing advice here, and when they run into some problem, they remember something in these testing posts and adjust what they’re doing accordingly.

Time-Tested Testing Tips – Part 6

If you haven’t seen my earlier posts on this topic, I’ve mentioned a bunch of tips which should make your testing easier and more effective. If you’re looking for more tips, check out these previous posts.

Time-Tested Testing Tips - Part 1

Time-Tested Testing Tips - Part 2

Time-Tested Testing Tips - Part 3

Time-Tested Testing Tips - Part 4

Time-Tested Testing Tips - Part 5

Don’t Repeat Yourself

I am sure a lot of you have heard this before and know that Don’t Repeat Yourself (DRY) means you should reuse code so that you’re not repeating the same code all of the time. This also applies to testing. Sure, we know to extract shared logic in our tests, because we know that test code should be treated very well.

I am not talking about the logic in your tests not being repeated. I am saying that you should not test the same thing twice. Try to only test each thing once. This makes identifying the issue a lot easier. This is extremely important for edge cases. Make sure you’re only testing your edge cases in your edge case test.

You should have a test for each edge case and preferably it is the only test using parameters for the edge case. If something else is using it you’re creating difficulties in maintenance as well as tracking down bugs.

If you break a piece of code dealing with an edge case, it will be harder to track down, since two tests will be failing as a result. If there were only one, it would be easier to find and fix the issue. Also, there is the simple fact that things change. What happens if, later on, the desired logic of the program changes and the handling of the edge case changes with it? Now you have to change the code in two places instead of just one. This makes it more difficult to change the logic of your program. repetition == bad
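A small sketch of the idea, assuming a hypothetical `safe_divide` function: the edge case gets exactly one dedicated test, and the happy-path test deliberately stays away from it, so a broken edge case fails exactly one test.

```python
import unittest

def safe_divide(a, b):
    """Hypothetical function: returns a / b, or 0 when b is 0."""
    return 0 if b == 0 else a / b

class SafeDivideTests(unittest.TestCase):
    def test_normal_division(self):
        # Happy path only - deliberately avoids b == 0, so if the edge
        # case handling breaks, this test keeps passing.
        self.assertEqual(safe_divide(10, 2), 5)

    def test_divide_by_zero_returns_zero(self):
        # The ONLY test that exercises the b == 0 edge case. If the rule
        # ever changes, there is exactly one test to update.
        self.assertEqual(safe_divide(10, 0), 0)
```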

Start Writing Tests For Everything

When you’re going to try some new piece of code how do you go about it? I am sure a lot of people will create a small sample Console Application and try something and have it print out results to the Console. This works very well, but if you’re really trying to pick up testing you should try writing a test instead. If you want to see how some new functionality works give it a shot in a test. Write a test to see what happens. Assert on the results and try to predict the output. This is a fun little way to work with things and you’ll get more experience using your testing framework.

It is important when learning to write tests that you do it as often as possible. Get used to writing them whenever you write code, so when you’re looking at something new, you might as well look at it with a test. What better way to “test it out” than using your unit tests to do it?
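For example, here is what such an exploratory test might look like (sketched in Python's unittest; the scenario is made up, and any unfamiliar API works the same way): instead of printing results from a throwaway console app, you predict the output in an assert and let the framework tell you whether your mental model is right.

```python
import unittest
from datetime import date, timedelta

class LearningDateArithmetic(unittest.TestCase):
    """Exploratory tests pinning down how the standard library behaves."""

    def test_adding_days_rolls_over_month(self):
        # Prediction: adding a day to Jan 31 lands on Feb 1.
        self.assertEqual(date(2023, 1, 31) + timedelta(days=1),
                         date(2023, 2, 1))

    def test_subtracting_dates_gives_timedelta(self):
        # Prediction: subtracting two dates yields a timedelta, not an int.
        self.assertEqual(date(2023, 2, 1) - date(2023, 1, 31),
                         timedelta(days=1))
```

As a bonus, these exploratory tests stay in the suite as living notes on how the library works, and they keep passing (or start failing) as the library evolves.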

Have a great day! Keep testing!

Moving Away From Large Event Handling Methods

A big issue which can be seen when looking at a lot of ASP.NET code is the classic "do everything" method. This is often some kind of an event handler. Sometimes it is one for the page itself, such as Page_Load. Other times it is a control on that page that owns the event. Either way, this is a nasty piece of code whether you're testing or not.

These dangerous pieces of muck are difficult to work with, and I'll admit they scare the crap out of me when I have to touch them. I am sure everyone knows why they are difficult in general: obviously there is a lot of code in one place doing a lot of different stuff.

When we think about what might be in one such method we come up with a lot of nasty stuff.

Nasty Stuff Sometimes Found in Large Event Handlers (Some of these are much worse than others)

  • Business Rules Logic
  • Dependencies on page controls and their properties
  • Knowledge of the underlying data persistence layer
  • Dependencies on the server context
  • Dependencies on configuration information
  • Complicated UI logic determining how the page should be rendered

So I am sure I've missed some other really nasty thing which could have been in this method. As with any large method this is obviously violating the Single Responsibility Principle. For the sake of this blog post I am going to define the method we are talking about as a click handling event for a save button on a web form.

So obviously we should extract a method from this to get the "Complicated UI logic determining how the page should be rendered" into a method of its own that deals only with UI logic. Since we are on the page, we can keep the UI logic here. This is the UI layer after all.

Running through each of these and getting them into the right place could take us a while, so I'll just be covering a couple of them here.

Removing the Dependency on the Data Persistence

This is one of the most important steps we can take to improve our code quality. Removing the database dependency will assist us in keeping our concerns separated and shrinking the size of this monster method. By removing this dependency we are better able to test this code. We want to make sure that we are coding against an interface giving us access to the data we need. This will allow us to remove the dependency on our data when we are testing.

I don't care exactly how you do it, but this needs to be broken apart. I generally use the word "Repository" when referring to a class that will get and store my data. I like this word because it is a non-specific word meaning that it is a persistence class. There are a lot of implementations of repositories. If you want to learn more, then read about repositories on Google. A lot of very smart people have written things for and against the repository pattern.

We want to use this pattern because the data accessing code and the persistence layer can be moved into the repository. This lets us remove that code from the nasty method.

When we write tests we will use a mock, fake, or stubbed version of the repository so we don't have to maintain a database.
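A minimal sketch of the idea, in Python for brevity (the blog's context is C#, where this would be an interface such as `ICustomerRepository`; all names here are hypothetical):

```python
from abc import ABC, abstractmethod

class CustomerRepository(ABC):
    """The interface the page codes against - no database details leak through."""
    @abstractmethod
    def get_by_id(self, customer_id): ...
    @abstractmethod
    def save(self, customer): ...

class FakeCustomerRepository(CustomerRepository):
    """In-memory stand-in used by tests - no database to maintain."""
    def __init__(self):
        self._store = {}
    def get_by_id(self, customer_id):
        return self._store.get(customer_id)
    def save(self, customer):
        self._store[customer["id"]] = customer

# Production code receives a real repository; tests hand in the fake.
# Either way, the calling code only ever knows about CustomerRepository.
repo = FakeCustomerRepository()
repo.save({"id": 1, "name": "Ada"})
assert repo.get_by_id(1)["name"] == "Ada"
```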

Removing the Dependency on the Web

Since we are working with Web Forms here, we have a strong dependency on our web form when we are in the event handler in the code behind of our page. In ASP.NET MVC this is not as much of an issue. A lot of people don't realize that much of what makes MVC so nice is that it guides us toward principles and rules we already tried to follow. This separation is right out of the Single Responsibility Principle.

When we are breaking apart our code we want to have a lot of small interconnected pieces of code working together to achieve something great. However to get to that point we should be moving in small steps. One of the best ones in this circumstance is to pull as much of the logic as possible away from the web form.

To achieve this separation we will create a class we will call a "controller". If we make the class WebForm1Controller we can put a method in there that handles this event for us. It will, however, take in the values it needs from the Controls instead of having access to the controls itself. It can also take in any values it needs from the HttpContext or anything else. The point of this is that this code in the controller can run without actually having the web portion of your code. (THIS LETS YOU TEST IT!)

So you might be thinking at this point that all we have done is moved the code. Well sort of. What we did was limit our dependency on the web. We did this by making sure that all we needed was the values. You are right that we could have kept this as a separate method in the code behind. The difference is that in this new class we do not even have access to the controls so no one can try to directly access them from here. We are also more easily able to create instances of this class. The one associated with a web form has a lot more red tape to deal with than our freshly created controller class.
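A hypothetical sketch of that shape, in Python for brevity: the controller takes plain values and an injected repository, so it runs, and can be tested, without any web machinery at all.

```python
class WebForm1Controller:
    """Plain class handling the save-button logic - it never sees a control,
    an HttpContext, or a page. All names here are illustrative."""

    def __init__(self, repository):
        self.repository = repository  # injected, so tests can pass a fake

    def save_customer(self, customer_id, name, email):
        # Takes the VALUES from the controls, not the controls themselves.
        if not name:
            return "Name is required"
        self.repository.save({"id": customer_id, "name": name, "email": email})
        return "Saved"

class InMemoryRepository:
    """Trivial fake so the test needs no database."""
    def __init__(self):
        self.saved = []
    def save(self, customer):
        self.saved.append(customer)

# Testable without a page, a server, or a browser:
controller = WebForm1Controller(InMemoryRepository())
assert controller.save_customer(1, "", "a@b.com") == "Name is required"
assert controller.save_customer(1, "Ada", "a@b.com") == "Saved"
```

In the code behind, the click handler shrinks to reading values out of the TextBoxes and passing them to `save_customer`, which is about all the untested code you want left on the page.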

You might now ask if we are done fixing this code. Certainly not. We have much more to do, but the code is better. As I've said before, it is a lengthy process to write better code. If you're not working towards writing better code every time you write code, you're probably making code worse. Try making one thing cleaner, nicer, more concise every time you're working with the code. Even if you're just extracting a small method or renaming something, you're making things better. Don't be discouraged if you can't do a large refactoring in one sitting. Notice here that we aren't done, but we've made the code a lot better. In fact, I bet we could test the controller's business logic now.

Looking at Testing in Other Fields

Software development is not alone in the world. Surprised, aren't you? There are other fields which exist. Most of these fields have been around much longer than ours, and sadly they tend to do things better than we do in a lot of ways. Heck, they're much more mature than we are. I am not saying there is anything wrong with how we are doing things, but I do believe that our field can still grow and develop a great deal. We need to look at these other fields and see how they do things. We need to take what they do and apply what we do well in order to become truly great.

A lot of people have read about "The Toyota Way". In fact it is on my reading list, and I plan on getting to it sometime soon. I believe there is a lot that can be learned from other industries. Lean thinking works no matter what field you're in. From the Wikipedia entry on this topic, we can find this as "Principle 8".

Use only reliable, thoroughly tested technology that serves your people and processes.

Now can someone tell me what the 5th word is in this quote? Yes, that is correct. The word is "tested". Think about how cars are made here for a minute. Why don't we start at the high level and work our way down. We will start with our full system integration tests.

The car crash test: When test crashing a car there is no one behind the wheel. Why? I know of at least two reasons; it is dangerous and more importantly it needs to be identical every time. So what do they do? They have a test harness in place that allows them to automate the test yielding an identical test every time. They could have remote controlled the cars or something, but that would allow for too much human error.

Testing individual pieces: The different components of cars are tested for defects. Usually there is some machine that comes along and does something like applying pressure in a stress test. They could have a person come out and apply the same tests, but they don't. Why? If you're trying to have a consistent test, it must be automated. Is that the only reason? Of course not. Automated tasks are also much, much faster. Machines are much faster than humans.

So aren't all of these machines expensive? Yes, of course they are. Don't you think it is expensive to build a machine that crashes cars? It is also expensive to have to crash perfectly good cars just to see how they respond in an accident.

Software test harnesses are not expensive. They are code. What happens if you try to make your code throw an exception while in a test harness? NOTHING. You run the code again.

Testing software is cheap. Other industries test their stuff as well, but they aren't testing in the same way we do. Why? Because we have this huge advantage they don't: our test harnesses are just software. Our stuff is cheap and reusable. We can run our tests all the time, and the only resource we use is electricity. Much cheaper than the rubber and steel the car industry needs.

Time-Tested Testing Tips - Part 5

Welcome back for another exciting tip for those developers writing unit tests. Today we will be looking at assertions in unit tests.

Only Assert On One Case Per Test

When people start writing unit tests, they do the natural thing with their Assert statements: they write a bunch of them in the test. This makes sense, because you don't want to repeat yourself in your tests. It is not bad as long as all of the asserts are asserting the same expectation with different values. If you are testing different cases, then they need to be separate tests.

The reason for this is that unit testing frameworks tend to abandon a test as soon as an assertion has failed, so you get less information if you have grouped many test cases into one test method. By making sure that every test method contains only assertions dealing with the case in question, we will garner valuable information from each test. Maybe we will discover that the cases with negative numbers and the edge case, zero, were the ones to fail. If we had grouped the assertions into one method, we would have had only one failed assertion and wouldn't have gotten all of this information. For example, we might have seen only zero fail if its assert happened to be first, and then we would be thinking this was a problem with the edge case when it was really a problem with all numbers less than one.
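A short illustration, using a trivial hypothetical `is_positive` function: each case gets its own test method, while the asserts within a single method only vary the values for the same expectation.

```python
import unittest

def is_positive(n):
    """Hypothetical function under test."""
    return n > 0

class IsPositiveTests(unittest.TestCase):
    def test_positive_numbers(self):
        # Multiple asserts are fine here: same expectation, different values.
        self.assertTrue(is_positive(1))
        self.assertTrue(is_positive(42))

    def test_negative_numbers(self):
        # Separate case, separate test - a failure here is reported
        # independently of the zero case below.
        self.assertFalse(is_positive(-1))

    def test_zero_edge_case(self):
        # The edge case gets its own method, so if both this and the
        # negative test fail, you can see the bug covers all n < 1.
        self.assertFalse(is_positive(0))
```

Had all three cases shared one method, a failure on zero would have stopped the test before the negative-number asserts ever ran.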

Group Your Asserts Together

If you can, try to keep your asserts together at the end of the method. This will make them much easier to read and keep track of. I've seen some tests where someone wanted to assert before changing a value. A better way to handle this is to store the value you need to hang on to in a variable and assert at the end with everything else. This will make your tests a lot more maintainable, and someone down the road editing the test will surely thank you.
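A small sketch of that capture-then-assert shape, with a hypothetical `Counter` class:

```python
import unittest

class Counter:
    """Hypothetical class under test."""
    def __init__(self):
        self.value = 0
    def increment(self):
        self.value += 1

class CounterTests(unittest.TestCase):
    def test_increment_raises_value_by_one(self):
        counter = Counter()

        # Instead of asserting mid-test before the value changes,
        # capture what you need in a variable...
        value_before = counter.value
        counter.increment()

        # ...so every assert sits together at the end of the method.
        self.assertEqual(value_before, 0)
        self.assertEqual(counter.value, 1)
```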

Keep Your Assertions Concise

You should have no logic in your assertions. Don't ever access a property or call a method in an assert statement. These are what someone looks at to try and figure out what is going on. Use simple, concise variable names which explain things. If assertions are cluttered and confusing, your tests will be very difficult to use.

Remember to be careful with your asserts. They should be clean, easy to read, and managed nicely. Their maintenance is extremely important to your tests.

Writing Clean Code is a Process

I was copied on an email recently from someone reluctant to begin writing unit tests for code. One of the complaints about the idea of adding testing this late in the game was an interesting one. The person mentioned that because they're starting to test so late, they will not get "all the benefits of TDD". Well, that person is correct. However, that should not stop anyone from making things better.

Recently the "boy scout rule" was brought up to me in the context of coding, and I admit I'd never thought of how well that example applies. The scouts have a rule to leave a campsite cleaner than they found it. This is a great idea, since it means that gradually, over time, you will be making improvements. No one is saying to go and spend a month cleaning things. That would be a HUGE waste of developer effort from the customer's perspective.

A good rule to live by and one I try to practice regularly is to refactor and clean a little bit every time you touch a file. Even if this just means adding a test or even renaming a bad class, interface, or variable name, the important thing is to make these minor improvements every time you get into some piece of code.

Another complaint about switching was that developers would be spending 50% of their time writing tests. Well I will say that that isn't quite right, but the idea there is accurate. The amount of code written is about half test code and half production code. The advantage of having the tests is less time spent debugging and fixing. Most developers have heard about the cost to fix bugs at different points during the development lifecycle, but I will summarize it as, "the sooner you catch a bug the cheaper it is to fix". If we have tests in place we find the bug sooner. This means the end cost is less, so writing the tests should save us money during development as well as during maintenance.

I admit when I first got introduced to testing I thought a lot of the same stuff. I figured the tests would slow down the development and wouldn't help very much. I figured there would be all these issues, and I'll also admit that it is pretty tough to start doing. Once you start writing tests for things you start to see some of the benefits. The best thing to see is when you change some piece of code and something seemingly unrelated has a test break. That is when you realize the connection and prevent a bug from being created. That is one of the best aspects of testing.

Keeping Code Out of the Code Behind

In ASP.NET development (and yes, even in MVC) each page is able to have associated code. This is traditionally (before MVC) how someone would add code to a page. It was much nicer than what was seen a lot in the classic ASP days, when code would be littered between HTML controls.

However, when one is introducing code into the code behind great care must be taken. This is because the code behind is really not the location where most logic goes. If we take a moment to think about what the ".aspx" file is, we will probably come up with something along the lines of, "the file where we decide how we're going to display things to the user." So this is about display issues.

Should it know anything about a database?

No.
Should it know that there is any form of persistent storage?

Probably not.

So what should it know about?

It needs to know how to display things to a user and that is it.

So what do I do once I've got code in the code behind that is business logic?

Step 1 is to try to get it refactored into the correct class. Create new classes to handle the business logic. Your pages will call these classes to do the required work.

If you don't know where to put something yet, there is an intermediate step better than having it in the code behind. You can make pseudo-MVC by creating another class. So create a class with the same name as the page, but add "Controller" onto it as a suffix. Then put the logic in there. This will give you a bit of a seam, which should allow you to test the code.

Keep in mind that ASP.NET MVC is very cool, but MVC itself is a fairly old pattern. The idea of it is just separating things. We can do the same thing with ASP.NET if we are careful not to clutter things, and we can achieve similar results. Sure, it isn't as nice and pretty as MVC, but it gives us seams while separating the concerns of our code so that it is maintainable.

Something to Avoid While Programming

Some activities and thoughts need to be avoided while programming. Sometimes we realize our mistakes and make them anyway. I admit that in my time developing software I’ve probably done this more times than I would like to admit. I make an effort to avoid these types of mistakes, and I attempt to encourage others to avoid the same pitfalls. While working with others, I certainly tell them when they’re gravely mistaken about something.

The first thing I would like to emphasize is that this mistake is fairly common everywhere in life. It is of course procrastination, and I think I’ll rip off the classic line I’ve heard a million times. Don't put off until tomorrow what you could do today. Now I need to be careful how I say this, because I could get YAGNI people jumping at me for saying this. Obviously don’t do everything today just because you can. That is silly, but things that will assist greatly in maintenance should not be put off. Since I’ve been on a bit of a testing kick lately, I might as well focus this toward testing.

I am sure a lot of you have been in the situation where you really just don’t feel like writing your test before you write your code. Ha! Who am I kidding? Most developers don’t follow TDD anyway, so most of the people reading this wouldn’t have done that anyway. A large group of developers have started writing unit tests in general even if they’re not writing them first. However, this still applies as I am sure many of those developers want to wait until a full feature is “complete” before testing.

Your code is not complete until it is tested. Keep in mind that down the road you might forget the business rule you were coding, so it is important to create the tests at the same time as you’re writing the production code. Developers will try to say, “we will write the tests when we’re done” or “we’ll refactor this later”. Do not dare believe a word of that crap. Always assume you will not have time to come back later and refactor. There is a good chance you will not have that chance. Also if you don’t fix things now they’ll just bite you later.

One way to help keep yourself on track is to work with a partner. One very powerful aspect of pair programming is that the second person will nag you and force you to get to things now. That is part of the job description for both parties while pair programming. Don’t let the other person skip out on anything. Be careful though, because that other person will be just like Wimpy saying things like, “I’ll gladly test that Tuesday to continue coding today.”

Writing Testable, Maintainable Code

At our company we’ve had a few interns start this summer, so we ran a little workshop with the development team to help teach the team a few things about writing tests before writing other code. In our test driven workshop we did some simple problems in teams. We looked at different problems from Project Euler, and solved them in pairs. We all worked out our own tested solutions to the problems and brought them back to discuss. While looking at these we discussed what was good in each one and what could have been done better. We were pairing more experienced testers with the new guys for this exercise.

I think this went very well actually. These might not be production-level challenges to work on, but the variety of designs gives great insight into what people were thinking and how they were approaching the problem. I was delighted to see that there were small bits of code very similar between designs, but overall none of them really looked alike. Class names here and there might have been the same, but the implementations of classes with the same name could vary wildly.

After we finished with the exercise, we jumped back into teams to continue working. A while later I got into a discussion with one of our full-time developers and an intern where I ended up explaining one of the main reasons we like testing code. There is the obvious security of having the tests, but there is more than just that. One added bonus that not everyone seems to realize is that writing testable code creates some interesting properties in the code.

Why don’t we take a look at a few of the properties of testable code? I will certainly not cover all of them, but I will try to list enough to demonstrate my point. (These are in no intentional order.)

A Few Properties of Testable Code

  • Keeps dependencies to a minimum – Having fewer dependencies means less mocking, faking, and stubbing.
  • Follows Single Responsibility – This keeps things in small pieces, which makes for smaller, easier-to-understand, and easier-to-maintain tests.
  • Programming is done against interfaces – This allows you to mock and fake these objects, because your code only knows about the interface.
  • Dependencies are injected into classes – Without this, unit testing is impossible and only integration tests can be written.
  • The code is well documented through the tests – The tests themselves describe how the code works, and this documentation stays up to date.

The neat part about this is that these properties are present in maintainable code as well as testable code. I'll sit here a moment while that sinks in... Yes, you heard right: in order for code to be testable, it must also be maintainable. If you ask someone how to write maintainable code, they might be able to spout off some information about how to do it theoretically. What is much harder is actually implementing solutions which are maintainable. This is why testing is important: if you want to even be able to write the tests, the code first needs to be written in a nicely decoupled, well-organized way.

I don't know about you, but I think it is very cool when you see that the properties of testable code seem to coincide with best practices for writing maintainable code.