Brendan Enrick

Daily Software Development

Overusing Interfaces and Injection

In software development, it is very important to continually learn and experiment with new techniques for building great software. Our job is to learn and improve the quality of code that we build over time. It is through this progression of experimentation, failing, succeeding, observing, reading, listening, and teaching that we as developers improve. In this post, I am going to discuss one way in which we as developers have learned something great and have taken it to the point of failure. This will allow us to learn, resolve, create new techniques, and succeed in the future.

What is Dependency Inversion?

One of the most important SOLID principles for testing our applications is the Dependency Inversion principle. I am very glad to see more and more codebases using this technique as time goes on; however, I am seeing nearly as many getting caught up in an easy mistake: using too much injection and putting interfaces on everything regardless of necessity. Let's start by defining dependency inversion so we're all on the same page.

From Wikipedia:

In object-oriented programming, the dependency inversion principle refers to a specific form of decoupling where conventional dependency relationships established from high-level, policy-setting modules to low-level, dependency modules are inverted (i.e. reversed) for the purpose of rendering high-level modules independent of the low-level module implementation details. The principle states:

A. High-level modules should not depend on low-level modules. Both should depend on abstractions.
B. Abstractions should not depend upon details. Details should depend upon abstractions.

Reading directly from this, we can see that it says we should be depending on abstractions and in doing so we take a much lighter dependency. A dependency on an abstraction is not a big deal. A dependency on a concrete class is a big deal.

Is what I just said always true? No, it depends on the abstraction and the concrete class. For example, I make an exception for simple classes provided by the framework or language my code is primarily based on. For instance, I have no problem saying that my method depends on being given a DateTime object; there is no real gain in abstracting it, because it is part of the framework and I will always be injecting an object of that type.

When are interfaces and injection being overused?

One example of over-architected injection that I see a lot comes from something that I’ve taught many times when explaining SOLID. It’s the classic example of a dependency that people don’t realize they have.

DateTime.Today

I assume most of my readers know that this is a dependency taken directly on the system clock. It's important to inject this dependency when your code behaves differently based on the value returned from that static property. When we extract that dependency, people have been taught (by many people, including me) that a good way of testing this is to have some kind of IClock interface with a SystemClock implementation that reads the computer's clock by calling the static member we were just discussing.

Now when we write our method, we just give it a parameter of IClock. When we run our code in production it takes in the SystemClock object, and in our tests we use a mock object. Great! We've inverted the dependency! Is that really what we wanted, though? It might have been, but in some cases we could have just passed in the DateTime value returned from that property. This would have reduced complexity in our code.
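To make the difference concrete, here is a rough sketch (the names and the late-fee calculation are invented for illustration, not from any real codebase). Both versions are testable; the second simply has one less moving part.

public interface IClock
{
    DateTime Today { get; }
}

// Option 1: inject an abstraction over the clock.
public decimal CalculateLateFee(IClock clock, DateTime dueDate)
{
    var daysLate = (clock.Today - dueDate).Days;
    return daysLate > 0 ? daysLate * 1.50m : 0m;
}

// Option 2: pass in the value the calculation actually needs.
public decimal CalculateLateFee(DateTime today, DateTime dueDate)
{
    var daysLate = (today - dueDate).Days;
    return daysLate > 0 ? daysLate * 1.50m : 0m;
}

In the second version the caller does the DateTime.Today lookup, which is usually a composition-root or controller-level concern anyway, and the test just passes whatever date it wants.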

Because we used this interface instead of just depending on simple concrete classes, we've actually made our code more difficult to use and maintain. Isn't ease of use and maintenance the whole reason we were following SOLID in the first place? Yes, yes it is. This is of course a simple example, but you can find larger examples where you could test your code and build a nice structure without having to create pointless interfaces. Heck, even in this example you might have a good reason to need that interface, but I am sure that you could get away with using it less. Every time you add that layer, you add complexity. Sometimes adding it is OK, because it gives you a much-needed seam, but I doubt you need it as often as you think.

Please make sure that you need the interfaces and layers of abstraction before creating them. I've come across many codebases that have ended up as complex as before, or more so, simply by trying to do things “the right way”.

The road to software development hell is paved with good intentions.

 

Don’t like the way I phrased something? Think I overgeneralized? Did I make a mistake?

Call me out on it!

I appreciate the feedback.

Overmocking

One of the most powerful tools available to developers testing a legacy code base is the ability to mock out the classes that their code depends on. This matters because our unit tests need to limit their scope, and we do that by trying to limit our dependencies. Through mocking we can exchange a dependency on the infrastructure of our application for an in-memory mock.

Sometimes mocking is overused, and I am not just talking about cases where every object gets mocked to the point where we're testing nothing. I am talking about a different issue in mocking: developers putting on their mocking blinders when unit testing. It's an easy thing to do, and I've done it plenty of times myself.

Setting Up Our Example

We will start off by defining our example. We will have a method that does some sort of calculation (an ideal and easy testing scenario). It lives in a legacy code base, which means it is untested code and probably hard to test. We'll make it a private static method and put in a nice dependency that one might be tempted to mock their way around.

private static decimal CalculateFooOnXyz(Xyz xyzItem,
    decimal calculationParameter1)
{
    var numbersInCalculation = Repository.GetNumbers()
        .Where(n => n.IsActive);

    foreach (Number number in numbersInCalculation)
    {
        // Some code that executes for each one.
    }

    return 0m; // placeholder for the calculated result
}

Now that we have this method, which is scary and really needs some testing, we'll take a look at the simple way of testing it: we will use parameter injection to pass in the dependency, since the method is static and we are not easily able to use any other safe form of dependency injection. When we do this, we end up with the following code.
 
private static decimal CalculateFooOnXyz(Xyz xyzItem,
    decimal calculationParameter1, IRepository repository)
{
    var numbersInCalculation = repository.GetNumbers()
        .Where(n => n.IsActive);

    foreach (Number number in numbersInCalculation)
    {
        // Some code that executes for each one.
    }

    return 0m; // placeholder for the calculated result
}

This is a pretty simple change. In our test we now mock out the repository, pass in the mock, and tell it to return an in-memory collection we specify instead of requesting the data from the database.
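For example, with a mocking library such as Moq (my choice here, not something the original code requires), and assuming GetNumbers returns IEnumerable<Number> and the method has been made visible to the test project, the setup might look roughly like this:

var repository = new Mock<IRepository>();
repository.Setup(r => r.GetNumbers()).Returns(new List<Number>
{
    new Number { IsActive = true },
    new Number { IsActive = false } // filtered out by the Where clause
});

var result = CalculateFooOnXyz(xyzItem, 1.5m, repository.Object);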

Why This is Wrong

My first question is: what is the name of the method? CalculateFooOnXyz. Notice that I didn't say "GetDataFromTheDatabase". That's because we shouldn't be doing that; it isn't part of the calculation. The numbers returned from the database are required for the calculation, so those numbers are what we should take as our dependency instead.

How We Change It

So instead of making the repository our parameter, we should make the collection of numbers our parameter. This way we're depending on an in-memory collection of CLR objects. This is much less of a dependency, and it is not one that needs to be mocked. With this alternative we're better following the Single Responsibility and Dependency Inversion principles.

Our best code for testing will look like this.

private static decimal CalculateFooOnXyz(Xyz xyzItem,
    decimal calculationParameter1, List<Number> numbersInCalculation)
{
    foreach (Number number in numbersInCalculation)
    {
        // Some code that executes for each one.
    }

    return 0m; // placeholder for the calculated result
}
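A test for this version needs no mocking framework at all; it just builds the list in memory (a sketch, again assuming the method is visible to the test project):

var numbers = new List<Number>
{
    new Number { IsActive = true },
    new Number { IsActive = true }
};

var result = CalculateFooOnXyz(xyzItem, 1.5m, numbers);
// Assert on result here.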

Comments? Have a better way of doing it? Did I make a mistake?

Custom Model Binders in ASP.NET MVC

In ASP.NET MVC, our system is built such that the interactions with the user are handled through Actions on our Controllers. We select our actions based on the route the user is using, which is a fancy way of saying that we base it on a pattern found in the URL they're using. If we were on a page editing an object and we clicked the save button, we would be sending the data to the URL for that object's Edit action (with the default route, something like /Profile/Edit/5).

 

Notice that the route identifies the object that we're trying to save. There is a default model binder in MVC that will take the form data we're sending and bind it to a CLR object for us to use in our action. The standard Edit action on a controller looks like this.

[HttpPost]
public ActionResult Edit(int id, FormCollection collection)
{
    try
    {
        // TODO: Add update logic here

        return RedirectToAction("Index");
    }
    catch
    {
        return View();
    }
}

If we were to flesh some of this out the way it’s set up here, we would have code that looked a bit like this.

[HttpPost]
public ActionResult Edit(int id, FormCollection collection)
{
    try
    {
        Profile profile = _profileRepository.GetProfileById(id);

        profile.FavoriteColor = collection["favorite_color"];
        profile.FavoriteBoardGame = collection["FavoriteBoardGame"];

        _profileRepository.Add(profile);

        return RedirectToAction("Index");
    }
    catch
    {
        return View();
    }
}


What is bad about this is that we are accessing the FormCollection object, which is messy and brittle. Once we start testing this code, we will end up repeating similar code elsewhere: our tests will need to create objects using these same magic strings. That makes our code brittle. If the string changes, we will have to go through our code correcting it, and we will also have to track it down in our tests or they will fail. This is bad.

What we should do instead is have these strings appear in only one place: our model binder. Then all the code we test is using CLR objects that get compile-time checking. To create our custom model binder, all we need to do is write some code like this.
public class ProfileModelBinder : IModelBinder
{
    ProfileRepository _profileRepository = new ProfileRepository();

    public object BindModel(ControllerContext controllerContext,
        ModelBindingContext bindingContext)
    {
        int id = Convert.ToInt32(controllerContext.RouteData.Values["Id"]);
        Profile profile = _profileRepository.GetProfileById(id);

        profile.FavoriteColor = bindingContext
            .ValueProvider
            .GetValue("favorite_color")
            .AttemptedValue;

        profile.FavoriteBoardGame = bindingContext
            .ValueProvider
            .GetValue("FavoriteBoardGame")
            .AttemptedValue;

        return profile;
    }
}

 

Notice that we are still reading the raw form values here, but that is limited to this one location. When we test, we will just pass a Profile object to our action, which means that we don't have to worry about these magic strings as much, and we're also not getting into the situation where our code becomes so brittle that our tests inhibit change. The last thing we need to do is tell MVC that when it needs to create a Profile object it should use this model binder. To do this, we just add our binder to the collection of binders in the Application_Start method of our Global.asax.cs file. We say that this binder is for objects of type Profile and give it the binder to use.

ModelBinders.Binders.Add(typeof (Profile), new ProfileModelBinder());
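In context, that registration sits inside Application_Start alongside the rest of the startup code. A rough sketch of the Global.asax.cs method (the area and route registration lines are the ones the MVC project template generates):

protected void Application_Start()
{
    AreaRegistration.RegisterAllAreas();
    RegisterRoutes(RouteTable.Routes);

    // Use our custom binder whenever MVC needs to bind a Profile.
    ModelBinders.Binders.Add(typeof(Profile), new ProfileModelBinder());
}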

Now we have a model binder that lets us keep the messy code out of our controllers, and our controller action looks like this.

[HttpPost]
public ActionResult Edit(Profile profile)
{
    try
    {
        _profileRepository.Add(profile);

        return RedirectToAction("Index");
    }
    catch
    {
        return View();
    }
}

That looks a lot cleaner to me, and if there were other things I needed to do during that action, I could do them without all of the ugly binding logic.
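It also tests cleanly. As a rough sketch (assuming the controller is named ProfileController and takes an IProfileRepository through its constructor, and using Moq purely for illustration), a test can now hand the action a Profile directly with no magic strings anywhere:

var repository = new Mock<IProfileRepository>();
var controller = new ProfileController(repository.Object);
var profile = new Profile { FavoriteColor = "Blue", FavoriteBoardGame = "Chess" };

var result = controller.Edit(profile) as RedirectToRouteResult;

Assert.IsNotNull(result);
Assert.AreEqual("Index", result.RouteValues["action"]);
repository.Verify(r => r.Add(profile));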

Wire up your ViewModels using a Service Locator

No MVVM solution is complete without having the DataContext bound to a ViewModel, but this wouldn’t be a fun development community if there were not some disagreement on the specifics of how to achieve certain goals. I will be recommending how I like to wire up ViewModels. There are other ways of doing this, but I will explain some of the reasons I use this method.

You can start by building a View that needs to have certain traits in its ViewModel and then create a well-tested ViewModel separately. This ViewModel should have all of the properties and data required by the View. Make sure you also have any commands or other functionality the View will require. It is then your job to make the connection between these two objects. The way I like doing this is by using a Service Locator to give my View the ViewModel it needs. This also gives me a good centralized location where I can make sure that my ViewModels are wired up the way I need them to be.

To create our service locator we are going to need to create a class with properties returning the ViewModels we are using in our Views. We should have one getter per ViewModel to be requested, and I tend to give each getter a name matching the name of the ViewModel it returns. The service locator will look a bit like this when you're done. (You can also use an IoC container in the service locator, which is what I do in all of my production code. In that case you would just use the IoC container rather than instantiating the object as is done in this example.)

Code Listing 1 – The Service Locator:

public class ServiceLocator
{
    public AwesomeViewModel AwesomeViewModel
    {
        get { return new AwesomeViewModel(); }
    }
}

Notice that I can pass in any parameters needed for the ViewModel constructor so my ViewModel can depend on abstractions. The service locator can be more complicated than this if other work needs to be done to create these ViewModels. An example of such a situation is if I have shared dependencies or I am using an IoC container to create my objects.
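For instance, a locator that hands a shared dependency to each ViewModel's constructor might look roughly like this (IDataService, DataService, and OtherViewModel are invented names for illustration):

public class ServiceLocator
{
    // One shared dependency handed to every ViewModel that needs it.
    private readonly IDataService _dataService = new DataService();

    public AwesomeViewModel AwesomeViewModel
    {
        get { return new AwesomeViewModel(_dataService); }
    }

    public OtherViewModel OtherViewModel
    {
        get { return new OtherViewModel(_dataService); }
    }
}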

Now that we have our code written to get us our ViewModel object we need to make this class available to all of our Views. This can be achieved by creating an instance of one of these as a static resource in our App.xaml file. Static resources defined here are easily accessible.

Code Listing 2 – Declaring the Service Locator as a Resource in App.xaml:

<Application x:Class="Awesome.App"
             xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
             xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
             xmlns:Awesome="clr-namespace:Awesome"
             StartupUri="MainWindow.xaml">
    <Application.Resources>
        <Awesome:ServiceLocator x:Key="SvcLoc" />
    </Application.Resources>
</Application>

The key that we have assigned to this resource is how we will reference it when we access it later. In other parts of our code we will just specify in our bindings that we are accessing this static resource and calling its properties to get the data that we need. In fact this is exactly how we can get our ViewModel into our View. When we bind our DataContext we will tell it that our source is going to be this ServiceLocator instance and that we are binding our DataContext to a specified path, which is a property of the ServiceLocator.

The binding can be done in either of these two ways; they are effectively the same, so it really comes down to preference.

Code Listing 3 – Binding the DataContext to the ServiceLocator:

<Window.DataContext>
    <Binding Path="AwesomeViewModel"
             Source="{StaticResource SvcLoc}" />
</Window.DataContext>

Code Listing 4 – Binding the DataContext to the ServiceLocator Inline:

DataContext="{Binding AwesomeViewModel, Source={StaticResource SvcLoc}}"

Now we can access this DataContext throughout our View by just binding to different paths on it.

I like this approach because it keeps the binding in the XAML and not in the code. It is also nice because we are able to bind easily to properties and have these properties do dependency injection for our ViewModels. The centralization of all of this ViewModel creation is also very convenient. We are able to visit this one class to make adjustments for how the entire application handles its ViewModel initialization.

Making Text Clickable in Silverlight for Windows Phone 7

I’ve seen a lot of snippets of code online where people are trying to make text clickable in Windows Phone 7, and plenty of them are using the MouseLeftButtonDown event to do it. To put it lightly, this is not the best way of handling a click in the Windows Phone 7 environment. The reason is that we have to put the “left button” down in order to scroll. The “left button” is our finger, so if we try to scroll and press our finger on the text, we will be activating the event by mistake.

In order to resolve this we need to use the Click event. The Click event, however, is on the Button, not on the TextBlock. In this example I will be using the MVVM Light Toolkit and showing how I can wire up a Click event to a command on my ViewModel.

This example is a DataTemplate being used to display a list of colors, each one as a bound item in a ListBox. I will be setting the text of each item to the name of the color, and I will be handling the click event by binding it to a command on my ViewModel; the command will take the color's ID as a parameter. Notice that since I am in a DataTemplate, I have to reach the ViewModel for this view by referring to the view by name; while in the DataTemplate, my current DataContext is the Color item I am binding in the list. Read more about accessing the ViewModel from a DataTemplate.

<DataTemplate x:Key="colorListTemplate">
    <Grid>
        <Button>
            <Button.Template>
                <ControlTemplate>
                    <TextBlock Text="{Binding Name}" FontSize="64" />
                </ControlTemplate>
            </Button.Template>
            <i:Interaction.Triggers>
                <i:EventTrigger EventName="Click">
                    <cmd:EventToCommand
                        Command="{Binding DataContext.PickColor, ElementName=TheView}"
                        CommandParameter="{Binding Path=ID}" />
                </i:EventTrigger>
            </i:Interaction.Triggers>
        </Button>
    </Grid>
</DataTemplate>

Notice that I set the Template of the Button. This tells the control not to display the way it normally would and instead to display the way I want it to. I do this so that I can display text as normal, so it looks like I have a regular list of items, and I am able to make this text handle the event when someone clicks on it. Keep in mind that a click and a scroll are two different operations on the Windows Phone, so no one is going to accidentally click on my item when they are trying to scroll. If I had done this with a MouseLeftButtonDown event, as I've seen shown in the past, the command would fire when someone is trying to scroll the list on the phone.
 
Remember that a button can look like anything in Silverlight. You can define a template for the button, so you always have the power to get a Click event. So if you need to make something clickable, make a button and put your control in the button's template.

Unit Testing With a Base Test Class

Writing good code that can be trusted to work means unit testing your code. In order to effectively maintain these tests you will need to follow the same principles you would with any other code. This of course means that you extract logic to achieve code reuse, name methods and objects clearly, use composition, and use inheritance.

In this post I am going to show how you can get some code reuse when you're following another good testing practice: keeping things well abstracted. I like to keep my tests in folders and keep the test classes in those folders. I do this so that each test class can be small and test only one thing. By doing this I make my test classes a lot cleaner and easier to work with. This means, however, that if I do not extract some of the duplication I could be creating more maintenance work later.

One way that I keep my tests more maintainable is to keep duplicated initialization logic in base classes. When I am testing methods of a certain class, I will keep a folder of those tests and have a test class for each area of the code I want to test. This lets me keep some of the initialization logic in a base class. Many of the test classes will need an instance of the object being tested, so I can put logic in the base class to initialize my mock objects and pass them to the constructor of the class I will be testing.
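As a rough illustration of that idea (the names and the use of Moq here are mine, not part of the walkthrough below), such a base class might look like this:

public abstract class OrderServiceTestsBase
{
    protected Mock<IOrderRepository> RepositoryMock;
    protected OrderService ServiceUnderTest;

    [TestInitialize]
    public void InitializeServiceUnderTest()
    {
        // Every test class in the folder gets a fresh mock and a fresh
        // instance of the class being tested.
        RepositoryMock = new Mock<IOrderRepository>();
        ServiceUnderTest = new OrderService(RepositoryMock.Object);
    }
}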

Setting up a base class is easy. The following examples show how to set up a base test class in MSTest and NUnit.

Setting Up a Base Test Class with MSTest

[TestClass]
public class BaseTestClass
{
    public BaseTestClass()
    {
        Console.WriteLine("BaseTestClass.Ctor()");
    }

    [TestInitialize]
    public void BaseTestInitialize()
    {
        Console.WriteLine("BaseTestClass.BaseTestInitialize()");
    }

    [TestCleanup]
    public void BaseTestCleanup()
    {
        Console.WriteLine("BaseTestClass.BaseTestCleanup()");
    }
}

[TestClass]
public class ConcreteTestClass : BaseTestClass
{
    public ConcreteTestClass()
    {
        Console.WriteLine("ConcreteTestClass.Ctor()");
    }

    [TestInitialize]
    public void MyTestInitialize()
    {
        Console.WriteLine("ConcreteTestClass.MyTestInitialize()");
    }

    [TestCleanup]
    public void MyTestCleanup()
    {
        Console.WriteLine("ConcreteTestClass.MyTestCleanup()");
    }

    [TestMethod]
    public void TestMethod1()
    {
        Console.WriteLine("ConcreteTestClass.TestMethod1()");
    }

    [TestMethod]
    public void TestMethod2()
    {
        Console.WriteLine("ConcreteTestClass.TestMethod2()");
    }
}

[Screenshots: MSTest base test class output and test session output]

Setting Up a Base Test Class using NUnit

[TestFixture]
public class BaseTestClass
{
    public BaseTestClass()
    {
        Console.WriteLine("BaseTestClass.Ctor()");
    }

    [SetUp]
    public void BaseSetUp()
    {
        Console.WriteLine("BaseTestClass.SetUp()");
    }

    [TearDown]
    public void BaseTearDown()
    {
        Console.WriteLine("BaseTestClass.TearDown()");
    }
}

[TestFixture]
public class ConcreteTestClass : BaseTestClass
{
    public ConcreteTestClass()
    {
        Console.WriteLine("ConcreteTestClass.Ctor()");
    }

    [SetUp]
    public void SetUp()
    {
        Console.WriteLine("ConcreteTestClass.SetUp()");
    }

    [TearDown]
    public void TearDown()
    {
        Console.WriteLine("ConcreteTestClass.TearDown()");
    }

    [Test]
    public void TestMethod1()
    {
        Console.WriteLine("ConcreteTestClass.TestMethod1()");
    }

    [Test]
    public void TestMethod2()
    {
        Console.WriteLine("ConcreteTestClass.TestMethod2()");
    }
}

[Screenshots: NUnit base test class output and test session output]

Notice in these examples how the base class setups happen first and then the local class ones. This allows you to depend on the base class code having run first. When everything is torn down, the local method cleans up first and then the base class's runs.

Make sure you follow this example by having different names for the base methods so they don’t collide with the local names. This lets you have them both run without one having to call the other. If the names match you will either be overriding the base or hiding it, but they will not both run.

Accessing the ViewModel Inside a DataTemplate in Silverlight

I’ve been doing a lot of Windows Phone 7 development, which means that I have also been doing a lot of Silverlight development, so here is a tip for accessing your ViewModel when you're in a DataTemplate.

In Silverlight, DataTemplates are used when binding data to a control. For example, if I want to list users, I will define the DataTemplate, which defines the XAML that gets bound for each of the users in the list. When I do this, the data context for the DataTemplate is my user and no longer the ViewModel. I have a few options here: I can modify the user to have what I need, I can access some global class which has what I need or can reach my ViewModel, or I can do what I prefer, which is simply to name my View.

I give my view a name, and I can then create a binding which accesses an element by name instead of using its current data context. To do this I just name my View like this.

<Views:ViewBase
...
x:Name="TheView"
DataContext="{Binding BarViewModel, Source={StaticResource SL}}">

 

Then in my binding I can access it using the ElementName property like this. This example is wiring up a Click event using MVVM Light Toolkit’s EventToCommand.

<Custom:Interaction.Triggers>
    <Custom:EventTrigger EventName="Click">
        <Command:EventToCommand
            Command="{Binding DataContext.DoSomething, ElementName=TheView}"
            CommandParameter="{Binding}" />
    </Custom:EventTrigger>
</Custom:Interaction.Triggers>

 
This allows me to access the commands on my ViewModel while I am using a DataTemplate. Without doing the “ElementName=TheView”, I would not easily be able to access the command from my ViewModel. I would only be able to access the commands from the User object.

Using Dynamic Typing When an Interface was Needed

Interfaces and base classes give us a great deal of power in object-oriented programming. We are able to accept a base type or interface, be given an implementation or an inheritor, and continue working correctly. What if, however, we need to accept more than one type that has the same methods or properties but does not share an interface or base class? Often the best answer is to add a common interface to these types so that the shared behavior is defined. In most cases when I need to do something like this, it is because I don't have the source code for one or more of the classes.

Our answer in this case is dynamic typing, which was added in C# 4.0. In this revision of the language, we are able to declare an object while deferring its type until runtime. We define how we will use the object now, and at runtime our code attempts those calls on whatever object it is given.

The following is some example code showing how to use the dynamic keyword to use two classes interchangeably:

class Program
{
    public static List<DateTime> Dates = new List<DateTime>();

    static void Main(string[] args)
    {
        var someClass = new SomeClass(DateTime.Now);
        GetEventDate(someClass);
        var otherClass = new OtherClass(DateTime.Now);
        GetEventDate(otherClass);

        var values = Dates.Select(d => d.ToShortDateString());
        Console.WriteLine(string.Join("---", values));
    }

    private static void GetEventDate(dynamic objectWithDate)
    {
        Dates.Add(objectWithDate.Date);
    }
}

public class SomeClass
{
    public DateTime Date { get; set; }

    public SomeClass(DateTime date)
    {
        Date = date;
    }
}

public class OtherClass
{
    public DateTime Date { get; set; }

    public OtherClass(DateTime date)
    {
        Date = date;
    }
}

So in this example, if we say that I didn't have access to OtherClass and couldn't give it some interface, using dynamic would allow me to get around this and use a convention-based approach to development. When the alternative is some cluttered approach, I am always in favor of simplifying the readability and usability of my source code. Now there is less of a reason to complain that a certain class from a library or generated code doesn't implement an interface. (There is still reason, but at least we can avoid some of the issues at play.)

Quick Silverlight Tip: Looking at the code

Silverlight is a great technology, and one thing that really makes it a treat to work with is the ease with which one can access the code inside the XAP file. Yes, this means that someone can look at your code, SO DON'T PUT ANYTHING SECURE IN THERE! I recently took part in Jeff Blankenburg's Click the Button contest.

For a while I thought Jeff had tricked me on one puzzle, until I opened up the XAP file and looked at the source code. On the last page of the Silverlight application you see this screen.

[Screenshot: the contest's final page, titled "Level 12"]

So what’s the big deal? Well, the level before it was titled “Level 10”, so I figured Jeff was trying to pull a fast one and there was more to this seemingly innocent page. So I tried to find how to get beyond this level 12 to the real one. I eventually took the smart route, and I opened .NET Reflector and pointed it at the DLL file for the contest.

How did I do that you ask? Here are some quick steps for looking at the source code of a Silverlight application.

Step 1: Download the .xap file. View the source of the HTML page, or look at the Net tab in Firebug, to find the .xap file and download it.

Step 2: Rename the file to .zip and open it. Yes, a .xap is really a .zip. Neat, huh?

Step 3: Unzip the file and find the dll you're looking for. In my case it is called "ClickTheButton.dll".

Step 4: Open .NET Reflector and open that dll file.

[Screenshot: the contest dll opened in .NET Reflector]

Step 5: After examining the dll you’ll be able to see all of the code. In this case I can see that Jeff actually has 13 levels he created for this application.

You now have a choice to make. You can ask Jeff Blankenburg what happened to the other puzzles or just go and take a look at the code and see which levels are missing.

When Should You Comment Your Code

Comment your code when it is hard to understand or hard to determine your intent from, because your code is crap. Yep, that is pretty much the best time to comment your code. When I was in college I was told to make sure that I commented my code. I always wondered why. Now I know. Comments really tended to clutter things and make it less clear what I was trying to achieve. I've written plenty of comments, but I know that when I use a comment it means my code sucks.

Comments take time to write, they take up valuable space, they add clutter, and they’re often out of date. So why do we write them? We write them because there is something confusing about some code. Or something that we need to tell future readers about the code.

I very much enjoy being reassured of the way I do things by the books I read. I've been reading Clean Code, and it makes me glad that I blatantly disagreed with the people who told me to comment my code. Why? Because the author of the book tells me the exact opposite. I've often used the phrase, "code should be self-documenting." I am not sure from whom or from where I first heard that, but it also affirmed my belief about comments.

The proper use of comments is to compensate for our failure to express ourself in code. Note that I used the word failure. I meant it. Comments are always failures. We must have them because we cannot always figure out how to express ourselves without them, but their use is not a cause for celebration.

Aside from the misuse of the English language, that is an excellent paragraph. I've never liked writing comments, and I doubt I ever will. It is interesting how some programmers swear by them. I think they are the same programmers who use single-letter variables, magic numbers, and hundred-line methods.

I highly recommend reading through the best comments in source code question on StackOverflow.

Remember kids, anytime you feel the need to write a comment, fix your code instead. You’ll thank yourself down the road. You might be the future developer maintaining the code.

A few steps to follow; a small before-and-after example comes after the list. (These are the easy ones. Harder ones take more than a bullet point to explain and justify.)

  • Rename variables to more expressive names.
  • Extract methods so people know what you’re doing.
  • Extract classes to handle each responsibility.
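As a tiny illustration of the first two bullets (the code is invented purely for this example), a comment like this is usually a method waiting to be extracted:

// Before: the comment explains what the next line checks.
// check whether the customer can rent this title
if (customer.Age >= title.MinimumAge && !customer.HasOverdueRentals)
{
    // ...
}

// After: the method name says it, and the comment goes away.
if (CanRent(customer, title))
{
    // ...
}

private static bool CanRent(Customer customer, Title title)
{
    return customer.Age >= title.MinimumAge && !customer.HasOverdueRentals;
}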