To Assert or Not To Assert

posted on February 9th, 2009 ·

by Miško Hevery

Some of the strongest objections I get from people are about my stance on what I call “defensive programming”. You know, all those asserts you sprinkle your code with. I have a special hate relationship with null checking. But let me explain.

At first, people wrote code and spent a lot of time debugging. Then someone came up with the idea of asserting that some set of things should never happen. Now there are two kinds of assertions: the ones where you assert that an object will never get into an inconsistent state, and the ones where you assert that an object never gets passed an incorrect value. The most common of these is the null check.

Then, some time later, people started doing automated unit-testing, and a weird thing happened: those assertions are actually in the way of good unit testing, especially the null checks on the arguments. Let me demonstrate with an example.

class House {
  Door door;
  Window window;
  Roof roof;
  Kitchen kitchen;
  LivingRoom livingRoom;
  BedRoom bedRoom;

  House(Door door, Window window,
            Roof roof, Kitchen kitchen,
            LivingRoom livingRoom,
            BedRoom bedRoom) {
    this.door = Assert.notNull(door);
    this.window = Assert.notNull(window);
    this.roof = Assert.notNull(roof);
    this.kitchen = Assert.notNull(kitchen);
    this.livingRoom = Assert.notNull(livingRoom);
    this.bedRoom = Assert.notNull(bedRoom);
  }

  void secure() { ... }
}

Now let’s say that I want to test the secure() method. The secure() method needs door and window. Therefore my ideal test would look like this.

testSecureHouse() {
  Door door = new Door();
  Window window = new Window();
  House house = new House(door, window,
             null, null, null, null);
  house.secure();
  // assert on the state of door and window
}


Since the secure() method only needs to operate on door, and window, those are the only objects which I should have to create. For the rest of them I should be able to pass in null. null is a great way to tell the reader, “these are not the objects you are looking for”. Compare the readability with this:

testSecureHouse() {
  Door door = new Door();
  Window window = new Window();
  House house = new House(door, window,
    new Roof(),
    new Kitchen(),
    new LivingRoom(),
    new BedRoom());
  house.secure();
}


If the test fails here, you are not sure where to look for the problem, since so many objects are involved. It is also not clear from the test that many of the collaborators are not needed.

However, this test assumes that all of the collaborators have no-argument constructors, which is most likely not the case. So if the Kitchen class needs dependencies in its constructor, we can only assume that the same person who put the asserts in the House also placed them in the Kitchen, LivingRoom, and BedRoom constructors as well. This means that we have to create instances of those to pass the null checks, so our real test will look like this:

testSecureHouse() {
  Door door = new Door();
  Window window = new Window();
  House house = new House(door, window,
    new Roof(),
    new Kitchen(new Sink(new Pipes()),
           new Refrigerator()),
    new LivingRoom(new Table(), new TV(), new Sofa()),
    new BedRoom(new Bed(), new Closet()));
  house.secure();
}


Your asserts are forcing you to create many objects which have nothing to do with the test; they only confuse the reader and make the tests hard to write. Now, I know that a house with a null roof, livingRoom, kitchen and bedRoom is an inconsistent object which would be an error in production, but I can write another test, of my HouseFactory class, which will assert that this never happens.
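
That HouseFactory test might look something like the sketch below. HouseFactory itself is hypothetical (the article never shows it), and the collaborator stubs are deliberately trivial:

```java
// Trivial collaborator stubs, assumed here for the sake of the sketch.
class Door {}
class Window {}
class Roof {}
class Pipes {}
class Sink { Sink(Pipes pipes) {} }
class Refrigerator {}
class Kitchen { Kitchen(Sink sink, Refrigerator fridge) {} }
class Table {}
class TV {}
class Sofa {}
class LivingRoom { LivingRoom(Table t, TV tv, Sofa s) {} }
class Bed {}
class Closet {}
class BedRoom { BedRoom(Bed bed, Closet closet) {} }

class House {
  Door door; Window window; Roof roof;
  Kitchen kitchen; LivingRoom livingRoom; BedRoom bedRoom;

  House(Door door, Window window, Roof roof, Kitchen kitchen,
        LivingRoom livingRoom, BedRoom bedRoom) {
    // Note: no null checks; completeness is the factory's job.
    this.door = door; this.window = window; this.roof = roof;
    this.kitchen = kitchen; this.livingRoom = livingRoom;
    this.bedRoom = bedRoom;
  }
}

// Hypothetical factory: the one place that knows how to wire up a
// complete, production-ready House, and therefore the one place where
// completeness gets asserted (by its test).
class HouseFactory {
  House build() {
    return new House(new Door(), new Window(), new Roof(),
        new Kitchen(new Sink(new Pipes()), new Refrigerator()),
        new LivingRoom(new Table(), new TV(), new Sofa()),
        new BedRoom(new Bed(), new Closet()));
  }
}
```

The factory test then asserts once that every room is non-null, and the House constructor no longer needs to check anything.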

Now, there is a difference if the API is meant for my internal consumption or is part of an external API. For an external API I will oftentimes write tests to assert that appropriate error conditions are handled, but for internal APIs my tests are sufficient.

I am not against asserts; I often use them in my code as well, but most of my asserts check the internal state of an object, not whether or not I am passing in a null value. Checking for nulls usually goes against testability, and given a choice between well-tested code and untested code with asserts, there is no debate for me about which one I choose.


How to run FlexUnit in a continuous build

posted on February 1st, 2009 ·

So you have Flash or Flex code with lots of unit-tests. That is great, but how do you get them running in a continuous build? Sure, you can run the tests manually, but this only gives you a visual green or red bar; what you really want is an X-Unit XML file so that you can run the tests in an automated fashion and display or email the results to everyone on your team. The problem is that Flash runs in a sandbox, and as a result FlexUnit is not allowed to tell anyone about the success or failure of any test, which makes publishing the results kind of hard.

Enter Adobe Air to the rescue! Adobe Air is a Flash player which runs outside of the sandbox. As a result, an Adobe Air application can write to the local file system. And because Adobe Air is Flash, it is 100% compatible with the Flash / Flex code which you are trying to test. Here is what you need:

The basic idea is straightforward. Use an Ant script to compile your ActionScript code together with the unit tests into an Air executable. The executable runs the tests with a special XMLListener which records the results into an X-Unit-compliant XML file and writes it out to the filesystem. The Air application then quits with a status code equal to the number of test failures. The continuous build can then use the status code and the XML report to publish the test results.

For extra points…

The same problem (the results are stuck in the browser) exists when testing JavaScript code. Lucky for us, Adobe Air includes WebKit. This means that the same Air test runner can execute your JavaScript tests as well. Upon the completion of the JavaScript tests, the test runner can traverse the HTML DOM, collect the JavaScript results, and publish them in the same way.

– Enjoy…


When to use Dependency Injection

posted on January 14th, 2009 ·

by Miško Hevery

A great question from the reader…

The only thing that does not fully convince me in your articles is usage of Guice. I’m currently unable to see clearly its advantages over plain factories, crafted by hand. Do you recommend using of Guice in every single case? I strongly suspect, there are cases, where hand-crafted factories make a better fit than Guice. Could you comment on that (possibly at your website)?

I think this is a multi-part question:

  1. Should I be using dependency-injection?
  2. Should I be using manual dependency-injection or automatic dependency-injection framework?
  3. Which automatic dependency-injection framework should I use?

Should I be using dependency-injection?

The answer to this question should be a resounding yes! We have covered this many times: how to think about the new-operator, singletons are liars, and of course the talk on dependency-injection.

Dependency injection is simply a good idea, and it helps with testability, maintenance, and bringing new people up to speed on a new code-base. Dependency-injection helps you write good software whether it is a small project of one or a large project with a team of collaborators.

Should I be using manual dependency-injection or automatic dependency-injection framework?

Whether or not to use a framework for dependency injection depends a lot on your preferences and the size of your project. You don’t get any additional magical powers by using a framework. I personally like to use frameworks on medium to large projects but stick to manual DI with small projects. Here are some arguments both ways to help you make a decision.

In favor of manual DI:

  • Simple: Nothing to learn, no dependencies.
  • No reflection magic: In the IDE it is easy to find out who calls the constructors.
  • Even developers who do not understand DI can follow and contribute to projects.

In favor of automatic DI framework:

  • Consistency: On a large team, a lot can be said for doing things in a consistent manner. Frameworks help a lot here.
  • Declarative: The wiring, scopes and rules of instantiation are declarative. This makes it easier to understand how the application is wired together and easier to change.
  • Less typing: No need to create the factory classes by hand.
  • Helps with end-to-end tests: For end-to-end tests we often need to replace key components of the application with fake implementations, an automated framework can be of great help.
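
To make the trade-off concrete, here is a minimal sketch of manual DI; the class names are invented for illustration:

```java
// Manual DI: the class under construction declares what it needs in its
// constructor, and a single hand-written factory does all the wiring.
interface Engine {}

class DieselEngine implements Engine {}

class Car {
  final Engine engine;
  Car(Engine engine) { this.engine = engine; }
}

class CarFactory {
  // The only place in the application that knows the concrete classes.
  static Car create() {
    return new Car(new DieselEngine());
  }
}
```

An automatic framework replaces the hand-written factory with declarative wiring — in GUICE, roughly a module with `bind(Engine.class).to(DieselEngine.class)` — so the factory above is the code you stop writing and maintaining by hand.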

Which automatic dependency-injection framework should I use?

There are three main DI frameworks which I am aware of: GUICE, Pico Container and Spring.

I work for Google and I have used GUICE extensively, therefore my default recommendation will be GUICE. :-) However, I am going to attempt to be objective about the differences. Keep in mind that I have not actually used the other ones on real projects.

Spring was first. As a result it goes far beyond DI and has everything and the kitchen sink integrated into it, which is very impressive. The DI part of Spring has some differences worth pointing out. Unlike GUICE or Pico, Spring uses XML files for configuration. Both are declarative, but GUICE is compiled, and as a result GUICE can take advantage of compiler type safety and generics, which I think is a great plus for GUICE.

Historically, Spring started with setter injection. Pico introduced constructor injection. Today, all frameworks can do both setter and constructor injection, but the developers using these frameworks still have their preferences. GUICE and Pico strongly prefer constructor injection, while Spring is in the setter injection camp. I prefer constructor injection, but the reasons are better left for another post.
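
The two styles look like this in code (a sketch with invented names):

```java
interface Repository {}

// Constructor injection (preferred by GUICE and Pico): the dependency
// is mandatory and can be final, so an instance never exists half-wired.
class ConstructorInjected {
  private final Repository repository;
  ConstructorInjected(Repository repository) {
    this.repository = repository;
  }
  Repository repository() { return repository; }
}

// Setter injection (the traditional Spring style): the object is created
// empty and the dependency arrives later, so it is mutable and
// temporarily incomplete.
class SetterInjected {
  private Repository repository;
  void setRepository(Repository repository) {
    this.repository = repository;
  }
  Repository repository() { return repository; }
}
```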

Personally, I think all three have been around for a while and have proven themselves extensively, so no matter which one you choose you will benefit greatly from your decision. All three frameworks have been heavily influenced by each other and on a macro level are very similar.

(I personally would not choose Spring because XML and setter injection put me off. However, I am looking forward to using Pico on my next open-source project so that I can become more objective about the differences.)

Your mileage may vary.


Testability – re-discovering what we learned and forgot about software development

posted on January 13th, 2009 ·

[Reposted from:]

by Christian Gruber

(or, why agile approaches require good old-fashioned O-O)

What are we all talking about? (the intro)

Testability comes out of an attempt to understand how agile processes and practices change how we write software. Misko Hevery has written some rather wonderful stuff on his blog, and starts to get into issues around singletons, dependencies, and other software bits that get in the way of testability, and starts to look at testability as an attribute. (Full plug at the end of this post) But in particular, he also starts looking at what design and process changes can we start to use to make code more testable. And while we’re at it, what is the point? Is testability the point? It’s important, especially as a way to remove barriers from working in an agile environment, if that’s what we’ve chosen. There are reasons related to quality as well. But I think there are some deeper implications, which Misko and others have implied and, on occasion, called out. It’s that we’ve forgotten the point of Object-Orientation and what it was trying to achieve in the 80′s and 90′s, and are re-discovering it.

What have we forgotten? (the reminiscence)

But what is the essence of what Misko is saying? Martin Fowler (who coined the term Dependency Injection) and others have written wonderful articles on the “Law of Demeter” and other principles. In general, they are all looking at how we grow software, and between all the thinkers and talkers and doers, it seems to me we’re re-discovering some key concepts that we all learned in college but forgot in the field. The key points are:

  1. Manage complexity by separating concerns and de-coupling code.
  2. Map your solution to the business problem.
  3. Write code that is not brittle with respect to change.
  4. Use tools that empower your goals, don’t change your goals to fit the limits of the tools.

In other words, what we all learned when we were taught textbook Object-Oriented Analysis and Design and Programming. Now, O-O has had its various incarnations, and I would contend that all the architectural threads of AOP, Dependency Injection, as well as a more conservative take on O-O have all stemmed from these key principles which were at the core of the Smalltalk revolution and early attempts to get rapid development cycles, and what is often now called “agile” development.

How did we forget all this? (the rant)

So why did we forget all this? Five reasons, I suspect:

Selling Object-Orientation

We did a crappy job of selling O-O. You might not think so, since O-O is so prevalent (or at least O-O languages are). However, we didn’t sell the 4 notions I mentioned above, we sold business benefits that were, in essence, lies. They didn’t need to be, but usually were. These are things like “O-O will make you go faster because of reuse,” or “O-O will help you reduce costs because of reuse,” and on and on. These can be true, but are usually the result of a longer evolution of your software in an O-O context, and the costs of realizing the benefits that we sold often would be too high for businesses to stomach. In fact, O-O won, in my view, because managing complexity became fundamentally necessary when software scale became huge.

Cheap, fast computers

Fast computers have allowed us to do so many bad things. Room to move and space to breathe unfortunately left us with less necessity for discipline. We removed the impetus to be efficient and crisp, and to think through the implications of our decisions in software. However, we’re catching up with hardware in terms of real limits. Moore’s law may still apply, but we are starting to hit real limits in memory. A client of mine observed that Google is an interesting example. He works for an embedded software firm, and noted that they probably have similar scaling issues as an embedded device company (say, phones or similar devices), because sheer volume of traffic forces Google against real limits, much the way resource constraints on a telephone force those companies against their constraints. However, most of us live in a client-server, mid-level-traffic dream of cheap hardware, so that we can always “throw more hardware at it”.


Java

Java is an O-O language, and really was the spear-head that won the wars between O-O and structured programming in the ’90s. However, bloated processes and territorialism have kept Java from fixing some of its early issues, which prevented it from solving problems such as the ones I mention above in efficient ways. The simple example is reflection. If a language requires that I create tons of boilerplate code (try-catch, look up this, materialize that) to find out if an object implements a method, and then to invoke it, it needs to provide a way for me to eliminate the boilerplate. If it can’t do that in a clean way, it should at least provide convenience APIs for me to do the most common operations without all that boilerplate. Sadly, the core libraries of Java were bloated even in the beginning, because of the tension between the Smalltalk and Objective-C people on one hand, the C++ people on the other, and Sun not really caring, because they were a hardware company. So because Java won the O-O war (don’t argue, I’m generalizing) its flaws became endemic to our adoption of O-O practices. I’m going to mention that J2EE bears about half of Java’s responsibility, but I’ll leave that for another flame. Nevertheless, the design and coding and idiomatic culture that spawned from these toolsets have informed our approaches to O-O principles for over a decade.

The .com bubble

The dot-com bubble compounded our Java woes by introducing 6-month diploma programmers into the wild – nay, into senior development positions, and elevated UI scripting a-la JSP and ASP, which allowed for the enmeshment of concerns beyond anything we’d seen for a while in computing. All notions of Model-View-Control separation (or Presentation, Abstraction, Control) were jettisoned while millions of lines of .jsp and .asp (and ultimately .php) script were foisted onto production servers, there to be maintained for decades (I weep for our children). While this was invisible in early small internet sites, the Dot-Com bubble which careened the internet into a primary vehicle for business, entertainment, culture, and these days even politics caused a growth in number, interaction, and complexity of these sites that has caused unmitigated hell for those who found their “whipped-up” scripted sites turn into high-traffic internet hubs. Much of this code has been re-written out of necessity, and yet it caused the travesty that is Model-1 MVC and other attempts to back-into good O-O practice from a messy start. These partial solutions were propagated as good practice (which, by comparison with the norm, they were) and a generation of students learned how to do O-O from Struts and other toolsets. Ignored in that process were wonderful tools like WebObjects or Tapestry which actually did a fair job of doing O-O AND doing the web, but I’ll leave that point here.

Design Patterns

A small corollary to the dot-com bubble is that combining Java, and Patterns concepts from the Gang of Four, these new developers managed to create a code-by-numbers style of design, where you don’t describe architecture with patterns, you design with patterns up-front. This has led to some of the worst architecture I’ve ever seen. Paint-by-numbers has never resulted in a Picasso or Monet, and rarely results in anything anyone would want to see except the artist’s mother. Design Patterns and pattern-languages aren’t bad – far from it. However, they are a language to discuss architecture, they are not an instruction manual. They should be symptoms of a good software design, not an ingredient.

Really big, bloated projects

Lastly, really, really big projects have taken all of the above and raised the stakes. We are now finding that the limits of software aren’t the hardware (thank you, Moore), but rather the people. A whole generation of us attempted to solve this by increasing the process weight around the development effort. This satisfied some contractual issues with scale, but in general failed to attend to the issues raised in The Mythical Man-Month, more than thirty years after its publication.

A side-effect of really big projects is that when you have that much money on the table, risk mitigation goes into high gear, and people are bad at risk analysis and planning. We tend to manage risk by telling ourselves stories. We invent narratives that help us manage our fears, but don’t actually manage risk. So we make very large plans: idealistic (even if they’re pessimistic) portrayals of how the project shall be. Then, because we want to “lock down” our risk, we solicit every possible feature that could potentially be in, including, but not limited to, the kitchen sink, to make sure we haven’t forgotten anything. It all goes in the plan, but by this point we have twice the features any user will ever use, and 80% of the features provide, maybe, 20% of the value. So we actually increase risk to the project’s success while we are trying to minimize and control it. This kitchen-sinkism leads to greater and larger projects, but then large projects bring prestige as well, so there are several motivation vectors for large projects. Most of them aren’t good.

Enter Agile (the path to the solution)

Agile software started to address the human problem of software, and I won’t go into it much here, as it’s well covered elsewhere. However, one can summarize most agile methods by saying that the basics are

  1. Iterate in small cycles
  2. Get frequent feedback (ideally by having teams and customers co-located)
  3. Deliver after each iteration (where possible)
  4. Only work on the most important thing at a time.
  5. Build quality in.
  6. Don’t work in “phases” (design, define, code, test)

This is a quick-n-dirty, so no arguments here. It’s just an overview. But from these changes there are tons of obstacles, issues, and implications. They are, indeed, too numerous to go into. But a light non-exhaustive summary might include:

  • You can’t go fast unless you build quality in
  • You can’t build quality in unless you can test quickly
  • You can’t test quickly if you can’t build quickly
  • You can’t test quickly if you aren’t separating your types of testing
  • You can’t test quickly if your tests are manual
  • You can’t automate your tests if your code is hard to test (requires lots of setup-teardown for each test)
  • You can’t make your code more amenable to testing if it’s not modular
  • You can’t ship frequently if you can’t verify quickly before shipping
  • You can’t build quality in if you ship crap
  • You can’t get feedback if you can’t ship to customers
  • etc.

Lots of “can’t” phrases there, but note that they’re conditionals. Agile methods don’t actually fix these problems, they expose them, and help you do root-cause analysis to solve them. For example, look at some of those chains there.

If I, for example, take my “Big Ball of Mud” software system and re-tool it to de-couple its logical systems and components into discrete components in its native language (say, Java), then I can suddenly test it more helpfully, because I can test one component without interference from another. Because of this, my burden of infrastructure to get the same test value goes down. Because of this, my speed of test execution improves. This lets me test more frequently (possibly, eventually, after each check-in). This lets me make quick changes without as much fear, because I have a fast way of checking for regressions. This allows me to be less fearful of making changes, such as cleaning up my code-base… Oh wow – there’s a circle.

In fact, it is a positive feedback loop. Starting to make this change enables me to more easily make the change in the future. But once I’m moving along in this way, I start to be able to ship more frequently, because my fast verification reduces the cost of shipping. This means I could ship after three iterations, instead of twelve… or eventually every iteration. It means I can make smaller iterations, because the cost of my end-of-iteration process is going down… There are several feedback loops in process during any transition to a more agile way of doing things, as the agile approach finds more and more procedural obstacles in the organization.

But… and here’s the big but… if you start to do an agile process implementation and don’t start changing how you think about software, software delivery, how you write it, and how it’s designed, you’re going to run up against an internal, self-inflicted limitation. You can’t move fast unless you’re organized to accommodate moving fast. Your code base is part of your environment in this context. So starting to help developers subtly move in this direction, and increase the pace at which they transition the existing code-base into a more suitable shape for working efficiently is critical. This, as it turns out, involves our dear old O-O.

What are testability, O-O, and other best practices today? (the recipes)

There’s a wealth of info out there. These don’t just include software approaches but also team practices. Martin Fowler, Misko Hevery, Kent Beck, (Uncle) Bob C. Martin, Arlo Belshee, and a host of others I couldn’t name in this space provide lots of good text on these. These include dependency-injection, continuous code review (pair programming), team co-location, and separation of concerns. On the latter point, Aspect Oriented Programming is a nice approach which I see as another flavour of O-O, conceptually, in that it attempts to get at some of the same key problems. It is often mixed either with O-O, or with Inversion of Control containers. Fearless refactoring, continuous integration, build and test automation (I’m a big fan of Maven, btw, since, for all its problems, it makes dependencies explicit). Test Driven Development (and its cousin, test-first development). Also, the use of Domain Specific Languages has become quite helpful both in mapping the business problem to technology and in eliminating quality problems by defining the language of the solution differently. And of course, wrapping this development in a management method that helps feed the process and consume its results – such as Scrum, or the management elements of Extreme Programming.

These are a sampling of practices that affect how you organize, re-think, design, create, and evolve your software. They rely on the basic principles and premises of agile, but require, in implementation, the core elements that O-O was trying to solve. How can we manage complexity, address the business problem, write healthy code, and be served, not mastered, by our tools.

Prologue (the plugs)

I’m a big Misko Hevery fan these days (I can feel him cringing at that appellation). There’s a lot I’ve had to say to my clients on the subject of testable code, designing for testability, and tools and technologies, but Misko seems to have wonderfully summed up much of my discussion on the topic on his “Testability Explorer” blog. He explains issues like the problem with Gang-of-Four Singletons, and why Dependency Injection not only encourages testable code, but does so by separating concerns (wiring, initialization, and business logic), and all sorts of good stuff like that. It helps to read the earlier materials first, because Misko does build on earlier postings, so later ones may assume you have read the earlier ones and are following along. Notwithstanding, his posts are cogent, clear, and insightful, and have helped to crystallize certain understandings that I’ve been formulating over my years of software development into much more precise notions, and he’s helped me learn how to articulate and explain such topics.

Misko has also recently published a code-reviewer’s guide to testable code. Sorely needed in my view. I also want to make a quick shout-out to his testability explorer tool, which is fabulous, and I’m working on a Maven 2 plugin to integrate it into site reports.

Also, I’ve built a Dependency Injection container (google-code project here and docs here) suitable for use on Java2 Micro Edition CLDC 1.1 platform, because I had to prove to a client that you could do this in a reflection-free embedded environment. It’s BSD licensed, so feel free to use it if you want.

Lastly, Mishkin Berteig and co. have a decent blog called Agile Advice (on which I occasionally also blog) which nicely examines the various process-related, cultural, organizational, and relational issues that working in this way brings up. My posts tend towards the technical on that blog, but occasionally otherwise as well.


Testability Explorer Plugin for Hudson

posted on January 4th, 2009 ·

Many thanks to Reik Schatz, who has written a Testability Explorer plug-in for Hudson. Great job, Reik! And many thanks for helping out. You can read about his experience writing the plug-in here. To read about the plug-in, visit the Hudson repository of plug-ins here. To find out more about the Testability Explorer, read the OOPSLA paper here.

Happy Coding…


Interfacing with hard-to-test third-party code

posted on January 4th, 2009 ·

by Miško Hevery

Shahar asks an excellent question about how to deal with frameworks which we use in our projects, but which were not written with testability in mind.

Hi Misko, First I would like to thank you for the “Guide to Writing Testable Code”, which really helped me to think about better ways to organize my code and architecture. Trying to apply the guide to the code I’m working on, I came up with some difficulties. Our code is based on external frameworks and libraries. Being dependent on external frameworks makes it harder to write tests, since test setup is much more complex. It’s not just a single class we’re using, but rather a whole bunch of classes, base classes, definitions and configuration files. Can you provide some tips about using external libraries or frameworks, in a manner that will allow easy testing of the code?

– Thanks, Shahar

There are two different kinds of situations you can get yourself into:

  1. Either your code calls a third-party library (such as you calling into LDAP authentication, or JDBC driver)
  2. Or a third party library calls you and forces you to implement an interface or extend a base class (such as when using servlets).

Unless these APIs are written with testability in mind, they will hamper your ability to write tests.

Calling Third-Party Libraries

I always try to separate myself from a third-party library with a Facade and an Adapter. The Facade is an interface which presents a simplified view of the third-party API. Let me give you an example. Have a look at javax.naming.ldap. It is a collection of several interfaces and classes, with a complex way in which you have to call them. If your code depends on this interface, you will drown in mocking hell. Now, I don’t know why the API is so complex, but I do know that my application only needs a fraction of these calls. I also know that many of these calls are configuration-specific, and outside of bootstrapping code these APIs clutter what I have to mock out.

I start from the other end. I ask myself this question: ‘What would an ideal API look like for my application?’ The key here is ‘my application’. An application which only needs to authenticate will have a very different ‘ideal API’ than an application which needs to manage the LDAP. Because we are focusing on our application, the resulting API is significantly simplified. It is very possible that for most applications the ideal interface may be something along these lines.

interface Authenticator {
  boolean authenticate(String username,
                       String password);
}

As you can see, this interface is a lot simpler to mock and work with than the original one, and as a result it is a lot more testable. In essence, the ideal interfaces are what separate the testable world from the legacy world.

Once we have an ideal interface all we have to do is implement the adapter which bridges our ideal interface with the actual one. This adapter may be a pain to test, but at least the pain is in a single location.

The benefits of this are:

  • We can easily implement an InMemoryAuthenticator for running our application in the QA environment.
  • If the third-party APIs change, then those changes only affect our adapter code.
  • If we now have to authenticate against Kerberos or the Windows registry, the implementation is straightforward.
  • We are less likely to introduce a usage bug since calling the ideal API is simpler than calling the original API.
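
For instance, the InMemoryAuthenticator from the first bullet can be as small as this sketch (addUser is an invented helper for seeding test data):

```java
import java.util.HashMap;
import java.util.Map;

interface Authenticator {
  boolean authenticate(String username, String password);
}

// A fake implementation of the ideal interface for QA and tests:
// no LDAP server, no mocking framework, just a map of known users.
class InMemoryAuthenticator implements Authenticator {
  private final Map<String, String> passwords =
      new HashMap<String, String>();

  // Invented helper for seeding test users.
  void addUser(String username, String password) {
    passwords.put(username, password);
  }

  public boolean authenticate(String username, String password) {
    return password != null && password.equals(passwords.get(username));
  }
}
```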

Plugging into an Existing Framework

Let’s take servlets as an example of a hard-to-test framework. Why are servlets hard to test?

  • Servlets require a no argument constructor which prevents us from using dependency injection. See how to think about the new operator.
  • Servlets pass around HttpServletRequest and HttpServletResponse which are very hard to instantiate or mock.

At a high level I use the same strategy of separating myself from the servlet APIs. I implement my actions in a separate class:

class LoginPage {
  Authenticator authenticator;
  boolean success;
  String errorMessage;

  LoginPage(Authenticator authenticator) {
    this.authenticator = authenticator;
  }

  String execute(Map<String, String> parameters,
                 String cookie) {
    // do some work
    success = ...;
    errorMessage = ...;
  }

  String render(Writer writer) {
    if (success)
      return "redirect URL";
    ...
  }
}

The code above is easy to test because:

  • It does not inherit from any base class.
  • Dependency injection allows us to inject a mock authenticator (unlike the no-argument constructor in servlets).
  • The work phase is separated from the rendering phase. It is really hard to assert anything useful about the Writer, but we can assert on the state of the LoginPage, such as success and errorMessage.
  • The input parameters to the LoginPage are very easy to instantiate (Map<String, String>, a String for the cookie, or a StringWriter for the writer).
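To make the test shape concrete, here is a minimal sketch. The simplified LoginPage and the fake Authenticator lambda below are stand-ins for the article's classes (the cookie parameter and rendering are omitted for brevity):

```java
import java.util.HashMap;
import java.util.Map;

interface Authenticator {
  boolean authenticate(String username, String password);
}

// Simplified stand-in for the article's LoginPage.
class LoginPage {
  private final Authenticator authenticator;
  boolean success;
  String errorMessage;

  LoginPage(Authenticator authenticator) { this.authenticator = authenticator; }

  void execute(Map<String, String> parameters) {
    success = authenticator.authenticate(
        parameters.get("username"), parameters.get("password"));
    errorMessage = success ? null : "Invalid username or password";
  }
}

class LoginPageTest {
  // Helper: run the page against a fake where only bob/secret is valid.
  static LoginPage loginWith(String user, String pass) {
    Authenticator fake = (u, p) -> "bob".equals(u) && "secret".equals(p);
    LoginPage page = new LoginPage(fake);
    Map<String, String> params = new HashMap<>();
    params.put("username", user);
    params.put("password", pass);
    page.execute(params);
    return page;
  }
}
```

Notice the test never touches a servlet, a container, or a real authentication backend; it asserts directly on the page's state.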

What we have achieved is that all of our application logic is in the LoginPage and all of the untestable mess is in the LoginServlet, which acts like an adapter. We can then test the LoginPage in depth. The LoginServlet is not so simple, and in most cases I just don’t bother testing it, since there can only be wiring bugs in that code. There should be no application logic in the LoginServlet, since we have moved all of the application logic to LoginPage.

Let’s look at the adapter class:

class LoginServlet extends HttpServlet {
  Provider<LoginPage> loginPageProvider;

  // no-arg constructor required by
  // Servlet Framework
  LoginServlet() {
  }

  // Dependency-injected constructor used for testing
  LoginServlet(Provider<LoginPage> loginPageProvider) {
    this.loginPageProvider = loginPageProvider;
  }

  public void service(HttpServletRequest req,
                      HttpServletResponse resp)
      throws ServletException, IOException {
    LoginPage page = loginPageProvider.get();
    // extract parameters, call page.execute(...), then render
    String redirect = page.render(resp.getWriter());
    if (redirect != null)
      resp.sendRedirect(redirect);
  }
}
Notice the use of two constructors: one fully dependency injected and the other no-argument. If I write a test I will use the dependency-injected constructor, which will then allow me to mock out all of my dependencies.

Also notice that the no-argument constructor forces me to use global state, which is very bad, but in the case of servlets I have no choice. However, I make sure that only servlets access the global state; the rest of my application is unaware of this global variable and uses proper dependency injection techniques.
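A sketch of that compromise might look like the following. GlobalInjector is a hypothetical application-wide holder (not from the article), and the point is that only the servlet's no-arg constructor ever reads it:

```java
interface Provider<T> { T get(); }

class LoginPage { }

// Hypothetical application-wide registry, populated once at startup.
// Global state: bad, but confined to servlet construction.
class GlobalInjector {
  static Provider<LoginPage> loginPageProvider = LoginPage::new;
}

class LoginServlet {
  final Provider<LoginPage> loginPageProvider;

  // No-arg constructor required by the servlet framework:
  // the single place where global state is touched.
  LoginServlet() {
    this(GlobalInjector.loginPageProvider);
  }

  // Dependency-injected constructor used by tests and by the rest
  // of the application.
  LoginServlet(Provider<LoginPage> loginPageProvider) {
    this.loginPageProvider = loginPageProvider;
  }
}
```

A test simply calls the two-argument constructor with a fake provider and never goes near GlobalInjector.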

By the way, there are many frameworks out there which sit on top of servlets and provide very testable APIs. They all achieve this by separating you from the servlet implementation and from HttpServletRequest and HttpServletResponse, for example Waffle and WebWork.


Happy New Year – 2009

posted on January 4th, 2009 ·

Just wanted to send a happy new year message to all of my readers who have made my blog such a success. In your personal life, I wish you a lot of time spent with your family and friends (away from the debugger). In your professional life, I wish you a lot of bug-free, Singleton-free code, and I hope that 2009 will be the year when your code incorporates testability and a continuous build server as basic requirements of your project. (And who knows, maybe you will give this test-driven development thing a try.)

Happy coding! – Miško Hevery



Static Methods are Death to Testability

posted on December 15th, 2008 ·

by Miško Hevery

Recently many of you, after reading the Guide to Testability, wrote telling me there is nothing wrong with static methods. After all, what can be easier to test than Math.abs()! And Math.abs() is a static method! If abs() were an instance method, one would have to instantiate the object first, and that may prove to be a problem. (See how to think about the new operator, and class does real work.)

The basic issue with static methods is that they are procedural code. I have no idea how to unit-test procedural code. Unit-testing assumes that I can instantiate a piece of my application in isolation. During the instantiation I wire the dependencies with mocks/friendlies which replace the real dependencies. With procedural programming there is nothing to “wire”, since there are no objects; the code and data are separate.

Here is another way of thinking about it. Unit-testing needs seams. A seam is where we prevent the execution of the normal code path, and it is how we achieve isolation of the class under test. Seams work through polymorphism: we override/implement a class/interface and then wire the class under test differently in order to take control of the execution flow. With static methods there is nothing to override. Yes, static methods are easy to call, but if the static method calls another static method there is no way to override the called method's dependency.
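A tiny sketch of the difference (Tax and ratePercent() are invented names). In the static version there is no way for a test to intercept the nested call; in the instance version a subclass gives the test a seam:

```java
// Static version: total() is welded to rate(); a test cannot intervene.
class StaticTax {
  static int ratePercent() { return 20; }   // imagine this hits a database
  static int totalCents(int cents) {
    return cents + cents * ratePercent() / 100;  // no seam here
  }
}

// Instance version: same logic, but rate lookup is overridable.
class Tax {
  int ratePercent() { return 20; }          // imagine this hits a database
  int totalCents(int cents) {
    return cents + cents * ratePercent() / 100;
  }
}

// The seam in action: a test doubles as a subclass that takes control
// of the execution flow.
class FakeTax extends Tax {
  @Override int ratePercent() { return 0; }
}
```

With FakeTax, a unit test exercises totalCents() without ever touching the (imaginary) database behind ratePercent().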

Let’s do a mental exercise. Suppose your application has nothing but static methods. (Yes, code like that is possible to write; it is called procedural programming.) Now imagine the call graph of that application. If you try to execute a leaf method, you will have no issue setting up its state and asserting all of the corner cases. The reason is that a leaf method makes no further calls. As you move further away from the leaves and closer to the root main() method, it will be harder and harder to set up the state in your test and harder to assert things. Many things will become impossible to assert. Your tests will get progressively larger. Once you reach the main() method you no longer have a unit-test (as your unit is the whole application); you now have a scenario test. Imagine that the application you are trying to test is a word processor. There is not much you can assert from the main method.

We have already covered that global state is bad and how it makes your application hard to understand. If your application has no global state, then all of the input for your static method must come from its arguments. Chances are very good that you can move the method as an instance method to one of the method’s arguments. (As in method(a, b) becomes a.method(b).) Once you move it, you realize that that is where the method should have been to begin with. The problem of static methods becomes even worse when they start accessing the global state of the application. What about methods which take no arguments? Well, either methodX() returns a constant, in which case there is nothing to test; it accesses global state, which is bad; or it is a factory.
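As an illustration of the method(a, b) → a.method(b) move, here is a small sketch (Document and contains are invented names, not code from the article):

```java
// After the move: the behavior lives on the data it operates on.
class Document {
  private final String text;
  Document(String text) { this.text = text; }
  String text() { return text; }

  // a.method(b): the natural home for this logic.
  boolean contains(String word) { return text.contains(word); }
}

// Before the move: a static helper whose entire input arrives
// through its arguments -- the telltale sign it can be relocated.
class DocumentUtil {
  static boolean contains(Document doc, String word) {
    return doc.text().contains(word);
  }
}
```

Both compute the same thing, but the instance method can be overridden in a test double, while the static helper cannot.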

Sometimes a static method is a factory for other objects. This further exacerbates the testing problem. In tests we rely on the fact that we can wire objects differently, replacing important dependencies with mocks. Once a new operator is called, we cannot override the dependency with a subclass. A caller of such a static factory is permanently bound to the concrete classes which the static factory method produced. In other words, the damage of the static method goes far beyond the static method itself. Putting object graph wiring and construction code into a static method is extra bad, since object graph wiring is how we isolate things for testing.
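A sketch of how a static factory welds its callers to concrete classes (PaymentGateway, Gateways, and Checkout are invented names):

```java
// Imagine charge() makes a network call in production.
class PaymentGateway {
  String charge(int cents) { return "charged " + cents; }
}

// The static factory: every caller gets the real thing, always.
class Gateways {
  static PaymentGateway create() {
    return new PaymentGateway();   // no seam; tests cannot substitute
  }
}

// Damaged caller: bound forever to whatever Gateways.create() news up.
class HardToTestCheckout {
  String pay(int cents) {
    return Gateways.create().charge(cents);
  }
}

// Testable caller: the wiring moved out to whoever constructs Checkout,
// so a test can hand in a fake gateway.
class Checkout {
  private final PaymentGateway gateway;
  Checkout(PaymentGateway gateway) { this.gateway = gateway; }
  String pay(int cents) { return gateway.charge(cents); }
}
```

The two pay() methods are line-for-line identical; only the construction strategy differs, and that difference is the whole testability story.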

“So leaf methods are OK to be static, but other methods should not be?” I like to go a step further and simply say: static methods are not OK. The issue is that a method starts off being a leaf, and over time more and more code is added to it until it loses its position as a leaf. It is way too easy to turn a leaf method into a non-leaf method; the other way around is not so easy. Therefore a static leaf method is a slippery slope which is waiting to grow and become a problem. Static methods are procedural! In an OO language, stick to OO. And as far as Math.abs(-5) goes, I think Java got it wrong. I really want to write -5.abs(). Ruby got that one right.


Clean Code Talks – Inheritance, Polymorphism, & Testing

posted on December 8th, 2008 ·

by Miško Hevery

Google Tech Talks
November 20, 2008


Is your code full of if statements? Switch statements? Do you have the same switch statement in various places? When you make changes do you find yourself making the same change to the same if/switch in several places? Did you ever forget one?

This talk will discuss approaches to using Object Oriented techniques to remove many of those conditionals. The result is cleaner, tighter, better designed code that’s easier to test, understand and maintain.
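As a sketch of the idea (Shape, Square, and Circle are the classic illustration, not code from the talk), compare a switch that must be edited for every new case with a hierarchy where adding a case means adding a class:

```java
// Switch-based: this statement tends to get copied to several places,
// and every new shape means finding and editing all of them.
class SwitchyArea {
  static double area(String kind, double size) {
    switch (kind) {
      case "square": return size * size;
      case "circle": return Math.PI * size * size;
      default: throw new IllegalArgumentException(kind);
    }
  }
}

// Polymorphic: the conditional disappears; each case knows its own answer.
abstract class Shape {
  abstract double area();
}

class Square extends Shape {
  final double side;
  Square(double side) { this.side = side; }
  double area() { return side * side; }
}

class Circle extends Shape {
  final double radius;
  Circle(double radius) { this.radius = radius; }
  double area() { return Math.PI * radius * radius; }
}
```

Adding a Triangle to the polymorphic version touches no existing code, so there is no existing switch to forget.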




Guide to Writing Testable Code

posted on November 24th, 2008 ·

It is with great pleasure that I have been able to finally open-source the Guide to Writing Testable Code.

I am including the first page here for you, but do come and check it out in detail.

To keep our code at Google in the best possible shape we provided our software engineers with these constant reminders. Now, we are happy to share them with the world.

Many thanks to these folks for inspiration and hours of hard work getting this guide done:

Flaw #1: Constructor does Real Work

Warning Signs

  • new keyword in a constructor or at field declaration
  • Static method calls in a constructor or at field declaration
  • Anything more than field assignment in constructors
  • Object not fully initialized after the constructor finishes (watch out for initialize methods)
  • Control flow (conditional or looping logic) in a constructor
  • Code does complex object graph construction inside a constructor rather than using a factory or builder
  • Adding or using an initialization block
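A minimal illustration of the first warning sign, with invented names (Engine and Car are not from the guide): the flawed constructor news up its own collaborator, while the fixed one only assigns what it is handed.

```java
class Engine {
  String ignite() { return "vroom"; }   // imagine expensive hardware access
}

// Flawed: 'new' at field declaration means every test pays for a real
// Engine and has no seam to substitute a fake.
class HardToTestCar {
  private final Engine engine = new Engine();
  String start() { return engine.ignite(); }
}

// Fixed: the constructor does nothing but field assignment; the object
// graph is wired outside, so a test can pass in a fake Engine.
class Car {
  private final Engine engine;
  Car(Engine engine) { this.engine = engine; }
  String start() { return engine.ignite(); }
}
```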

Flaw #2: Digging into Collaborators

Warning Signs

  • Objects are passed in but never used directly (only used to get access to other objects)
  • Law of Demeter violation: method call chain walks an object graph with more than one dot (.)
  • Suspicious names: context, environment, principal, container, or manager
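A minimal illustration of the two-dot smell, with invented names (Customer, Wallet, Register are not from the guide): the digging version walks the object graph, while the fixed version asks the direct collaborator to do the work.

```java
class Wallet {
  private int cents;
  Wallet(int cents) { this.cents = cents; }
  int debit(int amount) { cents -= amount; return cents; }
}

class Customer {
  private final Wallet wallet;
  Customer(Wallet wallet) { this.wallet = wallet; }
  Wallet getWallet() { return wallet; }                 // invites digging
  int pay(int amount) { return wallet.debit(amount); }  // Demeter-friendly
}

class Register {
  // Smell: Register never wanted a Customer, only its Wallet, and a test
  // now has to build a Customer just to wrap a Wallet.
  static int chargeDigging(Customer customer, int amount) {
    return customer.getWallet().debit(amount);
  }

  // Better: tell, don't ask; the chain of dots disappears.
  static int charge(Customer customer, int amount) {
    return customer.pay(amount);
  }
}
```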

Flaw #3: Brittle Global State & Singletons

Warning Signs

  • Adding or using singletons
  • Adding or using static fields or static methods
  • Adding or using static initialization blocks
  • Adding or using registries
  • Adding or using service locators

Flaw #4: Class Does Too Much

Warning Signs

  • Summing up what the class does includes the word “and”
  • Class would be challenging for new team members to read and quickly “get it”
  • Class has fields that are only used in some methods
  • Class has static methods that only operate on parameters
