Checked exceptions I love you, but you have to go

posted on September 16th, 2009 ·

Once upon a time Java created an experiment called checked exceptions: you have to declare the exceptions a method throws, or catch them. Since then, no other language (that I know of) has decided to copy this idea, but somehow Java developers are in love with checked exceptions. Here I am going to “try” to convince you that checked exceptions, even though they look like a good idea at first glance, are actually not a good idea at all:

Empirical Evidence

Let’s start with an observation of your code base. Look through your code and tell me what percentage of catch blocks merely rethrow or print the error. My guess is that it is in the high 90s. I would go as far as to say 98% of catch blocks are meaningless, since they just print an error or rethrow the exception, which will later be printed as an error. The reason for this is very simple. Most exceptions, such as FileNotFoundException, IOException, and so on, are a sign that we as developers have missed a corner case. The exceptions are a way of informing us that we, as developers, have messed up. So if we did not have checked exceptions, the exception would be thrown, the main method would print it, and we would be done with it (optionally we would catch all exceptions in main and log them if we are a server).

Checked exceptions force me to write catch blocks which are meaningless: more code, harder to read, and higher chance that I will mess up the rethrow logic and eat the exception.

Lost in Noise

Now let’s look at the 2–5% of catch blocks which are not rethrows and where real, interesting logic happens. Those interesting bits of useful and important information are lost in the noise, since my eye has been trained to skim over catch blocks. I would much rather have code where a catch indicates “pay attention, something interesting happens here,” rather than “it is just a rethrow.” Now, if we did not have checked exceptions, you would write your code without a catch, test your code (you do test, right?), realize that under these circumstances an exception is thrown, and deal with it. In such a case forgetting to write a catch block is no different from forgetting to write the else branch of an if statement. We don’t have checked ifs, and yet no one misses them, so why do we need to tell developers that FileNotFoundException can happen? What if the developer knows for a fact that it cannot happen, since he has just placed the file there, so that such an exception would mean the filesystem has just disappeared and your application is in no position to handle that?

Checked exceptions make me skim the catch blocks, as most are just rethrows, making it likely that I will miss something important.

Unreachable Code

I love to write tests first and implement as a consequence of the tests. In such a situation you should always have 100% coverage, since you are only writing what the tests are asking for. But you don’t! It is less than 100%, because checked exceptions force you to write catch blocks which are impossible to execute. Check this code out:

String bytesToString(byte[] bytes) {
  ByteArrayOutputStream out = new ByteArrayOutputStream();
  try {
    out.write(bytes); // declares IOException, but can never throw it
    return out.toString();
  } catch (IOException e) {
    // This can never happen!
    // Should I rethrow? Eat it? Print an error?
    throw new AssertionError(e);
  }
}

ByteArrayOutputStream will never throw an IOException! You can look through its implementation and see that this is true! So why are you making me catch a phantom exception which can never happen and which I cannot write a test for? As a result I cannot claim 100% coverage, because of things outside my control.

Checked exceptions create dead code which will never execute.

Closures Don’t Like You

Java does not have closures, but it has the visitor pattern. Let me explain with a concrete example. I was creating a custom class loader and needed to override the loadClass() method on MyClassLoader, which throws ClassNotFoundException under some circumstances. I use the ASM library, which allows me to inspect Java bytecodes. ASM works as a visitor pattern: I write visitors, and as ASM parses the bytecodes it calls specific methods on my visitor implementation. One of my visitors, as it examines the bytecodes, decides that things are not right and needs to throw the ClassNotFoundException which the class loader contract says it should throw. But now we have a problem. What we have on the stack is MyClassLoader -> ASMLibrary -> MyVisitor. MyVisitor wants to throw an exception which MyClassLoader expects, but it cannot, since ClassNotFoundException is checked and ASMLibrary does not declare it (nor should it). So I have to throw a RuntimeClassNotFoundException from MyVisitor, which can pass through ASMLibrary, and which MyClassLoader can then catch and rethrow as ClassNotFoundException.
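The exception-tunneling workaround above can be sketched roughly like this. RuntimeClassNotFoundException is the post's own name; MyVisitor here is a simplified stand-in for a real ASM visitor (the real one implements an ASM interface, which is exactly why it cannot declare the checked exception):

```java
class RuntimeClassNotFoundException extends RuntimeException {
  RuntimeClassNotFoundException(String className) { super(className); }
}

class MyVisitor {
  // Called back during the bytecode walk; the visitor interface does
  // not declare ClassNotFoundException, so smuggle it out unchecked.
  void visitClass(String name) {
    if (name.startsWith("com.forbidden.")) {
      throw new RuntimeClassNotFoundException(name);
    }
  }
}

class MyClassLoader extends ClassLoader {
  @Override
  protected Class<?> findClass(String name) throws ClassNotFoundException {
    try {
      new MyVisitor().visitClass(name); // stands in for the ASM traversal
      return Object.class;              // placeholder for the real result
    } catch (RuntimeClassNotFoundException e) {
      // Unwrap at the border, restoring the checked exception that the
      // ClassLoader contract expects.
      throw new ClassNotFoundException(e.getMessage(), e);
    }
  }
}
```

The unchecked wrapper exists only to cross a layer that cannot declare the checked exception; it is caught again as soon as possible.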

Checked exceptions get in the way of functional programming.

Lost Fidelity

Suppose the java.sql package were implemented with useful exceptions such as SqlDuplicateKeyException and SqlForeignKeyViolationException and so on (we can wish), and suppose these exceptions were checked (which they are). We would say that the SQL package has high-fidelity exceptions, since each exception maps to a very specific problem. Now let’s say we have the same setup as before, where there is some other layer between us and the SQL package; that layer can either redeclare all of the exceptions or, more likely, throw its own. Let’s look at an example. Hibernate is an object-relational mapper, which means it converts your SQL rows into Java objects. So on the stack you have MyApplication -> Hibernate -> SQL. Here Hibernate is trying hard to hide the fact that you are talking to SQL, so it throws HibernateExceptions instead of SQLExceptions. And here lies the problem. Your code knows that there is SQL under Hibernate, and so it could have handled SqlDuplicateKeyException in some useful way, such as showing an error to the user, but Hibernate was forced to catch the exception and rethrow it as a generic HibernateException. We have gone from a high-fidelity SqlException to a low-fidelity HibernateException, and so MyApplication cannot do anything. Now Hibernate could have thrown HibernateDuplicateKeyException, but that would mean Hibernate has the same exception hierarchy as SQL, and we are duplicating effort and repeating ourselves.
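The fidelity loss can be reconstructed in miniature. All class names below are illustrative toys, not the real Hibernate or java.sql types:

```java
class SqlDuplicateKeyException extends Exception {}

class HibernateException extends RuntimeException {
  HibernateException(Throwable cause) { super(cause); }
}

class HibernateLayer {
  void save(boolean duplicateKey) {
    try {
      insertRow(duplicateKey);
    } catch (SqlDuplicateKeyException e) {
      // The specific, actionable exception is collapsed into a generic
      // wrapper; the caller's type information is gone.
      throw new HibernateException(e);
    }
  }

  // Stands in for the JDBC call that can detect a duplicate key.
  private void insertRow(boolean duplicateKey) throws SqlDuplicateKeyException {
    if (duplicateKey) throw new SqlDuplicateKeyException();
  }
}
```

The best MyApplication can now do is catch HibernateException and probe getCause() with instanceof, which quietly reintroduces the SQL knowledge that the wrapper was supposed to hide.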

Rethrowing checked exceptions causes you to lose fidelity and hence makes it less likely that you could do something useful with the exception later on.

You can’t do Anything Anyway

In most cases, when an exception is thrown, there is no recovery. We show a generic error to the user and log the exception so that we can file a bug and make sure that exception will not happen again. Since 90+% of exceptions are bugs in our code, and all we do is log them, why are we forced to rethrow them over and over again?

It is rare that anything useful can be done when a checked exception happens; in most cases we die with an error, so make that the default behavior of my code, with no additional typing.

How I deal with the code

Here is my strategy for dealing with Java:

  • Always catch all checked exceptions at the source and rethrow them as LogRuntimeException.
    • LogRuntimeException is my unchecked runtime exception which says “I don’t care, just log it.”
    • Here I have lost exception fidelity.
  • None of my methods declare any exceptions.
  • As I discover that I need to deal with a specific exception, I go back to the source where LogRuntimeException was thrown and change it to <Specific>RuntimeException. (This is rarer than you think.)
    • I am restoring the exception fidelity only where needed.
  • The net effect is that when you come across a try-catch clause, you had better pay attention, as interesting things are happening there.
    • There are very few try-catch clauses, so the code is much easier to read.
    • Test coverage is very close to 100%, as there is no dead code in my catch blocks.
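A minimal sketch of the first bullet. Only the name LogRuntimeException comes from the strategy above; the shape of the wrapper and the firstLine() example are my own guesses at a plausible implementation:

```java
class LogRuntimeException extends RuntimeException {
  LogRuntimeException(Throwable cause) { super(cause); }
}

class FileReaderExample {
  // No 'throws' clause: the checked exception is wrapped at the source.
  static String firstLine(java.io.File file) {
    try (java.io.BufferedReader in =
           new java.io.BufferedReader(new java.io.FileReader(file))) {
      return in.readLine();
    } catch (java.io.IOException e) {
      // "I don't care, just log it" -- let main (or the server's
      // top-level handler) log this and die.
      throw new LogRuntimeException(e);
    }
  }
}
```

Callers stay clean; the single top-level catch in main decides whether to log and exit or log and continue serving.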


It is not about writing tests, it’s about writing stories

posted on September 2nd, 2009 ·

I would like to make an analogy between building software and building a car. I know it is an imperfect one, as one is about design and the other is about manufacturing, but indulge me; the lessons are very similar.

A piece of software is like a car. Let’s say you would like to test a car which you are in the process of designing. Would you test it by driving it around and making modifications to it, or would you prove your design by testing each component separately? I think that testing all of the corner cases by driving the car around is very difficult. Yes, if the car drives you know that a lot of things must work (engine, transmission, electronics, etc.), but if it does not work you have no idea where to look. And there are some things which you will have a very hard time reproducing in this end-to-end test. For example, it will be very hard to see whether the car will be able to start in the extreme cold of the North Pole, or whether the engine will overheat going full throttle up a sand dune in the Sahara. I propose we take the engine out and simulate the load on it in a laboratory.

We call driving the car around an end-to-end test and testing the engine in isolation a unit test. With unit tests it is much easier to simulate failures and corner cases in a much more controlled environment. We need both kinds of tests, but I feel that most developers can only imagine the end-to-end tests.

But let’s see how we could use tests to design a transmission. First, a little terminology change: let’s not call them tests, but instead call them stories. They are stories because that is what they tell you about your design. My first story is that:

  • the transmission should allow the output shaft to be locked, move in the same direction (D) as the input shaft, move in the opposite direction (R), or move independently (N)

Given such a story I could easily create a test which would prove that the above story is true for any design submitted to me. What I would most likely get is a transmission with only a single gear in each direction. So let’s write another story:

  • the transmission should allow the ratio between the input and output shafts to be [-1, 0, 1, 2, 3, 4]

Again I can write a test for such a transmission, but I have not specified how the forward gear should be chosen, so such a transmission would most likely be permanently stuck in 1st gear, which would limit my speed and over-rev the engine.

  • the transmission should start in 1st and then switch to a higher gear before the engine reaches maximum revolutions.

This is better, but my transmission would most likely rev the engine to the maximum before it would switch, and once it had switched to a higher gear and I slowed down, it would not down-shift.

  • the transmission should down-shift whenever the engine RPM falls below 1000 RPM

OK, now it is starting to drive like a car, but the limits for shifting are still 1000–6000 RPM, which is not a very fuel-efficient way to drive your car.

  • the transmission should up-shift whenever the estimated fuel consumption at a higher gear ratio is better than at the current one.

So now our engine will not over-rev any more, but it will be a lazy car, since once the transmission is in the fuel-efficient mode it will not want to down-shift.

  • the transmission should down-shift whenever the gas pedal is depressed more than 50% and the RPM is lower than the engine’s peak output RPM.

I am not a transmission designer, but I think this is a decent start.
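The first story above can even be sketched as an executable test. Transmission, Mode, select(), and ratio() are all invented names (and the locked state is omitted for brevity); any design that makes these checks pass satisfies the story, which is the point:

```java
enum Mode { DRIVE, REVERSE, NEUTRAL }

class Transmission {
  private Mode mode = Mode.NEUTRAL;

  void select(Mode mode) { this.mode = mode; }

  // The sign of the ratio encodes direction; 0 means the output
  // shaft moves independently of the input.
  int ratio() {
    switch (mode) {
      case DRIVE:   return 1;
      case REVERSE: return -1;
      default:      return 0;
    }
  }
}

class TransmissionStoryTest {
  public static void main(String[] args) {
    Transmission t = new Transmission();
    t.select(Mode.DRIVE);
    check(t.ratio() > 0);   // moves with the input shaft (D)
    t.select(Mode.REVERSE);
    check(t.ratio() < 0);   // moves against the input shaft (R)
    t.select(Mode.NEUTRAL);
    check(t.ratio() == 0);  // moves independently (N)
  }

  static void check(boolean ok) {
    if (!ok) throw new AssertionError("story violated");
  }
}
```

Notice the test asserts only the outcome the story demands, never how the gearbox achieves it.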

Notice how I focused on the end result of the transmission rather than on testing its specific internals. The transmission designer would have a lot of leeway in choosing how it worked internally. Once we had something and tested it in the real world, we could augment this list of stories with additional stories as we discovered additional properties which we would like the transmission to possess.

If we decided to change the internal design of the transmission for whatever reason, we would have these stories as guides to make sure that we did not forget about anything. The stories represent assumptions which need to be true at all times. Over the lifetime of the component we can collect hundreds of stories, which represent an equal number of assumptions built into the system.

Now imagine that a new designer comes on board and makes a design change which he believes will improve the responsiveness of the transmission. He can do so because the existing stories are not restrictive in how, only in what the outcome should be. The stories save the designer from breaking an existing assumption which was already designed into the transmission.

Now let’s contrast this with how we would test the transmission if it were already built.

  • test to make sure all of the gears work
  • test to make sure that the engine is not allowed to over-rev

It is hard now to think about what other tests to write, since we are not using the tests to drive the design. Now let’s say that someone insists that we get 100% coverage. We open the transmission up and we see all kinds of logic and rules, and we don’t know why, since we were not part of the design, so we write a test:

  • at 3000 RPM input shaft, apply 100% throttle and assert that the transmission goes to 2nd gear.

Tests like that are not very useful when you want to change the design, since you are likely to break the test without fully understanding why it was testing those specific conditions; it is hard to know whether anything is actually broken when the test goes red. That is because the test does not tell a story any more; it only asserts the current design. It is likely that such a test will be in the way when you try to make design changes. The point I am trying to make is that there is a huge difference between writing tests before or after. When we write tests before:

  • we are creating stories which force particular design decisions.
  • the tests are a collection of assumptions which need to be true at all times.

When we write tests after the fact:

  • we miss a lot of the reasons why things are done in a particular way, even if we have 100% coverage.
  • the tests are often brittle, because they are tied to the particulars of the current implementation.
  • the tests are just snapshots; they don’t tell a story of why the component does something, only that it does.

For this reason there is a huge difference in quality between writing assumptions as stories beforehand (which forces the design to emerge) and writing tests afterwards, which merely take a snapshot of a given design.


Sharing My Slide Deck from RTAC

posted on August 21st, 2009 ·

Just wanted to share my latest slide deck with you, which I presented at the RIM Test Automation Conference. I know that without the sound to go with it there is limited value, but I hope you get at least something out of it.

Psychology of Testing


Super Fast JS Testing

posted on August 12th, 2009 ·

by Shyam Seshadri

Before I jump into how exactly you can perform super fast and easy JS testing, let me give you some background on the problem.

JavaScript is a finicky language (some people even hesitate to call it a language). It can easily grow into a horrible and complicated beast, incapable of being tamed once let loose. And testing it is a nightmare. Once you have decided on a framework (of which there are a dime a dozen), you then have to set it up to run just right. You need to set it up to actually run your tests. Then you have to figure out how to run it in a continuous integration environment, maybe even in headless mode. And everyone solves this in their own way.

But the biggest problem I have with most of these frameworks is that executing the tests usually requires a context switch. By that I mean that to run a JsUnit test, you usually end up having to open the browser and browse to a particular URL or HTML page, which then runs the test. Then you have to look at the results there, and then come back to your editor to either proceed further or fix your tests. It works, but it really slows down development.

In Java, all it takes is a click of the run button in your IDE to run your tests. You get instant feedback: a green/red bar and details on which tests passed and failed, and at what line. No context switch; you can get it to run at every save and proceed on your merry way. Until now, this was not possible with JavaScript.

But now we have JS Test Driver. My colleagues Jeremie and Misko ended up running into some of the issues I outlined above, and decided that going along with the flow was simply unacceptable. So they created a JS testing framework which solves these very things. You can capture any browser on any machine, and when you tell it to run tests, it will go ahead and execute them on all these browsers and return the results to your client. And it’s blazing fast. I am talking milliseconds to run 100-odd tests. And you can tell it to rerun your tests at each save. All within the comfort of your IDE. Over the last three weeks I have been working on the Eclipse plugin for JS Test Driver, and it’s now at the point where it’s in decent working condition.

The plugin in action

The plugin allows you, from within Eclipse, to start the JS Test Driver server, capture some browsers, and then run your tests. You get pretty icons telling you which browsers were captured, the state of the server, and the state of the tests. It allows you to filter and show only failures, rerun your last launch configuration, and even set up the paths to your browsers so you can launch them all from the safety of Eclipse. And as you can see, it’s super fast: some 100-odd tests in less than 10 ms. If that’s not fast, I don’t know what is.

For more details on JS Test Driver, visit its Google Code website and see how you can use it in your next project and even integrate it into continuous integration. Misko talks a little more about the motivations behind writing it in his Yet Another JS Testing Framework post. To try the plugin for yourselves, add the following update site to Eclipse:

For all you IntelliJ fanatics, there is something similar in the works.


How to think about OO

posted on July 31st, 2009 ·

Everyone seems to think that they are writing OO; after all, they are using OO languages such as Java, Python, or Ruby. But if you examine the code, it is often procedural in nature.

Static Methods

Static methods are procedural in nature and have no place in the OO world. I can already hear the screams, so let me explain why. First we need to agree that global variables and state are evil. If you agree with the previous statement, then for a static method to do something interesting it needs to have some arguments; otherwise it will always return a constant. A call to staticMethod() must always return the same thing if there is no global state. (Time and random have global state, so they don’t count, and object instantiation may produce a different instance each time, but the object graph will be wired the same way.)

This means that for a static method to do something interesting it needs arguments. But in that case I will argue that the method simply belongs on one of its arguments. Example: Math.abs(-3) should really be -3.abs(). Now, that does not imply that -3 needs to be an object, only that the compiler needs to do the magic on my behalf, which, BTW, Ruby got right. If you have multiple arguments, you should choose the argument with which the method interacts the most.

But most justifications for static methods argue that they are “utility methods.” Let’s say that you want a toCamelCase() method to convert the string “my_workspace” to “myWorkspace”. Most developers will solve this as StringUtil.toCamelCase(“my_workspace”). But, again, I am going to argue that the method simply belongs on the String class and should be “my_workspace”.toCamelCase(). We can’t extend the String class in Java, so we are stuck, but in many other OO languages you can add methods to existing classes.
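For concreteness, here is the utility-class version most developers end up writing (StringUtil is the usual, hypothetical, home for it). In Java there is nowhere better to put it, which is exactly the limitation being described:

```java
class StringUtil {
  static String toCamelCase(String s) {
    StringBuilder out = new StringBuilder();
    boolean upperNext = false;
    for (char c : s.toCharArray()) {
      if (c == '_') {
        upperNext = true;  // swallow the underscore
      } else {
        out.append(upperNext ? Character.toUpperCase(c) : c);
        upperNext = false;
      }
    }
    return out.toString();
  }
}
```

So StringUtil.toCamelCase("my_workspace") yields "myWorkspace", but the knowledge of how to camel-case lives away from the String it operates on.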

In the end I am sometimes (a handful of times per year) forced to write static methods due to limitations of the language. But that is a rare event, since static methods are death to testability. What I do find is that in most projects static methods are rampant.

Instance Methods

So you got rid of all of your static methods, but your code is still procedural. OO says that code and data live together. So when one looks at code, one can judge how OO it is without understanding what the code does, simply by looking at the relationship between data and code.

class Database {
  // some fields declared here
  boolean isDirty(Cache cache, Object obj) {
    for (Object cachedObj : cache.getObjects()) {
      if (cachedObj.equals(obj))
        return false;
    }
    return true;
  }
}

The problem here is that the method may as well be static! It is in the wrong place, and you can tell because it does not interact with any of the data in the Database; instead it interacts with the data in the Cache, which it fetches by calling the getObjects() method. My guess is that this method belongs on one of its arguments, most likely Cache. If you move it to Cache, you will notice that Cache no longer needs the getObjects() method, since the for loop can access the internal state of the Cache directly. Hey, we simplified the code (moved one method, deleted one method) and we have made Demeter happy.
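A sketch of what the move might look like (the field and put() are my own filler, since the original Cache internals are not shown): isDirty now lives with the data it reads, and the getObjects() getter can be deleted.

```java
class Cache {
  private final java.util.List<Object> objects = new java.util.ArrayList<>();

  void put(Object obj) { objects.add(obj); }

  boolean isDirty(Object obj) {
    for (Object cachedObj : objects) {  // direct access, no getter
      if (cachedObj.equals(obj))
        return false;
    }
    return true;
  }
}
```

Database now just asks cache.isDirty(obj) and never learns how the cache stores its objects.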

The funny thing about getter methods is that they usually mean that the code which processes the data is outside of the class which has the data. In other words, the code and the data are not together.

class Authenticator {
  Ldap ldap;
  Cookie login(User user) {
    if (user.isSuperUser()) {
      if ( ldap.auth(user.getUser(),
             user.getPassword()) )
        return new Cookie(user.getActingAsUser());
    } else if (user.isAgent()) {
      return new Cookie(user.getActingAsUser());
    } else {
      if ( ldap.auth(user.getUser(),
             user.getPassword()) )
        return new Cookie(user.getUser());
    }
    return null;
  }
}

Now, I don’t know whether this code is well written or not, but I do know that the login() method has a very high affinity to User. It interacts with the user a lot more than it interacts with its own state. Except it does not really interact with the user; it uses it as dumb storage for data. Again, “code lives with data” is being violated. I believe that the method should be on the object with which it interacts the most, in this case User. So let’s have a look:

class User {
  String user;
  String password;
  boolean isAgent;
  boolean isSuperUser;
  String actingAsUser;

  Cookie login(Ldap ldap) {
    if (isSuperUser) {
      if ( ldap.auth(user, password) )
        return new Cookie(actingAsUser);
    } else if (isAgent) {
      return new Cookie(actingAsUser);
    } else {
      if ( ldap.auth(user, password) )
        return new Cookie(user);
    }
    return null;
  }
}

OK, we are making progress. Notice how the need for all of the getters has disappeared (and in this simplified example the need for the Authenticator class disappears as well), but there is still something wrong. The ifs branch on the internal state of the object. My guess is that this code-base is riddled with if (user.isSuperUser()). The issue is that if you add a new flag, you have to remember to change all of the ifs which are dispersed all over the code-base. Whenever I see an if or a switch on a flag, I can almost always tell that polymorphism is in order.

class User {
  String user;
  String password;

  Cookie login(Ldap ldap) {
    if ( ldap.auth(user, password) )
      return new Cookie(user);
    return null;
  }
}

class SuperUser extends User {
  String actingAsUser;

  Cookie login(Ldap ldap) {
    if ( ldap.auth(user, password) )
      return new Cookie(actingAsUser);
    return null;
  }
}

class AgentUser extends User {
  String actingAsUser;

  Cookie login(Ldap ldap) {
    return new Cookie(actingAsUser);
  }
}
Now that we have taken advantage of polymorphism, each kind of user knows how to log in, and we can easily add a new user type to the system. Also notice how the user no longer has all of the flag fields which were controlling the ifs to give the user different behavior. The ifs and the flags have disappeared.

Now this begs the question: should the User know about the Ldap? There are actually two questions in there: 1) should User have a field reference to Ldap? and 2) should User have a compile-time dependency on Ldap?

Should User have a field reference to Ldap? The answer is no, because you may want to serialize the User to the database, but you don’t want to serialize the Ldap. See here.

Should User have a compile-time dependency on Ldap? This is more complicated, but in general the answer depends on whether or not you are planning on reusing the User in a different project, since compile-time dependencies are transitive in strongly typed languages. My experience is that everyone always writes code as if they will reuse it one day, but that day never comes, and when it does, the code is usually entangled in other ways anyway, so code reuse after the fact just does not happen. (Developing a library is different, since there code reuse is an explicit goal.) My point is that a lot of people pay the price of “what if” but never get any benefit out of it. Therefore don’t worry about it and make the User depend on Ldap.


Software Testing Categorization

posted on July 14th, 2009 ·

You hear people talking about small/medium/large/unit/integration/functional/scenario tests but do most of us really know what is meant by that? Here is how I think about tests.

Unit Tests

Let’s start with the unit test. The best definition I can find is that it is a test which runs super-fast (under 1 ms) and, when it fails, you don’t need a debugger to figure out what is wrong. This has some implications. Under 1 ms means that your test cannot do any I/O. The reason this is important is that you want to run ALL (thousands) of your unit tests every time you modify anything, preferably on each save. My patience is two seconds max. In two seconds I want to be sure that all of my unit tests ran and nothing broke. This is a great world to be in, since if the tests go red you just hit Ctrl-Z a few times to undo what you have done and try again. The immediate feedback is addictive. Not needing a debugger implies that the test is localized (hence the word unit, as in a single class).

The purpose of the unit test is to check the conditional logic in your code, your ‘ifs’ and ‘loops’. This is where the majority of your bugs come from (see the theory of bugs). This is why, if you do no other testing, unit tests are the best bang for your buck! Unit tests also make sure that you have testable code. If you have unit-testable code, then all other testing levels will be testable as well.
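A unit test in the sense above can be as plain as this: no I/O, runs in well under a millisecond, and a failure points directly at the one class under test. Calculator and its test are hypothetical classes for illustration only:

```java
class Calculator {
  int divide(int a, int b) {
    if (b == 0) throw new IllegalArgumentException("division by zero");
    return a / b;
  }
}

class CalculatorTest {
  public static void main(String[] args) {
    Calculator calc = new Calculator();
    assertEquals(3, calc.divide(10, 3));   // integer division truncates
    assertEquals(-2, calc.divide(-4, 2));
    try {
      calc.divide(1, 0);
      throw new AssertionError("expected IllegalArgumentException");
    } catch (IllegalArgumentException expected) {
      // the 'if' branch is covered
    }
  }

  static void assertEquals(int expected, int actual) {
    if (expected != actual)
      throw new AssertionError("expected " + expected + " but got " + actual);
  }
}
```

Each assertion exercises one branch of the conditional logic, which is exactly what the unit level is for.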

Here is what I would consider a great unit test example from Testability Explorer. Notice how each test tells a story. It is not testMethodA, testMethodB, etc.; rather, each test is a scenario. Notice how at the beginning the tests are the normal operations you would expect, but as you get towards the bottom of the file the tests become a little stranger. That is because those are the weird corner cases which I discovered later. Now, the funny thing is that I had to rewrite this class three times, since I could not get it to work under all of the test cases. One of the tests was always failing, until I realized that my algorithm was fundamentally flawed. By this time I had most of the project working, and this is a key class in the byte-code analysis process. How would you feel about ripping something so fundamental out of your system and rewriting it from scratch? It took me two days to rewrite it until all of my tests passed again. After the rewrite the overall application still worked. This is where you have an Aha! moment, when you realize just how amazing unit tests are.

Does each class need a unit test? A qualified no. Many classes get tested indirectly when testing something else. Usually simple value objects do not have tests of their own. But don’t confuse not having tests with not having test coverage. All classes/methods should have test coverage. If you TDD, then this is automatic.

Functional Tests

So you have proved that each class works individually, but how do you know that they work together? For this we need to wire related classes together, just as they would be in production, and exercise some basic execution paths through them. The question we are trying to answer here is not whether the ‘ifs’ and ‘loops’ work (we have already answered that), but whether the interfaces between classes abide by their contracts. In a great functional test from Testability Explorer, the input of each test is an inner class in the test file, and to produce the output several classes have to collaborate: parse byte-codes, analyze code paths, and compute costs until the final cost numbers are asserted.

Many of the classes are tested twice: once directly through the unit tests described above, and once indirectly through the functional tests. If you removed the unit tests, I would still have high confidence that the functional tests would catch most changes which break things, but I would have no idea where to look for a fix, since the mistake could be in any class involved in the execution path. The no-debugger-needed rule is broken here. When a functional test fails (and no unit tests are failing), I am forced to take out my debugger. When I find the problem, I retroactively add a unit test to 1) prove to myself that I understand the bug and 2) prevent this bug from happening again. These retroactive unit tests are the reason why the unit tests at the end of the file are “strange,” for lack of a better word. They are things which I did not think of when I wrote the unit tests, but discovered when I wrote the functional tests, and tracked down, through many hours behind the debugger, to a single culprit class.

Now, computing metrics is just a small part of what Testability Explorer does (it also produces reports and suggestions), but those are not tested in this functional test (there are other functional tests for that). You can think of a functional test as covering a set of related classes which form a cohesive functional unit of the overall application. Here are some of the functional areas in Testability Explorer: Java byte-code parsing, Java source parsing, C++ parsing, cost analysis, three different kinds of reports, and the suggestion engine. Each of these has a unique set of related classes which work together and need to be tested together, but for the most part they are independent.

End-to-End Tests

We have proved that the ‘ifs’ and ‘loops’ work and that the contracts are compatible; what else can we test? There is still one class of mistake we can make: you can wire the whole thing up wrong. For example, passing in null instead of a report, not configuring the location of the jar file for parsing, and so on. These are not logical bugs, but wiring bugs. Luckily, wiring bugs have the nice property that they fail consistently and usually spectacularly, with an exception. Notice how end-to-end tests exercise the whole application and do not assert much. What is there to assert? We have already proven that everything works; we just want to make sure that it is wired properly.


Computer Engineer vs. Computer Scientist

posted on July 11th, 2009 ·

Which one are you? I am an engineer. But maybe we should first define the differences between the two. An engineer cares about how the system is put together, whereas a computer scientist cares about how it works. Do you care about the technology or the algorithm?

The Interview

Since I am an engineer, these are the kinds of questions I ask in an interview:

  • Imagine you are an evil developer, how do you make code hard to test?
  • How would you store a math expression tree such that you would have an evaluate() method and a toString() method which prints the tree in infix notation, placing parentheses only when necessary?
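One possible answer to the expression-tree question, sketched under my own assumptions (the Num/Add/Mul names and the precedence scheme are invented here): parentheses are emitted only when a child binds less tightly than its parent.

```java
abstract class Expr {
  abstract int evaluate();
  abstract int precedence();
  public abstract String toString();

  // Parenthesize a child only when it binds less tightly than we do.
  String child(Expr c) {
    return c.precedence() < precedence() ? "(" + c + ")" : c.toString();
  }
}

class Num extends Expr {
  final int value;
  Num(int value) { this.value = value; }
  int evaluate() { return value; }
  int precedence() { return 3; }
  public String toString() { return Integer.toString(value); }
}

class Add extends Expr {
  final Expr left, right;
  Add(Expr left, Expr right) { this.left = left; this.right = right; }
  int evaluate() { return left.evaluate() + right.evaluate(); }
  int precedence() { return 1; }
  public String toString() { return child(left) + " + " + child(right); }
}

class Mul extends Expr {
  final Expr left, right;
  Mul(Expr left, Expr right) { this.left = left; this.right = right; }
  int evaluate() { return left.evaluate() * right.evaluate(); }
  int precedence() { return 2; }
  public String toString() { return child(left) + " * " + child(right); }
}
```

So new Mul(new Add(new Num(1), new Num(2)), new Num(3)) prints as "(1 + 2) * 3" and evaluates to 9. Note the answer is a set of small collaborating classes, not one clever method, which is the point of asking it.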

On the other hand, a computer scientist asks these “algorithmic” questions:

  • Write a merge sort method which merges ‘n’ files into one.
  • Write a method to reverse a string in place.

The solution to the engineer’s question is a set of classes interacting to define a system, whereas the solution to a CS question is a single method doing something complex. Now, I don’t know about you, but when was the last time your application consisted of a single method? When was the last time your system broke because someone did not know how to write one of these algorithmic methods? When was the last time your system broke because the interaction between classes became too complex? I believe that most people are asking the wrong questions in interviews.
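To make the contrast concrete, here is a sketch of an answer to one question of each kind (my own illustrative code, not from the post). The engineer’s question yields a small system of interacting classes; the CS question yields one self-contained method.

```java
// Engineer's question: an expression tree with evaluate() and a toString()
// that parenthesizes only when necessary. The answer is a *set of classes*.
interface Expr {
    double evaluate();
    String toString(int parentPrecedence);
}

class Num implements Expr {
    private final double value;
    Num(double value) { this.value = value; }
    public double evaluate() { return value; }
    public String toString(int parentPrecedence) { return String.valueOf(value); }
    public String toString() { return toString(0); }
}

class BinOp implements Expr {
    private final char op;
    private final int precedence; // '+','-' bind at 1; '*','/' at 2
    private final Expr left, right;
    BinOp(char op, Expr left, Expr right) {
        this.op = op;
        this.precedence = (op == '+' || op == '-') ? 1 : 2;
        this.left = left;
        this.right = right;
    }
    public double evaluate() {
        double l = left.evaluate(), r = right.evaluate();
        switch (op) {
            case '+': return l + r;
            case '-': return l - r;
            case '*': return l * r;
            default:  return l / r;
        }
    }
    public String toString(int parentPrecedence) {
        // The right child needs parentheses at *equal* precedence for the
        // non-associative operators, e.g. 1 - (2 - 3).
        int rightPrec = (op == '-' || op == '/') ? precedence + 1 : precedence;
        String s = left.toString(precedence) + " " + op + " " + right.toString(rightPrec);
        return parentPrecedence > precedence ? "(" + s + ")" : s;
    }
    public String toString() { return toString(0); }
}

// CS question: reverse a string in place -- a *single method* (Java Strings
// are immutable, so the interviewer usually hands you a char[]).
class Reverser {
    static void reverse(char[] s) {
        for (int i = 0, j = s.length - 1; i < j; i++, j--) {
            char tmp = s[i]; s[i] = s[j]; s[j] = tmp;
        }
    }
}
```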


Now, there is a time and place when an algorithm is exactly what the doctor ordered: video/voice codecs, image compression, revision control, database optimization, and so on. But notice that all of these are nothing without some “application” around them. A video codec is nothing without YouTube around it, and a DB engine is nothing without the countless utilities and database drivers which go with it. Chances are, when you are building an application, you will fail because your class interactions got out of hand, not because you did not know how to solve some algorithmic problem. Libraries have already been written for most (80%) of the algorithmic problems out there. So why insist that an interview candidate knows how to sort integers? Is that what he/she will be doing all day long while building your web application?

Engineer vs CS

The thing is, you need both, and preferably in one person. It is also true that no one is 100% engineer or 100% CS; there is a blend in all of us. Algorithms require raw brain power, whereas engineering requires prior knowledge (best practices) which you cannot get by thinking about the problem harder. You can only get “engineering” knowledge by learning from others. I have seen too many people focus too much on brain power alone. We hire young kids and we let them loose on a code-base, and then we wonder why we can’t change anything because everything is tightly coupled. Hint: it is not because they are stupid.

The Reality

The truth is, in the ten years or so I have been out of school I have been fighting bad code every step of the way. I can’t remember the last time I had to solve an algorithmic problem. The most complex, algorithm-intensive code I have written was Testability Explorer, a byte-code analysis engine which tries to determine how testable your code is. I wrote it in my free time, and the thing which enabled me to write such a complex “algorithm” was unit tests, and lots of them! I had to rely on best engineering practices to build something which is algorithm-intensive. In other words, it is my engineering background, not my CS, which made it possible. I think CS is necessary but insufficient.

Going Forward

I would love for the industry to reflect on itself and realize that its problem is not a lack of CS majors, or a lack of bright people, but a lack of good old-fashioned engineers. Then maybe we would start asking a lot more questions to which the answer is not a single method, but a collection of classes interacting together. It took a handful of scientists to come up with the atom bomb, but it took countless engineers to work out everything else, and software is no different. You need a handful of computer scientists, and an army of engineers.

→ 39 Comments · Tags: Uncategorized

Why are we embarrassed to admit that we don’t know how to write tests?

posted on July 7th, 2009 ·

Take your average developer and ask “do you know language/technology X?” None of us will feel any shame in admitting that we do not know X. After all, there are so many languages, frameworks and technologies; how could you know them all? But what if X is writing testable code? Somehow we have trouble answering the question “do you know how to write tests?” Everyone says yes, whether or not we actually know how. It is as if there is some shame in admitting that you don’t know how to write tests.

Now I am not suggesting that people knowingly lie here; it is just that they think there is nothing to it. We think: I know how to write code, I think my code is pretty good, therefore my code is testable!

I personally think that we would do a lot better if we recognized testability as a skill in its own right. And such skills are not innate; they take years of practice to develop. We could then treat it as any other skill, freely admit that we don’t know it, and do something about it. We could offer classes or other materials to grow our developers, but instead we treat it like breathing. We assume that any developer can write testable code.

It took me two years of writing tests first, with as many tests as production code, before I started to understand the difference between testable and hard-to-test code. Ask yourself: how long have you been writing tests? What percentage of the code you write is tests?

Here is a question which you can ask to prove my point: “How do you write hard-to-test code?” I like to ask this question in interviews, and most of the time I get silence. Sometimes I get people to say: make things private. Well, if visibility is your only problem, I have a RegExp for you which will solve all of your problems. The truth is a lot more complicated: code is hard to test due to its structure, not due to its naming conventions or visibility. Do you know the answer?
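As a minimal illustration of the structural point (my own example, not from the post): here is the same logic written twice, once hard to test and once easy, with no visibility modifiers involved at all.

```java
class Clock {
    static Clock instance = new Clock();             // global singleton
    long now() { return System.currentTimeMillis(); }
}

class HardToTest {
    boolean isExpired(long deadline) {
        // Reaches out for the singleton internally: a test cannot
        // control what "now" is without patching global state.
        return Clock.instance.now() > deadline;
    }
}

class EasyToTest {
    private final Clock clock;
    EasyToTest(Clock clock) { this.clock = clock; }  // dependency is injected
    boolean isExpired(long deadline) {
        return clock.now() > deadline;
    }
}
```

A test can hand EasyToTest a fake clock and pin “now” to any value; HardToTest offers no such seam, even though every member is perfectly visible.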

We all start at the same place. When I first heard about testing, I immediately thought about writing a framework which would pretend to be a user so that I could put the app through its paces. It is only natural to think this way. These kinds of tests are called end-to-end tests (or scenario tests, or large tests), and they should be the last kind of tests you write, not the first thing you think of. End-to-end tests are great for locating wiring bugs but pretty bad at locating logical bugs, and most of your mistakes are logical bugs; those are the hard ones to find. I find it a bit amusing that to fight buggy code we write an even more complex framework which pretends to be the user, so now we have even more code to test.

Everyone is in search of some magic test framework, technology, or know-how which will solve the testing woes. Well, I have news for you: there is no such thing. The secret of testing is in writing testable code, not in knowing some magic on the testing side. And it certainly is not in some company which will sell you a test-automation framework. Let me make this super clear: the secret of testing is in writing testable code! You need to go after your developers, not your test organization.

Now let’s think about this. Most organizations have developers who write code and then a test organization to test it. So let me make sure I understand: there is a group of people who write untestable code, and a group which desperately tries to put tests around that untestable code. (Oh, and the test group is not allowed to change the production code.) The developers are where the mistakes are made, and the testers are the ones who feel the pain. Do you think that the developers have any incentive to change their behavior if they don’t feel the pain of their mistakes? Can the test organization be effective if it can’t change the production code?

It is so easy to hide behind a “framework” which needs to be built or bought and believe that things will get better. But the root cause is the untestable code, and until we learn to admit that we don’t know how to write testable code, nothing is going to change…

→ 17 Comments · Tags: Uncategorized

ActiveRecord is hard to test

posted on June 29th, 2009 ·

Reader asks:

I have a question regarding testing of Active Record style domain models.

The problem is: client code of these objects will instantiate them directly, modify some attributes and then invoke a save() or insert() method directly on the object. Writing a unit test that doesn’t talk to the database is difficult, since the save/insert methods are inherited from a superclass which handles the persistence using an object responsible for database access.

As I see it, the problem is that Active Record style domains cross the boundary between what you described as Newable/Injectable. The “model” part is newable, but the data access part is (ideally) injectable.

The challenge I’ve had testing is finding a suitable point at which to mock (or stub) the database access. I thought of replacing the new CustomerModel() (for example) with a call to a factory that would be provided using dependency injection, such as factory.createCustomerModel(), but this feels like a bit of a hack, and indeed you’ve warned against this directly in some of your posts.

The other option would be to define something static on the model superclass to allow the default database object to be replaced before the test runs the client code. Again though, this doesn’t “feel” right to me.

What would you suggest?

I am assuming that you have read my writing on newables versus injectables. The issue here is that the Record should be newable, and as such it should have no reference to service objects. In other words, the save method needs to move someplace else. A good place for it to move to is a repository. I really like the Hibernate model, as it makes the repository a “watcher” which watches your newable objects and then saves them automatically when the transaction is up.

Here is an example:

CarRepository carRepo = ... inject me...;
Car car = new Car();

No problems here. In your test, just inject an InMemoryCarRepository. The cool thing is that most of your business code should only deal with the Car; only very few places need to know about the CarRepository. Here is something else which is cool:

CarRepository carRepo = ... inject me...;
Car car = carRepo.getById(123);

enter business logic...
exit business logic...


Since the CarRepository knows about all of the Cars it has given out, calling carRepo.commit() automatically saves all of the cars. Since most of your app just modifies the entities, most of it does not even need to know about the CarRepository. Only classes which need to create or delete a Car need a reference to the CarRepository.

From a testing point of view this is great, since most of my unit tests will be testing code from “enter business logic…” until “exit business logic…”, which means most of my tests will not even deal with the CarRepository but only have to deal with a Car.
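For the record, a compilable sketch of the idea (the post only names Car and CarRepository; InMemoryCarRepository, the fields, and the method signatures are my own assumptions):

```java
import java.util.HashMap;
import java.util.Map;

// The Car stays newable: plain state, no reference to service objects.
class Car {
    String owner;
    Car(String owner) { this.owner = owner; }
}

interface CarRepository {
    Car getById(int id);
    int save(Car car);   // returns the assigned id
    void commit();       // Hibernate-style: flush every Car handed out
}

// In production you would inject a database-backed implementation;
// in tests this in-memory fake keeps the database out of the picture.
class InMemoryCarRepository implements CarRepository {
    private final Map<Integer, Car> cars = new HashMap<>();
    private int nextId = 1;
    public Car getById(int id) { return cars.get(id); }
    public int save(Car car) { cars.put(nextId, car); return nextId++; }
    public void commit() { /* nothing to flush in memory */ }
}
```

The business logic between getById and commit never sees the repository; it only manipulates the Car, which is exactly what makes it trivial to unit-test.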

→ 1 Comment · Tags: Uncategorized

What Pair-Programming is Not

posted on June 12th, 2009 ·

People often ask: how can I justify two people when one will do? Will this not just double my cost? To answer this question I think it is important to discuss what pair-programming is not.

How much of your time do you spend writing code? If you answer 100%, then you are lying! Between checking your email, going to meetings, talking to people, mentoring others, finding bugs and thinking about how to fix them, coding is something we do surprisingly little of. I only code about 10% of the time, but I am probably at an extreme, so let’s say you code 25% of the time. Even when you are coding, you spend a lot of time testing what you have just written, and realizing that what you have just done does not work. Your end product may be 100 lines of code, but it may take you weeks to get there.

Out of that 25% of coding, how much do I pair? Not enough, but let’s say 25% of the time. That works out to about 6% of my overall time spent pairing, so you could say that pairing costs me only about 6% more (if you can apply accounting logic to coding). But what does this extra “cost” buy me?

At most places I have been, when a new developer shows up they hand him a computer and say, “OK, get up to speed and build feature X.” Then they do not expect anything out of the developer for a few months while he “gets up to speed.” A few months? I expect my developers to check in a feature the first week! But you can’t just demand it; you have to approach it differently.

Let me tell you about Arthur, the last developer who joined us for a month, and how productive he was. Arthur showed up on Monday at 9 am. I sat down with him behind his computer and said, “Let’s get you up to speed.” I made sure that he was driving and I was talking, and we started installing all of the software he needed to develop: the compilers, the database, the IDE, checking out the code from the repository and running it. The key here is that Arthur was doing all of the work and I was just telling him what to do. By noon he was up and running, thanks to very intensive hand-holding from me. We went to lunch and talked about the application, not in general terms but in specific terms which Arthur could relate to what he had done a few minutes ago.

After lunch, I said: let’s go fix a bug. I picked a simple bug, and for the most part I was again doing most of the talking, but I made sure that Arthur had the keyboard and understood what he was doing and why. By 3 pm we had a fix. We ran all of the tests and submitted the code. We watched the build to make sure we did not break anything.

This is pairing, and it is exhausting! By 3 pm I was beat and went home, but Arthur stayed to play some more with the code. It turns out he fixed another bug on his own before he went home. Not bad for a first day. Now, I could have fixed both bugs myself in an hour flat, but then Arthur would not have learned anything. I sacrificed my productivity to make Arthur productive in a single day. If I had not, it would have taken Arthur weeks to figure out how to set everything up, learn how things worked, and gather enough courage to fix a bug. Yet that is exactly what most companies do. Think about the confidence Arthur had on day one working with us. He was up and ready, and he fixed two bugs on day one. WOW!

More important is what we did not do. We did not get distracted by checking email. How could we? We had a single monitor, which means Arthur would have been reading my email. Arthur did not waste time looking for documentation which is poor or does not exist; he was spoon-fed the knowledge in the most efficient way, and he did it all by himself. I was just talking; he was typing. It is amazing how much more focused you are when you have two people working on something.

The next day we paired some more, but this time we started building a new feature. Again I took Arthur through the implementation, making sure that he did all of the work. By the end of the week Arthur had implemented several features, and the amount of pairing we did had dropped off significantly; instead, Arthur started pairing with other engineers who had the know-how he needed to get his features done.

At the simplest level, pairing is helping people one-on-one: not through emails, not by telling you in the abstract, but by showing you behind a keyboard. The amount of pairing and keeping each other on track depends on many things, but you will find that the more you pair, the more you will learn from each other and the better the code becomes.

But Arthur was a new engineer coming up to speed; what about existing engineers? Well, let me tell you the story of Pepper, the senior tech lead on a project. Pepper came to me and said: Misko, I don’t like the way we deal with X in our app; what could we do to make it better? Now here is a case where Pepper had all of the know-how of his app, and I knew next to nothing. So we sat behind the code and I started writing what I imagined an ideal app would look like, knowing very little about the code. Pepper had to explain why my view was wrong or too simplistic, and in the process we learned from each other and slowly converged on an ideal design, which was neither his nor my idea wholly, but a collaboration of our ideas. We started refactoring; sometimes I would drive and sometimes he would. We would take turns painting ourselves into a corner and then backing out. Many times Pepper would work without me, as the refactoring was tedious but straightforward and two of us were not needed. Altogether it took us two weeks to finish this refactoring. Pepper did the lion’s share of the work, but we spent about three full days behind a single computer together. That is what pairing is.

If you asked Pepper, he would tell you that pairing helped him focus. He would say that without the pair he would not have had many of the ideas which we ended up having. Most importantly, there are now two people who really understand the details of the design. When I or Pepper pair with someone else, we will share these new ideas with the new developers. As the team pairs with each other, ideas and knowledge multiply and flow, making sure that the code-base remains consistent no matter where you look. It is as if a single person had written it. This is a great thing which helps with maintenance.

Pairing does not mean that two people always do everything together and check each other’s work; that would be boring and a waste of time. Pairing means that people bounce ideas off each other and have discussions, not in the abstract but behind a keyboard, and keep each other focused and on track. The result is code which is better than the sum of its parts. Pairing is about sharing knowledge with your teammates so that everyone is on the same page. We will go further if we make sure that all of us pull in the same direction, something which is rare in the industry at large.

→ 37 Comments · Tags: Uncategorized