
Telling the Story: The Pitfall of a Single Data Point

Let's say that you're sitting down to read a new book, and you come across the following:

The King's Knave Inn was but a short distance from the Alverston train depot, just outside the town proper. (excerpt from The Infernal Machine by John Lutz)

Just after you read this, a friend interrupts and asks for your thoughts on the book so far. What would you say?

Most likely, you'd respond that you need to read more, and it's still too early to decide if the book is good or not.

To honestly answer this question, you would need to read more of the book (ideally all of it) to get a full picture of the story.

When measuring an engineer's performance and effectiveness, why don't we take the same approach?

My experience has been that leaders look for one or more metrics to quantify a person. On the face of it, I understand why, as it's hard to compare people without numbers.

However, the mistake I see leaders make is in what they choose to measure. For example, do you measure the number of pull requests? What about the number of stories completed in a sprint? How about the number of bugs shipped to production? Something else entirely?

The problem is that even if you use all of the above (please don't do this), you're still not seeing the whole picture, but only bits and pieces. This would be like reading five chapters at random from a book and then giving an opinion.

The other problem with metrics is that they cease to be effective once people start gaming the system (see Goodhart's Law).

For example, if we measure effectiveness by the number of completed pull requests, then what stops someone from creating hundreds of single-line pull requests that don't accomplish anything?

On the other hand, what about the engineer who reduced scope and time because they knew how to simplify the approach or came up with a more straightforward solution? This insight won't show up as a pull request or a completed story; however, it should be rewarded just the same.

To really determine how effective someone is, we need to look at things holistically, which can be done by examining how well someone does in these three areas:

  • Understanding the problem (e.g., why are we doing this?)
  • Understanding the system (e.g., how are we doing this?)
  • Understanding the people (e.g., whom are we doing this with?)

By looking into these areas, you will see what your team is good at and where they could use coaching, helping you be more effective. You might also realize that your team is doing things that aren't so obvious.

You can't run a report to generate these metrics. To assess these areas, you have to understand your team and how they work together. This involves paying attention, taking notes, and staying engaged. Passive leaders will struggle with this approach.

Understanding the Problem (The Why)

To be successful, we first have to understand the problem that's being solved. Without this base knowledge, it's impossible to build the right solution or even ask the right questions about the problem at hand.

How comfortable is the engineer in the problem domain? Do they know the terminology, our customers, the users, the workflows, and the expected behaviors?

Short of quizzing them, there may not be an obvious way to measure this; here's my approach.

First, look at the questions being asked. Are they surface-level, or do they go deep? You can see these questions in chats and meetings, in comments on stories and pull requests, and in interactions with others.

Second, look at the solution they came up with. Did they design it with domain knowledge in mind? For example, are things named correctly? Did their solution take care of the main workflows? What about the edge cases?

Third, how are they handling support issues? Being on support is a quick way of learning a problem domain and system. As such, I'm looking at how much help they need and how they communicate with others.

By using this approach, you can get a good sense of how knowledgeable someone is in the problem domain without quizzing them.

Understanding the System (The How)

There's always a push to deliver more, and to do that, we have to understand the current system, its limitations, and what's easy versus what's hard, and then, from these constraints, determine the correct path to take.

In addition, once the system is live, we need to support it. If we don't know the moving parts, what it interacts with, and how it's used, we're going to have a bad time.

As with the problem domain, we can measure system knowledge without quizzing anyone. In particular, I've found pull request comments and code reviews to be insightful indicators of someone's knowledge of the system.

For example, do they call out that something in the system already provides this new piece of functionality? Do they suggest taking a simpler approach with what we have? Do they propose a different solution altogether because the system has a limitation? All of these are indicators of someone's system knowledge.

Another way to gauge system knowledge is by looking at how the person handles support requests. If you can understand the problem, find the cause, and create a fix, then by definition, you have to have a solid understanding of the system.

Understanding the People (The Who)

When it comes to the third part of being effective, we have to measure how they work with those around them. Most people think engineering is a solitary line of work, and that can be true when it comes to the development phase.

However, in reality, engineers work with others to design, develop, and iterate on a solution, and building these relationships is paramount to being successful.

If you want to go fast, go alone. If you want to go far, go together.

Measuring team cohesion can be difficult (it could be its own post); however, we can start simply by getting peer feedback on the person. We can also look at the communication between them and others through their comments, messages, or meetings.

Another way to measure this is through your company's recognition system. Whether it's an email or some other tool your company uses, you need to keep tabs on these recognitions, as you can use them as a talking point during 1:1s and review time.

Wrapping Up

So, how do we measure how effective someone is? We know that a single data point isn't sufficient and that if we limit ourselves to metrics, we can get a skewed sense of the person. To know, we have to take a holistic approach.

To accomplish this goal, I recommend measuring the following areas:

  • Understanding the problem (e.g., why are we doing this?)
  • Understanding the system (e.g., how are we doing this?)
  • Understanding the people (e.g., whom are we doing this with?)

In each of these areas, we can get a sense by observing the interactions they have, the questions they ask, the approaches they take, and how much others want to work with them.