Conducting performance evaluations on individual software developers
can prove a daunting challenge for even the most experienced manager. Unless
you’re working in some sort of Utopian organization where you have perfect
processes and perfect requirements, measuring the performance of your
development staff is likely to be very subjective. A key stumbling block is the
fact that the success of a software developer is often dependent on many variables
outside of their control. Som
Image via Wikipedia
e of those variables include things like the
quality of the software requirements; the level of collaboration from the
business sponsor; the availability effective development environment; etc. Yet,
every sizable software development shop seems to both their star players and those
that seem to fall short.
So, how do you measure their performance? There is no single answer, and one of the first things you should do is stop by your friendly Human Resources department for some advice. If you don’t have an HR group, do a little research and reading online. You’ll find there is no shortage of opinions on this matter, from folks who want to measure lines of code per hour to those who claim performance can’t be adequately measured.
Here are a few of my own thoughts on the matter:
- Subjectivity is part of the equation – If you
were hoping for a purely objective way of measuring a developer’s performance,
forget it! Programming is both science and art, and it’s the artistic quality
that will always be subjective. That’s OK! If you’re a manager, you should
understand that the developer won’t necessarily agree with your assessment. However, you
do need to be able to express what you like and don’t like about their work in
a way that lets them take corrective action.
- There are things you can measure – It’s not all
art! For example, if you set an expectation that developers will unit test
their code, that’s a pretty black-and-white measure. They either did it or
they didn’t. As to the adequacy and quality of their testing, well, that’s a
different story – subjectivity again. Did they complete any required
documentation (e.g., design docs)? Hard deliverables, such as documentation, are
easy things to measure. (I’ve included a rough sketch of one such check right
after this list.)
- Setting clear expectations is critical – If you
require your developers to do unit testing or prepare certain documentation,
then tell them! Better yet, put it down on paper and share your expectations
with the entire team. Make sure your team understands exactly what you are
going to measure and how you are going to measure it.
- Provide near real-time feedback and document it –
When an employee performs well, tell them! When they don’t perform well,
tell them! Simple, huh? I suggest, particularly when correcting negative
performance, that you document things using e-mail or some other written
mechanism. If you have an individual with chronic performance issues, written
documentation is essential if you ever reach the point of making a termination
decision. I also can’t stress enough how important the “real-time feedback”
aspect is in this process. If you wait more than about 24 hours to provide
feedback, its effectiveness is greatly reduced.
- In the end, measuring whether a developer is
able to consistently deliver quality work on time is essential. The trick is:
how do you hold a developer accountable for what is essentially an estimated
level of effort? It’s an “estimate,” after all! It’s a tough problem, and the
key is to break things down to the point where most of the variables are in the
control of the developer. If you’re measuring a developer on completion of tasks
estimated to take more than about two or three days, you’re going to run into
problems. Generally, the longer the duration, the more uncontrollable variables
come into play – everything from dependencies on other developers to shifting
requirements. One of the things I like about most Agile development methodologies
is the way the team breaks the work down into really small chunks,
typically measured in days or hours. If a developer says they are going to
finish implementing class XYZ by tomorrow, that’s something I can measure them
on, and I can reasonably expect the outcome to be in their control. True, there
are things that can still go wrong that aren’t their fault, but those are
generally easy to identify and understand (e.g., their workstation crashed).
(The second sketch after this list shows one way to score this.)
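
To make the black-and-white checks concrete, here’s a minimal sketch of the kind of automated “did they unit test it?” check I have in mind. It assumes a hypothetical project layout where source modules live in src/ and tests follow a test_<module>.py naming convention under tests/ – your conventions will almost certainly differ, so treat this as an illustration, not a prescription:

```python
from pathlib import Path

def untested_modules(src_dir: str, test_dir: str) -> list[str]:
    """Return source modules that have no matching test_<module>.py file."""
    sources = {p.stem for p in Path(src_dir).glob("*.py")}
    tested = {p.stem.removeprefix("test_") for p in Path(test_dir).glob("test_*.py")}
    return sorted(sources - tested)

# Anything printed here failed the binary did-they-unit-test check.
# (Whether the tests are any *good* is the subjective part.)
print(untested_modules("src", "tests"))
```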
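
And here’s a second minimal sketch, scoring on-time delivery for small tasks only, per the reasoning in the last bullet. The Task shape and the three-day cutoff are assumptions for illustration; in practice the data would come from whatever tracker your team uses:

```python
from dataclasses import dataclass

@dataclass
class Task:
    developer: str
    estimated_days: float
    delivered_on_time: bool

def on_time_rate(tasks: list[Task], dev: str, max_days: float = 3.0) -> float:
    """Share of a developer's short tasks (estimate <= max_days) delivered on time.

    Longer tasks are excluded from the score because too many of the
    variables are outside the developer's control.
    """
    scored = [t for t in tasks if t.developer == dev and t.estimated_days <= max_days]
    if not scored:
        return float("nan")  # nothing small enough to score fairly
    return sum(t.delivered_on_time for t in scored) / len(scored)

tasks = [
    Task("alice", 1.0, True),
    Task("alice", 2.5, False),
    Task("alice", 10.0, False),  # estimate too big -- excluded from the score
]
print(on_time_rate(tasks, "alice"))  # 0.5
```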
If you have some success stories or challenges you want
to share related to measuring software developer performance, I’d love to hear
from you! It’s a tough subject, with no shortage of opinions!