Overindexing on measurement of engineering output
One naive mistake I see a lot of people make is trying to optimise for output metrics like Jira story points. Even worse: trying to measure notoriously hard-to-measure metrics like the lead time and cycle time of projects.
How do I know this? Because I was one of these idiots.
For a few months, I tried my hardest to improve sprint velocity per developer at my company. My rationale was that doing this would motivate people, and they would try to improve their scores.
Then one developer came and asked how it was fair that we were not counting the hours they spent interviewing candidates. If this was the most important metric to optimise, then they should be allowed to add interviewing tasks and earn points for those. Then another developer complained that they spent a lot of time reviewing PRs raised by other teams. How was it fair that they spent their time doing that and earned no points for it? Their points-per-sprint metric suffered. I said sure, let's measure time spent on that too.
After a few weeks, instead of morale going up, we realised we were just arguing about how to actually measure sprint velocity per developer. Forget improving it; measuring it turned out to be the hardest problem. Then QAs and designers started complaining that their output did not get measured at all. Why was that? Were they not important enough?
So we stopped this exercise.
A year later, someone proposed measuring lead time and cycle time. Management felt that teams were moving slower, and that measuring these metrics would bring more transparency to team output and push people to improve.
You might have already guessed the problem with this too.
It is fucking hard to measure these metrics. Most startups work in a chaotic manner. It is not the waterfall model. It is not like manufacturing a vehicle, where there are strict sequential steps.
Also, even if you do end up defining the steps, what is stopping PMs from gaming the metric?
Oh, cycle time needs to be smaller? Let us push the release without proper QA.
Oh, lead time has gone up over time? Let us just not update the status of the project card on Asana, and mark it as picked up much later than when it actually was.
What gets measured gets managed - we all know this.
This reminds me of the time when hosting events and interviewing candidates used to be measured at the same company, and people would get points in their performance reviews for doing these activities. When I joined the company, there was one event after another. We got the best speakers. Everyone got treated to free pizza.
Then one day leadership decided that these two dimensions would no longer be considered in performance management. Since performance scores directly impacted compensation and levelling, everyone soon stopped optimising for these activities. Over the next few months the number of events dropped drastically, and TAs now had to fight to get people onto interview panels.
People underestimate how much incentives shape corporate behaviour. Maybe they think they are above them. Or they don't want to accept that all of us are the same: chasing the same shiny things in life.