Yes, but did it work?
There’s a subreddit called “ATBGE” which stands for “Awful Taste… But Great Execution!”
What lurks within is a lasting testament to the possibility of perfectly implementing a thing of no value. Or worse.
Worse such as when a design, despite flawless execution, actively goes against the purpose of the thing to which it’s being applied. Take, for example, this distinctly unappetising cockroach latte.
Fair warning: you may not want to browse the ATBGE subreddit at work, because the tawdry tends to go hand in hand with the bawdy.
But here’s another example that’s mostly offensive aesthetically: Cobra Cowboy Boots.
It’s not like snakeskin boots aren’t already a bold choice, but sure, let’s add exaggerated poulaines made out of the heads of those snakes. What could go wrong?
Just think about the human ingenuity and skill that went into their creation. I bet there isn’t a stitch out of place.
And yet, on the other hand, yuck.
So, following on from my last post about the two different kinds of Approval in Change and Release Management, this week I’m going to reflect on the two different types of Evaluation.
The thing is, when we look at changes we’ve implemented, we naturally tend to concentrate on the success of the execution. After all, we do make a big deal about reducing the risk of the execution; that’s the primary purpose of the weekly Go CAB. So, we tend to stay in the same mindset when we evaluate the success of planned changes.
Could it have gone wrong? Yes. Did it go wrong? No. Therefore, success.
And that is indeed one measure of success: did it go wrong when we did it?
What we learn from measuring this mostly helps us improve future releases, or identify problem hotspots that need to be addressed. We do need to do those things, and our Release policy has an “Adverse Releases” section for exactly that purpose.
But tracking and managing Adverse Releases is probably of more use to the Change and Release process than it is to our Service Owners… because it has nothing to say on the value of making those changes in the first place.
And on the level of the individual change, isn’t that the most important thing? Whether or not we should have done it in the first place?
That’s kind of the lesson from ATBGE… that some things should not have been done, regardless of how well they were executed.
To understand whether a thing should have been done, we need to measure different criteria from whether it was executed well. Instead, we need to understand how well it delivered on the reasons we gave for doing it in the first place.
Admittedly, that question is harder to answer when we didn’t have good reasons to begin with, and the risk of discovering that can be off-putting.
But it’s really not a bad place to start shining a light. Sometimes we are too eager to please: we are asked to do something, and we go ahead and figure out how, without spending enough time wondering whether we should. In practice, any time we do something significant we should be willing to think about whether it’s worth it.
The process of approving the Change should be when we check if we think it is worth doing before we action it.
The evaluation at the end of the Change process should be the opportunity we take to check whether or not we were right.
Hopefully, by building maturity here we can create a virtuous cycle: we start to think about our Change decisions in terms of how we would measure their success, because nothing makes evaluation easier than a set of well-expressed success criteria. And meanwhile, the mental shift to thinking in terms of service value builds maturity in turn.
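To make the two evaluations concrete, here is a minimal sketch of what a change record with explicit success criteria might look like. All names here are hypothetical illustrations, not from any real ITSM tool: one flag captures “did it go wrong when we did it?”, and a list of up-front criteria captures “did it deliver the value we claimed?”

```python
from dataclasses import dataclass, field

# Hypothetical sketch -- these names are illustrative, not a real ITSM schema.
@dataclass
class ChangeRecord:
    summary: str
    reason: str                     # why we said the change was worth doing
    success_criteria: list[str]     # how we'd know it delivered that value
    executed_cleanly: bool = False  # "Did it go wrong when we did it?"
    criteria_met: list[str] = field(default_factory=list)

def evaluate(change: ChangeRecord) -> dict[str, bool]:
    """Report the two evaluations separately: execution vs. value delivered."""
    return {
        "execution_ok": change.executed_cleanly,
        # Value is delivered only if we stated criteria AND met all of them.
        "value_delivered": bool(change.success_criteria)
            and set(change.success_criteria) <= set(change.criteria_met),
    }

change = ChangeRecord(
    summary="Move reporting DB to faster storage",
    reason="Month-end reports time out for finance users",
    success_criteria=["month-end report completes in under 5 minutes"],
    executed_cleanly=True,   # nothing went wrong on the night...
    criteria_met=[],         # ...but the reports still time out
)
result = evaluate(change)
# A change can execute perfectly and still fail to deliver value:
# result == {"execution_ok": True, "value_delivered": False}
```

The point of the shape is that neither flag can stand in for the other: a clean execution with empty `criteria_met` is exactly the ATBGE case, great execution of something that delivered nothing.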
Either way, it’s got to be better than a cockroach latte.