Unknown measures
I read the Government Communication Service's Evaluation Cycle, so you don't have to. TL;DR: it's good but perhaps a little overcomplicated.
I hope you enjoyed our brief fling with "fake Spring". The daffodils, blossom and some clement weather made me think winter was almost over. Wrong. Winter is like the villain of a horror movie - always coming back for at least one more scare. For a few blissful years, I could easily handle the early March freeze, knowing I'd be escaping to warmer climes with a trip to Austin, Texas and some SXSW action. Sadly, the terms of my gardening leave don't extend to all-expenses-paid trips to big tech conferences. Sad.
During my three-year stint visiting SXSW Interactive, I'd always look out for sessions on measurement. Partly because I love a bit of measurement chat and partly because measurement is something I always assumed other people were doing better than me. But in those three years at SXSW, and at any other conference I've attended, I never heard anything revolutionary. Everyone else follows roughly the same model:
Track what you did (three videos, five social images, media kit, CEO post on LinkedIn, etc.)
Track the comms metrics generated (pieces of coverage, impressions, engagement rate, whatever your soft metric poison)
Finally, and most importantly, measure what impact that had on the business (sales, NPS, favourable regulatory environment, etc.)
It sounds simple - and starting from a simple base with measurement is often necessary because any form of evaluation has a magnetic attraction to complexity and obfuscation. More metrics, more KPIs, more EVERYTHING.
That need for more often stems from a desire for universality. The more teams and disciplines your framework needs to cater for, the more complex it inevitably becomes. But that doesn't have to be the case. An example of this "broad but usable" approach to measurement can be found in the Government Communication Service's "Evaluation Cycle" document.
At its core, the GCS Evaluation Cycle has six (6!) stages. Like any good consultant, I generally do things in threes - but at least six is two threes.
Inputs: insight, planning and asset creation
Outputs: what your audience sees (this is a much better summary of outputs than my waffle above)
Outtakes: what your audience thinks and feels as a result of your comms activity
Outcomes: what your audience does as a result of said comms activity
Impact: what you deliver vs objectives
Learning and innovation: what you will do differently next time
There's overlap here with the classic three-stage measurement approach. Three of the stages even share the names plenty of agencies already use - Outputs, Outcomes and Impact.
Four factors from the GCS Evaluation Cycle particularly stood out for me. Firstly, it codifies the insight and planning process, which is a fascinating addition. This stage specifically spells out the need to set clear objectives for any comms activity, alongside guidance on best-practice planning, particularly around inclusivity.
The inclusion of this initial stage acknowledges that measurement often comes with a lack of rigour around objectives. People often move the goalposts to suit the story that puts their campaign in the best light. Adding this extra stage to your measurement framework may be an acceptable trade-off if it guarantees more rigour around objectives.
Secondly, the example metrics for Outtakes provide a different view from how I've traditionally tracked and measured campaigns. Initially, I thought Outtakes was an intriguing addition - essentially providing a sentiment snapshot as standard, instead of including it where relevant or bundling it into qualitative surveys as part of tracking impact.
But the GCS also includes metrics such as engagements, click-through rate and view-through rate in Outtakes. For me, these are Outcomes - particularly CTR and VTR. Saying your audience feels positively towards your campaign because they click through to a website feels like a stretch.
This leads to my third takeaway: the different definition of Outtakes stems directly from how the GCS defines Outcomes. Outcomes in their framework include metrics that I often bucket with Impact - sign-ups or applications, behaviour change, removing barriers to behaviour change. Impact for the GCS is more focused on benchmarking against previous comms activity. Again, as with the Inputs stage, most agencies take benchmarking as read. If you can compare, you do - it hasn't traditionally needed a separate bucket.
Much of that discussion is down to personal preference - there's clearly no right or wrong. It's up to you and/or your client. As long as you're consistent in your terminology while using your framework, you're all good. As already mentioned, I have a predilection for fewer options - often, clients come to agencies for some version of making the complex simple. But the flow of the GCS document is compelling and eminently sensible - you'd imagine it's beneficial for Government departments taking their first steps into campaigning.
Finally, what I loved most about it, and will be stealing for future measurement-based work, is stage six - Learning and innovation (although I hate the 'I' word, so that's out). In previous projects, I would build the learning section into another part of the framework, most likely a set of principles or guidance accompanying the process of putting measurement theory into practice. But I love the cyclical nature of the GCS model, with Learning feeding directly into Inputs and flowing through everything else.
The cyclical nature of the GCS model speaks to an essential factor when approaching communications measurement in any environment: it's as much a cultural challenge as an empirical one. Yes, you need accurate data and mechanisms to track and measure effectiveness. But if people don't use the framework, or use it but aren't honest with themselves as they do, it's pointless.
In my experience, the single most significant cultural challenge with measurement is candour - in other words, accepting and celebrating the fact that not all campaigns are potential award winners. For a long time, the prevailing narrative in tech was one of "failing fast" and "embracing failure". That translated into plenty of other places, but not into comms. Failure, particularly if you're agency-side, potentially means the sack.
But it is simply not possible for every campaign to be a massive success. That's not how life works. You learn far more from your less-successful work than from anything else. I've won plenty of pitches and worked on some great campaigns, but the pitches I've lost and the work that hasn't landed have arguably contributed more to where I am now.
As John Maeda put it (quoted in Salmon Theory), talking about failure "miss[es] the fact that failing isn't the goal. 'Recover fast' and 'learn from failure' matter way more." And that's why measurement frameworks need to be underpinned by learnings and candour - if we don't learn from our failures, and instead sell every piece of comms activity back as a resounding success, we're doomed to repeat our mistakes over and over again.
So, while it's great to see the Learning box included in the GCS's Evaluation Cycle, that's only the first step of the journey. The steps required to put rigorous measurement and evaluation into practice are trickier, but they're essential to ensuring evaluation becomes the cornerstone of effective communication in your organisation. Without that programme of embedding and culture change, you're ensuring your measurement framework will end up like February's "fake Spring" - a short-lived facsimile of a brighter future.