"Design" is such a broad term and can take so many forms that there’s never going to be a one size fits all approach. The work we do is based on brand, so it’s primarily graphic design I’m referring to in this piece.
Because design is abstract and difficult to isolate, it’s hard to objectively measure its impact. And it’s certainly true that some projects are much easier to measure than others. Consumer packaging projects, for example, lend themselves very easily to measurement: you have an old pack with sales performance data vs a new pack with sales performance data. Provided you control for everything else (advertising, sales effort, market conditions and so on), you can come to a reasonable conclusion about the impact of the newly designed pack. But what about projects where the lines are more blurred? Corporate branding, internal comms, employer brands, digital design - in these cases design may be applied in a piecemeal fashion, or work in conjunction with other initiatives, and that makes it harder to evaluate.
But that doesn’t mean we shouldn’t try, and it’s a very healthy mindset to have when embarking on a project. Having worked in the industry for 20-odd years and being a big advocate of the Design Effectiveness Awards, here are my tips on how to prepare to measure the impact of the ‘design’ you’re about to deploy.
1. Get your agency on side
If the agency’s not up for it (for whatever reason), it’ll never work. They have to be as invested in proving that the work works as you are. The best design effectiveness projects are the ones where the client and the agency collaborate closely to create compelling entries. This is the most important point, and I can tell you from experience that you’re wasting your time if you’re not aligned on it.
2. Broaden your definition of design
This doesn’t just have to be graphic design. It can be design thinking in a broad sense. Remember that the strategy you adopt in attempting to solve a business challenge is as much design as a logo rendered on a page. By contextualising this strategy within a commercial setting, you may be able to demonstrate savings or economies which play into design effectiveness.
3. Break your objectives down
You must understand what you are trying to achieve. In the first instance, this should come from the client and may be at quite a high level, e.g. ‘contribute to the business objective of XXXX’. Generally, these are too broad to measure pre- and post-project. You’ll have to spend some time breaking that down to create sub-objectives that are more directly applicable to this project. Think laterally - what other micro-objectives can you carve out from the main business objective?
4. Allocate budget for research
Carve out a small amount of money from the budget for dipstick research. It doesn’t have to be huge, but you need some sort of objective measure of recall, impact, or awareness. This should be focussed on the central part of the project and should be done pre-activity and post-activity, so you can see how far the needle has moved.
5. Give it time to breathe
Don’t rush the analysis. Too often, results are pushed out early to make an entry deadline. Give the activity time to bed in and take effect. I’m always sceptical of measurement periods shorter than 12 months; a full year offers a well-rounded trading period and irons out seasonality.
This is just a start - each project will shape this approach in nuanced ways - but these broad rules provide the foundation for a solid beginning.