Welcome to Measuring the Immeasurable, a newsletter helping arts and culture leaders understand the relationship between the arts and public value. I have been thinking about what comparison means in arts evaluation, and a pair of stories comes to mind. In my view, data rarely enables us to compare one arts organisation with another. To try to do so is hubristic. It asks too much of data. I hope these stories are not only interesting but useful.
I have seen some modest subscriber growth over the last few weeks, which I take to indicate that these posts are helpful. If that’s the case, please consider sharing and subscribing.
In 2019, I took my parents to see Waterlicht, an immersive, site-responsive work presented as part of the Fremantle Biennale. Waterlicht was ‘about’ climate change, and I took my parents there with this in mind. You may remember that this was during the Black Summer bushfires. The work was captivating and, in the cool dark of a windy Fremantle evening, gave a sense of what climate change might feel like. It was immersive right up until a cheery volunteer intercepted us with an iPad.
The volunteer asked us to complete a survey. As I work in the arts, I gestured towards my parents to complete it. My mother then gestured towards my father, who dutifully completed it, still in half-reverie, bathed in the blue light from overhead. Intercept interviews are basically the only way to collect reliable audience data for outdoor public art projects, as there are no doorways to monitor and no addresses to email. Still, I couldn’t help but reflect that in order to have our experience assessed, we had to have that experience interrupted.
A few months later, shortly before the WHO declared a global pandemic and the Black Summer bushfires were promptly forgotten, I attended another immersive, site-responsive work. This one was a small-group, ticketed performance. I went with a few friends — we booked out an entire evening to ourselves — and were guided through another dream-like art experience.
This time, we weren’t interrupted. After the performance we were invited to sit and reflect on the show, and were given a platter of bread, dukkah and olive oil to share while we talked. I appreciated this gesture. Rarely do performing arts organisations create structured opportunities for post-show discussion. After we had been talking for a short while, we were again approached by — you guessed it — another volunteer with another iPad. Only in hindsight did I realise the dukkah was there to woo us.
We were all encouraged to fill in the survey. It was broadly similar to the one my father had filled in for Waterlicht: the same software, similar questions drawn from a shared framework. We dutifully passed the iPad around between us, conscious of time and not wanting to interrupt the flow of conversation. I wiped my fingers on my jeans, not wanting to leave a sheen of olive oil on the screen.
In conversations with arts organisations, I’ve often told these two stories separately as proof that surveys aren’t so intimidating. They can be administered in a way that is minimally interruptive. In both cases, data was generated and funders couldn’t argue that the work hadn’t been done. A generous interpretation may even be that the survey helped us to make sense of our experience. With the appropriate context and consideration, the data may have been helpful to those organisations[1].
However, when the two stories are considered together, something immediately becomes obvious: the results of these two surveys could not possibly be compared. How could they be? The methodology and structure around each survey were too different. Was I rating my experience of the work, or of the post-show conversation? Did a moment of unacknowledged annoyance cause my father to lower his score slightly, or did the presence of the cheery volunteer raise it? In one case, we were cold and thinking about dinner. In the other, we were warm and full of toasted bread.
Comparison, though problematic, is usually what funding bodies and governments strive for when developing standardised frameworks for data collection[2]. The ambition is understandable: funders have to make difficult decisions about where to allocate support, and the promise of a standardised system is appealing. However, standard frameworks aren’t particularly useful if the way data is collected varies significantly from organisation to organisation. Aggregating data in this way would be like trying to design a restaurant menu by surveying 1,000 people on their food preferences. The best-case scenario is a motel buffet. Similarly, an Indian restaurant may benefit from surveying its customers, but its data is unlikely to be useful to the Italian restaurant next door.
I am not against surveys or frameworks. They have their place, and in my work I have designed a few. However, these surveys and frameworks have always been focused on the organisation and its needs. They don’t promise to enable one organisation to be compared directly with another, or even one production or work with another. At best, these frameworks provide a shared language for negotiating differences and reaching understanding. Data cannot compare two works, but people can.
[1] I don’t know this for sure, of course, as I have not worked with either of the organisations in question. I would appreciate any emails or comments from arts administrators on how you use survey data in day-to-day operations and programming.
[2] A post for another time, but I am not against frameworks in principle. It is possible to have shared frameworks, but in pursuit of shared understanding rather than standardisation of practices and procedures.
I am an experienced audience researcher and it’s a tough gig. My preferred surveys are ones conducted inside an exhibition, where I can talk to visitors as they leave the exhibition or gallery (rather than as they exit the museum), or else tracking studies, observing visitors’ responses to an exhibition without any interaction. Exit surveys are much tougher: once people are ready to leave the museum, they are rushing for transport or their next destination, or their family has had enough museum for one day. It’s hard to compare museums except on a few important points — membership, visitor reach (local, interstate or international), repeat visitation, and type of collection. I do think that you can compare by category: house museum, regional museum, maritime museum, natural history museum, and so on. It would be great to learn from each other and share ideas on audience engagement.