Much of our analysis output takes the form of a report or a dashboard, and sometimes both. This discussion was to understand people’s experiences, preferences and any difficulties they encounter.
We didn’t have quite the mix of users in the group that we’ve had in other sessions, but we had representatives from the CDU Data Science Team, Applied Information, Performance and Research, which was still very useful for sharing our experiences of producing outputs like dashboards and reports.
The creation of dashboards has been successful within the Applied Information team and works well when designed for a key customer who is often also the data owner. The process of dashboard creation has been iterative and there has been a lot of engagement as a result. Often the dashboards have been designed to display all the charts on one screen which is slightly different to the dashboards produced by the CDU Data Science team, including the Patient Experience dashboard, which has multiple pages.
As I described the ideas around HOPE (Health Outcomes and Patient Experience), some of the feedback was that there is a risk of “over selling the data”, particularly by pulling in too much information. The same concern applied to multiple-page reports. This resonated with something I’d learned from a talk by David Spiegelhalter over the last year, which I shared with the group: he suggested having a short headline analysis, followed by more detail for those who are interested, followed by thorough technical and detailed information aimed at those who are like us.1
The tendency I have had in creating dashboards has been both to second-guess what the end user would want and to apply what I would want, and my interest as an analyst is often in much more detail than others’.
Because the CDU Data Science Team will be using and analysing information from HR, Clinical and Patient Experience sources, concern was expressed about any overlap with existing reporting, like ‘Ward to Board’ generated by the Applied Information team. This is a very important point: it can be too easy to replicate the same work in just a slightly different way. We recognise that our dashboards should not replicate current reporting and performance work but should instead offer analysis of how the different data interact. For example, in the analysis between the staff survey and well-being, the key messages that came from the statistical analysis were (a small illustrative sketch of this kind of check follows the list):
Appraisal rates do not correlate with any of the ONS questions relating to well-being.
How happy a person reports being does not correlate with any of the HR proxy measures of workforce satisfaction.
There is a strong relationship between reporting feeling Anxious and Age Band, but this is not replicated by Directorate.
The HR proxy measures of Turnover and Sickness could be used as an ongoing measure of staff well-being, which is otherwise only recorded on a yearly basis, because of their strong correlations with reporting feeling Satisfied or Worthwhile by Directorate and, to a lesser extent, by Age Band.
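To make the kind of check behind these messages concrete, here is a minimal, purely illustrative sketch in Python. It is not the actual analysis: the table, column names and figures are invented, and it simply shows how correlations between directorate-level HR proxy measures and mean survey well-being scores might be checked.

```python
# Illustrative sketch only: hypothetical data and column names, not the real staff
# survey or HR extracts.
import pandas as pd
from scipy.stats import spearmanr

# Hypothetical directorate-level summary: one row per directorate, with HR proxy
# measures and mean scores from the ONS well-being questions in the staff survey.
df = pd.DataFrame({
    "directorate":   ["A", "B", "C", "D", "E"],
    "turnover_rate": [0.12, 0.09, 0.15, 0.11, 0.08],   # HR proxy measure
    "sickness_rate": [0.05, 0.04, 0.07, 0.05, 0.03],   # HR proxy measure
    "satisfied":     [7.1, 7.6, 6.4, 7.0, 7.8],        # mean ONS 'Satisfied' score
    "worthwhile":    [7.4, 7.8, 6.7, 7.2, 8.0],        # mean ONS 'Worthwhile' score
})

# Check whether each HR proxy tracks the survey well-being scores across directorates.
for proxy in ["turnover_rate", "sickness_rate"]:
    for survey in ["satisfied", "worthwhile"]:
        rho, p = spearmanr(df[proxy], df[survey])
        print(f"{proxy} vs {survey}: rho={rho:.2f}, p={p:.3f}")
```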
Dashboards, on the other hand, are a good way of monitoring processes over time.
Any analysis should start with the question of what it is that you want to know. However, when designing dashboards, particularly for a general audience, the “what do you want to know” is guessed at by the analyst. There were discussions about how empowered various colleagues may, or may not, be to ask for what they want, to correct us if we interpret that incorrectly, and to question our methods or results.
A question was posed: “Do we ever go through the data with the requester?”, or do we and the requesters expect “answers to just jump out”? Personally, I would say yes, I’ve erroneously expected others to see what I see in data requests and visualisations. Likewise, I have been on the receiving end of data that hasn’t been analysed, and whilst I love data and numbers, I feel overwhelmed by tables of numbers as they require contemplation (and often benefit from visualisation). Nothing from a table of numbers “jumps out at me”.
The Applied Information dashboards that have worked well have been those where the main “driver” has come from outside the team, rather than “we think this is what is required”. It can be hard, as an analyst, not to apply your own thinking and curiosity to dashboards, adding things that may not be as relevant or useful to others.
Another thing to consider is how long a dashboard remains useful.
The purpose of a dashboard is often discovery. There was some honest feedback (from analysts!) that there can be a fear of breaking something just by selecting options. Often the feeling is that once things are set up, it’s best to leave them as they are in case we can’t get back to the original view. This is perhaps indicative of the complexity of some dashboards.
This was a great discussion between analysts, one which we rarely get to have, but it would no doubt benefit greatly from input from others, particularly end users/customers. Consequently, we will revisit this discussion.
I tried to find the talk in which I heard this, but it’s harder to locate references in talks than I realised! I also found I got distracted, as the talks are really good, so I’d recommend searching for and watching talks by David Spiegelhalter for more information on how to communicate statistics effectively.↩︎