Overview and Rationale
1. The purpose of PSQS
1.1 Purpose
The purpose of the PSQS is to enable its participants to understand the quality of their support services, as used by academic units and other functions, and thereby to drive continuous improvement.
Generating empirical evidence about service quality is fundamental. Perceptions of quality can (all too readily) be driven by anecdote, assumptions, political considerations, a vocal minority, misunderstanding or misinformed expectations. The PSQS provides a platform for collecting empirical data that contributes to a consistent and cumulative evidence base of service quality over time.
It also provides participants with the means to benchmark their quality of service data against the average of analogous services at a number of broadly similar institutions.
The final purpose of the PSQS design is to raise the visibility of, and dialogue about, service quality across an institution, the intention being to foster a better understanding and appreciation of such services while also creating an incentive and the means to improve them.
SDA/GIDE is the primary partner for project management, technical design, web hosting, and analysis and reporting. SDA/GIDE is a member of, and abides by the standards of, the Market Research Society (MRS) and ESOMAR, and is registered with the Information Commissioner’s Office as a Data Controller.
1.2 Caveat
What the PSQS cannot do, and what is beyond its immediate purpose, is to determine the causes of differences in perceived quality of services within an institution or between institutions, whether attributable to remit, resourcing, leadership, management, efficiency or other such factors. Within an institution, it may be possible for leaders and senior managers to know or determine such causes and act on them. Looking across multiple institutions (or at averages of analogous services), this is not the case: for example, a particular service in one university may be differently funded, occupy a more advantageous position in the management structure, or be better led than in another.
2. Overall Approach and Methodology
The overall approach is straightforward:
- Each university works with SDA/GIDE to agree timings, content, customisations and any other such matters prior to running the survey. Commercial terms (based on a common proposal and a menu of options) are agreed between each university and SDA/GIDE.
- Although the question wording is identical across all institutions, as are the essential instructions, the survey is customised to each institution in terms of the set of units, their names or titles, their purposes, and any appropriate distinctive terminology (for example whether you call such services ‘professional services’, ‘support services’, or ‘central services’), your university’s logo and brand colours, the introductory and thank you pages, etc. The survey is therefore experienced as an entirely internal management tool.
- Over a period of about four to six weeks, all staff at a university are surveyed across all levels and functions (academic and administrative), typically during the summer term, though the exact timing and duration are up to each university.
- Staff members are asked to assess the services they have received only on a direct, personal basis (ie not as a representative of, or on behalf of, a team or unit they may be responsible for). The focus is on services received from a central Professional Service, though there is sufficient flexibility to accommodate some variations from that model.
- Upon opening the survey, respondents tick the box(es) in a list of services for those they have had direct contact with. Alongside each service unit they see a brief summary (about 10-15 words) of the capabilities and services provided, to confirm understanding and to educate. They are then asked the same set of seven questions for each unit selected.
- Each university is also fully responsible for its own internal communications, promotion and utilisation of the survey and results.
- Reporting of the results follows shortly after the survey closes, depending on the analysis and reporting options chosen by each participating university. Data is gathered about respondents’ roles (academic vs administrative, their unit, etc) so that analyses also reveal perceived variations in provision to other units; a simplified sketch of the resulting evaluation records is shown after this list.
- Benchmarked results follow once all universities have completed their run, so may be provided somewhat after internal, institution-level results are available. Nottingham can provide advice and guidance to ensure the correct alignment of units for the purposes of benchmarking analyses.
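As an illustration only, each completed evaluation can be thought of as one record combining the respondent’s attributes with their answers to the seven questions and any open comment. The minimal Python sketch below uses hypothetical field names and is not SDA/GIDE’s actual data schema:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Evaluation:
    """One respondent's assessment of one service unit.
    Field names are hypothetical, not SDA/GIDE's actual schema."""
    respondent_role: str                # eg "academic" or "administrative"
    respondent_unit: str                # the unit the respondent works in
    service_unit: str                   # the Professional Service being evaluated
    answers: list                       # responses to the seven standard questions
    open_comment: Optional[str] = None  # free-text comment, if any

# An institution's data file can then be thought of as a collection of such
# records, one per (respondent, service) pair.
```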
3. Design Rationales
- Target population. During the original design of the PSQS we considered a range of options for defining the survey target population, from narrow (heads of large departments) to global, encompassing all university staff at all sites. Looking at the narrowest target population, Heads of Units are arguably more likely to have political agendas, or to deal with Professional Services only when service issues are escalated or in response to crises. They may not reflect the experience of staff who use Professional Services on an ongoing basis and who may, for example, experience consistently high levels of service quality. Looking at the widest target population increases cost and complexity, and may mean some staff have limited involvement with, or awareness of, the full range of services. To avoid the potential for skewed results under the narrow approach, the survey was designed for use across all staff in a university, while also incorporating means to reduce complexity and educate respondents over time.
- Respondent organisational knowledge. Respondents may have unclear, incorrect or incomplete knowledge about their own university. There may also be uncertainty and misunderstanding about which unit is responsible for a given function, and staff may be unaware of the full range of services. The PSQS was designed to limit the effects of such factors and serves an educative function by providing, for each named unit, a concise explanation of the services that unit or team is meant to provide.
- Reporting and taking action. Some universities provide results only to heads of services and the Senior Management Team, while others make results fully available to all staff. The approach that is right or best for any university will depend on its culture and practice regarding such management performance tools and staff surveys. Participating universities are therefore in full control of how their results are shared within their institution, and benchmarking does not allow individual universities to be identified. Participants can prompt responses to results, and follow-up actions, as they choose.
- Benchmarking. Most universities maintain a similar range of capabilities and functions (from accommodation to research support to timetabling) but have different ways of organising, managing and delivering those functions. The PSQS was designed to allow benchmarking of functions regardless of local management structures.
4. Experiences to Date
The experience of each university varies as lessons are learnt year on year, improvement plans are implemented and the PSQS (which may go by a different name at each university) becomes embedded in organisational culture and practice.
We can briefly summarise - at a high level - the Nottingham experience, where the survey has run since 2013:
- An invitation to participate from the Vice-Chancellor is sent to all University staff in late May, highlighting the value and importance of individual views in seeking the highest service quality levels.
- The response rate for the last four years has been about 15%, with about 1,000 responses received each year. Each respondent evaluates several units, and the total number of evaluations has been 5,000-6,000 per year - a substantial data resource.
- In October the results and reports are made available to all staff through a variety of means, including a Tableau viewer where staff members can see all results for any or all units (including open text comments), and compare those to results for all services and the benchmarked average for the analogous service at other participating universities.
- Heads of services are required to reflect on their own unit’s performance and to develop action or improvement plans as appropriate. The University Executive Board receives a report of outcomes, as well as summaries of the action plans agreed with Heads of Units.
- The PSQS results are also incorporated into other processes, for example Professional Service reviews, and have high visibility internally.
- The PSQS has increasingly become built into the organisational culture. It keeps Heads of Services mindful of the importance of a focus on service quality (and of how readily any failure to do so can be identified). For the Board (which receives results and action plans), it allows issues and problem areas - and indeed high-quality provision - to be readily identified, and gives it a better sense of the ‘overall health’ of services and the organisation.
5. About the PSQS in 2022
The design of the survey will remain the same as in previous years so as to retain consistency and to allow for year-on-year comparability.
Design, hosting, project management, analysis and result reporting will continue to be provided by SDA/GIDE. All participating universities consistently regard SDA/GIDE’s own support services as excellent and the product as very good value for money.
Surveys on the SDA/GIDE platform are designed to be responsive to the device being used so will work on mobile devices such as tablets and phones. As the survey has some long response lists, completion on very small screens is not recommended.
SDA/GIDE surveys are hosted on servers in a secure data centre with regular backups and recovery procedures.
Once the survey is closed, each institution’s data file will be downloaded by SDA/GIDE and quality checked (eg removal of blank submissions, duplicates and incompletes). The data will be transferred to SPSS or similar software for analysis. Participating universities may choose to receive a copy of their data file in order to undertake analyses themselves.
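For universities that take a copy of their raw data file, similar quality checks can be reproduced locally. Below is a minimal sketch using pandas, assuming a CSV export with hypothetical file and column names (question responses prefixed ‘q’); it illustrates the kind of cleaning described above, not SDA/GIDE’s actual process:

```python
import pandas as pd

# Load the raw export (hypothetical filename and column layout).
raw = pd.read_csv("psqs_raw_export.csv")

# Columns holding question responses are assumed to start with "q".
answer_cols = [c for c in raw.columns if c.startswith("q")]

# Remove blank submissions (no answers at all) and exact duplicates.
cleaned = raw.dropna(subset=answer_cols, how="all").drop_duplicates()

# Remove incompletes, here defined as rows missing any question response;
# the precise rule is a local decision.
cleaned = cleaned.dropna(subset=answer_cols, how="any")

cleaned.to_csv("psqs_clean.csv", index=False)
print(f"Kept {len(cleaned)} of {len(raw)} submissions")
```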
Costs for 2022
The basic costs for running the survey in 2022 are £2,850 + VAT and include:
- Online survey design and hosting
- Preparation and delivery of the survey data file for analysis
- Benchmark tables and charts in Excel/PDF format
A range of additional analysis and reporting options is also available at extra cost, including, for example, PDF reports with an executive summary.
6. 2022 PSQS Kick-off meeting
All those interested in participating in 2022 will be invited to a kick-off meeting (to be arranged) to compare prior experiences, and to share issues and ideas. Some points from previous meetings include:
- Survey timing, duration and response rates. This year, start times will be variable to accommodate the upheaval caused by the pandemic. The longest survey duration to date has been about one month. Response rates have varied from 14% to about 20%.
- Response rate management. Monitoring response rates actively, using the online facility provided by SDA/GIDE, is important. This shows when numbers begin to trail off, indicating that a reminder to prompt completion would be beneficial. It also shows which units may not be responding, suggesting (if communications were cascaded rather than sent globally) that the Head of the unit may not have forwarded the invitation or may not have given sufficient encouragement or rationale for participating. Some institutions sent weekly reports to heads of units, giving them the number and percentage of their staff who had completed the survey. It may also be effective to send Heads a single report showing response rates across all units, so that units with lower response rates are additionally motivated; a simple sketch of such a per-unit report is shown after this list.
- Order of units evaluated. Following a discussion on possible biases, it was agreed to change the survey so that, regardless of the order in which the units are listed on the selection page, the evaluations are presented in a random order.
- Respondent behaviours. SDA/GIDE captured data about how respondents proceed through the survey. This showed that respondents take 2-3 minutes to complete the opening questions (ie the introductory text, indicating which unit they work in, the type of role they hold, etc), then about one minute for each service evaluated. The time taken per service decreases as the number of services evaluated increases, presumably as people become familiar with the questions. Unsurprisingly, the data showed much variation around these averages: people can be interrupted in the middle of completion, some will spend more time thinking about their answers and some will speed through the survey. The data also showed that only 13% of respondents completed more than 10 evaluations, with most institutions averaging 4-6 per person.
Using this information, it is possible to include a statement in the survey along these lines: ‘How long the questionnaire will take to complete depends on the number of services evaluated. Evidence from previous years suggests that on average it will take 4-15 minutes, but possibly longer if more than 10 services are evaluated.’
- Analysis, Reporting and Action Plans. There is wide variation in how universities report results, both institutional results and benchmarking. Most provide a tailored summary report to their Executive Board. Some, including Nottingham, also produce their own results-explorer tools using Tableau to provide wider access to results. Nottingham provides all University staff with access to all results for all units, including open comments. Most require some form of ‘action plan’ from service units, either by exception (eg for units below their relevant benchmark or University average, or whose results are poor or declining) or universally.
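As a simple illustration of the per-unit response-rate report mentioned under ‘Response rate management’ above, the figures can be derived from a staff list and the responses received so far. A minimal pandas sketch, with hypothetical file and column names:

```python
import pandas as pd

# Hypothetical inputs: a staff list with each person's unit, and the
# responses received so far with the respondent's unit recorded.
staff = pd.read_csv("staff_list.csv")      # columns: staff_id, unit
responses = pd.read_csv("responses.csv")   # columns: respondent_id, unit

# Headcount and completed responses per unit.
headcount = staff.groupby("unit").size().rename("staff")
completed = responses.groupby("unit").size().rename("responses")

# Combine and compute a percentage response rate per unit.
report = pd.concat([headcount, completed], axis=1).fillna(0)
report["response_rate_%"] = (100 * report["responses"] / report["staff"]).round(1)

# Units with the lowest response rates appear first, ie those whose Heads
# may need to give an extra reminder or encouragement.
print(report.sort_values("response_rate_%"))
```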