
An Infrastructure for User Research
Background
This work focused on introducing and embedding user research within the Care Quality Commission (CQC), supporting a regulatory service that had previously failed its GDS Alpha assessment.
The product was a legacy Annual Return used by adult social care providers. It was widely disliked, technically fragile, and shaped by years of accumulated constraints.
The department had a number of subject-matter experts, but insight, decisions and ownership were spread across teams, making alignment and shared understanding difficult.
Approach
My role was to combine delivery with capability-building.
Alongside generating actionable insight to support Alpha, the work focused on counteracting organisational silos through an inclusive research strategy, creating shared visibility of user needs, and establishing lightweight, sustainable research practices.
This included validating existing assumptions with quantitative data and ensuring insight fed directly into design and product decisions within a technically led, multi-disciplinary environment.
Existing personas were reviewed, updated and validated against quantitative data, and gaps in research practice were addressed by establishing basic protocols for discussion guides, analysis and storage of findings.
Challenges

The immediate challenge was not only to generate insight and guide design, but to establish trust in user research as a credible input to product decisions, with the longer-term goal of CQC hiring a full-time researcher.
The work took place within a number of constraints, including a technically unstable prototype, legacy questions with complex dependencies, and an information architecture and taxonomy that affected both usability and data forecasting.
Existing research materials consisted of hours of recorded interviews, but without discussion guides, transcripts or any structured means of analysis.
Many of the legacy questions were tightly coupled to downstream data forecasting, meaning changes to wording, structure or ordering had implications beyond the interface itself.
Ambiguous question design led to repetition and low-quality responses, with a direct impact on Inspectors. In addition, the Annual Return was completed over several days by time-poor users, requiring research methods that could accommodate interruption, abandonment and partial journeys.
Research Approach
The research approach was aligned to GDS Alpha requirements, with an emphasis on making evidence visible, traceable and decision-relevant.
Research activities were documented and mapped to user needs, maintaining a clear line of sight between findings, design decisions and Alpha criteria.
Participant diversity was prioritised, including consideration of Assisted Digital users. Internal users were interviewed to understand downstream impacts on inspection and analysis.
Evidence was surfaced through recordings, clips, quotes and screenshots, enabling stakeholders to engage directly with user data rather than second-hand summaries.


Conditions such as technical instability and legacy data dependencies were treated as part of the research context and documented accordingly.
I initially explored a diary study and ran a small pilot. The approach proved too effortful to sustain alongside existing workloads, and the pilot participant said she preferred a structured, face-to-face interview.
Given participants’ limited availability, frequent interruptions and the need for flexibility, remote moderated qualitative research was selected as the primary methodology.
Insights
The research revealed consistent patterns in how providers interacted with the Annual Return, shaped by time pressure, technical realities and question design.
The Annual Return was not treated as a single task, but completed intermittently between other responsibilities, often over several days.
Progress was prioritised over accuracy, leading users to insert placeholder text simply to move forward.



Because users were unable to reuse previous submissions or review responses before submission, answers were often drafted externally in Word and copied into the form.
Word count limits and opaque questions encouraged verbose responses rather than clarity.
Ambiguous and overlapping questions resulted in repetition, increasing effort for users and creating additional interpretation work for Inspectors.
From Insight to Action
User research findings were translated into design decisions through close collaboration with the delivery team and explicit traceability between issues, recommendations and sprints.
Rather than producing standalone reports, insights were surfaced in ways that supported prioritisation and decision-making within the parameters of Alpha.
Key issues identified through research were documented alongside their impact on users and internal teams, then tracked through to resolution where feasible.
This demonstrated how research informed changes to question wording, guidance, and information structure, despite legacy data dependencies that limited the scope for change.


Research evidence was used to challenge assumptions in the existing solution, particularly where ambiguous questions and structural complexity led to low-quality responses and additional work for Inspectors.
In some cases, insights also informed decisions to defer change, with the rationale documented to make trade-offs explicit.
Journey mapping was used to make fragmented user behaviour visible to the team, highlighting that the Annual Return was completed over multiple sessions rather than end-to-end in one sitting.
This shifted conversations away from idealised completion flows towards more realistic expectations of interruption, abandonment and re-entry.
Insights were also fed directly into content design. Close collaboration with the Content Designer allowed findings to be reviewed iteratively, with changes assessed against real user behaviour rather than theoretical clarity.


Where information architecture and taxonomy were contributing to confusion, research evidence was used to support a business case for further testing rather than piecemeal fixes.
Throughout Alpha, findings were shared through show-and-tell sessions using video clips, quotes and screenshots, enabling the wider team to engage directly with user evidence.
This approach helped ensure that user research informed design decisions incrementally and visibly, rather than retrospectively or in isolation.
Outcomes
The service passed GDS Alpha following a previously failed assessment.
User research was established as a credible and visible input to product decision-making, with clear traceability between findings, design decisions and delivery.
Best-practice ways of working for user research and user-centred design were modelled and embedded within the team, improving shared understanding of users across roles and departments.


CQC subsequently decided to hire a full-time senior user researcher.
I left the new researcher with a comprehensive handover, including a roadmap for Public Beta.
Research materials, findings and documentation were consolidated into a structured repository, leaving the team with clear protocols and a sustainable foundation for ongoing user research.
