What Counts as Evidence in Healthcare Service Design?

Evidence-Based Service Design

From Chapter 5, Design for Care

Inpatients spend most of their time during hospital treatment and recovery in a bed, often in a shared room. It might be obvious to anyone who has ever visited a hospital room that the experience of resting and waiting in this sterile, artificial environment could be greatly improved. The everyday experience of the inpatient consists of encounters with the physical environment—from building architecture to room layout and furniture—and the presentation of services within a hospital and specialized clinics. Most of these environments leave a lot to be desired, and designed.

What if it could be proven that people heal demonstrably faster and better in beautiful, well-cared-for physical environments? Architecture professor Roger Ulrich’s research into the direct beneficial health effects of natural and harmonious surroundings during care provides hard evidence to support this theory.
Ulrich’s 1984 article “View Through a Window May Influence Recovery from Surgery” was the first scientific recognition of the capacity of nature to accelerate healing in the clinical environment.4 Taking a patient-centric perspective from the outset, Ulrich demonstrated that natural scenes and emotionally supportive artwork facilitate significantly faster recovery and shorter hospital stays. Access to real plants and natural scenery generated the best results. Artwork content, not quality, makes an empirical difference. Scenes of nature, placid landscapes, and animals measurably quickened the healing process.

Ulrich also formulated the theory of supportive design, proposing that healthcare environments can improve outcomes directly by promoting stress reduction, buffering, and coping. Chronic stress severely limits the rate of recovery and delays the release of patients in acute and critical care. Supportive design focuses on the physical and interactive characteristics of the hospital to reduce the occurrence or impact of stressors such as intrusions, constant noise, loud or unwanted sounds, and cluttered or sterile visual fields. There are a multitude of passive service touchpoints for increasing comfort, ease, and even intimacy. The conventional models of systems workflow in service processes can be detailed to show anticipated patient sensitivity and responses to touchpoints (both figurative and literal points).

Evidence for Design

Ulrich’s research originated evidence-based design (EBD) in the early 1980s, which started in hospital architecture and expanded to care processes, practice interventions, and device design. As in evidence-based medicine, the current “best evidence” available in literature and practice is recruited to support decision making, with the explicit goal of optimizing defined health outcomes in measurable terms.

Evidence-based medicine (EBM) relies on the highest appropriate standard of evidence for a clinical decision. The cultural and regulatory adoption of EBM has created an environment of risk and uncertainty reduction that demands a similar clinical standard of evidence for any research-based decision, including IT or services. For well-known diseases, the randomized clinical trial (with blind control and experimental conditions) represents the “gold standard” of evidence. Needless to say, that standard does not apply in design research. But it conditions the culture in which design decisions are made.

What is the appropriate level of research rigor for design decisions for health? A major goal of EBD is to ensure research is transparent and replicable, so that other institutions can learn from the study. A secondary aim is that of communicating the effectiveness of service design and research methodologies through institutional research, demonstrating validity and appropriate applications in the professional literature.
The interface of a health records system could be measured not only through formative usability testing but also for its actual performance in clinical service. Progressive hospitals with strong informatics departments, such as the Children’s Hospital of Philadelphia, measure and track patient outcomes as a hard metric to determine the degree to which defined health measures are improved by their IT investment. If design research methods remain proprietary or applied only as commercial engagements, the value of design-led research will not be fulfilled on a wider scale.

EBD is not just the gathering of user research to inform design decisions, as in a user-centered design process. It is a rigorous equivalent to the careful application of scholarly evidence in informing care decisions, and generally includes the following activities:

  • Reviewing current and retrospective research to identify precedents, mature findings, and prescriptive guidance from relevant studies.
  • Prioritizing and balancing the literature basis with primary data collected from actual patient data, subject matter experts, and professional observations.
  • Advancing theories and hypotheses to support observations, and structuring evaluations to test outcomes of design decisions.
  • Measuring outcomes following implementation, and assessing theory validity and any gap between observations and hypotheses.

What Evidence Counts in Healthcare UX?

Few design disciplines face the risk position of architecture and device design in health institutions. Devices must be approved by FDA review committees, and architects certify their designs and are responsible for the plans followed by builders. Software user interfaces designed for standard operating systems are not subject to equivalent certification, and even electronic medical record (EMR) systems, which manage the data on which clinical decisions rely, are not certified by open review.

Does it make sense for UX design to adopt evidence-based principles in healthcare?
Design trade publications and UX blogs show growing interest in EBD. The typical claims made for evidence (largely qualitative) do not build credibility when evidence is merely based on user observations. Usability testing is an essential evaluation method, and is especially useful when applied across the installation lifecycle of health IT. Yet usability testing measures not health outcomes but user inputs and responses to interactive systems. In a sociotechnical context, it is supplemental but not sufficient evidence. The quality of evidence for assessing interaction design relies on the behavioral measures of task completion, error rates, and time on task. User satisfaction and, occasionally, learning rate measures are collected as evidence for design decision making. Yet no website makes the claim of improving one’s life and health following the interaction.
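The behavioral measures named here—task completion, error rates, and time on task—are straightforward to aggregate from test sessions. A minimal sketch, using entirely hypothetical session records (the field names and data are illustrative, not from any real study):

```python
from statistics import mean

# Hypothetical usability-test records: one entry per participant task attempt,
# with a completion flag, an error count, and time on task in seconds.
sessions = [
    {"completed": True,  "errors": 0, "seconds": 42.0},
    {"completed": True,  "errors": 2, "seconds": 75.5},
    {"completed": False, "errors": 4, "seconds": 120.0},
    {"completed": True,  "errors": 1, "seconds": 58.0},
]

def usability_metrics(sessions):
    """Aggregate the standard behavioral measures from raw session records."""
    n = len(sessions)
    return {
        "completion_rate": sum(s["completed"] for s in sessions) / n,
        "mean_errors": mean(s["errors"] for s in sessions),
        "mean_time_on_task": mean(s["seconds"] for s in sessions),
    }

print(usability_metrics(sessions))
# {'completion_rate': 0.75, 'mean_errors': 1.75, 'mean_time_on_task': 73.875}
```

Note what these numbers describe: the user’s interaction with the system, nothing more. Nothing in such a summary speaks to whether the patient’s health improved afterward, which is exactly the evidentiary gap the chapter identifies.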

Through iterative research on prototyped sociotechnical systems, better design can demonstrate outcomes based on measures of patient health. Measuring outcomes in response to service change requires longitudinal tracking of patients across cohorts large enough to measure an aggregate difference. This is unlike any process in Web or interaction design. It suggests a gap in health service design research methods. When lives, liability, and dollars are at risk, the due diligence of quantified evidence is necessary to ensure decisions are supportable across the organization.
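The aggregate difference across cohorts described above can be estimated, in the simplest case, as a two-sample comparison of an outcome measure between a baseline cohort and a post-redesign cohort. A sketch under stated assumptions: the length-of-stay figures are invented for illustration, and the confidence interval uses a normal approximation, so it is only defensible for cohorts far larger than these:

```python
from math import sqrt
from statistics import mean, stdev

def cohort_difference(baseline, redesign, z=1.96):
    """Difference in mean outcome between two cohorts, with an approximate
    95% confidence interval (normal approximation; large samples assumed)."""
    diff = mean(redesign) - mean(baseline)
    se = sqrt(stdev(baseline) ** 2 / len(baseline)
              + stdev(redesign) ** 2 / len(redesign))
    return diff, (diff - z * se, diff + z * se)

# Hypothetical length of stay (days) before and after a service redesign.
baseline = [6.1, 7.4, 5.9, 8.2, 6.8, 7.0, 6.5, 7.9]
redesign = [5.2, 6.0, 5.5, 6.9, 5.8, 6.3, 5.1, 6.4]

diff, (low, high) = cohort_difference(baseline, redesign)
print(f"mean change: {diff:+.2f} days, approx. 95% CI ({low:.2f}, {high:.2f})")
```

Even this toy comparison shows why the paragraph calls for biostatistical rigor: an interval that crosses zero, a confounded cohort, or an unmeasured concurrent change would each undermine the causal claim that the redesign produced the outcome.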

Evidence for Design Outcomes

In medical practice, scholarship, care planning, and design decisions, “evidence” differs significantly—not only the types of evidence, but its definition and collection, quality evaluation, controls, presentation, and publication. For clinical decision making, typically accepted evidence ranges from randomized controlled trials (gold standard) to expert studies (weak), including a variety of types of evidence (observations, imaging, measured variables) relevant to diseases, biological responses, and applications to procedures, interventions, and public health.

Traditional operational studies rely on “outcome measures,” but the measures taken often assume a causality between interventions and outcomes—a causality that may not always exist in a complex reality. Some clinics invest heavily in biostatistics research to infer outcomes associated with interventions, a complex but increasingly necessary undertaking.

Evidence by type—from more to less rigorous—might include:

  1. Controlled human interaction experiments; mixed method studies (triangulated methods)
  2. Patient observations, physiological measures, field experiment data (strong empirical)
  3. Robust sampled ethnographic data; controlled usability interaction studies (empirical)
  4. Small sample interviews, “hard” (rigorous, evidence-based) case study, extrapolations from field research
  5. Expert opinion, heuristic or multiperspective assessment

With the growth of the Web and the number of people needed to build it, the UX field has expanded well beyond the original human factors community that started the field. The widespread adoption of “user experience” glossed over many of the original distinctive differences between practices. Although this merger has won the practice broader acceptance, general acceptance is less relevant in high-hazard, high-reliability settings. Today research professionals constitute an ever-shrinking proportion of the field. A different “standard of care” is necessary when designing a system for clinical professionals or patients, rather than for consumers.

EBD established a rigorous research methodology for healthcare applications understood in principle by practitioners. But not every informatics user interface or health website requires strong EBD validation. An insistence on EBD standards in the early stages of a design program could significantly inhibit the innovation value from exploratory research. Allow formative design and organizational learning to at least reach a stage of development where summative evaluation on prototypes and service concepts makes sense.
