Some of the gaps in practice are starting to show between design-led everything and design research. Don Norman has been advocating an evolution in design thinking and education with a stronger role for evidence. Recently, Don articulated a scale of levels of rigor in design practice, ranging from designerly intuition to math and engineering models. I have added “inputs” to this list so that we might see what kinds of evidence relate to each mode of reasoning, as information objects used in practice.
Design Reasoning Inputs and Objects
1. Craft-based, sharply honed intuition
2. Rules of thumb: heuristics
3. Best practices (case-based)
4. Design patterns (modified to account for the current problem)
5. Qualitative rules of practice
6. Quantitative rules
7. Computer models
8. Mathematical models
Whether or not you agree with these, there seems to be a split at 5, below which design patterns and the more intuitive modes are not directly evidence-based (they are inferred from observation). What do you really know about evidence-based design? Is it a developing trend in design practice, or more a mode of design research that drives design decisions? Both?
Evidence-based design (EBD) is being reframed now, in different ways, in design education, research and practice. I'd say "it's not what we think it is." It is not a monolithic practice that threatens design traditions. In practice, understanding the contributions of science and the standards of evidence contributes to better design decisions in any complex sociotechnical system. Evidence doesn't diminish creative or interpretive design approaches; perhaps the concern is more that a turn toward evidence alters the balance of power designers have worked so hard in the last decade to achieve. Design thinking and co-creation practices have been endorsed as powerful allies in business and social innovation, and perhaps in some ways a strong evidence approach competes with co-creation. I would suggest this is a necessary rebalancing toward the real purposes of design thinking – to ensure effective and desirable products and services are developed based on an honest appraisal of the humans in the social systems of use. These are customers, end users, organizations, marketplaces. Evidence returns some power to the reality of current needs and functions, perhaps at the expense of that hard-won designerly authority.
Most systems I’ve designed have included both types (or multiple methods) of design research. Certainly when working with development teams and product managers, the “harder” evidence – user data – is always more convincing than generative or conceptual design cases.
There’s been a long tradition of evidence-based design in healthcare, based in studies of environmental design and architecture in facilities and care practices. Its major proponents have been doing safety and systems-oriented research and intervention since the early 1980s, and if you search “healthcare design” these are the precedents that show up. (See the venerable Center for Health Design.) CHD studies have made a huge difference in quality of care and patient safety over the last 30 years. Design enhancements such as in-room artwork, access to natural scenery and living plants have resulted in decreased lengths of stay, improved service experience, and other softer outcomes, such as lessened anxiety. We can measure these things and make a convincing case for expensive and significant facility changes (any change to the hospital environment is expensive, as it must be a durable and repeatable option for all patient rooms or locations). The now-current knowledge that “single patient rooms lead to better health outcomes” is both patient-centred and evidence-based. But hospitals would never have accepted the expense of essentially doubling the number of rooms just because patients preferred it. They do measure hard outcome data, and outcomes are a major design criterion.
Evidence is not necessarily a positivist position, even if the tradition of EBD tends to be so. Evidence is merely “based on data,” as opposed to expert judgment or collective agreement, which are interpretive modes. In fact, collecting interpretive data from users, rigorously, is evidence. Patient narratives are a type of evidence. If we don’t collect data, we take significant interpretive risks in making design decisions that affect safety, human welfare and finances. So just as scientists argue about the meaning of data, so ought we.
Evidence and its alternatives are not an either/or proposition. In fact, there is no “or” to be found. There is little risk of epistemological contamination by adopting the value proposition for evidence in design.
In systemic design there needs to be a balance of methods and perspectives, as complex systems (at least) are many-sided and many-functioned operations which no one person can understand in whole. Every contribution to knowledge helps.
In healthcare, the trend that is balancing evidence-based care is patient-centred care. But very few organizations have produced meaningful approaches that all understand as patient-centred. While there’s pretty good agreement around “levels of evidence” and research standards, there’s almost none for patient-centred care. The definition of PCC seems to be getting fuzzier, not clearer, as more stakeholders adopt a patient-centred view and are then stopped, perhaps, by the uncertainty of how best to implement the value in real care settings.
PCC is not patient experience or patient satisfaction. PCC is interpreted very differently between clinical professions, and differently across institutions.
Are some hospitals advocating a trend “away” from evidence and toward “patient centricity” when they don’t agree what that is? And when they get closer to it, PCC may tend to blow up the business model and workflows.
If design thinkers make a culture of evidence, it can become a complementary mode driving research and helping designers make “unassailable” design proposals in complicated and risky situations.
If non-clinicians actually look at how the evidence behind medical practice is treated, they’d realize that no expert “lets the evidence decide.”
Reliance on clearly established precedent and the “literature” is a starting point for clinical decisions – diagnostics, medications and surgical therapies are complex decisions and require the best known answers before expert judgment is applied. The risks are too high not to. Yes, in hospitals residents execute much of this, and they don’t build long-lasting personal relationships. They are residents. But nurses, who have championed patient-centred care and tend to practice it philosophically even if it’s not standardized, demonstrate in many ways the affective and interpersonal qualities we associate with PCC.
Certainly clinicians who actually work in healthcare are not going to wish away evidence supported decisions anytime soon. When we seek to deliver design value at organizational and social/policy levels, we’re dealing with high degrees of complexity and the difficulty of sustaining a presence long enough to make a difference. Gaining agreement on courses of action is critical in these domains. Evidence helps us build the case for stakeholder agreement, especially across strongly contested views and positions, where power is involved or people have possible losses.
But service design and whole-system (integrated IT and process) design require both evidence-based and x-based approaches. And I’d like to hear more about what those other “x’s” are, because I have never seen a conflict between research-led design and exploratory design. They are usually different stages, though I will say that in corporate work I’ve found you rarely get paid to explore. In design school our students usually want to just explore and save evaluation for “later in the career.”
If we want to be trusted to work with mission-critical services and integrated systems, we need to get beyond our own prejudices of what these categories might mean. We have to read studies, learn from scientific research and design research, from our peers and dialogues. And I would make a case for integrated methods and multi-perspectives.