Remembering our friend Ranulph Glanville

Ranulph Glanville presented his last major talk as our keynote speaker at RSD3 in Oslo, October 16, 2014 – the talk, and Ranulph, were historically rich, colourful, inspirational, intimate and witty. Dr. Glanville passed away December 20 after a brief stay in hospital following a rapid turn in the progression of his cancer. Although he studied under Gordon Pask for his first Brunel doctorate in 1975, I think of Ranulph as a first-generation systems thinker, since he was doing relevant work in the 1960s. He was an architect, cybernetics scholar, professor, and designer. He was very active in the cybernetics community, serving a five-year term as President of the American Society for Cybernetics, and was a lifelong researcher and brilliant raconteur in the scholarly worlds he participated in. Ranulph wrote at least 170 published papers (other accounts suggest over 300), and was a significant influence on the new generation of systems and cybernetics scholars.

His posted CV is informative and shows a huge breadth of work. His academia.edu page offers a number of papers, some of them his most recent, but remains impersonal and uninformative, reflecting his disregard for the so-called social era of communications. Ranulph’s Wikipedia entry looks impressive at first glance, but is woefully incomplete and sparse. Ranulph was a compelling live orator, yet these summaries share no links or references to his lectures and influential presentations. Ranulph did not participate in the contemporary social media frenzy (something he seems to have shared with most leading cybernetics scholars), yet he was quite social in the shared world of actual human activity. He and his wife Aartje freely spent hours with the RSD organizers, speakers, and colleagues while in Oslo, telling stories, challenging ideas, and meeting attendees.

I first heard of his passing from Ray Ison, president of the ISSS, who also wrote: “According to his wife Aartje, Ranulph died relatively free of pain and suffering. Over the past year Aartje and Ranulph have faced his fatal illness together, with admirable courage, and as undeterred in their regular day to day life as possible, traveling together, adding an extension to their home, and working tirelessly to support and enrich others.”

 

Caring for the Future: The Systemic Design of Flourishing Enterprises

Presented at OCADU Research Rendezvous, Nov 17, 2014

Abstract

Human commerce consumes the most significant share of natural resources and produces the largest aggregate impact on the earth’s environment. As a consequence of modern employment and work cultures, commerce and corporations, rather than governments, also construct much of the social contract and the social organizational forms of developed societies. Sustainable development movements to conserve resources and to democratize or enhance organizational practices have called for culture change or transformation.

However, these approaches have not yielded results that will significantly enhance human flourishing in the face of globalized commerce, which has no common governance system. We suggest that the goals of alignment toward sustainable development or so-called corporate sustainability are misguided and systemically depreciative, as they purport to sustain activities that foreseeably accelerate ecological degradation. We propose a modeling practice for stakeholder design of strongly sustainable enterprises, with the intention of whole-system flourishing across living ecosystems and organized social systems. This systemic design approach to business transformation functions at the level of the business model. We claim that business model design affords the highest leverage across all modes of organizing for the collective cultural adoption of ecosystemic practices.

What Counts as Evidence in Healthcare Service Design?

Evidence-Based Service Design

From Chapter 5, Design for Care

Inpatients spend most of their time during hospital treatment and recovery in a bed, often in a shared room. It might be obvious to anyone who has ever visited a hospital room that the experience of resting and waiting in this sterile, artificial environment could be greatly improved. The everyday experience of the inpatient consists of encounters with the physical environment—from building architecture to room layout and furniture—and the presentation of services within a hospital and specialized clinics. Most of these environments leave a lot to be desired, and designed.

What if it could be proven that people heal demonstrably faster and better in beautiful, well-cared-for physical environments? Architecture professor Roger Ulrich’s research into the direct beneficial health effects of natural and harmonious surroundings during care provides hard evidence to support this theory.
Ulrich’s 1984 article “View Through a Window May Influence Recovery from Surgery” was the first scientific recognition of the capacity of nature to accelerate healing in the clinical environment.4 Taking a patient-centric perspective from the outset, Ulrich demonstrated that natural scenes and emotionally supportive artwork facilitate significantly faster recovery and shorter hospital stays. Access to real plants and natural scenery generated the best results. Artwork content, not quality, makes an empirical difference. Scenes of nature, placid landscapes, and animals measurably quickened the healing process.

Ulrich also formulated the theory of supportive design, proposing that healthcare environments can improve outcomes directly by promoting stress reduction, buffering, and coping. Chronic stress severely limits the rate of recovery and delays the release of patients in acute and critical care. Supportive design focuses on the physical and interactive characteristics of the hospital to reduce the occurrence or impact of stressors such as intrusions, constant noise, loud or unwanted sounds, and cluttered or sterile visual fields. There are a multitude of passive service touchpoints for increasing comfort, ease, and even intimacy. The conventional models of systems workflow in service processes can be detailed to show anticipated patient sensitivity and responses to touchpoints (both figurative and literal points).

Evidence for Design

Ulrich’s research originated evidence-based design (EBD) in the early 1980s; the field started in hospital architecture and expanded to care processes, practice interventions, and device design. As in evidence-based medicine, the current “best evidence” available in literature and practice is recruited to support decision making, with the explicit goal of optimizing defined health outcomes in measurable terms.

Evidence-based medicine (EBM) relies on the highest appropriate standard of evidence for a clinical decision. The cultural and regulatory adoption of EBM has created an environment of risk and uncertainty reduction that demands a similar clinical standard of evidence for any research-based decision, including IT or services. For well-known diseases, the randomized clinical trial (with blind control and experimental conditions) represents the “gold standard” of evidence. Needless to say, that standard does not apply in design research. But it conditions the culture in which design decisions are made.

What is the appropriate level of research rigor for design decisions for health? A major goal of EBD is to ensure research is transparent and replicable, so that other institutions can learn from the study. A secondary aim is communicating the effectiveness of service design and research methodologies through institutional research, demonstrating validity and appropriate applications in the professional literature.
The interface of a health records system could be measured not only through formative usability testing but also by its actual performance in clinical service. Progressive hospitals with strong informatics departments, such as the Children’s Hospital of Philadelphia, measure and track patient outcomes as a hard metric to determine the degree to which defined health measures are improved by their IT investment. If design research methods remain proprietary or applied only in commercial engagements, the value of design-led research will not be fulfilled on a wider scale.

EBD is not just the gathering of user research to inform design decisions, as in a user-centered design process. It is a rigorous equivalent to the careful application of scholarly evidence in informing care decisions, and generally includes the following activities:

  • Reviewing current and retrospective research to identify precedents, mature findings, and prescriptive guidance from relevant studies.
  • Prioritizing and balancing the literature basis with primary data collected from patients, subject matter experts, and professional observations.
  • Advancing theories and hypotheses to support observations, and structuring evaluations to test outcomes of design decisions.
  • Measuring outcomes following implementation, and assessing theory validity and any gap between observations and hypotheses.

What Evidence Counts in Healthcare UX?

Few design disciplines face the risk position of architecture and device design in health institutions. Devices must be approved by FDA review committees, and architects certify their designs and are responsible for the plans followed by builders. Software user interfaces designed for standard operating systems are not subject to equivalent certification, and even EMR systems, which manage the data on which clinical decisions rely, are not certified by open review.

Does it make sense for UX design to adopt evidence-based principles in healthcare?
Design trade publications and UX blogs show growing interest in EBD. The typical claims made for evidence (largely qualitative) do not build credibility when the evidence is based merely on user observations. Usability testing is an essential evaluation method, and is especially useful when applied across the installation lifecycle of health IT. Yet usability testing cannot measure health outcomes, only user inputs and responses to interactive systems. In a sociotechnical context, it is supplemental but not sufficient evidence. The quality of evidence for assessing interaction design relies on the behavioral measures of task completion, error rates, and time on task. User satisfaction and, occasionally, learning rate measures are collected as evidence for design decision making. Yet no website claims to improve one’s life and health following the interaction.
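As a minimal sketch of what this behavioral evidence looks like in practice, the following Python fragment summarizes task completion rate, error rate, and time on task from hypothetical session logs. The field names and values are illustrative assumptions, not drawn from any particular study.

    # Minimal sketch: summarizing behavioral usability measures from
    # hypothetical session logs. Field names and values are illustrative.
    from statistics import mean

    sessions = [
        {"task": "order_entry", "completed": True,  "errors": 1, "seconds": 142},
        {"task": "order_entry", "completed": True,  "errors": 0, "seconds": 118},
        {"task": "order_entry", "completed": False, "errors": 3, "seconds": 305},
    ]

    completion_rate = sum(s["completed"] for s in sessions) / len(sessions)
    error_rate = mean(s["errors"] for s in sessions)
    time_on_task = mean(s["seconds"] for s in sessions)

    print(f"Completion: {completion_rate:.0%}, "
          f"mean errors: {error_rate:.1f}, "
          f"mean time on task: {time_on_task:.0f}s")

Note that nothing in these measures speaks to health; they describe the interaction only, which is exactly why they remain supplemental evidence.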

Through iterative research on prototyped sociotechnical systems, better design can demonstrate outcomes based on measures of patient health. Measuring outcomes in response to service change requires longitudinal tracking of patients across cohorts large enough to measure an aggregate difference. This is unlike any process in Web or interaction design. It suggests a gap in health service design research methods. When lives, liability, and dollars are at risk, the due diligence of quantified evidence is necessary to ensure decisions are supportable across the organization.
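To illustrate the scale of that due diligence, here is a hedged sketch of an aggregate outcome comparison: a two-proportion z-test on 30-day readmission rates between a pre-change and a post-change cohort. The cohort sizes, rates, and the outcome measure itself are hypothetical, chosen only to show the shape of the analysis.

    # Hypothetical sketch: does a service change shift an aggregate
    # outcome (30-day readmissions) between two patient cohorts?
    from math import sqrt, erf

    # Illustrative cohort data, not from any real study.
    before = {"patients": 1200, "readmitted": 168}   # 14.0%
    after  = {"patients": 1150, "readmitted": 127}   # 11.0%

    p1 = before["readmitted"] / before["patients"]
    p2 = after["readmitted"] / after["patients"]
    pooled = ((before["readmitted"] + after["readmitted"])
              / (before["patients"] + after["patients"]))
    se = sqrt(pooled * (1 - pooled)
              * (1 / before["patients"] + 1 / after["patients"]))
    z = (p1 - p2) / se

    # Two-sided p-value from the normal approximation.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    print(f"before={p1:.1%} after={p2:.1%} z={z:.2f} p={p_value:.3f}")

In practice a biostatistician would adjust for case mix and secular trends; the point is that cohorts on this order of magnitude, tracked longitudinally, are what credible outcome claims require – a scale foreign to most Web and interaction design research.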

Evidence for Design Outcomes

In medical practice, scholarship, care planning, and design decisions, “evidence” differs significantly—not only in the types of evidence, but in its definition and collection, quality evaluation, controls, presentation, and publication. For clinical decision making, typically accepted evidence ranges from randomized controlled trials (the gold standard) to expert studies (weak), including a variety of types of evidence (observations, imaging, measured variables) relevant to diseases, biological responses, and applications to procedures, interventions, and public health.

Traditional operational studies rely on “outcome measures,” but the measures taken often assume a causality between interventions and outcomes—a causality that may not always exist in a complex reality. Some clinics invest heavily in biostatistics research to infer outcomes associated with interventions, a complex but increasingly necessary undertaking.

Evidence by type—from more to less rigorous—might include:

  1. Controlled human interaction experiments; mixed method studies (triangulated methods)
  2. Patient observations, physiological measures, field experiment data (strong empirical)
  3. Robust sampled ethnographic data; controlled usability interaction studies (empirical)
  4. Small sample interviews, “hard” (rigorous, evidence-based) case study, extrapolations from field research
  5. Expert opinion, heuristic or multiperspective assessment
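If a team wanted to make this hierarchy operational – for instance, screening a literature review for studies at or above a minimum rigor during the review activity described earlier – it could be encoded as an ordered type. A minimal sketch in Python; the enum names, example studies, and threshold are illustrative assumptions, not a published standard.

    # Sketch: encoding the evidence hierarchy above as an ordered enum,
    # so candidate studies can be screened by minimum rigor.
    from enum import IntEnum

    class Evidence(IntEnum):
        EXPERT_OPINION   = 1   # heuristic or multiperspective assessment
        SMALL_SAMPLE     = 2   # interviews, hard case study, field extrapolation
        EMPIRICAL        = 3   # sampled ethnography, controlled usability studies
        STRONG_EMPIRICAL = 4   # observations, physiological measures, field experiments
        CONTROLLED       = 5   # controlled experiments, triangulated mixed methods

    # Hypothetical study list, for illustration only.
    studies = [("fall-risk alert redesign", Evidence.CONTROLLED),
               ("waiting-room signage review", Evidence.EXPERT_OPINION)]

    # Keep only studies at or above a chosen rigor threshold.
    admissible = [name for name, level in studies
                  if level >= Evidence.EMPIRICAL]
    print(admissible)   # ['fall-risk alert redesign']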

With the growth of the Web and the number of people needed to build it, the UX field has expanded well beyond the original human factors community that started it. The widespread adoption of “user experience” glossed over many of the original distinctive differences between practices. Although this merger has won broader acceptance for the practice, general acceptance matters less in high-hazard, high-reliability settings. Today research professionals constitute an ever-shrinking proportion of the field. A different “standard of care” is necessary when designing a system for clinical professionals or patients, rather than for consumers.

EBD established a rigorous research methodology for healthcare applications, understood in principle by practitioners. But not every informatics user interface or health website requires strong EBD validation. An insistence on EBD standards in the early stages of a design program could significantly inhibit the innovation value of exploratory research. Allow formative design and organizational learning to at least reach a stage of development where summative evaluation of prototypes and service concepts makes sense.

Edge Practices: How Do You Measure Value?

Every design discipline (even industrial design) has had to develop its best rationale for the question “why should we keep/hire/use you guys?” If they keep asking “what is it you do again?”, you may have more work to do on strategic communication.

Emerging practices will always be considered marginal at first within an organization. Practices (not processes) are formed from collective agreement, and are often tacitly formed. When an organization first develops a core process, such as project management, it starts as a practice. Outcomes are uncertain, and its value relative to cost is still being assessed. When a practice, such as user experience, becomes vital to every project, it becomes formalized as a process.

You can tell practices by their small in-groups: communities of practice that form to develop methods, ensure quality, and share learning, as in nearly all user experience groups in the past. (I can speak as having been the “first one” to develop UX capacity in several organizations.)

This agreement of practice requires enrollment of participants over a period of time. You usually don’t “pitch” your own organization to accept your practice, as the resulting win may be brittle. Marginal, edge practices have to win over internal clients by demonstrating value, extraordinary value.

I think it’s significant that most large organizations have no formalized design processes; they have – at best – practices in different locations. Yes, the major product companies have UX teams, but I’d remind anyone that Google had a very marginal design practice until only recently.

Systemic design is marginal as a discipline: there’s no global practice community, no professional societies, no standards for method, no canon or agreed body of knowledge. This keeps things fun and lively, and as we hold the third Relating Systems Thinking and Design symposium this coming week in Oslo, it reminds me of the early days of CHI (I was at the second conference, and it had about the number of people we have attending now).

RSD is an annual symposium of academics, students, and advanced practitioners that embraces the discipline at the convergence of systemics and design thinking. We have a strong service design contingent, but it is not the same thing, and we don’t leverage UX and service design methods as you might expect. Systemic design is a hybrid of design disciplines that draw on systems thinking – and as it matures, it’s becoming its own practice.

Systemic Design in the Organization

In the years when we built custom information systems and software, we developed something of a systems design discipline that produced models for system structure, custom interface design, distribution, custom networking, even custom workstations. There were (and still are) systems engineers. But system designers went away as the micro revolution put a PC, then a laptop, on every desk. And while we all trained in systems analysis and design (for systems and software), only a rare few worked with system dynamics (operations research), soft systems, or system sketches. These were used in government, in labs, and at IBM – in only a few advanced firms. Now we mean something different by system design.

However, we have to train organizations to recognize this function as core to their business, and for the most part, it’s not recognizable to corporations. But then, neither are ethnographers, and yet there is a thriving corporate ethnographic practice community.

So, as with all design practices and a lot of research, job titles may vary. It can take time for the business case of a new competency to be valued, even if you develop a good rationale. And since we espouse team collaboration, when our leadership is successful our (internal) customers will believe they did the work themselves. What was good for Lao Tze (who defined that as the essence of leadership 2,500 years ago) is not going to help what we called, in AT&T Bell Labs, your “body count.”* You need metrics and measures (these ARE different, and you will need both).

*(Our manager was a wry Vietnam vet who reminded us that our internal metrics came from the same mentality that presented body counts on the 1970s TV news to show how America was “winning” the war.)

The Evolution of Edge Practices

With an edge practice you have to discover what your sponsors value and deliver “body counts” – numbers or values – that help them make their business case to their sponsors. Find out what your customers have to deliver on, and help them deliver on that promise. If you cannot find that fit, then you might have to find new customers.

Every design and creative discipline has this problem at first. My career followed that developmental arc of value creation. I started as a human factors MA, moved into software design, systems engineering design, and then web product design, combining research and design in each role. Much of this was delivering what the organization required: boring specifications and CASE systems analysis by the sheaf. Real design work was always vulnerable, as software was (and mostly still is) all about the engineering and shipping. While I always found ways to conduct user research, even if guerrilla street testing, user engagement value didn’t count as much as you’d think – not until the Web 2.0 networked era and real-time analytics.

Human factors – with an experimental and evidence-based approach to system design – took a good decade to prove its value in US firms, and we had the ready metrics and measures to prove that value. That’s what we did back then: hard-core, behaviorist-style observational testing. (And even then, with its formidable research rigor, usability testing was exotic. After a failed internal project of mine to build a usability lab at a former employer, my company’s CEO wrote a million-dollar check to his (and my) alma mater, the University of Dayton, for a world-class usability lab. For a while it was probably the best in the world, and so was the very definition of exotic – nobody really knew how to use its full capability. This was in 1991, while I was still in human factors grad school.) Within three years, I had built a competent lab for $20K. By the time we started building web products in 1995, we were already doing field usability instead. That’s a pretty rapid depreciation curve.

So this question will keep coming up in systemic design. When value is delivered by creating collaborative engagements across stakeholders, we have to understand how they value and measure collaboration. Are the outcomes better projects and programs, better strategies and planning, faster time to delivery, making the right decisions earlier? There are ways to show these values, but we can’t measure everything. You want to measure what sponsors value most, and demonstrate how your practices delivered that value.

(For a case study on the causes of, and recovery from, total market failure, see We Tried to Warn You.)