Are EHRs (Electronic Health Records) Error Inducing Machines?
I believe we are “not yet there.” As long as the Medicare specification known as the “meaningful use” criteria ignores design, usability, and the propensity for foreseeable error, we are not even close. The key quote from Sinsky illustrates the problem from the point of care and clinical use:
While Google Desktop Gadgets make access to information as unfettered as possible, HIT systems often sequester individual tidbits of information at the ends of nonintuitive labyrinths, with needless hurdles along the way. Clinicians need clear access to priority information, not an obstacle course. As an EHR user, I have to keep a thought in mind until, five clicks and two screens later, I can find related information. Then, I have to park all of this developing thought for four more clicks, three screens and a slow download until I get to the screen where I can take action. Repeat this process for dozens of tasks per patient and you have an environment that facilitates error and inefficiency.
However, in terms of her design process recommendations, I’d take issue with the big-brand names she recommends EHR vendors should learn from:
So let’s get Google on it, and bring in a Toyota mindset while we are at it. Toyota understands the need for lean, streamlined workflow. Toyota understands that even a minor unnecessary step that causes worker fatigue or extra time is a problem. Many steps in HIT are redundant. They cause worker fatigue and monopolize time.
No, please – Google’s information interfaces are wretched, even if they have the bomb back end. They are engineers with no design sense. The better example is actually Microsoft, as much as I used to dismiss them. They have done their homework on usability, reliability, and information integration. Google relies on goodwill and monopoly – they are becoming the new Microsoft.
And I’m not sure Toyota’s lean or other processes are representative of the high-reliability processes needed in healthcare. Redundancy and multiple touchpoints are effective in HRO (high-reliability organization) systems; we need to make sure busy clinical staff don’t miss things. Clinical work requires attention to distributed cognition as much as or more than individual cognition.
Attention Overload and the Overabundance of Healthcare Information
The usability gap is not just with EHRs, of course. The web does not automatically make things better. Every informatics vendor in healthcare services has a different interface, a different information model, and different applications for their resources. While a patient can use the Mayo Clinic, WebMD, e-Cleveland Clinic, Everyday Health, and so forth, doctors have dozens of specialized resources available online through the hospital’s library or intranet. But they may be too busy to use the resources available.
These days I actually find fewer and fewer senior physicians (that I speak with) using smartphones for clinical information. Typically I hear they use “ePocrates” or another drug search tool, but that’s it. Doctors, like most professionals, are overwhelmed with data and with packaged information as well. The book chapters I’m writing now deal with these issues. Residents, even senior residents, are in training and will use online resources more. Senior clinicians are not using the plethora – and there is quite a plethora – of resources as much.
Where is all this going?
It will get worse first. The next 3 years will see a shakeout of everything in the professional marketplace. I can already get the clinical handheld version of Medscape as a free app on the iPhone – and maybe residents would use that. But the mere provision of “easy to use” clinical information on the web and phones does not solve the problem. I would not be surprised to see a revolt of some sort, a lockdown of apps in the institution so that a small number of preferred resources are used.
The VA medical centers have done this for some time on their CPRS menu for CDSS (Clinical Decision Support). Clinicians need only focus on a few supported resources, although new ones can be requested and installed at different centers (or regional VISNs).
The medical literature is even worse to navigate. Senior medical experts and clinical researchers have an increasingly difficult time getting to the articles that matter. Publication inflation and the proliferation of less-than-citable journals have allowed for a huge growth of mediocre research that clutters attention and bandwidth.
For two years or so I’ve been working on a process with U of Toronto’s Dr. Peter Pennefather to address this problem. We call this system the Interpretive Collaborative Review (ICR); it is currently a rough prototype. The ICR process is designed to guide collaborative profiling, scoring, and recording of the jointly perceived significance of online publications. The goal is to make explicit a group’s perspective on, and understanding of, the value of the claims presented in these publications.
We address the problem of an overabundance of published, Internet-accessible health science research information. The overloaded research ecosystem presents a major challenge for identifying the subset of that information adequate for informing specific healthcare choices. This challenge is further complicated when accounting for varying or multiple perspectives in the governance of those choices.
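The profiling-and-scoring idea can be illustrated with a minimal sketch. This is a hypothetical illustration only – the reviewer names, score scale, and aggregation are my own assumptions, not the actual ICR data model: each reviewer records a significance score for a publication, and the aggregate makes the group’s perspective, including its disagreements, explicit.

```python
from dataclasses import dataclass, field
from statistics import mean, stdev

# Hypothetical sketch of a collaborative review record. The real ICR
# prototype's data model is not published; names and scale are invented.
@dataclass
class PublicationReview:
    title: str
    # reviewer name -> significance score (assumed 1-5 scale)
    scores: dict = field(default_factory=dict)

    def add_score(self, reviewer: str, score: int) -> None:
        self.scores[reviewer] = score

    def group_profile(self) -> dict:
        """Make the group's perspective explicit: central tendency plus spread."""
        values = list(self.scores.values())
        return {
            "mean": mean(values),
            "spread": stdev(values) if len(values) > 1 else 0.0,
            "n_reviewers": len(values),
        }

review = PublicationReview("Example trial report")
review.add_score("clinician_a", 4)
review.add_score("methodologist_b", 2)
profile = review.group_profile()
```

Here a mean of 3.0 with a large spread would flag a publication whose value the group perceives very differently – exactly the kind of disagreement a collaborative review process would want to surface rather than average away.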
We consider this a problem best resolved by collaborative sensemaking (for more, see prior post).