Sunday, May 1, 2016

Recognition

The following is based on an actual case that occurred a long time ago in a galaxy far, far away.

A 65-year-old man arrived in the emergency department by ambulance after being found unresponsive. His respiratory rate was 40/minute, heart rate was 170/minute, and temperature was 102.2°F. He did not respond to Narcan or an ampule of 50% dextrose. Blood sugar was 600 mg/dL. A diagnosis of diabetic ketoacidosis was made, and IV fluids and an insulin drip were given. After some hydration he became more alert and complained of abdominal pain. On examination, his abdomen was tender to palpation. Four hours after arrival, a surgical consultant was called and diagnosed an incarcerated inguinal hernia. Before the patient could be taken to surgery, he suffered a cardiac arrest and could not be resuscitated. Review of the case revealed that although blood cultures were drawn and eventually came back positive, antibiotics had never been ordered.

What happened? The possibility that this patient was septic never occurred to the doctors managing the case. I am sure that if a scenario like this had appeared on a test, those doctors would have immediately chosen the right antibiotics. Some doctors are "book smart" but can't deal with a real live patient.

Although the doctors didn't do a very thorough abdominal exam at first, the real problem here was recognition.

I was reminded of this case by a recent article about a 2013 paper that appeared in the journal Human Factors. The paper, "The Effectiveness of Airline Pilot Training for Abnormal Events," pointed out that pilots doing their periodic training know that certain crises—stalls, low-level wind shear, engine failures on takeoff—are part of every simulator session and will occur in predictable ways.

The authors presented those same situations in unexpected ways, measured the pilots' reactions, and found that even experienced pilots responded less skillfully than they did when the events occurred in their familiar, scripted forms.

From the paper: Our control conditions demonstrate that pilots’ abilities to respond to the “schoolhouse” versions of each abnormal event were in fine fettle. The problems that arose when the abnormal events were presented outside of the familiar contexts used in training demonstrate a failure of these skills to generalize to other situations.

They suggested four ways to improve training and testing.

1) Change it up. In other words, don't practice things the same way every time.

2) Train for surprise.

3) Turn off the automation. Don't let pilots depend on automated systems to help them recognize what is going on, because if those systems fail, they will have trouble dealing with the situation.

4) Reevaluate the idea of teaching to the test, which can "present the illusion that real learning has taken place when in fact it has not."

Item #3 is particularly relevant because of recent interest in the negative effects automation is having on pilots, and possibly on society in general. The 2009 crash of an Air France plane into the Atlantic Ocean has been analyzed in several recent publications.

The cockpit voice recorder transcript is chilling. In a storm, the autopilot disengaged, and the plane stalled. Three pilots failed to recognize what had happened and did all the wrong things.

I have been saying for years that we need to teach med students and residents how to think. Recognition of rare events would be a good area to focus on.
