Wednesday, March 30, 2016

After listening to a lecture, third-year students at the Harvard School of Dental Medicine were surveyed about distractions by electronic devices and given a 12-question quiz. Although 65% of the students admitted to having been distracted by emails, Facebook, and/or texting during the lecture, distracted students had an average score of 9.85 correct compared to 10.444 for students who said they weren't distracted. The difference was not significant, p = 0.652.

In their conclusion, the authors said, "Those who were distracted during the lecture performed similarly in the post-lecture test to the non-distracted group."

The full text of the paper is available online. As an exercise, you may want to take a look at the paper and critique it yourself before reading my review. It will only take you a few minutes.

As you consider any research paper, you should ask yourself a number of questions: Are the journal and authors credible? Were the methods appropriate? Were there enough subjects? Were the conclusions supported by the data? And do I believe the study?

Of course, many more questions could be included. Google "how to critique an article," and you will find numerous lengthy treatises on the subject.

The paper appears in PeerJ, a fairly new open access journal with a different format. Authors have to pay to have papers published, but they can opt for reasonably priced lifetime membership plans that include varying numbers of papers.

It’s too new to have an impact factor, but stats on the website state that the paper has had over 2,700 views and been downloaded 76 times.

The authors are from Harvard, so they must be credible.

The study is described as quasi-experimental, meaning not randomized. That is not necessarily bad, especially since it is also described as a pilot study.

The main problem with the paper is that it was underpowered to detect a difference because there were only 26 subjects, 17 distracted and 9 not. The authors effectively accepted the null hypothesis (that distractions do not affect test scores) as true; failing to detect a real difference because the sample is too small is what statisticians call a Type II error.
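For a rough sense of just how underpowered 26 subjects is, here is a back-of-the-envelope power calculation in Python (my own sketch, not from the paper), assuming a two-sample t-test at alpha = 0.05 and a conventional "medium" effect size of Cohen's d = 0.5, since the true effect size is unknown:

from statsmodels.stats.power import TTestIndPower

# Back-of-the-envelope check, not from the paper: assumes a two-sample t-test,
# alpha = 0.05, and a conventional "medium" effect size (Cohen's d = 0.5).
analysis = TTestIndPower()

# Power actually achieved with 17 distracted vs. 9 non-distracted students
achieved = analysis.solve_power(effect_size=0.5, nobs1=17, ratio=9/17, alpha=0.05)
print(f"Achieved power with 17 vs. 9 subjects: {achieved:.2f}")  # roughly 0.20

# Subjects needed per group for the customary 80% power
needed = analysis.solve_power(effect_size=0.5, power=0.80, ratio=1.0, alpha=0.05)
print(f"Subjects needed per group for 80% power: {needed:.0f}")  # roughly 64

Under those assumptions, the study had only about a one-in-five chance of detecting a real difference, and roughly 64 students per group would have been needed to reach the customary 80% power.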

Other issues with the paper are that distracting behaviors may have been underreported by the students, the test questions may have been too easy, and the two groups may have differed in their baseline knowledge of the material. Harvard dental students may not be representative of students or people in general. A couple of my colleagues on Twitter suggested that the lecture could have been either so good, or so bad, that paying total attention was unnecessary. PeerJ has a 70% acceptance rate for submissions.

Did I mention that one of the two authors of the paper is an "Academic Editor" for the journal?

Bottom line: This paper should not convince you that distractions by electronic devices are not harmful to learners.

