Two exhibits shall suffice, methinks.
Exhibit A: The Strange Case of the Oldest Homo Sapiens That Weren’t
The world went haywire last week with breathless reports that ZOMG ALL UR HOOMIN EVOLUSHUNS HAZ CHANGED!!1!11!! Back in the bad old days when all I had access to was the MSM, I might have gotten sucked in by the hype. After years of reading science blogs, though, I just sat back and waited.
And, sure enough, on December 28th, there was Brian Switek on Twitter, on the case:
Scientists claim 400,000 year old Homo sapiens teeth found in Israel cave, older than African H. sapiens BUT… [1/2]
… paper abstract draws closer comparison to Neanderthals and indeterminate types like the Skhul/Qafzeh hominins [2/2]
Haven’t read full paper – no access – but I have to wonder if the popular presentation is hyped beyond paper’s conclusions
Scicurious took care of the “no access” problem. And, within hours, Brian had taken a gargantuan pin to a very over-inflated balloon:
A handful of fossil teeth found in Israel’s Qesem Cave, described in the American Journal of Physical Anthropology and attributed to 400,000 year old members of our own species in multiple news reports, are said to rewrite the story of human evolution. This discovery doubles the antiquity of Homo sapiens, the articles say, and identify a new point of origin for our species. “Find in Israeli cave may change evolution story” proclaims The Australian, while the Daily Mail asks and answers “Did first humans come out of Middle East and not Africa? Israeli discovery forces scientists to re-examine evolution of modern man.” (The Jerusalem Post, by comparison, went with the tamer “Homo sapiens lived in Eretz Yisrael 400,000 years ago.”) As is often the case, though, the hype surrounding this find far outstrips its actual significance.
Then, for good measure, Carl Zimmer piled on. I’m sure plenty of others leaped into the fray, but those were the first two I read, and they suffice to prove my point. We bloody well need science bloggers. Why? Because science journalists, the supposed professionals, are so fucking busy misreading and misleading that they give science repeated black eyes: they take carefully-hedged papers, filled with cautions and warnings against jumping to conclusions, and hype them out of all proportion. Carl called it “journalistic vaporware,” which rings true. It’s disgusting is what it is.
And it used to be a person had three choices: accept, discount, or pay through the nose to access the original scientific papers and then try to wrangle sense from material not written in layman’s language.
Now we have science bloggers, who know science, love science, speak its lingo, and have the access and the tools to investigate and report back to the rest of us. Because of them, I’ve learned not to believe the hype, and to refrain from hyperventilating until one of them weighs in.
Which brings me to Exhibit B: The “Placebo Effect” Effect
Twitter absolutely blew up with links to various and sundry reports about some paper claiming the placebo effect worked even when people knew they were taking a placebo. I can’t report on how some of the sources Bora linked were covering it, because I was busy getting me arse kicked by other links (I’ve been busy, damn it). But I saw other babble floating around that seemed to take it at face value. Meh. It was either a solid study or it wasn’t, and I wasn’t too fussed about it: whether it had merit or sucked leper donkey dick, it would eventually land on one of the medical sites that are part of my regular reading schedule.
I love it when I’m right:
Dr. Gorski eviscerates both the study and the breathless hype. Yes, the study showed some interesting things. No, it didn’t prove that placebo works sans deception. The study had a rather fatal flaw:
No, the reason I say this is because, all their claims otherwise notwithstanding, this study doesn’t really tell us anything new about placebo effects. The reason is that, even though they did tell their subjects that the sugar pills they were being given were inert, the investigators also used suggestion to convince their subjects that these pills could nonetheless induce powerful “mind-body” effects. In other words, the investigators did the very thing they claimed they weren’t doing; they deceived their subjects to induce placebo effects by exaggerating the strength of the evidence for placebo effects and using rather woo-ish terminology (“self-healing,” for instance). Here’s how the investigators describe what they told their patients:
Patients who gave informed consent and fulfilled the inclusion and exclusion criteria were randomized into two groups: 1) placebo pill twice daily or 2) no-treatment. Before randomization and during the screening, the placebo pills were truthfully described as inert or inactive pills, like sugar pills, without any medication in it. Additionally, patients were told that “placebo pills, something like sugar pills, have been shown in rigorous clinical testing to produce significant mind-body self-healing processes.” The patient-provider relationship and contact time was similar in both groups. Study visits occurred at baseline (Day 1), midpoint (Day 11) and completion (Day 21). Assessment questionnaires were completed by patients with the assistance of a blinded assessor at study visits.
This is a description of the script that practitioners were to use when discussing these pills with subjects recruited to the study:
Patients were randomly assigned either to open-label placebo treatment or to the no-treatment control. Prior to randomization, patients from both groups met either a physician (AJL) or nurse-practitioner (EF) and were asked whether they had heard of the “placebo effect.” Assignment was determined by practitioner availability. The provider clearly explained that the placebo pill was an inactive (i.e., “inert”) substance like a sugar pill that contained no medication and then explained in an approximately fifteen minute a priori script the following “four discussion points:” 1) the placebo effect is powerful, 2) the body can automatically respond to taking placebo pills like Pavlov’s dogs who salivated when they heard a bell, 3) a positive attitude helps but is not necessary, and 4) taking the pills faithfully is critical. Patients were told that half would be assigned to an open-label placebo group and the other half to a no-treatment control group. Our rationale had a positive framing with the aim of optimizing placebo response.
How is this any different from what is known about placebo responses? I, for one, couldn’t find anything different. It’s right there in the Methods section: The authors might well have told subjects that they were receiving a sugar pill, but they also told them that this sugar pill would do wonderful things through the power of “mind-body” effects, as though it was entirely scientifically clear-cut that it would.
That, my darlings, is deception. That’s telling folks the sugar pill is magic. And we all know that when you hand people a pill and tell them it’s magic, a subset will believe, and heal themselves. The only real difference was that in this study, the bottles of sugar pills actually said “PLACEBO” on them.
POP goes another overinflated response to a study that didn’t merit the hype. Thing is, without science bloggers, us regular joes wouldn’t know any better. And without science bloggers, us regular joes would be pissed next week or next month or next year when future studies are breathlessly hyped as proving these studies completely fucking wrong. Science bloggers bring us back down to earth. They help us understand what the science was really saying, analyze the studies for flaws that might cast doubt on sensational conclusions, and show us how good science (and good science reporting!) is actually done.
That, my darlings, is why we so desperately need science bloggers. And to all of you science bloggers in the audience: thank you, thank you, a million times, thank you.