
How to read health news

Behind the Headlines

Tuesday January 6 2009

Not everything in black and white makes sense

By Dr Alicia White

If you’ve just read a health-related headline that has caused you to spit out your morning coffee (“Coffee causes cancer” usually does the trick), it’s always best to follow the Blitz slogan: “Keep Calm and Carry On”. On reading further, you’ll often find the headline has left out something important, such as, “Injecting five rats with really highly concentrated coffee solution caused some changes in cells that might lead to tumours eventually (study funded by The Association of Tea Marketing)”.

The most important rule to remember is: don’t automatically believe the headline. It is there to draw you into buying the paper and reading the story. Would you read an article called, “Coffee pretty unlikely to cause cancer, but you never know”? Probably not.

To avoid spraying your newspaper with coffee in the future, you need to analyse the article to see what it says about the research it is reporting on. Bazian (the company I work for) has appraised hundreds of articles for Behind The Headlines on NHS Choices, and we’ve developed the following questions to help you figure out which articles you’re going to believe and which you’re not.

 

Does the article support its claims with scientific research?

Your first concern should be the research behind the news article. If an article touts a treatment or some aspect of your lifestyle that is supposed to prevent or cause a disease, but doesn’t give any information about the scientific research behind it, then treat it with a lot of caution. The same applies to research that has yet to be published.

 

Is the article based on a conference abstract?

Another area for caution is if the news article is based on a conference abstract. Research presented at conferences is often at a preliminary stage and usually hasn’t been scrutinised by experts in the field. Also, conference abstracts rarely provide full details about methods, making it difficult to judge how well the research was conducted. For these reasons, articles based on conference abstracts should be treated with caution. Don’t panic or rush off to your GP on the strength of one.

 

Was the research in humans?

Quite often, the “miracle cure” in the headline turns out to have only been tested on cells in the laboratory or on animals. These stories are regularly accompanied by pictures of humans, which creates the illusion that the miracle cure came from human studies. Studies in cells and animals are crucial first steps and should not be undervalued. However, many drugs that show promising results in cells in laboratories don’t work in animals, and many drugs that show promising results in animals don’t work in humans. If you read a headline about a drug or food “curing” rats, there is a chance it might cure humans in the future, but unfortunately a larger chance that it won’t. So there is no need to start eating large amounts of the “wonder food” featured in the article.

 

How many people did the research study include?

In general, the larger a study the more you can trust its results. Small studies may miss important differences because they lack statistical “power”, and are also more susceptible to finding things (including things that are wrong) purely by chance.

You can visualise this by thinking about tossing a coin. We know that if we toss a coin the chance of getting a head is the same as that of getting a tail – 50/50. However, if we didn’t know this and we tossed a coin four times and got three heads and one tail, we might conclude that getting heads was more likely than tails. But this chance finding would be wrong. If we tossed the coin 500 times - i.e. gave the experiment more "power" - we'd be more likely to get a heads/tails ratio close to 50/50, giving us a better idea of the true odds. When it comes to sample sizes, bigger is usually better. So when you see a study conducted in a handful of people, treat it with caution.
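The coin-toss intuition above can also be checked by simulation. This illustrative Python sketch (not part of the original article; the function name and cut-off are choices made for the illustration) estimates how often a tiny experiment produces a misleading heads frequency compared with a large one:

```python
import random

random.seed(1)  # fixed seed so the illustration is reproducible

def misleading_rate(n_tosses, n_experiments=10_000, cutoff=0.25):
    """Estimate how often an experiment of n_tosses fair-coin flips
    gives a heads frequency at least `cutoff` away from the true 50/50,
    e.g. the three-heads-out-of-four result described above."""
    misleading = 0
    for _ in range(n_experiments):
        heads = sum(random.random() < 0.5 for _ in range(n_tosses))
        if abs(heads / n_tosses - 0.5) >= cutoff:
            misleading += 1
    return misleading / n_experiments

small = misleading_rate(4)    # 4 tosses: 0, 1, 3 or 4 heads count as misleading
large = misleading_rate(500)  # 500 tosses: would need <=125 or >=375 heads
print(f"4 tosses:   {small:.1%} of experiments mislead")
print(f"500 tosses: {large:.1%} of experiments mislead")
```

Under these assumptions the four-toss experiment gives a badly skewed heads frequency well over half the time, while the 500-toss experiment essentially never does: that is the extra "power" of a bigger sample.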

 

Did the study have a control group?

There are many different types of studies appropriate for answering different types of questions. If the question being asked is about whether a treatment or exposure has an effect or not, then the study needs to have a control group. A control group allows the researchers to compare what happens to people who have the treatment/exposure with what happens to people who don’t. If the study doesn’t have a control group, then it’s difficult to attribute results to the treatment or exposure with any level of certainty.

Also, it’s important that the control group is as similar to the treated/exposed group as possible. The best way to achieve this is to randomly assign some people to be in the treated/exposed group and some people to be in the control group. This is what happens in a randomised controlled trial (RCT) and is why RCTs are considered the “gold standard” for testing the effects of treatments and exposures. So when reading about a drug, food or treatment that is supposed to have an effect, you want to look for evidence of a control group, and ideally, evidence that the study was an RCT. Without either, retain some healthy scepticism.

 

Did the study actually assess what’s in the headline?

This one is a bit tricky to explain without going into a lot of detail about things called proxy outcomes. Instead, bear in mind this key point: the research needs to have examined what is being talked about in the headline and article (somewhat alarmingly, this isn’t always the case).

For example, you might read a headline that claims, “Tomatoes reduce the risk of heart attacks”. What you need to look for is evidence that the study actually looked at heart attacks. You might instead see that the study found that tomatoes reduce blood pressure. This means that someone has extrapolated that tomatoes must also have some impact on heart attacks, as high blood pressure is a risk factor for heart attacks. Sometimes these extrapolations will prove to be true, but other times they won’t. Therefore if a news story is focusing on a health outcome that was not examined by the research, treat it with a pinch of salt.

 

Who paid for and conducted the study?

This is a somewhat cynical point, but one that’s worth making. The majority of trials today are funded by manufacturers of the product being tested – be it a drug, vitamin cream or foodstuff. This means they have a vested interest in the results of the trial, which can potentially affect what the researchers find and report in all sorts of conscious and unconscious ways. This is not to say that all manufacturer-sponsored trials are unreliable. Many are very good. However, it’s worth seeing who funded the study to sniff out a potential conflict of interest.

 

Should you “shoot the messenger”?

Overblown claims might not necessarily be down to the news reporting itself. Although journalists can sometimes misinterpret a piece of research, at other times the researchers (or other interested parties) over-extrapolate, making claims their research doesn’t support. These claims are then repeated by the journalists.

Given that erroneous claims can come from a variety of places, don’t automatically assume they come from the journalist. Instead, use the questions above to figure out for yourself what you’re going to believe and what you’re not.

 

How can I find out more?

It’s not possible to cover all the questions that need to be asked about research studies in a short article, but we’ve covered some of the major ones. Visit some of the useful links above if you’re interested in finding out more.




Edited by NHS Choices

Comments are personal views. Any information they give has not been checked and may not be accurate.

MatthewXLY said on 13 February 2013

In response to collapsibeltank: as a sufferer of ADD (not ADHD, but they're related), I find it unfortunate that just because there's no "smoking gun" piece of evidence as to the cause, you're so quick to deny the existence of a very serious and debilitating mental illness.

Most mental illnesses are caused by environmental factors, usually emotional trauma, and therefore the piece of evidence you need may never come to light.

Unfortunately, I consistently find that the view you take is the prevalent one. Up until very recently, clinical depression and various anxiety disorders were treated in much the same way. And still are by too many.

I have to admit, I felt similarly to you until these issues started affecting me. It is difficult to imagine what these people go through and, trust me, you wouldn't want to know. But please, do not add insult to our injuries.

In relation to the actual article: rather worrying to find the HPA in among the stats!


aller said on 14 November 2010

Thank you for this interesting article. Now when I read the newspaper I will think about what I have read.
My daughter is training to be a nurse, so I will tell her about it.
Makes me wonder whether I should change my paper from the Daily Mail.


User492632 said on 14 October 2010

I think this applies to the recent newspaper articles which state that ADHD is a genetic disorder. The genetic link found was only tiny and it has been blown out of all proportion.
I also noticed the trial was done by a drug company who may have a vested interest in this so-called disorder.
Until there is a clear-cut scientific biological test which proves 100% that ADHD exists, I personally refuse to accept it as a medical condition.


collapsibeltank said on 12 May 2010

Having just surveyed all 37 Behind The Headlines articles in my RSS feed reader, I have frequencies for the news sources appearing in Behind the Headlines...

Health Protection Agency - 1
Daily Mirror - 2
The Guardian - 2
The Times - 3
The Independent - 4
The Daily Telegraph - 5
BBC News - 5

THE DAILY MAIL - 15

So a quick check for a questionable article would be "Am I reading the Daily Mail"...


robmat said on 08 February 2010

As a science journalist, I was glad to see the caveat about "not shooting the messenger". These days, the appearance of results in a well-regarded refereed journal is merely a necessary criterion for reliability - it certainly isn't sufficient.

You may want to amend your explanation of statistical power in terms of coin-tossing, as in its current form it's incorrect. As the number of coin-tosses increases, it's the _frequency_ of heads and tails that asymptotically approaches the theoretical value of 50-50 (at a rate proportional to the square-root of the number of tosses). Contrary to what your article states, the _absolute_number_ of heads and tails actually diverges from parity as the number of coin-tosses increases.

