Heart Smarter for Women – by Dr. Jennifer Mieres

10almonds is reader-supported. We may, at no cost to you, receive a portion of sales if you purchase a product through a link in this article.

Dr. Mieres takes us through understanding our own heart disease risks as individuals rather than as averages. As the title suggests, she does assume a female readership, so if you are a man and have no female loved ones, this might not be the book for you. But aside from that, she walks us through examining risk in the context of age, other health conditions, lifestyle factors, and so forth—including not turning a blind eye to factors that might intersect, such as a physical condition that limits how much we can exercise, or some reason we can't follow the usual gold-standard heart-healthy diet.

On which note, she does offer dietary advice, including recipes, meal-planning, what to always have in stock, and what matters most when it comes to what and how we eat.

It’s not all lifestyle medicine though; Dr. Mieres gives due attention to many of the medications available for heart health issues—and the pros and cons of these.

The style of the book is very simple and readable pop-science, without undue jargon, and with a generous glossary. As with many books of this genre, it does rely on (presumably apocryphal) anecdotes. An interesting choice for this book, though, is that it keeps a standing cast of four recurring characters, each representing a set of circumstances, to illustrate how certain things can go differently for different people, with different things then being needed and/or possible. Hopefully, any given reader will find themself represented at least moderately well somewhere in or between these four characters.

Bottom line: this is a very informative and accessible book that demystifies a lot of common confusions around heart health.

Click here to check out Heart Smarter For Women, and take control of your health!

Don’t Forget…

Did you arrive here from our newsletter? Don’t forget to return to the email to continue learning!

Recommended

  • Body by Science – by Dr. Doug McGuff & John Little
  • The Dopamine Precursor And More
    N-Acetyl L-Tyrosine (NALT) is an amino acid used by the body to make neurotransmitters like dopamine and norepinephrine. It can enhance cognitive performance in stressful situations.

Learn to Age Gracefully

Join the 98k+ American women taking control of their health & aging with our 100% free (and fun!) daily emails:

  • The Wandering Mind – by Dr. Michael Corballis


    Our mind’s tendency to wander can be a disability, but could it also be a superpower? Dr. Corballis makes the case for such.

While many authors focus on, well, how to focus, Dr. Corballis argues in this book that our wandering imagination can be more effective at problem-solving and creative tasks than a focused, blinkered mind.

    The book’s a quick read (184 pages of quite light reading), and yet still quite dense with content. He takes us on a tour of the brain, theory of mind, the Default Mode Network (where a lot of the brain’s general ongoing organization occurs), learning, memory, forgetting, and creativity.

    Furthermore, he cites (and explains) studies showing what kinds of “breaks” from mental work allow the wandering mind to do its thing at peak efficiency, and what kinds of breaks are counterproductive. Certainly this has practical applications for all of us!

    Bottom line: if you’d like to be less frustrated by your mind’s tendency to wander, this is a fine book to show how to leverage that trait to your benefit.

    Click here to check out The Wandering Mind, and set yours onto more useful tracks!


  • How to Think Like Leonardo da Vinci – by Michael J. Gelb


Authors often try to bring forward the best minds of the distant past, and apply them to today's world. One could fill a library with business advice adaptations from Sun Tzu's Art of War alone; the same goes for Miyamoto Musashi's Book of Five Rings, and let's not get started on Niccolò Machiavelli. What makes this book different?

Michael Gelb explores the principles codified and used by the famous Renaissance Man to do exactly what he did: pretty much everything. Miyamoto Musashi had no interest in business, but Leonardo da Vinci really did care a lot about learning, creating, problem-solving, human connections, and much more. And best of all, he took notes: so many notes, for himself, of which we now enjoy the benefit.

How To Think Like Leonardo da Vinci explores these notes and their application by the man himself, and gives real, practical examples of how you can (and why you should) put them into action in your daily life. Whether you are a big-business CEO, a local line cook, or a reclusive academic, Leonardo has lessons for you.

    See today’s book on Amazon!


  • The Anti-Stress Herb That Also Fights Cancer


    What does Rhodiola rosea actually do, anyway?

    Rhodiola rosea (henceforth, “rhodiola”) is a flowering herb whose roots have adaptogenic properties.

    In the cold, mountainous regions of Europe and Asia where it grows, it has been used in herbal medicine for centuries to alleviate anxiety, fatigue, and depression.

    What does the science say?

    Well, let’s just say the science is more advanced than the traditional use:

    ❝In addition to its multiplex stress-protective activity, Rhodiola rosea extracts have recently demonstrated its anti-aging, anti-inflammation, immunostimulating, DNA repair and anti-cancer effects in different model systems❞

    ~ Li et al. (2017)

    Nor is how it works a mystery, as the same paper explains:

    ❝Molecular mechanisms of Rhodiola rosea extracts’s action have been studied mainly along with one of its bioactive compounds, salidroside. Both Rhodiola rosea extracts and salidroside have contrasting molecular mechanisms on cancer and normal physiological functions.

    For cancer, Rhodiola rosea extracts and salidroside inhibit the mTOR pathway and reduce angiogenesis through down-regulation of the expression of HIF-1α/HIF-2α.

    For normal physiological functions, Rhodiola rosea extracts and salidroside activate the mTOR pathway, stimulate paracrine function and promote neovascularization by inhibiting PHD3 and stabilizing HIF-1α proteins in skeletal muscles❞

    ~ Ibid.

    And, as for the question of “do the supplements work?”,

    ❝In contrast to many natural compounds, salidroside is water-soluble and highly bioavailable via oral administration❞

    ~ Ibid.

    And as to how good it is:

    ❝Rhodiola rosea extracts and salidroside can impose cellular and systemic benefits similar to the effect of positive lifestyle interventions to normal physiological functions and for anti-cancer❞

    ~ Ibid.

    Source: Rhodiola rosea: anti-stress, anti-aging, and immunostimulating properties for cancer chemoprevention

    But that’s not all…

    We can’t claim this as a research review if we only cite one paper (even if that paper has 144 citations of its own), and besides, it didn’t cover all the benefits yet!

    Let’s first look at the science for the “traditional use” trio of benefits:

    When you read those, what are your first thoughts?

    Please don’t just take our word for things! Reading even just the abstracts (summaries) at the top of papers is a very good habit to get into, if you don’t have time (or easy access) to read the full text.

    Reading the abstracts is also a very good way to know whether to take the time to read the whole paper, or whether it’s better to skip onto a different one.

    • Perhaps you noticed that the paper we cited for anxiety was quite a small study.
      • The fact is, while we found mountains of evidence for rhodiola’s anxiolytic (antianxiety) effects, they were all small and/or animal studies. So we picked a human study and went with it as illustrative.
    • Perhaps you noticed that the paper we cited for fatigue pertained mostly to stress-related fatigue.
      • This, we think, is a feature not a bug. After all, most of us experience fatigue because of the general everything of life, not because we just ran a literal marathon.
    • Perhaps you noticed that the paper we cited for depression said it didn’t work as well as sertraline (a very common pharmaceutical SSRI antidepressant).
• But, it worked almost as well, and it had far fewer adverse effects reported. Bear in mind, the side effects of antidepressants are the reason many people avoid them, or desist in taking them. So rhodiola working almost as well as sertraline, with far fewer adverse effects, is quite a big deal!

    Bonus features

    Rhodiola also putatively offers protection against Alzheimer’s disease, Parkinson’s disease, and cerebrovascular disease in general:

    Rosenroot (Rhodiola): Potential Applications in Aging-related Diseases

    It may also be useful in the management of diabetes (types 1 and 2), but studies so far have only been animal studies, and/or in vitro studies. Here are two examples:

1. Antihyperglycemic action of rhodiola-aqueous extract in type 1 diabetic rats
    2. Evaluation of Rhodiola crenulata and Rhodiola rosea for management of type 2 diabetes and hypertension

    How much to take?

    Dosages have varied a lot in studies. However, 120mg/day seems to cover most bases. It also depends on which of rhodiola’s 140 active compounds a particular benefit depends on, though salidroside and rosavin are the top performers.

    Where to get it?

    As ever, we don’t sell it (or anything else) but here’s an example product on Amazon.

    Enjoy!


Related Posts

  • Body by Science – by Dr. Doug McGuff & John Little
  • The Anti-Stress Herb That Also Fights Cancer

    10almonds is reader-supported. We may, at no cost to you, receive a portion of sales if you purchase a product through a link in this article.

    What does Rhodiola rosea actually do, anyway?

    Rhodiola rosea (henceforth, “rhodiola”) is a flowering herb whose roots have adaptogenic properties.

    In the cold, mountainous regions of Europe and Asia where it grows, it has been used in herbal medicine for centuries to alleviate anxiety, fatigue, and depression.

    What does the science say?

    Well, let’s just say the science is more advanced than the traditional use:

    ❝In addition to its multiplex stress-protective activity, Rhodiola rosea extracts have recently demonstrated its anti-aging, anti-inflammation, immunostimulating, DNA repair and anti-cancer effects in different model systems❞

    ~ Li et al. (2017)

    Nor is how it works a mystery, as the same paper explains:

    ❝Molecular mechanisms of Rhodiola rosea extracts’s action have been studied mainly along with one of its bioactive compounds, salidroside. Both Rhodiola rosea extracts and salidroside have contrasting molecular mechanisms on cancer and normal physiological functions.

    For cancer, Rhodiola rosea extracts and salidroside inhibit the mTOR pathway and reduce angiogenesis through down-regulation of the expression of HIF-1α/HIF-2α.

    For normal physiological functions, Rhodiola rosea extracts and salidroside activate the mTOR pathway, stimulate paracrine function and promote neovascularization by inhibiting PHD3 and stabilizing HIF-1α proteins in skeletal muscles❞

    ~ Ibid.

    And, as for the question of “do the supplements work?”,

    ❝In contrast to many natural compounds, salidroside is water-soluble and highly bioavailable via oral administration❞

    ~ Ibid.

    And as to how good it is:

    ❝Rhodiola rosea extracts and salidroside can impose cellular and systemic benefits similar to the effect of positive lifestyle interventions to normal physiological functions and for anti-cancer❞

    ~ Ibid.

    Source: Rhodiola rosea: anti-stress, anti-aging, and immunostimulating properties for cancer chemoprevention

    But that’s not all…

    We can’t claim this as a research review if we only cite one paper (even if that paper has 144 citations of its own), and besides, it didn’t cover all the benefits yet!

    Let’s first look at the science for the “traditional use” trio of benefits:

    When you read those, what are your first thoughts?

    Please don’t just take our word for things! Reading even just the abstracts (summaries) at the top of papers is a very good habit to get into, if you don’t have time (or easy access) to read the full text.

    Reading the abstracts is also a very good way to know whether to take the time to read the whole paper, or whether it’s better to skip onto a different one.

    • Perhaps you noticed that the paper we cited for anxiety was quite a small study.
      • The fact is, while we found mountains of evidence for rhodiola’s anxiolytic (antianxiety) effects, they were all small and/or animal studies. So we picked a human study and went with it as illustrative.
    • Perhaps you noticed that the paper we cited for fatigue pertained mostly to stress-related fatigue.
      • This, we think, is a feature not a bug. After all, most of us experience fatigue because of the general everything of life, not because we just ran a literal marathon.
    • Perhaps you noticed that the paper we cited for depression said it didn’t work as well as sertraline (a very common pharmaceutical SSRI antidepressant).
      • But, it worked almost as well and it had far fewer adverse effects reported. Bear in mind, the side effects of antidepressants are the reason many people avoid them, or desist in taking them. So rhodiola working almost as well as sertraline for far fewer adverse effects, is quite a big deal!

    Bonus features

    Rhodiola also putatively offers protection against Alzheimer’s disease, Parkinson’s disease, and cerebrovascular disease in general:

    Rosenroot (Rhodiola): Potential Applications in Aging-related Diseases

    It may also be useful in the management of diabetes (types 1 and 2), but studies so far have only been animal studies, and/or in vitro studies. Here are two examples:

    1. Antihyperglycemic action of rhodiola-aqeous extract in type 1 diabetic rats
    2. Evaluation of Rhodiola crenulata and Rhodiola rosea for management of type 2 diabetes and hypertension

    How much to take?

    Dosages have varied a lot in studies. However, 120mg/day seems to cover most bases. It also depends on which of rhodiola’s 140 active compounds a particular benefit depends on, though salidroside and rosavin are the top performers.

    Where to get it?

    As ever, we don’t sell it (or anything else) but here’s an example product on Amazon.

    Enjoy!

    Don’t Forget…

    Did you arrive here from our newsletter? Don’t forget to return to the email to continue learning!

    Learn to Age Gracefully

    Join the 98k+ American women taking control of their health & aging with our 100% free (and fun!) daily emails:

  • Why rating your pain out of 10 is tricky


    “It’s really sore,” my (Josh’s) five-year-old daughter said, cradling her broken arm in the emergency department.

    “But on a scale of zero to ten, how do you rate your pain?” asked the nurse.

    My daughter’s tear-streaked face creased with confusion.

    “What does ten mean?”

    “Ten is the worst pain you can imagine.” She looked even more puzzled.

    As both a parent and a pain scientist, I witnessed firsthand how our seemingly simple, well-intentioned pain rating systems can fall flat.


    What are pain scales for?

    The most common scale has been around for 50 years. It asks people to rate their pain from zero (no pain) to ten (typically “the worst pain imaginable”).

    This focuses on just one aspect of pain – its intensity – to try and rapidly understand the patient’s whole experience.

    How much does it hurt? Is it getting worse? Is treatment making it better?

    Rating scales can be useful for tracking pain intensity over time. If pain goes from eight to four, that probably means you’re feeling better – even if someone else’s four is different to yours.

    Research suggests a two-point (or 30%) reduction in chronic pain severity usually reflects a change that makes a difference in day-to-day life.

    But that common upper anchor in rating scales – “worst pain imaginable” – is a problem.

People usually refer to their previous experiences when rating pain.

    A narrow tool for a complex experience

    Consider my daughter’s dilemma. How can anyone imagine the worst possible pain? Does everyone imagine the same thing? Research suggests they don’t. Even kids think very individually about that word “pain”.

    People typically – and understandably – anchor their pain ratings to their own life experiences.

    This creates dramatic variation. For example, a patient who has never had a serious injury may be more willing to give high ratings than one who has previously had severe burns.

    “No pain” can also be problematic. A patient whose pain has receded but who remains uncomfortable may feel stuck: there’s no number on the zero-to-ten scale that can capture their physical experience.

    Increasingly, pain scientists recognise a simple number cannot capture the complex, highly individual and multifaceted experience that is pain.

    Who we are affects our pain

    In reality, pain ratings are influenced by how much pain interferes with a person’s daily activities, how upsetting they find it, their mood, fatigue and how it compares to their usual pain.

    Other factors also play a role, including a patient’s age, sex, cultural and language background, literacy and numeracy skills and neurodivergence.

    For example, if a clinician and patient speak different languages, there may be extra challenges communicating about pain and care.

    Some neurodivergent people may interpret language more literally or process sensory information differently to others. Interpreting what people communicate about pain requires a more individualised approach.

    Impossible ratings

    Still, we work with the tools available. There is evidence people do use the zero-to-ten pain scale to try and communicate much more than only pain’s “intensity”.

    So when a patient says “it’s eleven out of ten”, this “impossible” rating is likely communicating more than severity.

    They may be wondering, “Does she believe me? What number will get me help?” A lot of information is crammed into that single number. This patient is most likely saying, “This is serious – please help me.”

    In everyday life, we use a range of other communication strategies. We might grimace, groan, move less or differently, use richly descriptive words or metaphors.

    Collecting and evaluating this kind of complex and subjective information about pain may not always be feasible, as it is hard to standardise.

    As a result, many pain scientists continue to rely heavily on rating scales because they are simple, efficient and have been shown to be reliable and valid in relatively controlled situations.

    But clinicians can also use this other, more subjective information to build a fuller picture of the person’s pain.

    How can we communicate better about pain?

    There are strategies to address language or cultural differences in how people express pain.

    Visual scales are one tool. For example, the “Faces Pain Scale-Revised” asks patients to choose a facial expression to communicate their pain. This can be particularly useful for children or people who aren’t comfortable with numeracy and literacy, either at all, or in the language used in the health-care setting.

    A vertical “visual analogue scale” asks the person to mark their pain on a vertical line, a bit like imagining “filling up” with pain.

Modified visual scales are sometimes used to try to overcome communication challenges.

    What can we do?

    Health professionals

    Take time to explain the pain scale consistently, remembering that the way you phrase the anchors matters.

    Listen for the story behind the number, because the same number means different things to different people.

    Use the rating as a launchpad for a more personalised conversation. Consider cultural and individual differences. Ask for descriptive words. Confirm your interpretation with the patient, to make sure you’re both on the same page.

    Patients

    To better describe pain, use the number scale, but add context.

    Try describing the quality of your pain (burning? throbbing? stabbing?) and compare it to previous experiences.

    Explain the impact the pain is having on you – both emotionally and how it affects your daily activities.

    Parents

    Ask the clinician to use a child-suitable pain scale. There are special tools developed for different ages such as the “Faces Pain Scale-Revised”.

    Paediatric health professionals are trained to use age-appropriate vocabulary, because children develop their understanding of numbers and pain differently as they grow.

    A starting point

    In reality, scales will never be perfect measures of pain. Let’s see them as conversation starters to help people communicate about a deeply personal experience.

    That’s what my daughter did — she found her own way to describe her pain: “It feels like when I fell off the monkey bars, but in my arm instead of my knee, and it doesn’t get better when I stay still.”

    From there, we moved towards effective pain treatment. Sometimes words work better than numbers.

    Joshua Pate, Senior Lecturer in Physiotherapy, University of Technology Sydney; Dale J. Langford, Associate Professor of Pain Management Research in Anesthesiology, Weill Cornell Medical College, Cornell University, and Tory Madden, Associate Professor and Pain Researcher, University of Cape Town

    This article is republished from The Conversation under a Creative Commons license. Read the original article.


  • How do science journalists decide whether a psychology study is worth covering?


    Complex research papers and data flood academic journals daily, and science journalists play a pivotal role in disseminating that information to the public. This can be a daunting task, requiring a keen understanding of the subject matter and the ability to translate dense academic language into narratives that resonate with the general public.

    Several resources and tip sheets, including the Know Your Research section here at The Journalist’s Resource, aim to help journalists hone their skills in reporting on academic research.

    But what factors do science journalists look for to decide whether a social science research study is trustworthy and newsworthy? That’s the question researchers at the University of California, Davis, and the University of Melbourne in Australia examine in a recent study, “How Do Science Journalists Evaluate Psychology Research?” published in September in Advances in Methods and Practices in Psychological Science.

    Their online survey of 181 mostly U.S.-based science journalists looked at how and whether they were influenced by four factors in fictitious research summaries: the sample size (number of participants in the study), sample representativeness (whether the participants in the study were from a convenience sample or a more representative sample), the statistical significance level of the result (just barely statistically significant or well below the significance threshold), and the prestige of a researcher’s university.

    The researchers found that sample size was the only factor that had a robust influence on journalists’ ratings of how trustworthy and newsworthy a study finding was.

    University prestige had no effect, while the effects of sample representativeness and statistical significance were inconclusive.

    But there’s nuance to the findings, the authors note.

    “I don’t want people to think that science journalists aren’t paying attention to other things, and are only paying attention to sample size,” says Julia Bottesini, an independent researcher, a recent Ph.D. graduate from the Psychology Department at UC Davis, and the first author of the study.

    Overall, the results show that “these journalists are doing a very decent job” vetting research findings, Bottesini says.

    Also, the findings from the study are not generalizable to all science journalists or other fields of research, the authors note.

    “Instead, our conclusions should be circumscribed to U.S.-based science journalists who are at least somewhat familiar with the statistical and replication challenges facing science,” they write. (Over the past decade a series of projects have found that the results of many studies in psychology and other fields can’t be reproduced, leading to what has been called a ‘replication crisis.’)

    “This [study] is just one tiny brick in the wall and I hope other people get excited about this topic and do more research on it,” Bottesini says.

    More on the study’s findings

    The study’s findings can be useful for researchers who want to better understand how science journalists read their research and what kind of intervention — such as teaching journalists about statistics — can help journalists better understand research papers.

“As an academic, I take away the idea that journalists are a great population to try to study because they’re doing something really important and it’s important to know more about what they’re doing,” says Ellen Peters, director of the Center for Science Communication Research at the School of Journalism and Communication at the University of Oregon. Peters, who was not involved in the study, is also a psychologist who studies human judgment and decision-making.

    Peters says the study was “overall terrific.” She adds that understanding how journalists do their work “is an incredibly important thing to do because journalists are who reach the majority of the U.S. with science news, so understanding how they’re reading some of our scientific studies and then choosing whether to write about them or not is important.”

    The study, conducted between December 2020 and March 2021, is based on an online survey of journalists who said they at least sometimes covered science or other topics related to health, medicine, psychology, social sciences, or well-being. They were offered a $25 Amazon gift card as compensation.

    Among the participants, 77% were women, 19% were men, 3% were nonbinary and 1% preferred not to say. About 62% said they had studied physical or natural sciences at the undergraduate level, and 24% at the graduate level. Also, 48% reported having a journalism degree. The study did not include the journalists’ news reporting experience level.

    Participants were recruited through the professional network of Christie Aschwanden, an independent journalist and consultant on the study, which could be a source of bias, the authors note.

    “Although the size of the sample we obtained (N = 181) suggests we were able to collect a range of perspectives, we suspect this sample is biased by an ‘Aschwanden effect’: that science journalists in the same professional network as C. Aschwanden will be more familiar with issues related to the replication crisis in psychology and subsequent methodological reform, a topic C. Aschwanden has covered extensively in her work,” they write.

    Participants were randomly presented with eight of 22 one-paragraph fictitious social and personality psychology research summaries with fictitious authors. The summaries are posted on Open Science Framework, a free and open-source project management tool for researchers by the Center for Open Science, with a mission to increase openness, integrity and reproducibility of research.

    For instance, one of the vignettes reads:

    “Scientists at Harvard University announced today the results of a study exploring whether introspection can improve cooperation. 550 undergraduates at the university were randomly assigned to either do a breathing exercise or reflect on a series of questions designed to promote introspective thoughts for 5 minutes. Participants then engaged in a cooperative decision-making game, where cooperation resulted in better outcomes. People who spent time on introspection performed significantly better at these cooperative games (t (548) = 3.21, p = 0.001). ‘Introspection seems to promote better cooperation between people,’ says Dr. Quinn, the lead author on the paper.”
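As an aside for readers curious about the statistics in that vignette: the reported result is internally consistent. With 548 degrees of freedom, the t-distribution is very close to the standard normal, so the two-tailed p-value for t = 3.21 can be sanity-checked with a few lines of standard-library Python (a rough sketch using the normal approximation, not anything from the study itself):

```python
import math

def two_tailed_p(t: float, df: int) -> float:
    """Approximate two-tailed p-value for a t-statistic.

    For large df (here 548), the t-distribution is nearly normal,
    so we use the normal tail probability via the complementary
    error function: p = erfc(|t| / sqrt(2)).
    """
    assert df > 100, "normal approximation is only reasonable for large df"
    return math.erfc(abs(t) / math.sqrt(2))

# The vignette reports t(548) = 3.21, p = 0.001
p = two_tailed_p(3.21, 548)
print(round(p, 3))  # ≈ 0.001, matching the summary
```

A result this far below the conventional 0.05 threshold is exactly the "well below the significance threshold" condition the researchers manipulated in some of their summaries.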

    In addition to answering multiple-choice survey questions, participants were given the opportunity to answer open-ended questions, such as “What characteristics do you [typically] consider when evaluating the trustworthiness of a scientific finding?”

    Bottesini says those responses illuminated how science journalists analyze a research study. Participants often mentioned the prestige of the journal in which it was published or whether the study had been peer-reviewed. Many also seemed to value experimental research designs over observational studies.

    Considering statistical significance

    When it came to considering p-values, “some answers suggested that journalists do take statistical significance into account, but only very few included explanations that suggested they made any distinction between higher or lower p values; instead, most mentions of p values suggest journalists focused on whether the key result was statistically significant,” the authors write.

    Also, many participants mentioned that it was very important to talk to outside experts or researchers in the same field to get a better understanding of the finding and whether it could be trusted, the authors write.

    “Journalists also expressed that it was important to understand who funded the study and whether the researchers or funders had any conflicts of interest,” they write.

    Participants also “indicated that making claims that were calibrated to the evidence was also important and expressed misgivings about studies for which the conclusions do not follow from the evidence,” the authors write.

    In response to the open-ended question, “What characteristics do you [typically] consider when evaluating the trustworthiness of a scientific finding?” some journalists wrote they checked whether the study was overstating conclusions or claims. Below are some of their written responses:

    • “Is the researcher adamant that this study of 40 college kids is representative? If so, that’s a red flag.”
    • “Whether authors make sweeping generalizations based on the study or take a more measured approach to sharing and promoting it.”
    • “Another major point for me is how ‘certain’ the scientists appear to be when commenting on their findings. If a researcher makes claims which I consider to be over-the-top about the validity or impact of their findings, I often won’t cover.”
    • “I also look at the difference between what an experiment actually shows versus the conclusion researchers draw from it — if there’s a big gap, that’s a huge red flag.”

    Peters says the study’s findings show that “not only are journalists smart, but they have also gone out of their way to get educated about things that should matter.”

    What other research shows about science journalists

A 2023 study, published in the International Journal of Communication and based on an online survey of 82 U.S. science journalists, aims to understand what they know and think about open-access research, including peer-reviewed journal articles that don’t have a paywall, and about preprints: scientific studies that have yet to be peer-reviewed and are shared on open repositories such as medRxiv and bioRxiv. Data was collected between October 2021 and February 2022. The study finds that its respondents “are aware of OA and related issues and make conscious decisions around which OA scholarly articles they use as sources.”

A 2021 study, published in the Journal of Science Communication, looks at the impact of the COVID-19 pandemic on the work of science journalists. Based on an online survey of 633 science journalists from 77 countries, it finds that the pandemic brought scientists and science journalists somewhat closer together. “For most respondents, scientists were more available and more talkative,” the authors write. The pandemic has also provided an opportunity to explain the scientific process to the public, and to remind them that “science is not a finished enterprise,” the authors write.

More than a decade ago, a 2008 study, published in PLOS Medicine and based on an analysis of 500 health news stories, found that “journalists usually fail to discuss costs, the quality of the evidence, the existence of alternative options, and the absolute magnitude of potential benefits and harms” when reporting on research studies. Giving journalists time to research and understand the studies, giving them space to publish and broadcast their stories, and training them to understand academic research are some of the solutions to fill these gaps, writes Gary Schwitzer, the study’s author.

    Advice for journalists

    We asked Bottesini, Peters, Aschwanden and Tamar Wilner, a postdoctoral fellow at the University of Texas, who was not involved in the study, to share advice for journalists who cover research studies. Wilner is conducting a study on how journalism research informs the practice of journalism. Here are their tips:

    1. Examine the study before reporting it.

    Does the study claim match the evidence? “One thing that makes me trust the paper more is if their interpretation of the findings is very calibrated to the kind of evidence that they have,” says Bottesini. In other words, if the study makes a claim in its results that’s far-fetched, the authors should present a lot of evidence to back that claim.

    Not all surprising results are newsworthy. If you come across a surprising finding from a single study, Peters advises you to step back and remember Carl Sagan’s quote: “Extraordinary claims require extraordinary evidence.”

How transparent are the authors about their data? For instance, do the authors post their data and the code they used to analyze it on platforms such as Open Science Framework, AsPredicted, or The Dataverse Project? Some researchers “preregister” their studies, which means they share how they’re planning to analyze the data before they see them. “Transparency doesn’t automatically mean that a study is trustworthy,” but it gives others the chance to double-check the findings, Bottesini says.

    Look at the study design. Is it an experimental study or an observational study? Observational studies can show correlations but not causation.

    “Observational studies can be very important for suggesting hypotheses and pointing us towards relationships and associations,” Aschwanden says.

    Experimental studies can provide stronger evidence toward a cause, but journalists must still be cautious when reporting the results, she advises. “If we end up implying causality, then once it’s published and people see it, it can really take hold,” she says.

    Know the difference between preprints and peer-reviewed, published studies. Peer-reviewed papers tend to be of higher quality than those that are not peer-reviewed. Read our tip sheet on the difference between preprints and journal articles.

Beware of predatory journals. These are journals that “claim to be legitimate scholarly journals, but misrepresent their publishing practices,” according to a 2020 article published in the journal Toxicologic Pathology, “Predatory Journals: What They Are and How to Avoid Them.”

    2. Zoom in on data.

    Read the methods section of the study. The methods section of the study usually appears after the introduction and background section. “To me, the methods section is almost the most important part of any scientific paper,” says Aschwanden. “It’s amazing to me how often you read the design and the methods section, and anyone can see that it’s a flawed design. So just giving things a gut-level check can be really important.”

What’s the sample size? Not all good studies have large numbers of participants, but pay attention to the claims a study makes with a small sample size. “If you have a small sample, you calibrate your claims to the things you can tell about those people and don’t make big claims based on a little bit of evidence,” says Bottesini.

    But also remember that factors such as sample size and p-value are not “as clear cut as some journalists might assume,” says Wilner.

    How representative of a population is the study sample? “If the study has a non-representative sample of, say, undergraduate students, and they’re making claims about the general population, that’s kind of a red flag,” says Bottesini. Aschwanden points to the acronym WEIRD, which stands for “Western, Educated, Industrialized, Rich, and Democratic,” and is used to highlight a lack of diversity in a sample. Studies based on such samples may not be generalizable to the entire population, she says.

    Look at the p-value. Statistical significance is both confusing and controversial, but it’s important to consider. Read our tip sheet, “5 Things Journalists Need to Know About Statistical Significance,” to better understand it.
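One reason sample size and statistical significance have to be read together, as Wilner and Bottesini suggest above, is that the same tiny effect can be “not significant” in a small sample and highly “significant” in a huge one. Here is a minimal stdlib-only Python sketch of that standard statistical point (the coin-flip scenario and the `two_sided_p` helper are illustrative assumptions, not from any study discussed in this article):

```python
import math
from statistics import NormalDist

def two_sided_p(p_hat: float, n: int, p0: float = 0.5) -> float:
    """Two-sided z-test p-value for an observed proportion p_hat
    against a null-hypothesis proportion p0, with sample size n."""
    se = math.sqrt(p0 * (1 - p0) / n)      # standard error under the null
    z = abs(p_hat - p0) / se               # test statistic
    return 2 * (1 - NormalDist().cdf(z))   # two-sided tail probability

# The same result -- a coin landing heads 52% of the time -- at two sample sizes:
print(two_sided_p(0.52, 100))      # ~0.69 -> not statistically significant
print(two_sided_p(0.52, 100_000))  # ~0.0 -> "significant," yet the effect is still tiny
```

The p-value alone says nothing about whether a 2-percentage-point effect matters; it only says how surprising the data would be if there were no effect at all.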

    3. Talk to scientists not involved in the study.

    If you’re not sure about the quality of a study, ask for help. “Talk to someone who is an expert in study design or statistics to make sure that [the study authors] use the appropriate statistics and that methods they use are appropriate because it’s amazing to me how often they’re not,” says Aschwanden.

Get an opinion from an outside expert. It’s always a good idea to present the study to other researchers in the field who are not involved in the research you’re covering and have no conflicts of interest, and to get their opinion. “Don’t take scientists at their word. Look into it. Ask other scientists, preferably the ones who don’t have a conflict of interest with the research,” says Bottesini.

    4. Remember that a single study is simply one piece of a growing body of evidence.

    “I have a general rule that a single study doesn’t tell us very much; it just gives us proof of concept,” says Peters. “It gives us interesting ideas. It should be retested. We need an accumulation of evidence.”

    Aschwanden says as a practice, she tries to avoid reporting stories about individual studies, with some exceptions such as very large, randomized controlled studies that have been underway for a long time and have a large number of participants. “I don’t want to say you never want to write a single-study story, but it always needs to be placed in the context of the rest of the evidence that we have available,” she says.

    Wilner advises journalists to spend some time looking at the scope of research on the study’s specific topic and learn how it has been written about and studied up to that point.

    “We would want science journalists to be reporting balance of evidence, and not focusing unduly on the findings that are just in front of them in a most recent study,” Wilner says. “And that’s a very difficult thing to ask journalists to do because they’re being asked to make their article very newsy, so it’s a difficult balancing act, but we can try and push journalists to do more of that.”

    5. Remind readers that science is always changing.

    “Science is always two steps forward, one step back,” says Peters. Give the public a notion of uncertainty, she advises. “This is what we know today. It may change tomorrow, but this is the best science that we know of today.”

    Aschwanden echoes the sentiment. “All scientific results are provisional, and we need to keep that in mind,” she says. “It doesn’t mean that we can’t know anything, but it’s very important that we don’t overstate things.”

    Authors of a study published in PNAS in January analyzed more than 14,000 psychology papers and found that replication success rates differ widely across psychology subfields. That study also found that papers that could not be replicated received more initial press coverage than those that could.

    The authors note that the media “plays a significant role in creating the public’s image of science and democratizing knowledge, but it is often incentivized to report on counterintuitive and eye-catching results.”

    Ideally, the news media would have a positive relationship with replication success rates in psychology, the authors of the PNAS study write. “Contrary to this ideal, however, we found a negative association between media coverage of a paper and the paper’s likelihood of replication success,” they write. “Therefore, deciding a paper’s merit based on its media coverage is unwise. It would be valuable for the media to remind the audience that new and novel scientific results are only food for thought before future replication confirms their robustness.”

    Additional reading

    Uncovering the Research Behaviors of Reporters: A Conceptual Framework for Information Literacy in Journalism
    Katerine E. Boss, et al. Journalism & Mass Communication Educator, October 2022.

    The Problem with Psychological Research in the Media
    Steven Stosny. Psychology Today, September 2022.

    Critically Evaluating Claims
    Megha Satyanarayana, The Open Notebook, January 2022.

    How Should Journalists Report a Scientific Study?
    Charles Binkley and Subramaniam Vincent. Markkula Center for Applied Ethics at Santa Clara University, September 2020.

    What Journalists Get Wrong About Social Science: Full Responses
    Brian Resnick. Vox, January 2016.

    From The Journalist’s Resource

    8 Ways Journalists Can Access Academic Research for Free

    5 Things Journalists Need to Know About Statistical Significance

    5 Common Research Designs: A Quick Primer for Journalists

    5 Tips for Using PubPeer to Investigate Scientific Research Errors and Misconduct

    Percent Change versus Percentage-Point Change: What’s the Difference? 4 Tips for Avoiding Math Errors

    What’s Standard Deviation? 4 Things Journalists Need to Know

    This article first appeared on The Journalist’s Resource and is republished here under a Creative Commons license.
