How do science journalists decide whether a psychology study is worth covering?

10almonds is reader-supported. We may, at no cost to you, receive a portion of sales if you purchase a product through a link in this article.

Complex research papers and data flood academic journals daily, and science journalists play a pivotal role in disseminating that information to the public. This can be a daunting task, requiring a keen understanding of the subject matter and the ability to translate dense academic language into narratives that resonate with the general public.

Several resources and tip sheets, including the Know Your Research section here at The Journalist’s Resource, aim to help journalists hone their skills in reporting on academic research.

But what factors do science journalists look for to decide whether a social science research study is trustworthy and newsworthy? That’s the question researchers at the University of California, Davis, and the University of Melbourne in Australia examine in a recent study, “How Do Science Journalists Evaluate Psychology Research?” published in September in Advances in Methods and Practices in Psychological Science.

Their online survey of 181 mostly U.S.-based science journalists looked at how and whether they were influenced by four factors in fictitious research summaries: the sample size (number of participants in the study), sample representativeness (whether the participants in the study were from a convenience sample or a more representative sample), the statistical significance level of the result (just barely statistically significant or well below the significance threshold), and the prestige of a researcher’s university.

The researchers found that sample size was the only factor that had a robust influence on journalists’ ratings of how trustworthy and newsworthy a study finding was.

University prestige had no effect, while the effects of sample representativeness and statistical significance were inconclusive.

But there’s nuance to the findings, the authors note.

“I don’t want people to think that science journalists aren’t paying attention to other things, and are only paying attention to sample size,” says Julia Bottesini, an independent researcher, a recent Ph.D. graduate from the Psychology Department at UC Davis, and the first author of the study.

Overall, the results show that “these journalists are doing a very decent job” vetting research findings, Bottesini says.

Also, the findings from the study are not generalizable to all science journalists or other fields of research, the authors note.

“Instead, our conclusions should be circumscribed to U.S.-based science journalists who are at least somewhat familiar with the statistical and replication challenges facing science,” they write. (Over the past decade a series of projects have found that the results of many studies in psychology and other fields can’t be reproduced, leading to what has been called a ‘replication crisis.’)

“This [study] is just one tiny brick in the wall and I hope other people get excited about this topic and do more research on it,” Bottesini says.

More on the study’s findings

The study’s findings can be useful for researchers who want to better understand how science journalists read their research and what kind of intervention — such as teaching journalists about statistics — can help journalists better understand research papers.

“As an academic, I take away the idea that journalists are a great population to try to study because they’re doing something really important and it’s important to know more about what they’re doing,” says Ellen Peters, director of the Center for Science Communication Research at the School of Journalism and Communication at the University of Oregon. Peters, who was not involved in the study, is also a psychologist who studies human judgment and decision-making.

Peters says the study was “overall terrific.” She adds that understanding how journalists do their work “is an incredibly important thing to do because journalists are who reach the majority of the U.S. with science news, so understanding how they’re reading some of our scientific studies and then choosing whether to write about them or not is important.”

The study, conducted between December 2020 and March 2021, is based on an online survey of journalists who said they at least sometimes covered science or other topics related to health, medicine, psychology, social sciences, or well-being. They were offered a $25 Amazon gift card as compensation.

Among the participants, 77% were women, 19% were men, 3% were nonbinary and 1% preferred not to say. About 62% said they had studied physical or natural sciences at the undergraduate level, and 24% at the graduate level. Also, 48% reported having a journalism degree. The study did not include the journalists’ news reporting experience level.

Participants were recruited through the professional network of Christie Aschwanden, an independent journalist and consultant on the study, which could be a source of bias, the authors note.

“Although the size of the sample we obtained (N = 181) suggests we were able to collect a range of perspectives, we suspect this sample is biased by an ‘Aschwanden effect’: that science journalists in the same professional network as C. Aschwanden will be more familiar with issues related to the replication crisis in psychology and subsequent methodological reform, a topic C. Aschwanden has covered extensively in her work,” they write.

Participants were randomly presented with eight of 22 one-paragraph fictitious social and personality psychology research summaries with fictitious authors. The summaries are posted on Open Science Framework, a free and open-source project management tool for researchers by the Center for Open Science, with a mission to increase openness, integrity and reproducibility of research.

For instance, one of the vignettes reads:

“Scientists at Harvard University announced today the results of a study exploring whether introspection can improve cooperation. 550 undergraduates at the university were randomly assigned to either do a breathing exercise or reflect on a series of questions designed to promote introspective thoughts for 5 minutes. Participants then engaged in a cooperative decision-making game, where cooperation resulted in better outcomes. People who spent time on introspection performed significantly better at these cooperative games (t (548) = 3.21, p = 0.001). ‘Introspection seems to promote better cooperation between people,’ says Dr. Quinn, the lead author on the paper.”

In addition to answering multiple-choice survey questions, participants were given the opportunity to answer open-ended questions, such as “What characteristics do you [typically] consider when evaluating the trustworthiness of a scientific finding?”

Bottesini says those responses illuminated how science journalists analyze a research study. Participants often mentioned the prestige of the journal in which it was published or whether the study had been peer-reviewed. Many also seemed to value experimental research designs over observational studies.

Considering statistical significance

When it came to considering p-values, “some answers suggested that journalists do take statistical significance into account, but only very few included explanations that suggested they made any distinction between higher or lower p values; instead, most mentions of p values suggest journalists focused on whether the key result was statistically significant,” the authors write.

Also, many participants mentioned that it was very important to talk to outside experts or researchers in the same field to get a better understanding of the finding and whether it could be trusted, the authors write.

“Journalists also expressed that it was important to understand who funded the study and whether the researchers or funders had any conflicts of interest,” they write.

Participants also “indicated that making claims that were calibrated to the evidence was also important and expressed misgivings about studies for which the conclusions do not follow from the evidence,” the authors write.

In response to the open-ended question, “What characteristics do you [typically] consider when evaluating the trustworthiness of a scientific finding?” some journalists wrote they checked whether the study was overstating conclusions or claims. Below are some of their written responses:

  • “Is the researcher adamant that this study of 40 college kids is representative? If so, that’s a red flag.”
  • “Whether authors make sweeping generalizations based on the study or take a more measured approach to sharing and promoting it.”
  • “Another major point for me is how ‘certain’ the scientists appear to be when commenting on their findings. If a researcher makes claims which I consider to be over-the-top about the validity or impact of their findings, I often won’t cover.”
  • “I also look at the difference between what an experiment actually shows versus the conclusion researchers draw from it — if there’s a big gap, that’s a huge red flag.”

Peters says the study’s findings show that “not only are journalists smart, but they have also gone out of their way to get educated about things that should matter.”

What other research shows about science journalists

A 2023 study, published in the International Journal of Communication and based on an online survey of 82 U.S. science journalists, aims to understand what they know and think about open-access research, including peer-reviewed journals and articles that don’t have a paywall, and preprints — scientific studies that have yet to be peer-reviewed and are shared on open repositories such as medRxiv and bioRxiv. Data was collected between October 2021 and February 2022. The study finds that its respondents “are aware of OA and related issues and make conscious decisions around which OA scholarly articles they use as sources.”

A 2021 study, published in the Journal of Science Communication, looks at the impact of the COVID-19 pandemic on the work of science journalists. Based on an online survey of 633 science journalists from 77 countries, it finds that the pandemic somewhat brought scientists and science journalists closer together. “For most respondents, scientists were more available and more talkative,” the authors write. The pandemic has also provided an opportunity to explain the scientific process to the public, and remind them that “science is not a finished enterprise,” the authors write.

More than a decade ago, a 2008 study, published in PLOS Medicine and based on an analysis of 500 health news stories, found that “journalists usually fail to discuss costs, the quality of the evidence, the existence of alternative options, and the absolute magnitude of potential benefits and harms” when reporting on research studies. Giving journalists time to research and understand the studies, giving them space to publish and broadcast their stories, and training them in understanding academic research are some of the solutions to fill these gaps, writes Gary Schwitzer, the study’s author.

Advice for journalists

We asked Bottesini, Peters, Aschwanden and Tamar Wilner, a postdoctoral fellow at the University of Texas, who was not involved in the study, to share advice for journalists who cover research studies. Wilner is conducting a study on how journalism research informs the practice of journalism. Here are their tips:

1. Examine the study before reporting it.

Does the study claim match the evidence? “One thing that makes me trust the paper more is if their interpretation of the findings is very calibrated to the kind of evidence that they have,” says Bottesini. In other words, if the study makes a claim in its results that’s far-fetched, the authors should present a lot of evidence to back that claim.

Not all surprising results are newsworthy. If you come across a surprising finding from a single study, Peters advises you to step back and remember Carl Sagan’s quote: “Extraordinary claims require extraordinary evidence.”

How transparent are the authors about their data? For instance, are the authors posting information such as their data and the computer codes they use to analyze the data on platforms such as Open Science Framework, AsPredicted, or The Dataverse Project? Some researchers ‘preregister’ their studies, which means they share how they’re planning to analyze the data before they see them. “Transparency doesn’t automatically mean that a study is trustworthy,” but it gives others the chance to double-check the findings, Bottesini says.

Look at the study design. Is it an experimental study or an observational study? Observational studies can show correlations but not causation.

“Observational studies can be very important for suggesting hypotheses and pointing us towards relationships and associations,” Aschwanden says.

Experimental studies can provide stronger evidence toward a cause, but journalists must still be cautious when reporting the results, she advises. “If we end up implying causality, then once it’s published and people see it, it can really take hold,” she says.

Know the difference between preprints and peer-reviewed, published studies. Peer-reviewed papers tend to be of higher quality than those that are not peer-reviewed. Read our tip sheet on the difference between preprints and journal articles.

Beware of predatory journals. Predatory journals are journals that “claim to be legitimate scholarly journals, but misrepresent their publishing practices,” according to a 2020 journal article published in the journal Toxicologic Pathology, “Predatory Journals: What They Are and How to Avoid Them.”

2. Zoom in on data.

Read the methods section of the study. The methods section of the study usually appears after the introduction and background section. “To me, the methods section is almost the most important part of any scientific paper,” says Aschwanden. “It’s amazing to me how often you read the design and the methods section, and anyone can see that it’s a flawed design. So just giving things a gut-level check can be really important.”

What’s the sample size? Not all good studies have large numbers of participants but pay attention to the claims a study makes with a small sample size. “If you have a small sample, you calibrate your claims to the things you can tell about those people and don’t make big claims based on a little bit of evidence,” says Bottesini.
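The intuition behind that advice can be sketched with a little arithmetic (our illustration, not something from the study): under a simple random-sampling assumption, the uncertainty around an estimate shrinks only with the square root of the sample size, so small samples leave wide error bars.

```python
import math

def margin_of_error(n: int, sd: float = 1.0) -> float:
    """Approximate 95% margin of error for a sample mean,
    assuming a simple random sample with standard deviation `sd`
    (1.96 standard errors, normal approximation)."""
    return 1.96 * sd / math.sqrt(n)

# Quadrupling the sample size only halves the uncertainty:
for n in (40, 160, 640):
    print(n, round(margin_of_error(n), 3))
```

On this simplified model, a study of 40 college students carries four times the statistical uncertainty of one with 640 participants — which is why sweeping claims built on small samples deserve extra scrutiny.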

But also remember that factors such as sample size and p-value are not “as clear cut as some journalists might assume,” says Wilner.

How representative of a population is the study sample? “If the study has a non-representative sample of, say, undergraduate students, and they’re making claims about the general population, that’s kind of a red flag,” says Bottesini. Aschwanden points to the acronym WEIRD, which stands for “Western, Educated, Industrialized, Rich, and Democratic,” and is used to highlight a lack of diversity in a sample. Studies based on such samples may not be generalizable to the entire population, she says.

Look at the p-value. Statistical significance is both confusing and controversial, but it’s important to consider. Read our tip sheet, “5 Things Journalists Need to Know About Statistical Significance,” to better understand it.
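As a rough illustration of that distinction (ours, not the tip sheet’s): for large studies the t-distribution is close to normal, so a two-sided p-value can be approximated directly from the test statistic. Using the numbers from the fictitious vignette earlier, t(548) = 3.21 sits well below the conventional 0.05 threshold, while a statistic near 1.96 would only just clear it.

```python
import math

def two_sided_p(z: float) -> float:
    """Two-sided p-value under a normal approximation
    (reasonable for a t-statistic with hundreds of degrees
    of freedom, as in the vignette's t(548))."""
    return math.erfc(abs(z) / math.sqrt(2))

# The vignette reports t(548) = 3.21, p = 0.001:
print(round(two_sided_p(3.21), 4))  # well below 0.05

# A "just barely significant" result looks very different:
print(round(two_sided_p(1.97), 4))  # only just under 0.05
```

The survey found most journalists noticed only whether a result cleared the significance bar; the gap between those two outputs is the nuance the distinction is about.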

3. Talk to scientists not involved in the study.

If you’re not sure about the quality of a study, ask for help. “Talk to someone who is an expert in study design or statistics to make sure that [the study authors] use the appropriate statistics and that methods they use are appropriate because it’s amazing to me how often they’re not,” says Aschwanden.

Get an opinion from an outside expert. It’s always a good idea to present the study to other researchers in the field who have no conflicts of interest and are not involved in the research you’re covering, and get their opinion. “Don’t take scientists at their word. Look into it. Ask other scientists, preferably the ones who don’t have a conflict of interest with the research,” says Bottesini.

4. Remember that a single study is simply one piece of a growing body of evidence.

“I have a general rule that a single study doesn’t tell us very much; it just gives us proof of concept,” says Peters. “It gives us interesting ideas. It should be retested. We need an accumulation of evidence.”

Aschwanden says as a practice, she tries to avoid reporting stories about individual studies, with some exceptions such as very large, randomized controlled studies that have been underway for a long time and have a large number of participants. “I don’t want to say you never want to write a single-study story, but it always needs to be placed in the context of the rest of the evidence that we have available,” she says.

Wilner advises journalists to spend some time looking at the scope of research on the study’s specific topic and learn how it has been written about and studied up to that point.

“We would want science journalists to be reporting balance of evidence, and not focusing unduly on the findings that are just in front of them in a most recent study,” Wilner says. “And that’s a very difficult thing to ask journalists to do because they’re being asked to make their article very newsy, so it’s a difficult balancing act, but we can try and push journalists to do more of that.”

5. Remind readers that science is always changing.

“Science is always two steps forward, one step back,” says Peters. Give the public a notion of uncertainty, she advises. “This is what we know today. It may change tomorrow, but this is the best science that we know of today.”

Aschwanden echoes the sentiment. “All scientific results are provisional, and we need to keep that in mind,” she says. “It doesn’t mean that we can’t know anything, but it’s very important that we don’t overstate things.”

Authors of a study published in PNAS in January analyzed more than 14,000 psychology papers and found that replication success rates differ widely by psychology subfields. That study also found that papers that could not be replicated received more initial press coverage than those that could. 

The authors note that the media “plays a significant role in creating the public’s image of science and democratizing knowledge, but it is often incentivized to report on counterintuitive and eye-catching results.”

Ideally, the news media would have a positive relationship with replication success rates in psychology, the authors of the PNAS study write. “Contrary to this ideal, however, we found a negative association between media coverage of a paper and the paper’s likelihood of replication success,” they write. “Therefore, deciding a paper’s merit based on its media coverage is unwise. It would be valuable for the media to remind the audience that new and novel scientific results are only food for thought before future replication confirms their robustness.”

Additional reading

Uncovering the Research Behaviors of Reporters: A Conceptual Framework for Information Literacy in Journalism
Katherine E. Boss, et al. Journalism & Mass Communication Educator, October 2022.

The Problem with Psychological Research in the Media
Steven Stosny. Psychology Today, September 2022.

Critically Evaluating Claims
Megha Satyanarayana, The Open Notebook, January 2022.

How Should Journalists Report a Scientific Study?
Charles Binkley and Subramaniam Vincent. Markkula Center for Applied Ethics at Santa Clara University, September 2020.

What Journalists Get Wrong About Social Science: Full Responses
Brian Resnick. Vox, January 2016.

From The Journalist’s Resource

8 Ways Journalists Can Access Academic Research for Free

5 Things Journalists Need to Know About Statistical Significance

5 Common Research Designs: A Quick Primer for Journalists

5 Tips for Using PubPeer to Investigate Scientific Research Errors and Misconduct

Percent Change versus Percentage-Point Change: What’s the Difference? 4 Tips for Avoiding Math Errors

What’s Standard Deviation? 4 Things Journalists Need to Know

This article first appeared on The Journalist’s Resource and is republished here under a Creative Commons license.

Don’t Forget…

Did you arrive here from our newsletter? Don’t forget to return to the email to continue learning!

Recommended

  • How Primary Care Is Being Disrupted: A Video Primer
  • Thinking of using an activity tracker to achieve your exercise goals? Here’s where it can help – and where it probably won’t
    Activity trackers are great for monitoring your physical activity, but you don’t need all the fancy features. Just focus on steps and activity minutes.

Learn to Age Gracefully

Join the 98k+ American women taking control of their health & aging with our 100% free (and fun!) daily emails:

  • Reishi Mushrooms: Which Benefits Do They Really Have?

    Reishi Mushrooms

    Another Monday Research Review, another mushroom! If we keep this up, we’ll have to rename it “Mushroom Monday”.

    But, there’s so much room for things to say, and these are fun guys to write about, as we check the science for any spore’ious claims…

    Why do people take reishi?

    Popular health claims for the reishi mushroom include:

    • Immune health
    • Cardiovascular health
    • Protection against cancer
    • Antioxidant qualities
    • Reduced fatigue and anxiety

    And does the science agree?

    Let’s take a look, claim by claim:

    Immune health

    A lot of research for this has been in vitro (ie, with cell cultures in labs), but promising, for example:

    Immunomodulating Effect of Ganoderma lucidum (Lingzhi) and Possible Mechanism

(Ganoderma lucidum is the botanical name for reishi, and lingzhi is the Chinese name for it, by the way)

    That’s not to say there are no human studies though; here it was found to boost T-cell production in stressed athletes:

    Effect of Ganoderma lucidum capsules on T lymphocyte subsets in football players on “living high-training low”

    Cardiovascular health

    Here we found a stack of evidence for statistically insignificant improvements in assorted measures of cardiovascular health, and some studies where reishi did not outperform placebo.

    Because the studies were really not that compelling, instead of taking up room (and your time) with them, we’re going to move onto more compelling, exciting science, such as…

    Protection against cancer

    There’s a lot of high quality research for this, and a lot of good results. The body of evidence here is so large that even back as far as 2005, the question was no longer “does it work” or even “how does it work”, but rather “we need more clinical studies to find the best doses”. Researchers even added:

    ❝At present, lingzhi is a health food supplement to support cancer patients, yet the evidence supporting the potential of direct in vivo anticancer effects should not be underestimated.❞

    ~ Yuen et al.

    Check it out:

    Anticancer effects of Ganoderma lucidum: a review of scientific evidence

    Just so you know we’re not kidding about the weight of evidence, let’s drop a few extra sources:

By the way, we shortened most of those titles for brevity, but almost all of them continued with “by” followed by a one-liner of how it does it.

    So it’s not a “mysterious action” thing, it’s a “this is a very potent medicine and we know how it works” thing.

    Antioxidant qualities

Here we literally only found studies saying no change was found, plus one that found a slight increase of antioxidant levels in urine. It’s worth noting that levels of a given thing (or its metabolites, in the case of some things) in urine are often quite unhelpful as regards knowing what’s going on in the body, because we get to measure only what the body lost, not what it gained/kept.

    So again, let’s press on:

    Reduced fatigue and anxiety

    Most of the studies for this that we could find pertained to health-related quality of life for cancer patients specifically, so (while they universally give glowing reports of reishi’s benefits to health and happiness of cancer patients), that’s a confounding factor when it comes to isolating its effects on reduction of fatigue and anxiety in people without cancer.

Here’s one that looked at it in the case of reduction of fatigue, anxiety, and other factors, in patients without cancer (but with neurasthenia), in which they found it was “significantly superior to placebo with respect to the clinical improvement of symptoms”.

    Summary:

    • Reishi mushroom’s anti-cancer properties are very, very clear
    • There is also good science to back immune health claims
    • It also has been found to significantly reduce fatigue and anxiety in unwell patients (we’d love to see more studies on its benefits in otherwise healthy people, though)

    Share This Post

  • The Blue Zones Kitchen – by Dan Buettner

    We’ve previously reviewed Buettner’s other book, The Blue Zones: 9 Lessons For Living Longer From The People Who’ve Lived The Longest, and with this one, it’s now time to focus on the dietary aspect.

    As the title and subtitle promises, we get 100 recipes, inspired by Blue Zone cuisines. The recipes themselves have been tweaked a little for maximum healthiness, eliminating some ingredients that do crop up in the Blue Zones but are exceptions to their higher average healthiness rather than the rule.

The recipes are arranged by geographic zone rather than by meal type, so it might take a full read-through before knowing where to find everything, but it makes it a very enjoyable “coffee-table book” to browse, as well as being practical in the kitchen. The ingredients are mostly easy to find globally, and most can be acquired at a large supermarket and/or health food store. In the case of substitutions, most are obvious: if you don’t have wild fennel where you are, use cultivated, for example.

    In the category of criticism, it appears that Buettner is very unfamiliar with spices, and so has skipped them almost entirely. We at 10almonds could never skip them, and heartily recommend adding your own spices, for their health benefits and flavors. It may take a little experimentation to know what will work with what recipes, but if you’re accustomed to cooking with spices normally, it’s unlikely that you’ll err by going with your heart here.

    Bottom line: we’d give this book a once-over for spice additions, but aside from that, it’s a fine book of cuisine-by-location cooking.

    Click here to check out The Blue Zones Kitchen, and get cooking into your own three digits!

    Share This Post

  • Eat to Live – by Dr. Joel Fuhrman

    It sure would be great if we could eat all that we wanted, and remain healthy without putting on weight.

    That’s the main intent of Dr. Joel Fuhrman’s book, with some caveats:

    • His diet plan gives unlimited amounts of some foods, while restricting others
    • With a focus on nutrient density, he puts beans and legumes into the “eat as much as you want” category, and grains (including whole grains) into the “restrict” category

    This latter is understandable for a weight-loss diet (as the book’s subtitle promises). The question then is: will it be sustainable?

Current scientific consensus holds that whole grains are good and an important part of the diet. It does seem fair that beans and legumes should be able to replace grains, for grains’ carbohydrates and fiber.

    However, now comes the double-edged aspect: beans and legumes contain more protein than grains. So, we’ll feel fuller sooner, and stay fuller for longer. This means we’ll probably lose weight, and keep losing weight. Or at least: losing fat. Muscle mass will stay or go depending on what you’re doing with your muscles.

    If you want to keep your body fat percentage at a certain level and not go below it, you may well need to reintroduce grains to your diet, which isn’t something that Dr. Fuhrman covers in this book.

    Bottom line: this is a good, science-based approach for healthily losing weight (specifically, fat) and keeping it off. It might be a little too good at this for some people though.

    Click here to check out Eat To Live and decide what point you want to stop losing weight at!

    Share This Post

Related Posts

  • How Primary Care Is Being Disrupted: A Video Primer
  • Blueberry & Banana Collagen Baked Oats

    Good news for vegans/vegetarians! While we include an optional tablespoon of collagen powder in this recipe, the whole recipe is already geared around collagen synthesis, so it’s very collagen-boosting even with just the plants, providing collagen’s building blocks of protein, zinc, and vitamins C and D (your miraculous body will use these to assemble the collagen inside you).

    You will need

    • 2 cups oats, whence the protein and zinc
    • 1 cup milk (your preference what kind; we recommend almond for flavor; whether you choose plant or animal though, it should be fortified with vitamin D)
    • 2 bananas, peeled and mashed
    • 4 oz blueberries, whence the vitamin C (frozen is fine) (chopped dried apricots are also a fine substitute if that’s more convenient)
    • 1 oz flaked almonds, whence the protein and zinc
    • 1 tbsp pumpkin seeds, whence the protein and zinc
    • 1 tbsp flax seeds, whence the protein and zinc
    • Optional: 1 tbsp maple syrup
    • Optional: 1 tbsp collagen powder, dissolved in 1 oz hot water

    Method

    (we suggest you read everything at least once before doing anything)

    1) Preheat the oven to 350℉ / 180℃.

    2) Mix the oats with 2 cups boiling water; allow to stand for 10–15 minutes, and then drain any excess water.

    3) Mix the mashed bananas with the remaining ingredients except the milk and blueberries, stirring thoroughly.

    4) Add the softened oats, and stir those in thoroughly too.

    5) Add the milk and blueberries, in that order, stirring gently if using fresh blueberries, lest they get crushed.

    6) Pour the mixture into an 8″ square cake tin that you have lined with baking paper, and smooth the top.

    7) Bake for about 40 minutes or until firm and golden brown. Allow to cool; it will firm up more while it does.

    8) Cut into squares or bars, and serve or store for later.

    Enjoy!

    Want to learn more?

    For those interested in some of the science of what we have going on today:

    We Are Such Stuff As Fish Are Made Of ← our main feature about collagen

    Take care!

  • The Truth About Chocolate & Skin Health

    It’s Q&A Day at 10almonds!

    Have a question or a request? We love to hear from you!

    In cases where we’ve already covered something, we might link to what we wrote before, but will always be happy to revisit any of our topics again in the future too—there’s always more to say!

    As ever: if the question/request can be answered briefly, we’ll do it here in our Q&A Thursday edition. If not, we’ll make a main feature of it shortly afterwards!

    So, no question/request too big or small!

    ❝What’s the science on chocolate and acne? Asking for a family member❞

    The science is: these two things are broadly unrelated to each other.

    There was a very illustrative study done specifically for this, though!

    ❝65 subjects with moderate acne ate either a bar containing ten times the amount of chocolate in a typical bar, or an identical-appearing bar which contained no chocolate. Counting of all the lesions on one side of the face before and after each ingestion period indicated no difference between the bars.

    Five normal subjects ingested two enriched chocolate bars daily for one month; this represented a daily addition to the diet of 1,200 calories, of which about half was vegetable fat. This excessive intake of chocolate and fat did not alter the composition or output of sebum.

    A review of studies purporting to show that diets high in carbohydrate or fat stimulate sebaceous secretion and adversely affect acne vulgaris indicates that these claims are unproved.

    ~ Dr. James Fulton et al.

    Source: Effect of Chocolate on Acne Vulgaris

    As for what might help against acne more than needlessly abstaining from chocolate:

    Why Do We Have Pores, And Could We Not?

    …as well as:

    Of Brains & Breakouts: The Neuroscience Of Your Skin

    And here are some other articles that might interest you about chocolate:

    Enjoy! And while we have your attention… Would you like this section to be bigger? If so, send us more questions!


  • The Brain-Gut Highway: A Two-Way Street


    The Brain-Gut Two-Way Highway

    This is Dr. Emeran Mayer. He has the rather niche dual specialty of being a gastroenterologist and a neurologist. He has published more than 350 peer-reviewed scientific articles, and he’s a professor in the Departments of Medicine, Physiology, and Psychiatry at UCLA. Much of his work has been pioneering medical research into gut-brain interactions.

    We know the brain and gut are connected. What else does he want us to know?

    First, that it is a two-way interaction. The traffic is about 90% “gut tells the brain things” and only 10% “brain tells the gut things”, but that 10% can make more like a 20% difference: consider the swing between the brain using that channel to tell the gut to do things worse, and using it to tell the gut to do things better, versus the midpoint of what the gut would be doing with no direction from the brain at all.

    For example, if we are experiencing unmanaged chronic stress, that is going to tell our gut to do things that had an evolutionary advantage 20,000–200,000 years ago. Those things will not help us now. We do not need cortisol highs and adrenal dumping because we ate a piece of bread while stressed.

    Read more (by Dr. Mayer): The Stress That Evolution Has Not Prepared Us For

    With this in mind, if we want to look after our gut, then we can start before we even put anything in our mouths. Dr. Mayer recommends managing stress, anxiety, and depression from the head downwards as well as from the gut upwards.

    Here’s what we at 10almonds have written previously on how to manage those things:

    Do eat for gut health! Yes, even if…

    Unsurprisingly, Dr. Mayer advocates for a gut-friendly, anti-inflammatory diet. We’ve written about these things before:

    …but there’s just one problem:

    For some people, such as those with IBS, Crohn’s, or colitis, the Mediterranean diet that we (10almonds and Dr. Mayer) generally advocate for is inaccessible. If you have one of those conditions and eat as we describe, a combination of the fiber in many vegetables and the FODMAPs* in many fruits will give you a very bad time indeed.

    *Fermentable Oligo-, Di-, Monosaccharides And Polyols

    Dr. Mayer has the answer to this riddle, and he’s not just guessing; he and his team did science to it. In a study with hundreds of participants, he measured what happened with adherence (or not) to the Mediterranean diet (standard or symptom-modified), in participants with and without IBS.

    The results and conclusions from that study included:

    ❝Among IBS participants, a higher consumption of fruits, vegetables, sugar, and butter was associated with a greater severity of IBS symptoms. Multivariate analysis identified several Mediterranean Diet foods to be associated with increased IBS symptoms.

    A higher adherence to symptom-modified Mediterranean Diet was associated with a lower abundance of potentially harmful Faecalitalea, Streptococcus, and Intestinibacter, and higher abundance of potentially beneficial Holdemanella from the Firmicutes phylum.

    A standard Mediterranean Diet was not associated with IBS symptom severity, although certain Mediterranean Diet foods were associated with increased IBS symptoms. Our study suggests that standard Mediterranean Diet may not be suitable for all patients with IBS and likely needs to be personalized in those with increased symptoms.❞

    In graphical form:

    And if you’d like to read more about this (along with more details on which specific foods to include or exclude to get these results), you can do so…

    Want to know more?

    Dr. Mayer offers many resources, including a blog, books, recipes, podcasts, and even a YouTube channel:
