Feta Cheese vs Mozzarella – Which is Healthier?

10almonds is reader-supported. We may, at no cost to you, receive a portion of sales if you purchase a product through a link in this article.

Our Verdict

When comparing feta to mozzarella, we picked the mozzarella.

Why?

There are possible arguments for both, but there are a couple of factors that we think tip the balance.

In terms of macronutrients, feta has more fat (including more saturated fat) and more cholesterol. Meanwhile, mozzarella has about twice the protein, which is substantial for a cheese. So this section’s a fair win for mozzarella.

In the category of vitamins, however, feta wins with more of vitamins B1, B2, B3, B6, B9, B12, D, & E. In contrast, mozzarella boasts only a little more vitamin A and choline. An easy win for feta in this section.

When it comes to minerals, the matter is decided, we say. Mozzarella has more calcium, magnesium, phosphorus, and potassium, while feta has more copper, iron, and (which counts against it) sodium. A win for mozzarella.

About that sodium… A cup of mozzarella contains about 3% of the RDA of sodium, while a cup of feta contains about 120% of the RDA of sodium. You see the problem? So, while mozzarella was already winning based on adding up the previous categories, the sodium content alone is a reason to choose mozzarella for your salad rather than feta.
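
To make the arithmetic concrete, here’s a minimal sketch in Python. The milligram figures are illustrative assumptions chosen to match the percentages above (sodium varies a lot by brand, and we’re assuming the stricter 1,500 mg daily limit), so swap in the numbers from your own labels:

    # Illustrative sodium comparison (assumed values, not label data):
    SODIUM_LIMIT_MG = 1500  # assumed daily limit; other guidelines use 2300 mg

    cheese_sodium_mg_per_cup = {
        "mozzarella": 45,   # ~3% of a 1500 mg limit (assumed)
        "feta": 1800,       # ~120% of a 1500 mg limit (assumed)
    }

    for cheese, sodium_mg in cheese_sodium_mg_per_cup.items():
        pct = 100 * sodium_mg / SODIUM_LIMIT_MG
        print(f"{cheese}: {sodium_mg} mg per cup = {pct:.0f}% of the daily limit")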

That settles it, but just before we close, we’ll mention that they do both have great gut-healthy properties, containing healthy probiotics.

In short: if it weren’t for the difference in sodium content, this would be a narrow win for mozzarella. As it is, however, it’s a clear win.


Take care!


Recommended

  • Non-Alcohol Mouthwash vs Alcohol Mouthwash – Which is Healthier?
  • CBD Oil
    Diving back into CBD oil’s evolving research and ethical considerations of animal studies – the 10almonds team responds to your burning questions.


  • Can You Repair Your Own Teeth At Home?


    It’s Q&A Day at 10almonds!

    Have a question or a request? We love to hear from you!

    In cases where we’ve already covered something, we might link to what we wrote before, but will always be happy to revisit any of our topics again in the future too—there’s always more to say!

    As ever: if the question/request can be answered briefly, we’ll do it here in our Q&A Thursday edition. If not, we’ll make a main feature of it shortly afterwards!

    So, no question/request too big or small 😎

    ❝I liked your article on tooth remineralization, I saw a “home tooth repair kit”, and wondered if it is as good as what dentists do, or at least will do the job well enough to save a dentist visit?❞

    Firstly, for any wondering about the tooth remineralization, here you go:

    Tooth Remineralization: How To Heal Your Teeth Naturally

    Now, to answer your question, we presume you are talking about something like this kit available on Amazon. In which case, some things to bear in mind:

    • This kind of thing is generally intended as a stop-gap measure until you see a dentist, because you cracked your tooth or lost a filling or something today, and will see the dentist next week, say.
    • This kind of thing is not what Dr. Michelle Jorgensen was talking about in another video* that we wrote about; rather, it is using a polymer filler to rebuild what is missing. The key difference is: this is using plastic, which is not what your teeth are made of, so it will never “take” as part of the tooth, as some biomimetic dentistry options can do.
    • Yes, this does also mean you are putting microplastics (because the powder is usually micronized polymer beads with zinc oxide, to which you add a liquid to create a paste that will set) in your mouth and quite possibly right next to an open blood supply depending on what’s damaged and whether capillaries were reaching it.
    • Because of the different material and application method, the adhesion is nothing like that of professional fillings (be they metal or resin), and thus the chances of it coming out again are so high that it’s more a question of when, rather than if.
    • If you have damage under there (as we presume you do in any scenario where you are using this), and it’s not professionally cleaned before the filling goes in, then it can get infected, and (less dramatically, but still importantly) any extant decay can also get worse. We say “professionally”, because you will not be able to do an adequate job with your toothbrush, floss, etc. at home, and even if you got dentist’s tools (which you can buy, by the way, but we don’t recommend), you will no more be able to do the same quality job as a dentist who has done it many times a day, every day, for the past 20 years, than buying expensive paintbrushes would enable you to restore a Renaissance painting without messing it up.

    *See: Dangers Of Root Canals And Crowns, & What To Do Instead ← what she recommends instead is biomimetic dentistry, which is also more prosaically called “conservative restorative dentistry”, i.e. it tries to conserve as much as possible, replace lost material on a like-for-like basis, and generally end up with a result that’s as close to natural as possible.

    In other words, the short answer to your question is “no, sorry, it isn’t and it won’t”.

    However! Just as it’s good to have a first aid kit in the house even though it won’t do the same job as an ambulance crew, it can be good to have a tooth repair kit (essentially, a tooth first-aid kit) in the house, precisely as a stop-gap measure in the event that you one day crack a tooth or lose a filling or such, and don’t want to leave it open to all things in the meantime.

    (The results of this sort of kit are so far from long-term in nature that it will be quick and easy for your dentist to remove it to do their own job once you get there.)

    If in doubt, always see your dentist as soon as possible, as many things are a lot less work to treat now, than to treat later. Just, make sure to advocate for yourself and what you actually want/need, and don’t let them upsell you on something you didn’t come in for while you’re sitting in their chair—that’s a conversation to be had in advance with a clear head and no pressure (and nobody’s hands in your mouth)!

    See also: Dentists Are Pulling ‘Healthy’ and Treatable Teeth To Profit From Implants, Experts Warn

    Take care!


  • Early Detection May Help Kentucky Tamp Down Its Lung Cancer Crisis


    Anthony Stumbo’s heart sank after the doctor shared his mother’s chest X-ray.

    “I remember that drive home, bringing her back home, and we basically cried,” said the internal medicine physician, who had started practicing in eastern Kentucky near his childhood home shortly before his mother began feeling ill. “Nobody wants to get told they’ve got inoperable lung cancer. I cried because I knew what this meant for her.”

    Now Stumbo, whose mother died the following year, in 1997, is among a group of Kentucky clinicians and researchers determined to rewrite the script for other families by promoting training and boosting awareness about early detection in the state with the highest lung cancer death rate. For the past decade, Kentucky researchers have promoted lung cancer screening, first recommended by the U.S. Preventive Services Task Force in 2013. These days the Bluegrass State screens more residents who are at high risk of developing lung cancer than any state except Massachusetts — 10.6% of eligible residents in 2022, more than double the national rate of 4.5% — according to the most recent American Lung Association analysis.

    The effort has been driven by a research initiative called the Kentucky LEADS (Lung Cancer Education, Awareness, Detection, and Survivorship) Collaborative, which launched in 2014 to improve screening and prevention and to identify more tumors earlier, when survival odds are far better. The group has worked with clinicians and hospital administrators statewide to boost screening rates both in urban areas and in regions far removed from academic medical centers, such as rural Appalachia. But, a decade into the program, the researchers face an ongoing challenge as they encourage more people to get tested: namely, the fear and stigma that swirl around smoking and lung cancer.

    Lung cancer kills more Americans than any other malignancy, and the death rates are worst in a swath of states including Kentucky and its neighbors Tennessee and West Virginia, and stretching south to Mississippi and Louisiana, according to data from the Centers for Disease Control and Prevention.

    It’s a bit early to see the impact on lung cancer deaths because people may still live for years with a malignancy, LEADS researchers said. Plus, treatment improvements and other factors may also help reduce death rates along with increased screening. Still, data already shows that more cancers in Kentucky are being detected before they become advanced, and thus more difficult to treat, they said. Of total lung cancer cases statewide, the percentage of advanced cases — defined as cancers that had spread to the lymph nodes or beyond — hovered near 81% between 2000 and 2014, according to Kentucky Cancer Registry data. By 2020, that number had declined to 72%, according to the most recent data available.

    “We are changing the story of families. And there is hope where there has not been hope before,” said Jennifer Knight, a LEADS principal investigator.

    Older adults in their 60s and 70s can hold a particularly bleak view of their mortality odds, given what their loved ones experienced before screening became available, said Ashley Shemwell, a nurse navigator for the lung cancer screening program at Owensboro Health, a nonprofit health system that serves Kentucky and Indiana.

    “A lot of them will say, ‘It doesn’t matter if I get lung cancer or not because it’s going to kill me. So I don’t want to know,’” said Shemwell. “With that generation, they saw a lot of lung cancers and a lot of deaths. And it was terrible deaths because they were stage 4 lung cancers.” But she reminds them that lung cancer is much more treatable if caught before it spreads.

    The collaborative works with several partners, including the University of Kentucky, the University of Louisville, and GO2 for Lung Cancer, and has received grant funding from the Bristol Myers Squibb Foundation. Leaders have provided training and other support to 10 hospital-based screening programs, including a stipend to pay for resources such as educational materials or a nurse navigator, Knight said. In 2022, state lawmakers established a statewide lung cancer screening program based in part on the group’s work.

    Jacob Sands, a lung cancer physician at Boston’s Dana-Farber Cancer Institute, credits the LEADS collaborative with encouraging patients to return for annual screening and follow-up testing for any suspicious nodules. “What the Kentucky LEADS program is doing is fantastic, and that is how you really move the needle in implementing lung screening on a larger scale,” said Sands, who isn’t affiliated with the Kentucky program and serves as a volunteer spokesperson for the American Lung Association.

    In 2014, Kentucky expanded Medicaid, increasing the number of lower-income people who qualified for lung cancer screening and any related treatment. Adults 50 to 80 years old are advised to get a CT scan every year if they have accumulated at least 20 pack years and still smoke or have quit within the past 15 years, according to the latest task force recommendation, which widened the pool of eligible adults. (To calculate pack years, multiply the packs of cigarettes smoked daily by years of smoking.) The lung association offers an online quiz, called “Saved By The Scan,” to figure out likely eligibility for insurance coverage.
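
    For anyone who wants the eligibility arithmetic spelled out, here is a minimal sketch in Python of the rule as summarized above (thresholds per the task force recommendation described in this paragraph; the function names are our own, and this is an illustration only, not medical or insurance advice):

        def pack_years(packs_per_day: float, years_smoked: float) -> float:
            """Pack-years = packs of cigarettes smoked daily x years of smoking."""
            return packs_per_day * years_smoked

        def screening_advised(age: int, packs_per_day: float, years_smoked: float,
                              years_since_quitting: float) -> bool:
            """Task-force rule as summarized above: age 50-80, at least 20
            pack-years, and currently smoking (years_since_quitting = 0)
            or having quit within the past 15 years."""
            return (50 <= age <= 80
                    and pack_years(packs_per_day, years_smoked) >= 20
                    and years_since_quitting <= 15)

        # Example: a 62-year-old who smoked half a pack a day for 40 years
        # (20 pack-years) and quit 5 years ago is advised to get an annual CT scan.
        print(screening_advised(62, 0.5, 40, 5))  # True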

    Half of U.S. patients aren’t diagnosed until their cancer has spread beyond the lungs and lymph nodes to elsewhere in the body. By then, the five-year survival rate is 8.2%.

    But regular screening boosts those odds. When a CT scan detects lung cancer early, patients have an 81% chance of living at least 20 years, according to data published in November in the journal Radiology.

    Some adults, like Lisa Ayers, didn’t realize lung cancer screening was an option. Her family doctor recommended a CT scan last year after she reported breathing difficulties. Ayers, who lives in Ohio near the Kentucky border, got screened at UK King’s Daughters, a hospital in far eastern Kentucky. The scan didn’t take much time, and she didn’t have to undress, the 57-year-old said. “It took me longer to park,” she quipped.

    She was diagnosed with a lung carcinoid tumor, a type of neuroendocrine cancer that can grow in various parts of the body. Her cancer was considered too risky for surgery, Ayers said. A biopsy showed the cancer was slow-growing, and her doctors said they would monitor it closely.

    Ayers, a lifelong smoker, recalled her doctor said that her type of cancer isn’t typically linked to smoking. But she quit anyway, feeling like she’d been given a second chance to avoid developing a smoking-related cancer. “It was a big wake-up call for me.”

    Adults with a smoking history often report being treated poorly by medical professionals, said Jamie Studts, a health psychologist and a LEADS principal investigator, who has been involved with the research from the start. The goal is to avoid stigmatizing people and instead to build rapport, meeting them where they are that day, he said.

    “If someone tells us that they’re not ready to quit smoking but they want to have lung cancer screening, awesome; we’d love to help,” Studts said. “You know what? You actually develop a relationship with an individual by accepting, ‘No.’”

    Nationally, screening rates vary widely. Massachusetts reaches 11.9% of eligible residents, while California ranks last, screening just 0.7%, according to the lung association analysis.

    That data likely doesn’t capture all California screenings, as it may not include CT scans done through large managed care organizations, said Raquel Arias, a Los Angeles-based associate director of state partnerships at the American Cancer Society. She cited other 2022 data for California, looking at lung cancer screening for eligible Medicare fee-for-service patients, which found a screening rate of 1%-2% in that population.

    But, Arias said, the state’s effort is “nowhere near what it needs to be.”

    The low smoking rate in California, along with its image as a healthy state, “seems to have come with the unintended consequence of further stigmatizing people who smoke,” said Arias, citing one of the findings from a 2022 report looking at lung cancer screening barriers. For instance, eligible patients may be reluctant to share prior smoking habits with their health provider, she said.

    Meanwhile, Kentucky screening efforts progress, scan by scan.

    At Appalachian Regional Healthcare, 3,071 patients were screened in 2023, compared with 372 in 2017. “We’re just scratching the surface of the potential lives that we can have an effect on,” said Stumbo, a lung cancer screening champion at the health system, which includes 14 hospitals, most located in eastern Kentucky.

    The doctor hasn’t shed his own grief about what his family missed after his mother died at age 51, long before annual screening was recommended. “Knowing that my children were born, and never knowing their grandmother,” he said, “just how sad is that?”

    KFF Health News is a national newsroom that produces in-depth journalism about health issues and is one of the core operating programs at KFF—an independent source of health policy research, polling, and journalism. Learn more about KFF.



  • The Best Mobility Exercises For Each Joint


    Stiff joints and tight muscles limit movement, performance, and daily activities. They also increase the risk of injury, and lengthen recovery time if an injury does happen. So, it’s pretty important to take care of that!

    Here’s how

    The key to joint health is understanding mobility, flexibility, and stability:

    • Mobility: active joint movement through a range of motion.
    • Flexibility: muscle lengthening passively through a range of motion.
    • Stability: body’s ability to return to position after disturbance.

    Different body parts have different needs when it comes to prioritizing mobility, flexibility, and stability exercises. So, with that in mind, here’s what to do for your…

    • Wrists: flexibility and stability (e.g., wrist circles, loaded flexions/extensions).
    • Elbows: stability is key; exercises like wrist and shoulder movements benefit the elbows indirectly.
    • Shoulders: mobility and stability; exercises include prone arm circles, passive hangs, active prone raises, easy bridges, and stick-supported movements.
    • Spine: mobility and stability; recommended exercises include cat-cow and quadruped reach.
    • Hips: mobility and flexibility through deep squat hip rotations; beginners can use hands for support.
    • Knees: stability; exercises include elevated pistols, Bulgarian split squats, lunges, and single-leg balancing.
    • Ankles: flexibility and stability; exercises include lunges, prying goblet squats, and deep squats with support if necessary.

    For more on all of these, plus visual demonstrations, enjoy:

    Click Here If The Embedded Video Doesn’t Load Automatically!

    Want to learn more?

    You might also like to read:

    Building & Maintaining Mobility

    Take care!


Related Posts

  • Non-Alcohol Mouthwash vs Alcohol Mouthwash – Which is Healthier?
  • How do science journalists decide whether a psychology study is worth covering?


    Complex research papers and data flood academic journals daily, and science journalists play a pivotal role in disseminating that information to the public. This can be a daunting task, requiring a keen understanding of the subject matter and the ability to translate dense academic language into narratives that resonate with the general public.

    Several resources and tip sheets, including the Know Your Research section here at The Journalist’s Resource, aim to help journalists hone their skills in reporting on academic research.

    But what factors do science journalists look for to decide whether a social science research study is trustworthy and newsworthy? That’s the question researchers at the University of California, Davis, and the University of Melbourne in Australia examine in a recent study, “How Do Science Journalists Evaluate Psychology Research?” published in September in Advances in Methods and Practices in Psychological Science.

    Their online survey of 181 mostly U.S.-based science journalists looked at how and whether they were influenced by four factors in fictitious research summaries: the sample size (number of participants in the study), sample representativeness (whether the participants in the study were from a convenience sample or a more representative sample), the statistical significance level of the result (just barely statistically significant or well below the significance threshold), and the prestige of a researcher’s university.

    The researchers found that sample size was the only factor that had a robust influence on journalists’ ratings of how trustworthy and newsworthy a study finding was.

    University prestige had no effect, while the effects of sample representativeness and statistical significance were inconclusive.

    But there’s nuance to the findings, the authors note.

    “I don’t want people to think that science journalists aren’t paying attention to other things, and are only paying attention to sample size,” says Julia Bottesini, an independent researcher, a recent Ph.D. graduate from the Psychology Department at UC Davis, and the first author of the study.

    Overall, the results show that “these journalists are doing a very decent job” vetting research findings, Bottesini says.

    Also, the findings from the study are not generalizable to all science journalists or other fields of research, the authors note.

    “Instead, our conclusions should be circumscribed to U.S.-based science journalists who are at least somewhat familiar with the statistical and replication challenges facing science,” they write. (Over the past decade a series of projects have found that the results of many studies in psychology and other fields can’t be reproduced, leading to what has been called a ‘replication crisis.’)

    “This [study] is just one tiny brick in the wall and I hope other people get excited about this topic and do more research on it,” Bottesini says.

    More on the study’s findings

    The study’s findings can be useful for researchers who want to better understand how science journalists read their research and what kind of intervention — such as teaching journalists about statistics — can help journalists better understand research papers.

    “As an academic, I take away the idea that journalists are a great population to try to study because they’re doing something really important and it’s important to know more about what they’re doing,” says Ellen Peters, director of the Center for Science Communication Research at the School of Journalism and Communication at the University of Oregon. Peters, who was not involved in the study, is also a psychologist who studies human judgment and decision-making.

    Peters says the study was “overall terrific.” She adds that understanding how journalists do their work “is an incredibly important thing to do because journalists are who reach the majority of the U.S. with science news, so understanding how they’re reading some of our scientific studies and then choosing whether to write about them or not is important.”

    The study, conducted between December 2020 and March 2021, is based on an online survey of journalists who said they at least sometimes covered science or other topics related to health, medicine, psychology, social sciences, or well-being. They were offered a $25 Amazon gift card as compensation.

    Among the participants, 77% were women, 19% were men, 3% were nonbinary and 1% preferred not to say. About 62% said they had studied physical or natural sciences at the undergraduate level, and 24% at the graduate level. Also, 48% reported having a journalism degree. The study did not include the journalists’ news reporting experience level.

    Participants were recruited through the professional network of Christie Aschwanden, an independent journalist and consultant on the study, which could be a source of bias, the authors note.

    “Although the size of the sample we obtained (N = 181) suggests we were able to collect a range of perspectives, we suspect this sample is biased by an ‘Aschwanden effect’: that science journalists in the same professional network as C. Aschwanden will be more familiar with issues related to the replication crisis in psychology and subsequent methodological reform, a topic C. Aschwanden has covered extensively in her work,” they write.

    Participants were randomly presented with eight of 22 one-paragraph fictitious social and personality psychology research summaries with fictitious authors. The summaries are posted on Open Science Framework, a free and open-source project management tool for researchers by the Center for Open Science, with a mission to increase openness, integrity and reproducibility of research.

    For instance, one of the vignettes reads:

    “Scientists at Harvard University announced today the results of a study exploring whether introspection can improve cooperation. 550 undergraduates at the university were randomly assigned to either do a breathing exercise or reflect on a series of questions designed to promote introspective thoughts for 5 minutes. Participants then engaged in a cooperative decision-making game, where cooperation resulted in better outcomes. People who spent time on introspection performed significantly better at these cooperative games (t (548) = 3.21, p = 0.001). ‘Introspection seems to promote better cooperation between people,’ says Dr. Quinn, the lead author on the paper.”

    In addition to answering multiple-choice survey questions, participants were given the opportunity to answer open-ended questions, such as “What characteristics do you [typically] consider when evaluating the trustworthiness of a scientific finding?”

    Bottesini says those responses illuminated how science journalists analyze a research study. Participants often mentioned the prestige of the journal in which it was published or whether the study had been peer-reviewed. Many also seemed to value experimental research designs over observational studies.

    Considering statistical significance

    When it came to considering p-values, “some answers suggested that journalists do take statistical significance into account, but only very few included explanations that suggested they made any distinction between higher or lower p values; instead, most mentions of p values suggest journalists focused on whether the key result was statistically significant,” the authors write.

    Also, many participants mentioned that it was very important to talk to outside experts or researchers in the same field to get a better understanding of the finding and whether it could be trusted, the authors write.

    “Journalists also expressed that it was important to understand who funded the study and whether the researchers or funders had any conflicts of interest,” they write.

    Participants also “indicated that making claims that were calibrated to the evidence was also important and expressed misgivings about studies for which the conclusions do not follow from the evidence,” the authors write.

    In response to the open-ended question, “What characteristics do you [typically] consider when evaluating the trustworthiness of a scientific finding?” some journalists wrote they checked whether the study was overstating conclusions or claims. Below are some of their written responses:

    • “Is the researcher adamant that this study of 40 college kids is representative? If so, that’s a red flag.”
    • “Whether authors make sweeping generalizations based on the study or take a more measured approach to sharing and promoting it.”
    • “Another major point for me is how ‘certain’ the scientists appear to be when commenting on their findings. If a researcher makes claims which I consider to be over-the-top about the validity or impact of their findings, I often won’t cover.”
    • “I also look at the difference between what an experiment actually shows versus the conclusion researchers draw from it — if there’s a big gap, that’s a huge red flag.”

    Peters says the study’s findings show that “not only are journalists smart, but they have also gone out of their way to get educated about things that should matter.”

    What other research shows about science journalists

    A 2023 study, published in the International Journal of Communication, based on an online survey of 82 U.S. science journalists, aims to understand what they know and think about open-access research, including peer-reviewed journals and articles that don’t have a paywall, and preprints. Data was collected between October 2021 and February 2022. Preprints are scientific studies that have yet to be peer-reviewed and are shared on open repositories such as medRxiv and bioRxiv. The study finds that its respondents “are aware of OA and related issues and make conscious decisions around which OA scholarly articles they use as sources.”

    A 2021 study, published in the Journal of Science Communication, looks at the impact of the COVID-19 pandemic on the work of science journalists. Based on an online survey of 633 science journalists from 77 countries, it finds that the pandemic somewhat brought scientists and science journalists closer together. “For most respondents, scientists were more available and more talkative,” the authors write. The pandemic has also provided an opportunity to explain the scientific process to the public, and remind them that “science is not a finished enterprise,” the authors write.

    More than a decade ago, a 2008 study, published in PLOS Medicine, and based on an analysis of 500 health news stories, found that “journalists usually fail to discuss costs, the quality of the evidence, the existence of alternative options, and the absolute magnitude of potential benefits and harms,” when reporting on research studies. Giving time to journalists to research and understand the studies, giving them space for publication and broadcasting of the stories, and training them in understanding academic research are some of the solutions to fill the gaps, writes Gary Schwitzer, the study author.

    Advice for journalists

    We asked Bottesini, Peters, Aschwanden and Tamar Wilner, a postdoctoral fellow at the University of Texas, who was not involved in the study, to share advice for journalists who cover research studies. Wilner is conducting a study on how journalism research informs the practice of journalism. Here are their tips:

    1. Examine the study before reporting it.

    Does the study claim match the evidence? “One thing that makes me trust the paper more is if their interpretation of the findings is very calibrated to the kind of evidence that they have,” says Bottesini. In other words, if the study makes a claim in its results that’s far-fetched, the authors should present a lot of evidence to back that claim.

    Not all surprising results are newsworthy. If you come across a surprising finding from a single study, Peters advises you to step back and remember Carl Sagan’s quote: “Extraordinary claims require extraordinary evidence.”

    How transparent are the authors about their data? For instance, are the authors posting information such as their data and the computer codes they use to analyze the data on platforms such as Open Science Framework, AsPredicted, or The Dataverse Project? Some researchers ‘preregister’ their studies, which means they share how they’re planning to analyze the data before they see them. “Transparency doesn’t automatically mean that a study is trustworthy,” but it gives others the chance to double-check the findings, Bottesini says.

    Look at the study design. Is it an experimental study or an observational study? Observational studies can show correlations but not causation.

    “Observational studies can be very important for suggesting hypotheses and pointing us towards relationships and associations,” Aschwanden says.

    Experimental studies can provide stronger evidence toward a cause, but journalists must still be cautious when reporting the results, she advises. “If we end up implying causality, then once it’s published and people see it, it can really take hold,” she says.

    Know the difference between preprints and peer-reviewed, published studies. Peer-reviewed papers tend to be of higher quality than those that are not peer-reviewed. Read our tip sheet on the difference between preprints and journal articles.

    Beware of predatory journals. Predatory journals are journals that “claim to be legitimate scholarly journals, but misrepresent their publishing practices,” according to a 2020 journal article published in the journal Toxicologic Pathology, “Predatory Journals: What They Are and How to Avoid Them.”

    2. Zoom in on data.

    Read the methods section of the study. The methods section of the study usually appears after the introduction and background section. “To me, the methods section is almost the most important part of any scientific paper,” says Aschwanden. “It’s amazing to me how often you read the design and the methods section, and anyone can see that it’s a flawed design. So just giving things a gut-level check can be really important.”

    What’s the sample size? Not all good studies have large numbers of participants but pay attention to the claims a study makes with a small sample size. “If you have a small sample, you calibrate your claims to the things you can tell about those people and don’t make big claims based on a little bit of evidence,” says Bottesini.

    But also remember that factors such as sample size and p-value are not “as clear cut as some journalists might assume,” says Wilner.
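
    As a rough illustration of that trade-off, here is a hedged sketch using the statsmodels power calculator (the effect sizes are hypothetical Cohen’s-d benchmarks, not taken from any study discussed here). It shows roughly how many participants per group a two-sample t-test needs to detect an effect 80% of the time at the usual 0.05 threshold:

        # Participants needed per group for a two-sample t-test at
        # 80% power and alpha = 0.05, for illustrative effect sizes.
        from statsmodels.stats.power import TTestIndPower

        solver = TTestIndPower()
        for d in (0.2, 0.5, 0.8):  # small, medium, large (Cohen's benchmarks)
            n = solver.solve_power(effect_size=d, alpha=0.05, power=0.8)
            print(f"d = {d}: ~{n:.0f} participants per group")
        # Prints roughly: d = 0.2: ~394; d = 0.5: ~64; d = 0.8: ~26

    In other words, a small study can only credibly detect large effects; precise claims about small effects demand large samples.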

    How representative of a population is the study sample? “If the study has a non-representative sample of, say, undergraduate students, and they’re making claims about the general population, that’s kind of a red flag,” says Bottesini. Aschwanden points to the acronym WEIRD, which stands for “Western, Educated, Industrialized, Rich, and Democratic,” and is used to highlight a lack of diversity in a sample. Studies based on such samples may not be generalizable to the entire population, she says.

    Look at the p-value. Statistical significance is both confusing and controversial, but it’s important to consider. Read our tip sheet, “5 Things Journalists Need to Know About Statistical Significance,” to better understand it.
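
    As a concrete example (a minimal sketch using the reported statistics from the fictitious vignette quoted earlier, and assuming a two-sided test), the p-value for t(548) = 3.21 can be recomputed with SciPy:

        # Recompute the vignette's p-value: t = 3.21, 548 degrees of
        # freedom, two-sided test.
        from scipy import stats

        t_stat, df = 3.21, 548
        p = 2 * stats.t.sf(t_stat, df)  # sf is the survival function, 1 - CDF
        print(f"p = {p:.4f}")  # ~0.0014, well below the conventional 0.05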

    3. Talk to scientists not involved in the study.

    If you’re not sure about the quality of a study, ask for help. “Talk to someone who is an expert in study design or statistics to make sure that [the study authors] use the appropriate statistics and that methods they use are appropriate because it’s amazing to me how often they’re not,” says Aschwanden.

    Get an opinion from an outside expert. It’s always a good idea to present the study to other researchers in the field who have no conflicts of interest and are not involved in the research you’re covering, and get their opinion. “Don’t take scientists at their word. Look into it. Ask other scientists, preferably the ones who don’t have a conflict of interest with the research,” says Bottesini.

    4. Remember that a single study is simply one piece of a growing body of evidence.

    “I have a general rule that a single study doesn’t tell us very much; it just gives us proof of concept,” says Peters. “It gives us interesting ideas. It should be retested. We need an accumulation of evidence.”

    Aschwanden says as a practice, she tries to avoid reporting stories about individual studies, with some exceptions such as very large, randomized controlled studies that have been underway for a long time and have a large number of participants. “I don’t want to say you never want to write a single-study story, but it always needs to be placed in the context of the rest of the evidence that we have available,” she says.

    Wilner advises journalists to spend some time looking at the scope of research on the study’s specific topic and learn how it has been written about and studied up to that point.

    “We would want science journalists to be reporting balance of evidence, and not focusing unduly on the findings that are just in front of them in a most recent study,” Wilner says. “And that’s a very difficult thing to ask journalists to do because they’re being asked to make their article very newsy, so it’s a difficult balancing act, but we can try and push journalists to do more of that.”

    5. Remind readers that science is always changing.

    “Science is always two steps forward, one step back,” says Peters. Give the public a notion of uncertainty, she advises. “This is what we know today. It may change tomorrow, but this is the best science that we know of today.”

    Aschwanden echoes the sentiment. “All scientific results are provisional, and we need to keep that in mind,” she says. “It doesn’t mean that we can’t know anything, but it’s very important that we don’t overstate things.”

    Authors of a study published in PNAS in January analyzed more than 14,000 psychology papers and found that replication success rates differ widely by psychology subfields. That study also found that papers that could not be replicated received more initial press coverage than those that could. 

    The authors note that the media “plays a significant role in creating the public’s image of science and democratizing knowledge, but it is often incentivized to report on counterintuitive and eye-catching results.”

    Ideally, the news media would have a positive relationship with replication success rates in psychology, the authors of the PNAS study write. “Contrary to this ideal, however, we found a negative association between media coverage of a paper and the paper’s likelihood of replication success,” they write. “Therefore, deciding a paper’s merit based on its media coverage is unwise. It would be valuable for the media to remind the audience that new and novel scientific results are only food for thought before future replication confirms their robustness.”

    Additional reading

    Uncovering the Research Behaviors of Reporters: A Conceptual Framework for Information Literacy in Journalism
    Katherine E. Boss, et al. Journalism & Mass Communication Educator, October 2022.

    The Problem with Psychological Research in the Media
    Steven Stosny. Psychology Today, September 2022.

    Critically Evaluating Claims
    Megha Satyanarayana, The Open Notebook, January 2022.

    How Should Journalists Report a Scientific Study?
    Charles Binkley and Subramaniam Vincent. Markkula Center for Applied Ethics at Santa Clara University, September 2020.

    What Journalists Get Wrong About Social Science: Full Responses
    Brian Resnick. Vox, January 2016.

    From The Journalist’s Resource

    8 Ways Journalists Can Access Academic Research for Free

    5 Things Journalists Need to Know About Statistical Significance

    5 Common Research Designs: A Quick Primer for Journalists

    5 Tips for Using PubPeer to Investigate Scientific Research Errors and Misconduct

    Percent Change versus Percentage-Point Change: What’s the Difference? 4 Tips for Avoiding Math Errors

    What’s Standard Deviation? 4 Things Journalists Need to Know

    This article first appeared on The Journalist’s Resource and is republished here under a Creative Commons license.



  • Qigong: A Breath Of Fresh Air?


    Qigong: Breathing Is Good (Magic Remains Unverified)

    In Tuesday’s newsletter, we asked you for your opinions of qigong, and got the below-described set of responses:

    • About 55% said “Qigong is just breathing, but breathing exercises are good for the health”
    • About 41% said “Qigong helps regulate our qi and thus imbue us with healthy vitality”
    • One (1) person said “Qigong is a mystical waste of time and any benefits are just placebo”

    The sample size was a little low for this one, but the results were quite clearly favorable, one way or another.

    So what does the science say?

    Qigong is just breathing: True or False?

    True or False, depending on how we want to define it—because qigong ranges in its presentation from indeed “just breathing exercises”, to “breathing exercises with visualization”, to “special breathing exercises with visualization that have to be exactly this way, with these hand and sometimes body movements also, which also must be just right”, to far more complex definitions that involve qi by various mystical definitions, and/or an appeal to a scientific analog of qi, often some kind of bioelectrical field or such.

    There is, it must be said, no good quality evidence for the existence of qi.

    Writer’s note, lest 41% of you want my head now: I’ve been practicing qigong and related arts for about 30 years and find such to be of great merit. This personal experience and understanding does not, however, change the state of affairs when it comes to the availability (or rather, the lack) of high quality clinical evidence to point to.

    Which is not to say there is no clinical evidence, for example:

    Acute Physiological and Psychological Effects of Qigong Exercise in Older Practitioners

    …found that qigong indeed increased meridian electrical conductance!

    Except… Electrical conductance is measured with galvanic skin responses, which increase with sweat. But don’t worry, to control for that, they asked participants to dry themselves with a towel. Unfortunately, this overlooks the fact that a) more sweat can come where that came from, because the body will continue until it is satisfied of adequate homeostasis, and b) drying oneself with a towel will remove the moisture better than it’ll remove the salts from the skin—bearing in mind that it’s mostly the salts, rather than the moisture itself, that improve the conductivity (pure distilled water does conduct electricity, but not very well).

    In other words, this was shoddy methodology. How did it pass peer review? Well, here’s an insight into that journal’s peer review process…

    ❝The peer-review system of EBCAM is farcical: potential authors who send their submissions to EBCAM are invited to suggest their preferred reviewers who subsequently are almost invariably appointed to do the job. It goes without saying that such a system is prone to all sorts of serious failures; in fact, this is not peer-review at all, in my opinion, it is an unethical sham.❞

    ~ Dr. Edzard Ernst, a founding editor of EBCAM (he since left, and decries what has happened to it since)

    One of the other key problems is: how does one test qigong against placebo?

    Scientists have looked into this question, and their answers have thus far been unsatisfying, generally amounting to the true-but-unhelpful statement that “future research needs to be better”:

    Problems of scientific methodology related to placebo control in Qigong studies: A systematic review

    Most studies into qigong are interventional studies, that is to say, they measure people’s metrics (for example, blood pressure, heart rate, maybe immune function biomarkers, sleep quality metrics of various kinds, subjective reports of stress levels, physical biomarkers of stress levels, things like that), then do a course of qigong (perhaps 6 weeks, for example), then measure them again, and see if the course of qigong improved things.

    This almost always results in an improvement when looking at the before-and-after, but it says nothing for whether the benefits were purely placebo.

    We did find one study that claimed to be placebo-controlled:

    A placebo-controlled trial of ‘one-minute qigong exercise’ on the reduction of blood pressure among patients with essential hypertension

    …but upon reading the paper itself carefully, it turned out that while the experimental group did qigong, the control group did a reading exercise. Which is… Saying how well qigong performs vs reading (qigong did outperform reading, for the record), but nothing for how well it performs vs placebo, because reading isn’t a remotely credible placebo.

    See also: Placebo Effect: Making Things Work Since… Well, A Very Long Time Ago ← this one explains a lot about how placebo effect does work

    Qigong is a mystical waste of time: True or False?

    False! This one we can answer easily. Interventional studies invariably find it does help, and the fact remains that even if placebo is its primary mechanism of action, it is of benefit and therefore not a waste of time.

    Which is not to say that placebo is its only, or even necessarily primary, mechanism of action.

    Even from a purely empirical evidence-based medicine point of view, qigong is at the very least breathing exercises plus (usually) some low-impact body movement. Those are already two things that can be looked at, mechanistic processes pointed to, and declarations confidently made of “this is an activity that’s beneficial for health”.

    See for example:

    …and those are all from respectable journals with meaningful peer review processes.

    None of them are placebo-controlled, because there is no real option of “and group B will only be tricked into believing they are doing deep breathing exercises with low-impact movements”; that’s impossible.

    But! They each show how doing qigong reliably outperforms not doing qigong for various measurable metrics of health.

    And, we chose examples with physical symptoms and, where possible, empirically measurable outcomes (such as COVID-19 infection levels, or inflammatory responses); there are reams of studies showing qigong improves purely subjective wellbeing—but the latter could probably be claimed for any enjoyable activity, whereas changes in inflammatory biomarkers, not so much.

    In short: for most people, it indeed reliably helps with many things. And importantly, it has no particular risks associated with it, and it’s almost universally framed as a complementary therapy rather than an alternative therapy.

    This is critical, because it means that whereas someone may hold off on taking evidence-based medicines while trying out (for example) homeopathy, few people are likely to hold off on other treatments while trying out qigong—since it’s being viewed as a helper rather than a Hail-Mary.

    Want to read more about qigong?

    Here’s what the NIH’s National Center for Complementary and Integrative Health has to say. It cites a lot of poor quality science, but it does mention when the science it’s citing is of poor quality, and overall gives quite a rounded view:

    Qigong: What You Need To Know

    Enjoy!



  • The Mediterranean Diet Cookbook for Beginners – by Jessica Aledo


    There are a lot of Mediterranean Diet books on the market, and not all of them actually stick to the Mediterranean Diet. There’s a common mistake of thinking “Well, this dish is from the Mediterranean region, so…”, but that doesn’t make, for example, bacon-laden carbonara part of the Mediterranean Diet!

    Jessica Aledo does better, and sticks unwaveringly to the Mediterranean Diet principles.

    First, she gives a broad introduction, covering:

    • The Mediterranean Diet pyramid
    • Foods to eat on the Mediterranean Diet
    • Foods to avoid on the Mediterranean Diet
    • Benefits of the Mediterranean Diet

    Then, it’s straight into the recipes, of which there are 201 (as with many recipe books, the title is a little misleading about this).

    They’re divided into sections, thus:

    • Breakfasts
    • Lunches
    • Snacks
    • Dinners
    • Desserts

    The recipes are clear and simple, one per double-page, with high quality color illustrations. They give ingredients/directions/nutrients. There’s no padding!

    Helpfully, she includes a shopping list as an appendix.

    Bottom line: if you’re looking to build your Mediterranean Diet repertoire, this book is an excellent choice.

    Get your copy of The Mediterranean Diet Cookbook for Beginners from Amazon today!

