Spirulina vs Nori – Which is Healthier?
10almonds is reader-supported. We may, at no cost to you, receive a portion of sales if you purchase a product through a link in this article.
Our Verdict
When comparing spirulina to nori, we picked the nori.
Why?
In the battle of the seaweeds, if spirulina is a superfood (and it is), then nori is a super-dooperfood. So today is one of those “a very nutritious food making another very nutritious food look bad by standing next to it” days. With that in mind…
In terms of macros, they’re close to identical. They’re both mostly water with protein, carbs, and fiber. Technically nori is higher in carbs, but we’re talking about a 2.5g/100g difference.
In the category of vitamins, spirulina has more vitamin B1, while nori has a lot more of vitamins A, B2, B3, B5, B6, B9, C, E, K, and choline.
When it comes to minerals, it’s a little closer but still a clear win for nori; spirulina has more copper, iron, and magnesium, while nori has more calcium, manganese, phosphorus, potassium, and zinc.
Want to try some nori? Here’s an example product on Amazon 😎
Want to learn more?
You might like to read:
21% Stronger Bones in a Year at 62? Yes, It’s Possible (No Calcium Supplements Needed!) ← nori was an important part of the diet enjoyed here
Take care!
Don’t Forget…
Did you arrive here from our newsletter? Don’t forget to return to the email to continue learning!
Recommended
Learn to Age Gracefully
Join the 98k+ American women taking control of their health & aging with our 100% free (and fun!) daily emails:
The 6 Dimensions Of Sleep (And Why They Matter)
How Good Is Your Sleep, Really?
This is Dr. Marie-Pierre St-Onge, Director of Columbia University’s Center of Excellence for Sleep and Circadian Research.
The focus of Dr. St-Onge’s research is the study of the impact of lifestyle, especially sleep and diet, on cardio-metabolic risk factors.
She conducts clinical research combining her expertise on sleep, nutrition, and energy regulation.
What kind of things do her studies look at?
Her work focuses on questions about…
- The role of circadian rhythms (including sleep duration and timing)
- Meal timing and eating patterns
…and their impact on cardio-metabolic risk.
What does she want us to know?
First things first, when not to worry:
❝Getting a bad night’s sleep once in a while isn’t anything to worry about. That’s what we would describe as transient insomnia. Chronic insomnia occurs when you spend three months or more without regular sleep, and that is when I would start to be concerned.❞
But… as prevention is (as ever) better than cure, she also advises that we do pay attention to our sleep! And, as for how to do that…
The Six Dimensions of Sleep
One useful definition of overall sleep health is the RU-SATED framework, which assesses six key dimensions of sleep that have been consistently associated with better health outcomes. These are:
- regularity
- satisfaction with sleep
- alertness during waking hours
- timing of sleep
- efficiency of sleep
- duration of sleep
You’ll notice that some of these are things you can only really track if you use a sleep-monitoring app. She does recommend using one, and so do we!
We reviewed and compared some of the most popular sleep-monitoring apps! You can check them out here: Time For Some Pillow Talk
You also might like…
We’re not all the same with regard to when is the best time for us to sleep, so:
Use This Sleep Cycle Calculator To Figure Out the Optimal Time for You To Go to Bed and Wake Up
AROUND THE WEB
What’s happening in the health world…
- Aspirin may make your breathing worse
- Taking naps for more than 30 minutes may raise your metabolic disease risk
- How to ease back into exercise after surgery
- Study provides evidence that breathing exercises may reduce your Alzheimer’s risk
- No one in movies knows how to swallow a pill
More to come tomorrow!
Shame and blame can create barriers to vaccination
Understanding the stigma surrounding infectious diseases like HIV and mpox may help community health workers break down barriers that hinder access to care.
Looking back in history can provide valuable lessons to confront stigma in health care today, especially toward Black, Latine, LGBTQ+, and other historically underserved communities disproportionately affected by COVID-19 and HIV.
Public Good News spoke with Sam Brown, HIV prevention and wellness program manager at Civic Heart, a community-based organization in Houston’s historic Third Ward, to understand the effects of stigma around sexual health and vaccine uptake.
Brown shared more about Civic Heart’s efforts to provide free confidential testing for sexually transmitted infections, counseling and referrals, and information about COVID-19, flu, and mpox vaccinations, as well as the lessons they’re learning as they strive for vaccine equity.
Here’s what Brown said.
[Editor’s note: This content has been edited for clarity and length.]
PGN: Some people on social media have spread the myth that vaccines cause AIDS or other immune deficiencies when the opposite is true: Vaccines strengthen our immune systems to help protect against disease. Despite being frequently debunked, how do false claims like these impact the communities you serve?
Sam Brown: Misinformation like that is so hard to combat. And it makes the work and the path to overall community health hard because people will believe it. In the work that we do, 80 percent of it is changing people’s perspective on something they thought they knew.
You know, people don’t even transmit AIDS. People transmit HIV. So, a vaccine causing immunodeficiency doesn’t make sense.
With the communities we serve, we might have a person that will believe the myth, and because they believe it, they won’t get vaccinated. Then later, they may test positive for COVID-19.
And depending on social determinants of health, it can impact them in a whole heap of ways: That person is now missing work, they’re not able to provide for their family—if they have a family. It’s this mindset that can impact a person’s life, their income, their ability to function.
So, to not take advantage of something like a vaccine that’s affordable, or free for the most part, just because of misinformation or a misunderstanding—that’s detrimental, you know.
For example, when we talk to people in the community, many don’t know that they can get mpox from their pet, or that it’s zoonotic—that means that it can be transferred between different species or different beings, from animals to people. I see a lot of surprise and shock [when people learn this].
It’s difficult because we have to fight the misinformation and the stigma that comes with it. And it can be a big barrier.
People misunderstand. [They] think that “this is something that gay people or the LGBTQ+ community get,” which is stigmatizing and comes off as blaming. And blaming is the thing that leads us to be misinformed.
PGN: In the last couple years, your organization’s HIV Wellness program has taken on promoting COVID-19, flu, and mpox vaccines to the communities you serve. How do you navigate conversations between sexual health and infectious diseases? Can you share more about your messaging strategies?
S.B.: As we promoted positive sexual health and HIV prevention, we saw people were tired of hearing about HIV. They were tired of hearing about how PrEP works, or how to prevent HIV.
But, when we had an outbreak of syphilis in Houston just last year, people were more inclined to test because of the severity of the outbreak.
So, what our team learned is that sometimes you have to change the message to get people what they need.
We changed our message to highlight more syphilis information and saw that we were able to get more people tested for HIV because we correlated how syphilis and HIV are connected and how a person can be susceptible to both.
Using messages that the community wants and pairing them with what the community needs has been better for us. And we see that same thing with COVID-19, the flu, and RSV. Sometimes you just can’t be married to a message. We’ve had to be flexible to meet our clients where they are to help them move from unsafe practices to practices that are healthy and good for them and their communities.
PGN: You’ve mentioned how hard it is to combat stigma in your work. How do you effectively address it when talking to people one-on-one?
S.B.: What I understand is that no one wants to feel shame. What I see people respond to is, “Here’s an opportunity to do something different. Maybe there was information that you didn’t know that caused you to make a bad decision. And now here’s an opportunity to gain information so that you can make a better decision.”
People want to do what they want to do; they want to live how they want to live. And we all should be able to do that as long as it’s not hurting anyone, but also being responsible enough to understand that, you know, COVID-19 is here.
So, instead of shaming and blaming, it’s best to make yourself aware and understand what it is and how to treat it. Because the real enemy is the virus—it’s the infection, not the people.
When we do our work, we want to make sure that we come from a strengths-based approach. We always look at what a client can do, what that client has. We want to make sure that we’re empowering them from that point. So, even if they choose not to prioritize our message right now, we can’t take that personally. We’ll just use it as a chance to try a new way of framing it to help people understand what we’re trying to say.
And sometimes that can be difficult, even for organizations. But getting past that difficulty comes with a greater opportunity to impact someone else.
This article first appeared on Public Good News and is republished here under a Creative Commons license.
Chia Seeds vs Pumpkin Seeds – Which is Healthier?
Our Verdict
When comparing chia seeds to pumpkin seeds, we picked the chia.
Why?
Both are great! But chia is best.
Note: we’re going to abbreviate them both to “chia” and “pumpkin”, respectively, but we’ll still be referring to the seeds throughout.
In terms of macros, pumpkin has a little more protein and notably higher carbs, whereas chia has nearly 2x the fiber, as well as more fat, and/but they are famously healthy fats. We’ll call this category a subjective win for chia, though you might disagree if you want to prioritize an extra 2g of protein per 100g (for pumpkin) over an extra 16g of fiber per 100g (for chia). Chia is also vastly preferable for omega-3.
When it comes to vitamins, pumpkin is marginally higher in vitamin A, while chia is a lot higher in vitamins B1, B2, B3, B9, C, and E. An easy win for chia.
In the category of minerals, for which pumpkin seeds are so famously a good source, chia has a lot more calcium, copper, iron, magnesium, manganese, phosphorus, and selenium. On the other hand, pumpkin has more potassium and zinc. Still, that’s a 7:2 win for chia.
Adding up the categories makes for a very compelling win for the humble chia seed.
Want to learn more?
You might like to read:
If You’re Not Taking Chia, You’re Missing Out: The Tiniest Seeds With The Most Value
Take care!
Related Posts
What the Most Successful People Do Before Breakfast – by Laura Vanderkam
First, what this is not: this is not a rehash of “The 5AM Club”, nor is it a rehash of “The 7 Habits of Highly Effective People”.
What it is: packed with tips about time management for real people operating here in the real world. The kind of people who have non-negotiable time-specific responsibilities, and frequent unavoidable interruptions. The kind of people who have partners, families, and personal goals and aspirations too.
The “two other short guides” mentioned in the subtitle are her other books, whose titles start the same but instead of “…before Breakfast”, substitute:
- …on the Weekend
- …at Work
However, if you’re retired (we know many of our subscribers are), this still applies to you:
- The “weekend” book is about getting the most out of one’s leisure time, and we hope you have that too!
- The “work” book is about not getting lost in the nitty-gritty of the daily grind, and instead making sure to keep track of the big picture. You probably have this in your personal projects, too!
Bottom line: if, in the mornings, it sometimes seems like your get-up-and-go has got up and gone without you, then you will surely benefit from this book that outstrips its competitors in usefulness and applicability.
Strength training has a range of benefits for women. Here are 4 ways to get into weights
Picture a gym ten years ago: the weights room was largely a male-dominated space, with women mostly doing cardio exercise. Fast-forward to today and you’re likely to see women of all ages and backgrounds confidently navigating weights equipment.
This is more than just anecdotal. According to data from the Australian Sports Commission, the number of women participating in weightlifting (either competitively or not) grew nearly five-fold between 2016 and 2022.
Women are discovering what research has long shown: strength training offers benefits beyond sculpted muscles.
Health benefits
Osteoporosis, a disease in which the bones become weak and brittle, affects more women than men. Strength training increases bone density, a crucial factor for preventing osteoporosis, especially for women negotiating menopause.
Strength training also improves insulin sensitivity, which means your body gets better at using insulin to manage blood sugar levels, reducing the risk of type 2 diabetes. Regular strength training contributes to better heart health too.
There’s a mental health boost as well. Strength training has been linked to reduced symptoms of depression and anxiety.
Improved confidence and body image
Unlike some forms of exercise where progress can feel elusive, strength training offers clear and tangible measures of success. Each time you add more weight to a bar, you are reminded of your ability to meet your goals and conquer challenges.
This sense of achievement doesn’t just stay in the gym – it can change how women see themselves. A recent study found women who regularly lift weights often feel more empowered to make positive changes in their lives and feel ready to face life’s challenges outside the gym.
Strength training also has the potential to positively impact body image. In a world where women are often judged on appearance, lifting weights can shift the focus to function.
Instead of worrying about the number on the scale or fitting into a certain dress size, women often come to appreciate their bodies for what they can do. “Am I lifting more than I could last month?” and “can I carry all my groceries in a single trip?” may become new measures of physical success.
Lifting weights can also be about challenging outdated ideas of how women “should” be. Qualitative research I conducted with colleagues found that, for many women, strength training becomes a powerful form of rebellion against unrealistic beauty standards. As one participant told us:
I wanted something that would allow me to train that just didn’t have anything to do with how I looked.
Society has long told women to be small, quiet and not take up space. But when a woman steps up to a barbell, she’s pushing back against these outdated rules. One woman in our study said:
We don’t have to […] look a certain way, or […] be scared that we can lift heavier weights than some men. Why should we?
This shift in mindset helps women see themselves differently. Instead of worrying about being objects for others to look at, they begin to see their bodies as capable and strong. Another participant explained:
Powerlifting changed my life. It made me see myself, or my body. My body wasn’t my value, it was the vehicle that I was in to execute whatever it was that I was executing in life.
This newfound confidence often spills over into other areas of life. As one woman said:
I love being a strong woman. It’s like going against the grain, and it empowers me. When I’m physically strong, everything in the world seems lighter.
Feeling inspired? Here’s how to get started
1. Take things slow
Begin with bodyweight exercises like squats, lunges and push-ups to build a foundation of strength. Once you’re comfortable, add external weights, but keep them light at first. Focus on mastering compound movements, such as deadlifts, squats and overhead presses. These exercises engage multiple joints and muscle groups simultaneously, making your workouts more efficient.
2. Prioritise proper form
Always prioritise proper form over lifting heavier weights. Poor technique can lead to injuries, so learning the correct way to perform each exercise is crucial. To help with this, consider working with an exercise professional who can provide personalised guidance and ensure you’re performing exercises correctly, at least initially.
3. Consistency is key
Like any fitness regimen, consistency is key. Two to three sessions a week are plenty for most women to see benefits. And don’t be afraid to occupy space in the weights room – remember you belong there just as much as anyone else.
4. Find a community
Finally, join a community. There’s nothing like being surrounded by a group of strong women to inspire and motivate you. Engaging with a supportive community can make your strength-training journey more enjoyable and rewarding, whether it’s an in-person class or an online forum.
Are there any downsides?
Gym memberships can be expensive, especially for specialist weightlifting gyms. Home equipment is an option, but quality barbells and weightlifting equipment can come with a hefty price tag.
Also, for women juggling work and family responsibilities, finding time to get to the gym two to three times per week can be challenging.
If you’re concerned about getting too “bulky”, don’t be: it’s very difficult for women to bulk up like male bodybuilders without pharmaceutical assistance.
The main risks come from poor technique or trying to lift too much too soon – issues that can be easily avoided with some guidance.
Erin Kelly, Lecturer and PhD Candidate, Discipline of Sport and Exercise Science, University of Canberra
This article is republished from The Conversation under a Creative Commons license. Read the original article.
How do science journalists decide whether a psychology study is worth covering?
Complex research papers and data flood academic journals daily, and science journalists play a pivotal role in disseminating that information to the public. This can be a daunting task, requiring a keen understanding of the subject matter and the ability to translate dense academic language into narratives that resonate with the general public.
Several resources and tip sheets, including the Know Your Research section here at The Journalist’s Resource, aim to help journalists hone their skills in reporting on academic research.
But what factors do science journalists look for to decide whether a social science research study is trustworthy and newsworthy? That’s the question researchers at the University of California, Davis, and the University of Melbourne in Australia examine in a recent study, “How Do Science Journalists Evaluate Psychology Research?” published in September in Advances in Methods and Practices in Psychological Science.
Their online survey of 181 mostly U.S.-based science journalists looked at how and whether they were influenced by four factors in fictitious research summaries: the sample size (number of participants in the study), sample representativeness (whether the participants in the study were from a convenience sample or a more representative sample), the statistical significance level of the result (just barely statistically significant or well below the significance threshold), and the prestige of a researcher’s university.
The researchers found that sample size was the only factor that had a robust influence on journalists’ ratings of how trustworthy and newsworthy a study finding was.
University prestige had no effect, while the effects of sample representativeness and statistical significance were inconclusive.
But there’s nuance to the findings, the authors note.
“I don’t want people to think that science journalists aren’t paying attention to other things, and are only paying attention to sample size,” says Julia Bottesini, an independent researcher, a recent Ph.D. graduate from the Psychology Department at UC Davis, and the first author of the study.
Overall, the results show that “these journalists are doing a very decent job” vetting research findings, Bottesini says.
Also, the findings from the study are not generalizable to all science journalists or other fields of research, the authors note.
“Instead, our conclusions should be circumscribed to U.S.-based science journalists who are at least somewhat familiar with the statistical and replication challenges facing science,” they write. (Over the past decade a series of projects have found that the results of many studies in psychology and other fields can’t be reproduced, leading to what has been called a ‘replication crisis.’)
“This [study] is just one tiny brick in the wall and I hope other people get excited about this topic and do more research on it,” Bottesini says.
More on the study’s findings
The study’s findings can be useful for researchers who want to better understand how science journalists read their research and what kind of intervention — such as teaching journalists about statistics — can help journalists better understand research papers.
“As an academic, I take away the idea that journalists are a great population to try to study because they’re doing something really important and it’s important to know more about what they’re doing,” says Ellen Peters, director of the Center for Science Communication Research at the School of Journalism and Communication at the University of Oregon. Peters, who was not involved in the study, is also a psychologist who studies human judgment and decision-making.
Peters says the study was “overall terrific.” She adds that understanding how journalists do their work “is an incredibly important thing to do because journalists are who reach the majority of the U.S. with science news, so understanding how they’re reading some of our scientific studies and then choosing whether to write about them or not is important.”
The study, conducted between December 2020 and March 2021, is based on an online survey of journalists who said they at least sometimes covered science or other topics related to health, medicine, psychology, social sciences, or well-being. They were offered a $25 Amazon gift card as compensation.
Among the participants, 77% were women, 19% were men, 3% were nonbinary and 1% preferred not to say. About 62% said they had studied physical or natural sciences at the undergraduate level, and 24% at the graduate level. Also, 48% reported having a journalism degree. The study did not include the journalists’ news reporting experience level.
Participants were recruited through the professional network of Christie Aschwanden, an independent journalist and consultant on the study, which could be a source of bias, the authors note.
“Although the size of the sample we obtained (N = 181) suggests we were able to collect a range of perspectives, we suspect this sample is biased by an ‘Aschwanden effect’: that science journalists in the same professional network as C. Aschwanden will be more familiar with issues related to the replication crisis in psychology and subsequent methodological reform, a topic C. Aschwanden has covered extensively in her work,” they write.
Participants were randomly presented with eight of 22 one-paragraph fictitious social and personality psychology research summaries with fictitious authors. The summaries are posted on Open Science Framework, a free and open-source project management tool for researchers by the Center for Open Science, with a mission to increase openness, integrity and reproducibility of research.
For instance, one of the vignettes reads:
“Scientists at Harvard University announced today the results of a study exploring whether introspection can improve cooperation. 550 undergraduates at the university were randomly assigned to either do a breathing exercise or reflect on a series of questions designed to promote introspective thoughts for 5 minutes. Participants then engaged in a cooperative decision-making game, where cooperation resulted in better outcomes. People who spent time on introspection performed significantly better at these cooperative games (t (548) = 3.21, p = 0.001). ‘Introspection seems to promote better cooperation between people,’ says Dr. Quinn, the lead author on the paper.”
In addition to answering multiple-choice survey questions, participants were given the opportunity to answer open-ended questions, such as “What characteristics do you [typically] consider when evaluating the trustworthiness of a scientific finding?”
Bottesini says those responses illuminated how science journalists analyze a research study. Participants often mentioned the prestige of the journal in which it was published or whether the study had been peer-reviewed. Many also seemed to value experimental research designs over observational studies.
Considering statistical significance
When it came to considering p-values, “some answers suggested that journalists do take statistical significance into account, but only very few included explanations that suggested they made any distinction between higher or lower p values; instead, most mentions of p values suggest journalists focused on whether the key result was statistically significant,” the authors write.
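For readers curious what distinguishes a “just barely significant” result from one “well below the threshold”: the fictitious vignette’s reported statistic, t(548) = 3.21, can be roughly sanity-checked in a few lines of Python. This is a hedged sketch, not anything from the study itself; it uses a normal approximation (reasonable for such a large sample) rather than the exact t-distribution:

```python
import math

def two_sided_p_normal(t_stat):
    """Approximate two-sided p-value for a test statistic, using the
    normal approximation (fine when degrees of freedom are large,
    e.g. df = 548 in the vignette)."""
    return math.erfc(abs(t_stat) / math.sqrt(2))

# The vignette reports t(548) = 3.21, p = 0.001
p = two_sided_p_normal(3.21)
print(f"p ≈ {p:.4f}")  # ≈ 0.0013, close to the reported 0.001
```

A p-value this far below the conventional 0.05 cutoff is what the researchers meant by “well below the significance threshold,” as opposed to a result hovering just under 0.05.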
Also, many participants mentioned that it was very important to talk to outside experts or researchers in the same field to get a better understanding of the finding and whether it could be trusted, the authors write.
“Journalists also expressed that it was important to understand who funded the study and whether the researchers or funders had any conflicts of interest,” they write.
Participants also “indicated that making claims that were calibrated to the evidence was also important and expressed misgivings about studies for which the conclusions do not follow from the evidence,” the authors write.
In response to the open-ended question, “What characteristics do you [typically] consider when evaluating the trustworthiness of a scientific finding?” some journalists wrote they checked whether the study was overstating conclusions or claims. Below are some of their written responses:
- “Is the researcher adamant that this study of 40 college kids is representative? If so, that’s a red flag.”
- “Whether authors make sweeping generalizations based on the study or take a more measured approach to sharing and promoting it.”
- “Another major point for me is how ‘certain’ the scientists appear to be when commenting on their findings. If a researcher makes claims which I consider to be over-the-top about the validity or impact of their findings, I often won’t cover.”
- “I also look at the difference between what an experiment actually shows versus the conclusion researchers draw from it — if there’s a big gap, that’s a huge red flag.”
Peters says the study’s findings show that “not only are journalists smart, but they have also gone out of their way to get educated about things that should matter.”
What other research shows about science journalists
A 2023 study, published in the International Journal of Communication, based on an online survey of 82 U.S. science journalists, aims to understand what they know and think about open-access research, including peer-reviewed journals and articles that don’t have a paywall, and preprints. Data was collected between October 2021 and February 2022. Preprints are scientific studies that have yet to be peer-reviewed and are shared on open repositories such as medRxiv and bioRxiv. The study finds that its respondents “are aware of OA and related issues and make conscious decisions around which OA scholarly articles they use as sources.”
A 2021 study, published in the Journal of Science Communication, looks at the impact of the COVID-19 pandemic on the work of science journalists. Based on an online survey of 633 science journalists from 77 countries, it finds that the pandemic somewhat brought scientists and science journalists closer together. “For most respondents, scientists were more available and more talkative,” the authors write. The pandemic has also provided an opportunity to explain the scientific process to the public, and remind them that “science is not a finished enterprise,” the authors write.
More than a decade ago, a 2008 study, published in PLOS Medicine, and based on an analysis of 500 health news stories, found that “journalists usually fail to discuss costs, the quality of the evidence, the existence of alternative options, and the absolute magnitude of potential benefits and harms,” when reporting on research studies. Giving time to journalists to research and understand the studies, giving them space for publication and broadcasting of the stories, and training them in understanding academic research are some of the solutions to fill the gaps, writes Gary Schwitzer, the study author.
Advice for journalists
We asked Bottesini, Peters, Aschwanden and Tamar Wilner, a postdoctoral fellow at the University of Texas, who was not involved in the study, to share advice for journalists who cover research studies. Wilner is conducting a study on how journalism research informs the practice of journalism. Here are their tips:
1. Examine the study before reporting it.
Does the study claim match the evidence? “One thing that makes me trust the paper more is if their interpretation of the findings is very calibrated to the kind of evidence that they have,” says Bottesini. In other words, if the study makes a claim in its results that’s far-fetched, the authors should present a lot of evidence to back that claim.
Not all surprising results are newsworthy. If you come across a surprising finding from a single study, Peters advises you to step back and remember Carl Sagan’s quote: “Extraordinary claims require extraordinary evidence.”
How transparent are the authors about their data? For instance, are the authors posting information such as their data and the computer codes they use to analyze the data on platforms such as Open Science Framework, AsPredicted, or The Dataverse Project? Some researchers ‘preregister’ their studies, which means they share how they’re planning to analyze the data before they see them. “Transparency doesn’t automatically mean that a study is trustworthy,” but it gives others the chance to double-check the findings, Bottesini says.
Look at the study design. Is it an experimental study or an observational study? Observational studies can show correlations but not causation.
“Observational studies can be very important for suggesting hypotheses and pointing us towards relationships and associations,” Aschwanden says.
Experimental studies can provide stronger evidence toward a cause, but journalists must still be cautious when reporting the results, she advises. “If we end up implying causality, then once it’s published and people see it, it can really take hold,” she says.
Know the difference between preprints and peer-reviewed, published studies. Peer-reviewed papers tend to be of higher quality than those that are not peer-reviewed. Read our tip sheet on the difference between preprints and journal articles.
Beware of predatory journals. Predatory journals “claim to be legitimate scholarly journals, but misrepresent their publishing practices,” according to a 2020 article in the journal Toxicologic Pathology, “Predatory Journals: What They Are and How to Avoid Them.”
2. Zoom in on data.
Read the methods section of the study. It usually appears after the introduction and background sections. “To me, the methods section is almost the most important part of any scientific paper,” says Aschwanden. “It’s amazing to me how often you read the design and the methods section, and anyone can see that it’s a flawed design. So just giving things a gut-level check can be really important.”
What’s the sample size? Not all good studies have large numbers of participants, but pay attention to the claims a study makes with a small sample size. “If you have a small sample, you calibrate your claims to the things you can tell about those people and don’t make big claims based on a little bit of evidence,” says Bottesini.
But also remember that factors such as sample size and p-value are not “as clear cut as some journalists might assume,” says Wilner.
How representative of a population is the study sample? “If the study has a non-representative sample of, say, undergraduate students, and they’re making claims about the general population, that’s kind of a red flag,” says Bottesini. Aschwanden points to the acronym WEIRD, which stands for “Western, Educated, Industrialized, Rich, and Democratic,” and is used to highlight a lack of diversity in a sample. Studies based on such samples may not be generalizable to the entire population, she says.
Look at the p-value. Statistical significance is both confusing and controversial, but it’s important to consider. Read our tip sheet, “5 Things Journalists Need to Know About Statistical Significance,” to better understand it.
3. Talk to scientists not involved in the study.
If you’re not sure about the quality of a study, ask for help. “Talk to someone who is an expert in study design or statistics to make sure that [the study authors] use the appropriate statistics and that methods they use are appropriate because it’s amazing to me how often they’re not,” says Aschwanden.
Get an opinion from an outside expert. It’s always a good idea to present the study to other researchers in the field who have no conflicts of interest and are not involved in the research you’re covering, and get their opinion. “Don’t take scientists at their word. Look into it. Ask other scientists, preferably the ones who don’t have a conflict of interest with the research,” says Bottesini.
4. Remember that a single study is simply one piece of a growing body of evidence.
“I have a general rule that a single study doesn’t tell us very much; it just gives us proof of concept,” says Peters. “It gives us interesting ideas. It should be retested. We need an accumulation of evidence.”
Aschwanden says as a practice, she tries to avoid reporting stories about individual studies, with some exceptions such as very large, randomized controlled studies that have been underway for a long time and have a large number of participants. “I don’t want to say you never want to write a single-study story, but it always needs to be placed in the context of the rest of the evidence that we have available,” she says.
Wilner advises journalists to spend some time looking at the scope of research on the study’s specific topic and learn how it has been written about and studied up to that point.
“We would want science journalists to be reporting balance of evidence, and not focusing unduly on the findings that are just in front of them in a most recent study,” Wilner says. “And that’s a very difficult thing to ask journalists to do because they’re being asked to make their article very newsy, so it’s a difficult balancing act, but we can try and push journalists to do more of that.”
5. Remind readers that science is always changing.
“Science is always two steps forward, one step back,” says Peters. Give the public a notion of uncertainty, she advises. “This is what we know today. It may change tomorrow, but this is the best science that we know of today.”
Aschwanden echoes the sentiment. “All scientific results are provisional, and we need to keep that in mind,” she says. “It doesn’t mean that we can’t know anything, but it’s very important that we don’t overstate things.”
Authors of a study published in PNAS in January analyzed more than 14,000 psychology papers and found that replication success rates differ widely by psychology subfields. That study also found that papers that could not be replicated received more initial press coverage than those that could.
The authors note that the media “plays a significant role in creating the public’s image of science and democratizing knowledge, but it is often incentivized to report on counterintuitive and eye-catching results.”
Ideally, the news media would have a positive relationship with replication success rates in psychology, the authors of the PNAS study write. “Contrary to this ideal, however, we found a negative association between media coverage of a paper and the paper’s likelihood of replication success,” they write. “Therefore, deciding a paper’s merit based on its media coverage is unwise. It would be valuable for the media to remind the audience that new and novel scientific results are only food for thought before future replication confirms their robustness.”
Additional reading
Uncovering the Research Behaviors of Reporters: A Conceptual Framework for Information Literacy in Journalism
Katerine E. Boss, et al. Journalism & Mass Communication Educator, October 2022.

The Problem with Psychological Research in the Media
Steven Stosny. Psychology Today, September 2022.

Critically Evaluating Claims
Megha Satyanarayana. The Open Notebook, January 2022.

How Should Journalists Report a Scientific Study?
Charles Binkley and Subramaniam Vincent. Markkula Center for Applied Ethics at Santa Clara University, September 2020.

What Journalists Get Wrong About Social Science: Full Responses
Brian Resnick. Vox, January 2016.

From The Journalist’s Resource
8 Ways Journalists Can Access Academic Research for Free
5 Things Journalists Need to Know About Statistical Significance
5 Common Research Designs: A Quick Primer for Journalists
5 Tips for Using PubPeer to Investigate Scientific Research Errors and Misconduct
What’s Standard Deviation? 4 Things Journalists Need to Know
This article first appeared on The Journalist’s Resource and is republished here under a Creative Commons license.