How do science journalists decide whether a psychology study is worth covering?
10almonds is reader-supported. We may, at no cost to you, receive a portion of sales if you purchase a product through a link in this article.
Complex research papers and data flood academic journals daily, and science journalists play a pivotal role in disseminating that information to the public. This can be a daunting task, requiring a keen understanding of the subject matter and the ability to translate dense academic language into narratives that resonate with the general public.
Several resources and tip sheets, including the Know Your Research section here at The Journalist’s Resource, aim to help journalists hone their skills in reporting on academic research.
But what factors do science journalists look for to decide whether a social science research study is trustworthy and newsworthy? That’s the question researchers at the University of California, Davis, and the University of Melbourne in Australia examine in a recent study, “How Do Science Journalists Evaluate Psychology Research?” published in September in Advances in Methods and Practices in Psychological Science.
Their online survey of 181 mostly U.S.-based science journalists looked at how and whether they were influenced by four factors in fictitious research summaries: the sample size (number of participants in the study), sample representativeness (whether the participants in the study were from a convenience sample or a more representative sample), the statistical significance level of the result (just barely statistically significant or well below the significance threshold), and the prestige of a researcher’s university.
The researchers found that sample size was the only factor that had a robust influence on journalists’ ratings of how trustworthy and newsworthy a study finding was.
University prestige had no effect, while the effects of sample representativeness and statistical significance were inconclusive.
But there’s nuance to the findings, the authors note.
“I don’t want people to think that science journalists aren’t paying attention to other things, and are only paying attention to sample size,” says Julia Bottesini, an independent researcher, a recent Ph.D. graduate from the Psychology Department at UC Davis, and the first author of the study.
Overall, the results show that “these journalists are doing a very decent job” vetting research findings, Bottesini says.
Also, the findings from the study are not generalizable to all science journalists or other fields of research, the authors note.
“Instead, our conclusions should be circumscribed to U.S.-based science journalists who are at least somewhat familiar with the statistical and replication challenges facing science,” they write. (Over the past decade a series of projects have found that the results of many studies in psychology and other fields can’t be reproduced, leading to what has been called a ‘replication crisis.’)
“This [study] is just one tiny brick in the wall and I hope other people get excited about this topic and do more research on it,” Bottesini says.
More on the study’s findings
The study’s findings can be useful for researchers who want to better understand how science journalists read their research and what kind of intervention — such as teaching journalists about statistics — can help journalists better understand research papers.
“As an academic, I take away the idea that journalists are a great population to try to study because they’re doing something really important and it’s important to know more about what they’re doing,” says Ellen Peters, director of the Center for Science Communication Research at the School of Journalism and Communication at the University of Oregon. Peters, who was not involved in the study, is also a psychologist who studies human judgment and decision-making.
Peters says the study was “overall terrific.” She adds that understanding how journalists do their work “is an incredibly important thing to do because journalists are who reach the majority of the U.S. with science news, so understanding how they’re reading some of our scientific studies and then choosing whether to write about them or not is important.”
The study, conducted between December 2020 and March 2021, is based on an online survey of journalists who said they at least sometimes covered science or other topics related to health, medicine, psychology, social sciences, or well-being. They were offered a $25 Amazon gift card as compensation.
Among the participants, 77% were women, 19% were men, 3% were nonbinary and 1% preferred not to say. About 62% said they had studied physical or natural sciences at the undergraduate level, and 24% at the graduate level. Also, 48% reported having a journalism degree. The study did not include the journalists’ news reporting experience level.
Participants were recruited through the professional network of Christie Aschwanden, an independent journalist and consultant on the study, which could be a source of bias, the authors note.
“Although the size of the sample we obtained (N = 181) suggests we were able to collect a range of perspectives, we suspect this sample is biased by an ‘Aschwanden effect’: that science journalists in the same professional network as C. Aschwanden will be more familiar with issues related to the replication crisis in psychology and subsequent methodological reform, a topic C. Aschwanden has covered extensively in her work,” they write.
Participants were randomly presented with eight of 22 one-paragraph fictitious social and personality psychology research summaries with fictitious authors. The summaries are posted on the Open Science Framework, a free and open-source project management tool for researchers created by the Center for Open Science, which has a mission to increase the openness, integrity and reproducibility of research.
For instance, one of the vignettes reads:
“Scientists at Harvard University announced today the results of a study exploring whether introspection can improve cooperation. 550 undergraduates at the university were randomly assigned to either do a breathing exercise or reflect on a series of questions designed to promote introspective thoughts for 5 minutes. Participants then engaged in a cooperative decision-making game, where cooperation resulted in better outcomes. People who spent time on introspection performed significantly better at these cooperative games (t (548) = 3.21, p = 0.001). ‘Introspection seems to promote better cooperation between people,’ says Dr. Quinn, the lead author on the paper.”
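For readers curious about what a line like “t (548) = 3.21, p = 0.001” actually encodes, here’s a minimal sketch (in Python, assuming SciPy is installed; the numbers are simply the ones quoted in the fictitious vignette above) that converts the reported t-statistic and degrees of freedom into a two-sided p-value:

```python
# Sanity-checking the vignette's reported statistic, t(548) = 3.21.
# The figures come from the fictitious study summary above; SciPy is assumed.
from scipy import stats

t_value = 3.21   # reported t-statistic
df = 548         # degrees of freedom (550 participants split into 2 groups)

# Two-sided p-value: the probability of seeing a t-statistic at least this
# extreme in either direction if there were truly no difference between groups.
p_value = 2 * stats.t.sf(t_value, df)
print(f"p = {p_value:.3f}")  # ~0.001, matching the summary
```

A result that far below the conventional 0.05 cutoff corresponds to the survey’s “well below the significance threshold” condition, whereas a “just barely statistically significant” result would sit only just under the cutoff (for example, around p = 0.049).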
In addition to answering multiple-choice survey questions, participants were given the opportunity to answer open-ended questions, such as “What characteristics do you [typically] consider when evaluating the trustworthiness of a scientific finding?”
Bottesini says those responses illuminated how science journalists analyze a research study. Participants often mentioned the prestige of the journal in which it was published or whether the study had been peer-reviewed. Many also seemed to value experimental research designs over observational studies.
Considering statistical significance
When it came to considering p-values, “some answers suggested that journalists do take statistical significance into account, but only very few included explanations that suggested they made any distinction between higher or lower p values; instead, most mentions of p values suggest journalists focused on whether the key result was statistically significant,” the authors write.
Also, many participants mentioned that it was very important to talk to outside experts or researchers in the same field to get a better understanding of the finding and whether it could be trusted, the authors write.
“Journalists also expressed that it was important to understand who funded the study and whether the researchers or funders had any conflicts of interest,” they write.
Participants also “indicated that making claims that were calibrated to the evidence was also important and expressed misgivings about studies for which the conclusions do not follow from the evidence,” the authors write.
In response to the open-ended question, “What characteristics do you [typically] consider when evaluating the trustworthiness of a scientific finding?” some journalists wrote they checked whether the study was overstating conclusions or claims. Below are some of their written responses:
- “Is the researcher adamant that this study of 40 college kids is representative? If so, that’s a red flag.”
- “Whether authors make sweeping generalizations based on the study or take a more measured approach to sharing and promoting it.”
- “Another major point for me is how ‘certain’ the scientists appear to be when commenting on their findings. If a researcher makes claims which I consider to be over-the-top about the validity or impact of their findings, I often won’t cover.”
- “I also look at the difference between what an experiment actually shows versus the conclusion researchers draw from it — if there’s a big gap, that’s a huge red flag.”
Peters says the study’s findings show that “not only are journalists smart, but they have also gone out of their way to get educated about things that should matter.”
What other research shows about science journalists
A 2023 study, published in the International Journal of Communication and based on an online survey of 82 U.S. science journalists, aims to understand what they know and think about open-access (OA) research, including peer-reviewed journals and articles that don’t have a paywall, as well as preprints, which are scientific studies that have yet to be peer-reviewed and are shared on open repositories such as medRxiv and bioRxiv. Data were collected between October 2021 and February 2022. The study finds that its respondents “are aware of OA and related issues and make conscious decisions around which OA scholarly articles they use as sources.”
A 2021 study, published in the Journal of Science Communication, looks at the impact of the COVID-19 pandemic on the work of science journalists. Based on an online survey of 633 science journalists from 77 countries, it finds that the pandemic brought scientists and science journalists somewhat closer together. “For most respondents, scientists were more available and more talkative,” the authors write. The pandemic also provided an opportunity to explain the scientific process to the public, and to remind them that “science is not a finished enterprise,” the authors write.
More than a decade ago, a 2008 study, published in PLOS Medicine and based on an analysis of 500 health news stories, found that “journalists usually fail to discuss costs, the quality of the evidence, the existence of alternative options, and the absolute magnitude of potential benefits and harms,” when reporting on research studies. Giving journalists time to research and understand the studies, giving them space to publish and broadcast their stories, and training them to understand academic research are some of the solutions to fill those gaps, writes Gary Schwitzer, the study’s author.
Advice for journalists
We asked Bottesini, Peters, Aschwanden and Tamar Wilner, a postdoctoral fellow at the University of Texas, who was not involved in the study, to share advice for journalists who cover research studies. Wilner is conducting a study on how journalism research informs the practice of journalism. Here are their tips:
1. Examine the study before reporting it.
Does the study claim match the evidence? “One thing that makes me trust the paper more is if their interpretation of the findings is very calibrated to the kind of evidence that they have,” says Bottesini. In other words, if the study makes a claim in its results that’s far-fetched, the authors should present a lot of evidence to back that claim.
Not all surprising results are newsworthy. If you come across a surprising finding from a single study, Peters advises you to step back and remember Carl Sagan’s quote: “Extraordinary claims require extraordinary evidence.”
How transparent are the authors about their data? For instance, are the authors posting information such as their data and the computer codes they use to analyze the data on platforms such as Open Science Framework, AsPredicted, or The Dataverse Project? Some researchers ‘preregister’ their studies, which means they share how they’re planning to analyze the data before they see them. “Transparency doesn’t automatically mean that a study is trustworthy,” but it gives others the chance to double-check the findings, Bottesini says.
Look at the study design. Is it an experimental study or an observational study? Observational studies can show correlations but not causation.
“Observational studies can be very important for suggesting hypotheses and pointing us towards relationships and associations,” Aschwanden says.
Experimental studies can provide stronger evidence toward a cause, but journalists must still be cautious when reporting the results, she advises. “If we end up implying causality, then once it’s published and people see it, it can really take hold,” she says.
Know the difference between preprints and peer-reviewed, published studies. Peer-reviewed papers tend to be of higher quality than those that are not peer-reviewed. Read our tip sheet on the difference between preprints and journal articles.
Beware of predatory journals. Predatory journals are journals that “claim to be legitimate scholarly journals, but misrepresent their publishing practices,” according to a 2020 journal article, published in the journal Toxicologic Pathology, “Predatory Journals: What They Are and How to Avoid Them.”
2. Zoom in on data.
Read the methods section of the study. The methods section of the study usually appears after the introduction and background section. “To me, the methods section is almost the most important part of any scientific paper,” says Aschwanden. “It’s amazing to me how often you read the design and the methods section, and anyone can see that it’s a flawed design. So just giving things a gut-level check can be really important.”
What’s the sample size? Not all good studies have large numbers of participants but pay attention to the claims a study makes with a small sample size. “If you have a small sample, you calibrate your claims to the things you can tell about those people and don’t make big claims based on a little bit of evidence,” says Bottesini.
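To see why, here’s a minimal sketch (in Python; the sample sizes are purely illustrative, echoing some of the numbers mentioned in this piece rather than analyzing any real study) of how the margin of error on a simple proportion shrinks with sample size:

```python
# Illustrative only: approximate 95% margin of error for an observed proportion,
# at a few hypothetical sample sizes (not figures from any real study).
import math

def margin_of_error(p: float, n: int) -> float:
    """Approximate 95% margin of error for an observed proportion p with n participants."""
    return 1.96 * math.sqrt(p * (1 - p) / n)

for n in (40, 181, 550, 2000):
    moe = margin_of_error(0.5, n)  # p = 0.5 gives the widest (most cautious) interval
    print(f"n = {n:4d}: +/-{moe * 100:.1f} percentage points")

# n = 40 gives roughly +/-15 percentage points; n = 2000 gives roughly +/-2.
```

The arithmetic is not the whole story (study design and measurement quality matter at least as much), but it shows why a claim built on a few dozen participants deserves more caution than one built on a few thousand.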
But also remember that factors such as sample size and p-value are not “as clear cut as some journalists might assume,” says Wilner.
How representative of a population is the study sample? “If the study has a non-representative sample of, say, undergraduate students, and they’re making claims about the general population, that’s kind of a red flag,” says Bottesini. Aschwanden points to the acronym WEIRD, which stands for “Western, Educated, Industrialized, Rich, and Democratic,” and is used to highlight a lack of diversity in a sample. Studies based on such samples may not be generalizable to the entire population, she says.
Look at the p-value. Statistical significance is both confusing and controversial, but it’s important to consider. Read our tip sheet, “5 Things Journalists Need to Know About Statistical Significance,” to better understand it.
3. Talk to scientists not involved in the study.
If you’re not sure about the quality of a study, ask for help. “Talk to someone who is an expert in study design or statistics to make sure that [the study authors] use the appropriate statistics and that methods they use are appropriate because it’s amazing to me how often they’re not,” says Aschwanden.
Get an opinion from an outside expert. It’s always a good idea to present the study to other researchers in the field who have no conflicts of interest and are not involved in the research you’re covering, and to get their opinion. “Don’t take scientists at their word. Look into it. Ask other scientists, preferably the ones who don’t have a conflict of interest with the research,” says Bottesini.
4. Remember that a single study is simply one piece of a growing body of evidence.
“I have a general rule that a single study doesn’t tell us very much; it just gives us proof of concept,” says Peters. “It gives us interesting ideas. It should be retested. We need an accumulation of evidence.”
Aschwanden says as a practice, she tries to avoid reporting stories about individual studies, with some exceptions such as very large, randomized controlled studies that have been underway for a long time and have a large number of participants. “I don’t want to say you never want to write a single-study story, but it always needs to be placed in the context of the rest of the evidence that we have available,” she says.
Wilner advises journalists to spend some time looking at the scope of research on the study’s specific topic and learn how it has been written about and studied up to that point.
“We would want science journalists to be reporting balance of evidence, and not focusing unduly on the findings that are just in front of them in a most recent study,” Wilner says. “And that’s a very difficult thing to ask journalists to do because they’re being asked to make their article very newsy, so it’s a difficult balancing act, but we can try and push journalists to do more of that.”
5. Remind readers that science is always changing.
“Science is always two steps forward, one step back,” says Peters. Give the public a notion of uncertainty, she advises. “This is what we know today. It may change tomorrow, but this is the best science that we know of today.”
Aschwanden echoes the sentiment. “All scientific results are provisional, and we need to keep that in mind,” she says. “It doesn’t mean that we can’t know anything, but it’s very important that we don’t overstate things.”
Authors of a study published in PNAS in January analyzed more than 14,000 psychology papers and found that replication success rates differ widely by psychology subfields. That study also found that papers that could not be replicated received more initial press coverage than those that could.
The authors note that the media “plays a significant role in creating the public’s image of science and democratizing knowledge, but it is often incentivized to report on counterintuitive and eye-catching results.”
Ideally, the news media would have a positive relationship with replication success rates in psychology, the authors of the PNAS study write. “Contrary to this ideal, however, we found a negative association between media coverage of a paper and the paper’s likelihood of replication success,” they write. “Therefore, deciding a paper’s merit based on its media coverage is unwise. It would be valuable for the media to remind the audience that new and novel scientific results are only food for thought before future replication confirms their robustness.”
Additional reading
Uncovering the Research Behaviors of Reporters: A Conceptual Framework for Information Literacy in Journalism
Katerine E. Boss, et al. Journalism & Mass Communication Educator, October 2022.
The Problem with Psychological Research in the Media
Steven Stosny. Psychology Today, September 2022.
Critically Evaluating Claims
Megha Satyanarayana, The Open Notebook, January 2022.
How Should Journalists Report a Scientific Study?
Charles Binkley and Subramaniam Vincent. Markkula Center for Applied Ethics at Santa Clara University, September 2020.
What Journalists Get Wrong About Social Science: Full Responses
Brian Resnick. Vox, January 2016.
From The Journalist’s Resource
8 Ways Journalists Can Access Academic Research for Free
5 Things Journalists Need to Know About Statistical Significance
5 Common Research Designs: A Quick Primer for Journalists
5 Tips for Using PubPeer to Investigate Scientific Research Errors and Misconduct
What’s Standard Deviation? 4 Things Journalists Need to Know
This article first appeared on The Journalist’s Resource and is republished here under a Creative Commons license.
Don’t Forget…
Did you arrive here from our newsletter? Don’t forget to return to the email to continue learning!
Recommended
Learn to Age Gracefully
Join the 98k+ American women taking control of their health & aging with our 100% free (and fun!) daily emails:
Which Osteoporosis Medication, If Any, Is Right For You?
We’ve written about osteoporosis before, so here’s a quick recap first in case you missed these:
- The Bare-bones Truth About Osteoporosis
- Exercises To Do (And Exercises To Avoid) If You Have Osteoporosis
- We Are Such Stuff As Fish Are Made Of
- Vit D + Calcium: Too Much Of A Good Thing?
All of those look at diet and/or exercise, with “diet” including supplementation. But what of medications?
So many choices (not all of them right for everyone)
The UK’s Royal Osteoporosis Society says of the very many osteoporosis meds available:
❝In terms of effectiveness, they all reduce your risk of broken bones by roughly the same amount.
Which treatment is right for you will depend on a number of things.❞
…before then going on to list a pageful of things it will depend on, and giving no specific information about what prescriptions or proscriptions may be made based on those factors.
Source: Royal Osteoporosis Society | Which medication should I take?
We’ll try to do better than that here, though we have less space. So let’s get down to it…
First line drug offerings
After diet/supplementation and (if applicable) hormones, the first line of actual drug offerings is generally the bisphosphonates.
Bisphosphonates work by slowing down your osteoclasts—the cells that break down your bones. Osteoclasts may sound like terrible things to have in the body at all, but remember, your body is always rebuilding itself, and destruction is a necessary act to facilitate creation. However, sometimes things can get out of balance, and bisphosphonates help tip things back into balance.
Common bisphosphonates include Alendronate/Fosamax, Risedronate/Actonel, Ibandronate/Boniva, and Zoledronic acid/Reclast.
A common downside is that they aren’t absorbed well by the stomach (most are taken orally, though IV versions exist too) and can cause heartburn / general stomach upset.
An uncommon downside is that messing with the body’s ability to break down bones can cause bones to be rebuilt in place slightly incorrectly, which can—paradoxically—cause fractures. But that’s rare, and is more common if the drugs are taken in much higher doses (as for bone cancer rather than osteoporosis).
Bone-builders
If you already have low bone density (so you’re fighting to rebuild your bones, not just slow deterioration), then you may need more of a boost.
Bone-building medications include Teriparatide/Forteo, Abaloparatide/Tymlos, and Romosozumab/Evenity.
These are usually given by injection, usually for a course of one or two years.
Once the bone has been built up, it’ll probably be recommended that you switch to a bisphosphonate or other bone-stabilizing medication.
Estrogen-like effects, without estrogen
If your osteoporosis (or osteoporosis risk) comes from being post-menopausal, estrogen is a very common (and effective!) prescription. However, some people may wish to avoid it, if for example you have a heightened breast cancer risk, which estrogen can exacerbate.
So, medications that have estrogen-like effects post-menopause, but without actually increasing estrogen levels, include: Raloxifene/Evista, and also all the meds we mentioned in the bone-building category above.
Raloxifene/Evista specifically mimics the action of estrogen on bones, while at the same time blocking the effect of estrogen on other tissues.
Learn more…
Want a more thorough grounding than we have room for here? You might find the following resource useful:
List of 82 Osteoporosis Medications Compared (this has a big table which is sortable by various variables)
Take care!
The Truth About Handwashing
Washing Our Hands Of It
In Tuesday’s newsletter, we asked you how often you wash your hands, and got the above-depicted, below-described, set of self-reported answers:
- About 54% said “More times per day than [the other options]”
- About 38% said “Whenever using the bathroom or kitchen”
- About 5% said “Once or twice per day”
- Two (2) said “Only when visibly dirty”
- Two (2) said “I prefer to just use sanitizer gel”
What does the science have to say about this?
People lie about their handwashing habits: True or False?
True and False (since some people lie and some don’t), but there’s science to this too. Here’s a great study from 2021 that used various levels of confidentiality in questioning (i.e., there were ways of asking that made it either obvious or impossible to know who answered how), and found…
❝We analysed data of 1434 participants. In the direct questioning group 94.5% of the participants claimed to practice proper hand hygiene; in the indirect questioning group a significantly lower estimate of only 78.1% was observed.❞
Note: the abstract alone doesn’t make it clear how the anonymization worked (it is explained later in the paper), and it was noted as a limitation of the study that the participants may not have understood how it works well enough to have confidence in it, meaning that the 78.1% is probably also inflated, just not as much as the 94.5% in the direct questioning group.
Here’s a pop-science article that cites a collection of studies, finding such things as for example…
❝With the use of wireless devices to record how many people entered the restroom and used the pumps of the soap dispensers, researchers were able to collect data on almost 200,000 restroom trips over a three-month period.
They found that only 31% of men and 65% of women washed their hands with soap.❞
Source: Study: Men Wash Their Hands Much Less Often Than Women (And People Lie About Washing Their Hands)
Sanitizer gel does the job of washing one’s hands with soap: True or False?
False, though it’s still not a bad option for when soap and water aren’t available or practical. Here’s an educational article about the science of why this is so:
UCI Health | Soap vs. Hand Sanitizer
There’s also some consideration of lab results vs real-world results, because while in principle the alcohol gel is very good at killing most bacteria / inactivating most viruses, it can take up to 4 minutes of alcohol gel contact to do so, as in this study with flu viruses.
In contrast, 20 seconds of handwashing with soap will generally do the job.
Antibacterial soap is better than other soap: True or False?
False, because the main way that soap protects us is not in its antibacterial properties (although it does also destroy the surface membrane of some bacteria and for that matter viruses too, killing/inactivating them, respectively), but rather in how it causes pathogens to simply slide off during washing.
Here’s a study that found that handwashing with soap reduced disease incidence by 50–53%, and…
❝Incidence of disease did not differ significantly between households given plain soap compared with those given antibacterial soap.❞
Read more: Effect of handwashing on child health: a randomised controlled trial
Want to wash your hands more than you do?
There have been many studies into motivating people to wash their hands more (often with education and/or disgust-based shaming), but an effective method you can use for yourself at home is to simply buy more luxurious hand soap, and generally do what you can to make handwashing a more pleasant experience (taking a moment to let the water run warm is another good thing to do if that’s more comfortable for you).
Take care!
Women’s Strength Training Anatomy Workouts – by Frédéric Delavier
We’ve previously reviewed another book of Delavier’s, “Women’s Strength Training Anatomy”, which itself is great. This book adds a lot of practical advice to that one’s more informational format, but gaining the full benefit of this one does not require having read that one.
A common reason that many women avoid strength-training is because they do not want to look muscular. Largely this is based on a faulty assumption, since you will never look like a bodybuilder unless you also eat like a bodybuilder, for example.
However, for those for whom the concern remains, today’s book is an excellent guide to strength-training with aesthetics in mind as well as functionality.
The exercises are divided into sections, thus: round your glutes / tone your quadriceps / shape your hamstrings / trim your calves / flatten your abs / curve your shoulders / develop a pain-free upper back / protect your lower back / enhance your chest / firm up your arms.
As you can see, a lot of these are mindful of aesthetics, but there’s nothing here that’s antithetical to function, and some (for example “develop a pain-free upper back” and “protect your lower back”) are very functional indeed.
Bottom line: Delavier’s anatomy and exercise books are top-tier, and this one is no exception. If you are a woman and would like to strength-train (or perhaps you already do, and would like to refine your training), then this book is an excellent choice.
Click here to check out Women’s Strength Training Anatomy Workouts, and have the body you want!
Related Posts
America’s Health System Isn’t Ready for the Surge of Seniors With Disabilities
The number of older adults with disabilities — difficulty with walking, seeing, hearing, memory, cognition, or performing daily tasks such as bathing or using the bathroom — will soar in the decades ahead, as baby boomers enter their 70s, 80s, and 90s.
But the health care system isn’t ready to address their needs.
That became painfully obvious during the covid-19 pandemic, when older adults with disabilities had trouble getting treatments and hundreds of thousands died. Now, the Department of Health and Human Services and the National Institutes of Health are targeting some failures that led to those problems.
One initiative strengthens access to medical treatments, equipment, and web-based programs for people with disabilities. The other recognizes that people with disabilities, including older adults, are a separate population with special health concerns that need more research and attention.
Lisa Iezzoni, 69, a professor at Harvard Medical School who has lived with multiple sclerosis since her early 20s and is widely considered the godmother of research on disability, called the developments “an important attempt to make health care more equitable for people with disabilities.”
“For too long, medical providers have failed to address change in society, changes in technology, and changes in the kind of assistance that people need,” she said.
Among Iezzoni’s notable findings published in recent years:
Most doctors are biased. In survey results published in 2021, 82% of physicians admitted they believed people with significant disabilities have a worse quality of life than those without impairments. Only 57% said they welcomed disabled patients.
“It’s shocking that so many physicians say they don’t want to care for these patients,” said Eric Campbell, a co-author of the study and professor of medicine at the University of Colorado.
While the findings apply to disabled people of all ages, a larger proportion of older adults live with disabilities than younger age groups. About one-third of people 65 and older — nearly 19 million seniors — have a disability, according to the Institute on Disability at the University of New Hampshire.
Doctors don’t understand their responsibilities. In 2022, Iezzoni, Campbell, and colleagues reported that 36% of physicians had little to no knowledge of their responsibilities under the 1990 Americans With Disabilities Act, indicating a concerning lack of training. The ADA requires medical practices to provide equal access to people with disabilities and accommodate disability-related needs.
Among the practical consequences: Few clinics have height-adjustable tables or mechanical lifts that enable people who are frail or use wheelchairs to receive thorough medical examinations. Only a small number have scales to weigh patients in wheelchairs. And most diagnostic imaging equipment can’t be used by people with serious mobility limitations.
Iezzoni has experienced these issues directly. She relies on a wheelchair and can’t transfer to a fixed-height exam table. She told me she hasn’t been weighed in years.
Among the medical consequences: People with disabilities receive less preventive care and suffer from poorer health than other people, as well as more coexisting medical conditions. Physicians too often rely on incomplete information in making recommendations. There are more barriers to treatment and patients are less satisfied with the care they do get.
Egregiously, during the pandemic, when crisis standards of care were developed, people with disabilities and older adults were deemed low priorities. These standards were meant to ration care, when necessary, given shortages of respirators and other potentially lifesaving interventions.
There’s no starker example of the deleterious confluence of bias against seniors and people with disabilities. Unfortunately, older adults with disabilities routinely encounter these twinned types of discrimination when seeking medical care.
Such discrimination would be explicitly banned under a rule proposed by HHS in September. For the first time in 50 years, it would update Section 504 of the Rehabilitation Act of 1973, a landmark statute that helped establish civil rights for people with disabilities.
The new rule sets specific, enforceable standards for accessible equipment, including exam tables, scales, and diagnostic equipment. And it requires that electronic medical records, medical apps, and websites be made usable for people with various impairments and prohibits treatment policies based on stereotypes about people with disabilities, such as covid-era crisis standards of care.
“This will make a really big difference to disabled people of all ages, especially older adults,” said Alison Barkoff, who heads the HHS Administration for Community Living. She expects the rule to be finalized this year, with provisions related to medical equipment going into effect in 2026. Medical providers will bear extra costs associated with compliance.
Also in September, NIH designated people with disabilities as a population with health disparities that deserves further attention. This makes a new funding stream available and “should spur data collection that allows us to look with greater precision at the barriers and structural issues that have held people with disabilities back,” said Bonnielin Swenor, director of the Johns Hopkins University Disability Health Research Center.
One important barrier for older adults: Unlike younger adults with disabilities, many seniors with impairments don’t identify themselves as disabled.
“Before my mom died in October 2019, she became blind from macular degeneration and deaf from hereditary hearing loss. But she would never say she was disabled,” Iezzoni said.
Similarly, older adults who can’t walk after a stroke or because of severe osteoarthritis generally think of themselves as having a medical condition, not a disability.
Meanwhile, seniors haven’t been well integrated into the disability rights movement, which has been led by young and middle-aged adults. They typically don’t join disability-oriented communities that offer support from people with similar experiences. And they don’t ask for accommodations they might be entitled to under the ADA or the 1973 Rehabilitation Act.
Many seniors don’t even realize they have rights under these laws, Swenor said. “We need to think more inclusively about people with disabilities and ensure that older adults are fully included at this really important moment of change.”
KFF Health News is a national newsroom that produces in-depth journalism about health issues and is one of the core operating programs at KFF—an independent source of health policy research, polling, and journalism. Learn more about KFF.
Subscribe to KFF Health News’ free Morning Briefing.
Water Water Everywhere, But Which Is Best To Drink?
Well Well Well…
In Tuesday’s newsletter, we asked you for your (health-related) opinion on drinking water—with the understanding that this may vary from place to place. We got the above-depicted, below-described, set of responses:
- About 65% said “Filtered is best”
- About 20% said “From the mains is best”
- About 8% said “Bottled is best”
- About 3% said “Distilled is best”
- About 3% said “Some other source is best”
Of those who said “some other source is best”, one clarified that their preferred source was well water.
So what does the science say?
Fluoridated water is bad for you: True or False?
False, assuming a normal level of consumption. Rather than take up more space today though, we’ll link to what we previously wrote on this topic:
You may be wondering: but what if my level of consumption is higher than normal?
Let’s quickly look at some stats:
- The maximum permitted safety level varies from place to place, but is (for example) 2mg/l in the US, 1.5mg/l in Canada & the UK.
- The minimum recommended amount also varies from place to place, but is (for example) 0.7mg/l in Canada and the US, and 1mg/l in the UK.
It doesn’t take grabbing a calculator to realize that if you drink twice as much water as someone else, then depending on where you are, water fluoridated to the minimum may give you more than the recommended maximum.
However… Those safety margins are set so far below the actual toxicity levels of fluoride that it doesn’t make a difference.
For example: your writer here takes a medication that has the side effect of causing dryness of the mouth, and consequently she drinks at least 3l of water per day in a climate that could not be described as hot (except perhaps for about 2 weeks of the year). She weighs 72kg (that’s about 158 pounds), and the toxicity of fluoride (for ill symptoms, not death) is 0.2mg/kg. So, she’d need 14.4mg of fluoride, which even if the water fluoridation here were 2mg/l (it’s not; it’s lower here, but let’s go with the highest figure to make a point), would require drinking more than 7l of water faster than the body can process it.
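Laid out as a quick calculation, here’s a minimal sketch (in Python, using the same example figures as in the paragraph above; this is an illustration of the arithmetic, not medical guidance):

```python
# Worked version of the example above (illustrative figures only).
body_weight_kg = 72                 # example body weight from the text
symptom_threshold_mg_per_kg = 0.2   # fluoride dose at which ill symptoms (not death) may appear
fluoride_mg_per_litre = 2.0         # the highest fluoridation level mentioned (US maximum)

dose_needed_mg = body_weight_kg * symptom_threshold_mg_per_kg  # 72 * 0.2 = 14.4 mg
litres_needed = dose_needed_mg / fluoride_mg_per_litre         # 14.4 / 2.0 = 7.2 litres

print(f"Fluoride dose needed for symptoms: {dose_needed_mg:.1f} mg")
print(f"Water required at {fluoride_mg_per_litre} mg/l: {litres_needed:.1f} litres")
# ...and those 7+ litres would have to be drunk faster than the body can clear the fluoride.
```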
For more about the numbers, check out:
Acute Fluoride Poisoning from a Public Water System
Bottled water is the best: True or False?
False, if we consider “best” to be “healthiest”, which in turn we consider to be “most nutrients, with highest safety”.
Bottled water generally does have higher levels of minerals than most local mains supply water does. That’s good!
But you know what else it generally has? Microplastics and nanoplastics. That’s bad!
We don’t like to be alarmist in tone; it’s not what we’re about here, but the stats on bottled water are simply not good; see:
We Are Such Stuff As Bottles Are Made Of
You may be wondering: “but what about bottled water that comes in glass bottles?”
Indeed, water that comes in glass bottles can be expected to have lower levels of plastic than water that comes in plastic bottles, for obvious reasons.
However, we invite you to consider how likely you believe it to be that the water wasn’t stored in plastic while being processed, shipped and stored, before being portioned into its final store-ready glass bottles for end-consumer use.
Distilled water is the best: True or False?
False, generally, with caveats:
Distilled water is surely the safest water anywhere, because you know that you’ve removed any nasties.
However, it’s also devoid of nutrients, because you also removed any minerals it contained. Indeed, if you use a still, you’ll be accustomed to the build-up of these minerals (generally simplified and referenced as “limescale”, but it’s a whole collection of minerals).
Furthermore, that loss of nutrients can be more than just a “something good is missing”, because having removed certain ions, that water could now potentially strip minerals from your teeth. In practice, however, you’d probably have to swill it excessively to cause this damage.
Nevertheless, if you have the misfortune of living somewhere like Flint, Michigan, then a water still may be a fair necessity of life. In other places, it can simply be useful to have in case of emergency, of course.
Here’s an example product on Amazon if you’d like to invest in a water still for such cases.
PS: distilled water is also tasteless, and is generally considered bad, tastewise, for making tea and coffee. So we really don’t recommend distilling your water unless you have a good reason to do so.
Filtered water is the best: True or False?
True for most people in most places.
Let’s put it this way: it can’t logically be worse than whatever source of water you put into it…
Provided you change the filter regularly, of course.
Otherwise, after overusing a filter, at best it won’t be working, and at worst it’ll be adding in bacteria that have multiplied in the filter over however long you left it there.
You may be wondering: can water filters remove microplastics, and can they remove minerals?
The answer in both cases is: sometimes.
- For microplastics it depends on the filter size and the microplastic size (see our previous article for details on that).
- For minerals, it depends on the filter type. Check out:
The H2O Chronicles | 5 Water Filters That Remove Minerals
One other thing to think about: while most water filtration jugs are made of PFAS-free BPA-free plastics for obvious reasons, for greater peace of mind, you might consider investing in a glass filtration jug, like this one ← this is just one example product on Amazon; by all means shop around and find one you like
Take care!
When Science Brings Hope
There’s a lot of bad news out there at present, including in the field of healthcare. So as some measure of respite from that, here’s some good news from the world of health science, including some actionable things to do:
Run for your life! Or casually meander for your life; that’s fine too.
Those who enjoy the equivalent of an average of 160 minutes of slow (3mph) walking per day also enjoy the greatest healthspan. Now, there may be an element of two-way causality here: moving more means we live longer, but also, sometimes people move less because of crippling disabilities, which are themselves not great for healthspan as well as having the knock-on effect of reducing movement, so such conditions deal an anti-longevity double-whammy. Even so, for any who are able to, increasing the amount of time per day spent moving ultimately results (on average) in a lot of extra days of life that we’ll then get to spend moving.
Depending on how active or not you are already, every extra 1 hour walked could add two hours and 49 minutes to life expectancy:
Read in full: Americans over 40 could live extra 5 years if they were all as active as top 25% of population, modeling study suggests
Related: The Doctor Who Wants Us To Exercise Less & Move More
Re-teaching your brain to heal itself
Cancer is often difficult to treat, and brain tumors can be amongst the most difficult with which to contend. Not only is everything in there very delicate, but also it’s the hardest place in the body to get at—not just surgically, but even chemically, because of the blood-brain barrier. To make matters worse, brain tumors such as glioblastoma weaken the function of T-cells (whose job it is to eliminate the cancer) by prolonged exposure.
Research has found a way to restore the responsiveness of these T-cells to immune checkpoint inhibitors, allowing them to go about their cancer-killing activities unimpeded:
Read in full: New possibilities for treating intractable brain tumors unveiled
Related: 5 Ways To Beat Cancer (And Other Diseases)
Here’s to your good health!
GLP-1 receptor agonists, originally developed to fight diabetes and now enjoying popularity as weight loss adjuvants, work in large part by cutting down food cravings, interfering with the chemical messaging that drives them.
As a bonus, it seems that they can also reduce alcohol cravings, especially by targeting the brain’s reward center; this was based on a large review of studies looking at how GLP-1RA use affects alcohol use, alcohol-related health problems, hospital visits, and brain reactions to alcohol cues:
Read in full: Diabetes medication may be effective in helping people drink less alcohol, research finds
Related: How To Reduce Or Quit Alcohol
Take care!