How do science journalists decide whether a psychology study is worth covering?
10almonds is reader-supported. We may, at no cost to you, receive a portion of sales if you purchase a product through a link in this article.
Complex research papers and data flood academic journals daily, and science journalists play a pivotal role in disseminating that information to the public. This can be a daunting task, requiring a keen understanding of the subject matter and the ability to translate dense academic language into narratives that resonate with the general public.
Several resources and tip sheets, including the Know Your Research section here at The Journalist’s Resource, aim to help journalists hone their skills in reporting on academic research.
But what factors do science journalists look for to decide whether a social science research study is trustworthy and newsworthy? That’s the question researchers at the University of California, Davis, and the University of Melbourne in Australia examine in a recent study, “How Do Science Journalists Evaluate Psychology Research?” published in September in Advances in Methods and Practices in Psychological Science.
Their online survey of 181 mostly U.S.-based science journalists looked at how and whether they were influenced by four factors in fictitious research summaries: the sample size (number of participants in the study), sample representativeness (whether the participants in the study were from a convenience sample or a more representative sample), the statistical significance level of the result (just barely statistically significant or well below the significance threshold), and the prestige of a researcher’s university.
The researchers found that sample size was the only factor that had a robust influence on journalists’ ratings of how trustworthy and newsworthy a study finding was.
University prestige had no effect, while the effects of sample representativeness and statistical significance were inconclusive.
But there’s nuance to the findings, the authors note.
“I don’t want people to think that science journalists aren’t paying attention to other things, and are only paying attention to sample size,” says Julia Bottesini, an independent researcher, a recent Ph.D. graduate from the Psychology Department at UC Davis, and the first author of the study.
Overall, the results show that “these journalists are doing a very decent job” vetting research findings, Bottesini says.
Also, the findings from the study are not generalizable to all science journalists or other fields of research, the authors note.
“Instead, our conclusions should be circumscribed to U.S.-based science journalists who are at least somewhat familiar with the statistical and replication challenges facing science,” they write. (Over the past decade a series of projects have found that the results of many studies in psychology and other fields can’t be reproduced, leading to what has been called a ‘replication crisis.’)
“This [study] is just one tiny brick in the wall and I hope other people get excited about this topic and do more research on it,” Bottesini says.
More on the study’s findings
The study’s findings can be useful for researchers who want to better understand how science journalists read their research and what kind of intervention — such as teaching journalists about statistics — can help journalists better understand research papers.
“As an academic, I take away the idea that journalists are a great population to try to study because they’re doing something really important and it’s important to know more about what they’re doing,” says Ellen Peters, director of the Center for Science Communication Research at the School of Journalism and Communication at the University of Oregon. Peters, who was not involved in the study, is also a psychologist who studies human judgment and decision-making.
Peters says the study was “overall terrific.” She adds that understanding how journalists do their work “is an incredibly important thing to do because journalists are who reach the majority of the U.S. with science news, so understanding how they’re reading some of our scientific studies and then choosing whether to write about them or not is important.”
The study, conducted between December 2020 and March 2021, is based on an online survey of journalists who said they at least sometimes covered science or other topics related to health, medicine, psychology, social sciences, or well-being. They were offered a $25 Amazon gift card as compensation.
Among the participants, 77% were women, 19% were men, 3% were nonbinary and 1% preferred not to say. About 62% said they had studied physical or natural sciences at the undergraduate level, and 24% at the graduate level. Also, 48% reported having a journalism degree. The study did not collect data on the journalists’ level of news reporting experience.
Participants were recruited through the professional network of Christie Aschwanden, an independent journalist and consultant on the study, which could be a source of bias, the authors note.
“Although the size of the sample we obtained (N = 181) suggests we were able to collect a range of perspectives, we suspect this sample is biased by an ‘Aschwanden effect’: that science journalists in the same professional network as C. Aschwanden will be more familiar with issues related to the replication crisis in psychology and subsequent methodological reform, a topic C. Aschwanden has covered extensively in her work,” they write.
Participants were randomly presented with eight of 22 one-paragraph fictitious social and personality psychology research summaries with fictitious authors. The summaries are posted on Open Science Framework, a free and open-source project management tool for researchers by the Center for Open Science, with a mission to increase openness, integrity and reproducibility of research.
For instance, one of the vignettes reads:
“Scientists at Harvard University announced today the results of a study exploring whether introspection can improve cooperation. 550 undergraduates at the university were randomly assigned to either do a breathing exercise or reflect on a series of questions designed to promote introspective thoughts for 5 minutes. Participants then engaged in a cooperative decision-making game, where cooperation resulted in better outcomes. People who spent time on introspection performed significantly better at these cooperative games (t (548) = 3.21, p = 0.001). ‘Introspection seems to promote better cooperation between people,’ says Dr. Quinn, the lead author on the paper.”
In addition to answering multiple-choice survey questions, participants were given the opportunity to answer open-ended questions, such as “What characteristics do you [typically] consider when evaluating the trustworthiness of a scientific finding?”
Bottesini says those responses illuminated how science journalists analyze a research study. Participants often mentioned the prestige of the journal in which it was published or whether the study had been peer-reviewed. Many also seemed to value experimental research designs over observational studies.
Considering statistical significance
When it came to considering p-values, “some answers suggested that journalists do take statistical significance into account, but only very few included explanations that suggested they made any distinction between higher or lower p values; instead, most mentions of p values suggest journalists focused on whether the key result was statistically significant,” the authors write.
Also, many participants mentioned that it was very important to talk to outside experts or researchers in the same field to get a better understanding of the finding and whether it could be trusted, the authors write.
“Journalists also expressed that it was important to understand who funded the study and whether the researchers or funders had any conflicts of interest,” they write.
Participants also “indicated that making claims that were calibrated to the evidence was also important and expressed misgivings about studies for which the conclusions do not follow from the evidence,” the authors write.
In response to the open-ended question, “What characteristics do you [typically] consider when evaluating the trustworthiness of a scientific finding?” some journalists wrote they checked whether the study was overstating conclusions or claims. Below are some of their written responses:
- “Is the researcher adamant that this study of 40 college kids is representative? If so, that’s a red flag.”
- “Whether authors make sweeping generalizations based on the study or take a more measured approach to sharing and promoting it.”
- “Another major point for me is how ‘certain’ the scientists appear to be when commenting on their findings. If a researcher makes claims which I consider to be over-the-top about the validity or impact of their findings, I often won’t cover.”
- “I also look at the difference between what an experiment actually shows versus the conclusion researchers draw from it — if there’s a big gap, that’s a huge red flag.”
Peters says the study’s findings show that “not only are journalists smart, but they have also gone out of their way to get educated about things that should matter.”
What other research shows about science journalists
A 2023 study, published in the International Journal of Communication and based on an online survey of 82 U.S. science journalists, aims to understand what they know and think about open-access research, including peer-reviewed journals and articles that don’t have a paywall, as well as preprints, which are scientific studies that have yet to be peer-reviewed and are shared on open repositories such as medRxiv and bioRxiv. Data was collected between October 2021 and February 2022. The study finds that its respondents “are aware of OA and related issues and make conscious decisions around which OA scholarly articles they use as sources.”
A 2021 study, published in the Journal of Science Communication, looks at the impact of the COVID-19 pandemic on the work of science journalists. Based on an online survey of 633 science journalists from 77 countries, it finds that the pandemic somewhat brought scientists and science journalists closer together. “For most respondents, scientists were more available and more talkative,” the authors write. The pandemic has also provided an opportunity to explain the scientific process to the public, and remind them that “science is not a finished enterprise,” the authors write.
More than a decade ago, a 2008 study published in PLOS Medicine, based on an analysis of 500 health news stories, found that “journalists usually fail to discuss costs, the quality of the evidence, the existence of alternative options, and the absolute magnitude of potential benefits and harms” when reporting on research studies. Giving journalists time to research and understand the studies, giving them space for publication and broadcast of their stories, and training them in understanding academic research are some of the solutions to fill the gaps, writes Gary Schwitzer, the study author.
Advice for journalists
We asked Bottesini, Peters, Aschwanden and Tamar Wilner, a postdoctoral fellow at the University of Texas, who was not involved in the study, to share advice for journalists who cover research studies. Wilner is conducting a study on how journalism research informs the practice of journalism. Here are their tips:
1. Examine the study before reporting it.
Does the study claim match the evidence? “One thing that makes me trust the paper more is if their interpretation of the findings is very calibrated to the kind of evidence that they have,” says Bottesini. In other words, if the study makes a claim in its results that’s far-fetched, the authors should present a lot of evidence to back that claim.
Not all surprising results are newsworthy. If you come across a surprising finding from a single study, Peters advises you to step back and remember Carl Sagan’s quote: “Extraordinary claims require extraordinary evidence.”
How transparent are the authors about their data? For instance, are the authors posting information such as their data and the computer codes they use to analyze the data on platforms such as Open Science Framework, AsPredicted, or The Dataverse Project? Some researchers ‘preregister’ their studies, which means they share how they’re planning to analyze the data before they see them. “Transparency doesn’t automatically mean that a study is trustworthy,” but it gives others the chance to double-check the findings, Bottesini says.
Look at the study design. Is it an experimental study or an observational study? Observational studies can show correlations but not causation.
“Observational studies can be very important for suggesting hypotheses and pointing us towards relationships and associations,” Aschwanden says.
Experimental studies can provide stronger evidence toward a cause, but journalists must still be cautious when reporting the results, she advises. “If we end up implying causality, then once it’s published and people see it, it can really take hold,” she says.
Know the difference between preprints and peer-reviewed, published studies. Peer-reviewed papers tend to be of higher quality than those that are not peer-reviewed. Read our tip sheet on the difference between preprints and journal articles.
Beware of predatory journals. These are journals that “claim to be legitimate scholarly journals, but misrepresent their publishing practices,” according to a 2020 article published in the journal Toxicologic Pathology, “Predatory Journals: What They Are and How to Avoid Them.”
2. Zoom in on data.
Read the methods section of the study. The methods section of the study usually appears after the introduction and background section. “To me, the methods section is almost the most important part of any scientific paper,” says Aschwanden. “It’s amazing to me how often you read the design and the methods section, and anyone can see that it’s a flawed design. So just giving things a gut-level check can be really important.”
What’s the sample size? Not all good studies have large numbers of participants but pay attention to the claims a study makes with a small sample size. “If you have a small sample, you calibrate your claims to the things you can tell about those people and don’t make big claims based on a little bit of evidence,” says Bottesini.
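To make the sample-size point concrete, here is a minimal sketch of why small studies support only cautious claims: the standard error of a sample mean shrinks with the square root of the number of participants, so precision improves slowly as samples grow. The numbers below are purely illustrative and not drawn from any study in this article.

```python
# Illustrative only: how the approximate 95% margin of error for a
# sample mean narrows as the number of participants (n) grows.
import math

def margin_of_error(n, sigma, z=1.96):
    """Approximate 95% margin of error for a sample mean,
    given sample size n and an assumed population SD sigma."""
    return z * sigma / math.sqrt(n)

# Hypothetical population standard deviation of 10 units:
for n in (40, 550, 5000):
    print(n, round(margin_of_error(n, 10.0), 2))
```

Going from 40 participants to 550 shrinks the margin of error by a factor of about 3.7, not 14; that slow square-root improvement is one reason a study of 40 undergraduates warrants far more hedged claims than a study of several hundred.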
But also remember that factors such as sample size and p-value are not “as clear cut as some journalists might assume,” says Wilner.
How representative of a population is the study sample? “If the study has a non-representative sample of, say, undergraduate students, and they’re making claims about the general population, that’s kind of a red flag,” says Bottesini. Aschwanden points to the acronym WEIRD, which stands for “Western, Educated, Industrialized, Rich, and Democratic,” and is used to highlight a lack of diversity in a sample. Studies based on such samples may not be generalizable to the entire population, she says.
Look at the p-value. Statistical significance is both confusing and controversial, but it’s important to consider. Read our tip sheet, “5 Things Journalists Need to Know About Statistical Significance,” to better understand it.
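As a rough illustration of what a reported p-value encodes, here is a sketch that recomputes the two-tailed p-value from the t-statistic in the fictitious vignette quoted earlier (t(548) = 3.21). This assumes SciPy is available; the figures come from the made-up research summary, not from real data.

```python
# Sanity-checking a reported p-value from the test statistic alone.
# Values are from the article's fictitious vignette: t(548) = 3.21.
from scipy import stats

t_stat = 3.21   # reported t-statistic
df = 548        # reported degrees of freedom

# Two-tailed p-value: the probability of a t-statistic at least this
# extreme (in either direction) if there were truly no effect.
p_two_tailed = 2 * stats.t.sf(t_stat, df)
print(f"p = {p_two_tailed:.4f}")  # in the neighborhood of the reported p = 0.001
```

A p-value this far below the conventional 0.05 threshold is meaningfully stronger evidence than one that just scrapes under it, which is exactly the distinction the study's authors found few journalists articulated.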
3. Talk to scientists not involved in the study.
If you’re not sure about the quality of a study, ask for help. “Talk to someone who is an expert in study design or statistics to make sure that [the study authors] use the appropriate statistics and that methods they use are appropriate because it’s amazing to me how often they’re not,” says Aschwanden.
Get an opinion from an outside expert. It’s always a good idea to present the study to other researchers in the field who have no conflicts of interest and are not involved in the research you’re covering, and get their opinion. “Don’t take scientists at their word. Look into it. Ask other scientists, preferably the ones who don’t have a conflict of interest with the research,” says Bottesini.
4. Remember that a single study is simply one piece of a growing body of evidence.
“I have a general rule that a single study doesn’t tell us very much; it just gives us proof of concept,” says Peters. “It gives us interesting ideas. It should be retested. We need an accumulation of evidence.”
Aschwanden says as a practice, she tries to avoid reporting stories about individual studies, with some exceptions such as very large, randomized controlled studies that have been underway for a long time and have a large number of participants. “I don’t want to say you never want to write a single-study story, but it always needs to be placed in the context of the rest of the evidence that we have available,” she says.
Wilner advises journalists to spend some time looking at the scope of research on the study’s specific topic and learn how it has been written about and studied up to that point.
“We would want science journalists to be reporting balance of evidence, and not focusing unduly on the findings that are just in front of them in a most recent study,” Wilner says. “And that’s a very difficult thing to ask journalists to do because they’re being asked to make their article very newsy, so it’s a difficult balancing act, but we can try and push journalists to do more of that.”
5. Remind readers that science is always changing.
“Science is always two steps forward, one step back,” says Peters. Give the public a notion of uncertainty, she advises. “This is what we know today. It may change tomorrow, but this is the best science that we know of today.”
Aschwanden echoes the sentiment. “All scientific results are provisional, and we need to keep that in mind,” she says. “It doesn’t mean that we can’t know anything, but it’s very important that we don’t overstate things.”
Authors of a study published in PNAS in January analyzed more than 14,000 psychology papers and found that replication success rates differ widely by psychology subfields. That study also found that papers that could not be replicated received more initial press coverage than those that could.
The authors note that the media “plays a significant role in creating the public’s image of science and democratizing knowledge, but it is often incentivized to report on counterintuitive and eye-catching results.”
Ideally, media coverage would be positively associated with a paper’s likelihood of replication success, the authors of the PNAS study write. “Contrary to this ideal, however, we found a negative association between media coverage of a paper and the paper’s likelihood of replication success,” they write. “Therefore, deciding a paper’s merit based on its media coverage is unwise. It would be valuable for the media to remind the audience that new and novel scientific results are only food for thought before future replication confirms their robustness.”
Additional reading
Uncovering the Research Behaviors of Reporters: A Conceptual Framework for Information Literacy in Journalism
Katerine E. Boss, et al. Journalism & Mass Communication Educator, October 2022.
The Problem with Psychological Research in the Media
Steven Stosny. Psychology Today, September 2022.
Critically Evaluating Claims
Megha Satyanarayana, The Open Notebook, January 2022.
How Should Journalists Report a Scientific Study?
Charles Binkley and Subramaniam Vincent. Markkula Center for Applied Ethics at Santa Clara University, September 2020.
What Journalists Get Wrong About Social Science: Full Responses
Brian Resnick. Vox, January 2016.
From The Journalist’s Resource
8 Ways Journalists Can Access Academic Research for Free
5 Things Journalists Need to Know About Statistical Significance
5 Common Research Designs: A Quick Primer for Journalists
5 Tips for Using PubPeer to Investigate Scientific Research Errors and Misconduct
What’s Standard Deviation? 4 Things Journalists Need to Know
This article first appeared on The Journalist’s Resource and is republished here under a Creative Commons license.
Don’t Forget…
Did you arrive here from our newsletter? Don’t forget to return to the email to continue learning!
Recommended
Learn to Age Gracefully
Join the 98k+ American women taking control of their health & aging with our 100% free (and fun!) daily emails:
Loaded Mocha Chocolate Parfait
Packed with nutrients, including a healthy dose of protein and fiber, these parfait pots can be a healthy dessert, snack, or even breakfast!
You will need (for 4 servings)
For the mocha cream:
- ½ cup almond milk
- ½ cup raw cashews
- ⅓ cup espresso
- 2 tbsp maple syrup
- 1 tsp vanilla extract
For the chocolate sauce:
- 4 tbsp coconut oil, melted
- 2 tbsp unsweetened cocoa powder
- 1 tbsp maple syrup
- 1 tsp vanilla extract
For the other layers:
- 1 banana, sliced
- 1 cup granola, no added sugar
Garnish (optional): 3 coffee beans per serving
Note about the maple syrup: since its viscosity is similar to the overall viscosity of the mocha cream and chocolate sauce, you can adjust this per your tastes without affecting the composition of the dish much besides sweetness (and sugar content). If you don’t like sweetness, the maple syrup can be reduced or even omitted entirely (your writer here is known for her enjoyment of very strong bitter flavors and rarely wants anything sweeter than a banana); if you prefer more sweetness than the recipe calls for, that’s your choice too.
Method
(we suggest you read everything at least once before doing anything)
1) Blend all the mocha cream ingredients. If you have time, doing this in advance and keeping it in the fridge for a few hours (or even up to a week) will make the flavor richer. But if you don’t have time, that’s fine too.
2) Stir all the chocolate sauce ingredients together in a small bowl, and set it aside. This one should definitely not be refrigerated, or else the coconut oil will solidify and separate.
3) Gently swirl the mocha cream and chocolate sauce together. You want a marble effect, not a full mixing. Omit this step if you want clearer layers.
4) Assemble in dessert glasses, alternating layers of banana, mocha chocolate marble mixture (or the two parts, if you didn’t swirl them together), and granola.
5) Add the coffee-bean garnish, if using, and serve!
Enjoy!
Want to learn more?
For those interested in some of the science of what we have going on today:
- Enjoy Bitter Foods For Your Heart & Brain
- The Bitter Truth About Coffee (Or Is It?)
- Which Sugars Are Healthier, And Which Are Just The Same?
- Cashew Nuts vs Coconut – Which is Healthier?
Take care!
Successful Aging – by Dr. Daniel Levitin
We all know about age-related cognitive decline. What if there’s a flipside, though?
Neuroscientist Dr. Daniel Levitin explores the changes that the brain undergoes with age, and notes that it’s not all downhill.
From cumulative improvements in the hippocampi to a dialling-down of the (often overfunctioning) amygdalae, there are benefits too.
The book examines the things that shape our brains from childhood into our eighties and beyond. Many milestones may be behind us, but neuroplasticity means there’s always time for rewiring. Yes, it also covers the “how”.
We learn also about the neurogenesis promoted by such simple acts as taking a different route and/or going somewhere new, and what other things improve the brain’s healthspan.
The writing style is very accessible “pop-science”, and is focused on being of practical use to the reader.
Bottom line: if you want to get the most out of your aging wizening brain, this book is a great how-to manual.
Click here to check out Successful Aging and level up your later years!
Seriously Useful Communication Skills!
What Are Communication Skills, Really?
Superficially, communication is “conveying an idea to someone else”. But then again…
Superficially, painting is “covering some kind of surface in paint”, and yet, for some reason, the ceiling you painted at home is not regarded as equally “good painting skills” as Michelangelo’s, with regard to the ceiling of the Sistine Chapel.
All kinds of “Dark Psychology” enthusiasts on YouTube, authors of “Office Machiavelli” handbooks, etc, tell us that good communication skills are really a matter of persuasive speaking (or writing). And let’s not even get started on “pick-up artist” guides. Bleugh.
Not to get too philosophical, but here at 10almonds, we think that having good communication skills means being able to communicate ideas simply and clearly, and in a way that will benefit as many people as possible.
The implications of this for education are obvious, but what of other situations?
Conflict Resolution
Whether at work or at home or amongst friends or out in public, conflict will happen at some point. Even the most well-intentioned and conscientious partners, family, friends, colleagues, will eventually tread on our toes—or we, on theirs. Often because of misunderstandings, so much precious time will be lost needlessly. It’s good for neither schedule nor soul.
So, how to fix those situations?
I’m OK; You’re OK
In the category of “bestselling books that should have been an article at most”, a top-tier candidate is Thomas Harris’s “I’m OK; You’re OK”.
The (very good) premise of this (rather padded) book is that when seeking to resolve a conflict or potential conflict, we should look for a win-win:
- I’m not OK; you’re not OK ❌
- For example: “Yes, I screwed up and did this bad thing, but you too do bad things all the time”
- I’m OK; you’re not OK ❌
- For example: “It is not I who screwed up; this is actually all your fault”
- I’m not OK; you’re OK ❌
- For example: “I screwed up and am utterly beyond redemption; you should immediately divorce/disown/dismiss/defenestrate me”
- I’m OK; you’re OK ✅
- For example: “I did do this thing which turned out to be incorrect; in my defence it was because you said xyz, but I can understand why you said that, because…” and generally finding a win-win outcome.
So far, so simple.
“I”-Messages
In a conflict, it’s easy to get caught up in “you did this, you did that”, often rushing to assumptions about intent or meaning. And, the closer we are to the person in question, the more emotionally charged, and the more likely we are to do this as a knee-jerk response.
“How could you treat me this way?!” if we are talking to our spouse in a heated moment, perhaps, or “How can you treat a customer this way?!” if it’s a worker at Home Depot.
But the reality is that almost certainly neither our spouse nor the worker wanted to upset us.
Going on the attack will merely put them on the defensive, and they may even launch their own counterattack. It’s not good for anyone.
Instead, what really happened? Express it starting with the word “I”, rather than immediately putting it on the other person. Often our emotions require a little interrogation before they’ll tell us the truth, but it may be something like:
“I expected x, so when you did/said y instead, I was confused and hurt/frustrated/angry/etc”
Bonus: if your partner also understands this kind of communication situation, so much the better! Dark psychology be damned, everything is best when everyone knows the playbook and everyone is seeking the best outcome for all sides.
The Most Powerful “I”-Message Of All
Statements that start with “I” will, unless you are rules-lawyering in bad faith, tend to be less aggressive and thus prompt less defensiveness. An important tool for the toolbox, is:
“I need…”
Softly spoken, firmly if necessary, but gentle. If you do not express your needs, how can you expect anyone to fulfil them? Be that person a partner or a retail worker or anyone else. Probably they want to end the conflict too, so throw them a life-ring and they will (if they can, and are at least halfway sensible) grab it.
- “I need an apology”
- “I need a moment to cool down”
- “I need a refund”
- “I need some reassurance about…” (and detail)
Help the other person to help you!
Everything’s best when it’s you (plural) vs the problem, rather than you (plural) vs each other.
Apology Checklist
Does anyone else remember being forced to write an insincere letter of apology as a child, and the literary disaster that probably followed? As adults, we (hopefully) apologize when and if we mean it, and we want our apology to convey that.
What follows will seem very formal, but honestly, we recommend it in personal life as much as professional. It’s a ten-step apology, and since steps like these are easy to forget, we recommend copying and pasting them into a notes app or something, because this is of immeasurable value.
It’s good not just for when you want to apologize, but also, for when it’s you who needs an apology and needs to feel it’s sincere. Give your partner (if applicable) a copy of the checklist too!
- Statement of apology—say “I’m sorry”
- Name the offense—say what you did wrong
- Take responsibility for the offense—understand your part in the problem
- Attempt to explain the offense (not to excuse it)—how did it happen and why
- Convey emotions; show remorse
- Address the emotions/damage to the other person—show that you understand or even ask them how it affected them
- Admit fault—understand that you got it wrong and like other human beings you make mistakes
- Promise to be better—let them realize you’re trying to change
- Tell them how you will try to do it differently next time, and finally
- Request acceptance of the apology
Note: just because you request acceptance of the apology doesn’t mean they must give it. Maybe they won’t, or maybe they need time first. If they’re playing from this same playbook, they might say “I need some time to process this first” or such.
Want to really superpower your relationship? Read this together with your partner:
Hold Me Tight: Seven Conversations for a Lifetime of Love, and, as a bonus:
The Hold Me Tight Workbook: A Couple’s Guide for a Lifetime of Love
Related Posts
11 Things That Can Change Your Eye Color
Eye color is generally considered so static that iris scans are considered a reasonable security method. However, it can indeed change—mostly for reasons you won’t want, though:
Ringing the changes
Putting aside any wishes of being a manga protagonist with violet eyes, here are the self-changing options:
- Aging in babies: babies are often born with lighter eyes, which can darken as melanocytes develop during the first few months of life. This is similar to how a small child’s blonde hair can often be much darker by the time puberty hits!
- Aging in adults: eyes may continue to darken until adulthood, while aging into the elderly years can cause them to lighten due to conditions like arcus senilis
- Horner’s syndrome: a nerve disorder that can cause the eyes to become lighter due to loss of pigment
- Fuchs heterochromic iridocyclitis: an inflammation of the iris that leads to lighter eyes over time
- Pigment dispersion syndrome: the iris rubs against eye fibers, leading to pigment loss and lighter eyes
- Kayser-Fleischer rings: excess copper deposits on the cornea, often due to Wilson’s disease, causing larger-than-usual brown or grayish rings around the iris
- Iris melanoma: a rare cancer that can darken the iris, often presenting as brown spots
- Cancer treatments: chemotherapy for retinoblastoma in children can result in lighter eye color and heterochromia
- Medications: prostaglandin-based glaucoma treatments can darken the iris, with up to 23% of patients seeing this effect
- Vitiligo: an autoimmune disorder that destroys melanocytes, mostly noticed in the skin, but also causing patchy loss of pigment in the iris
- Emotional and pupil size changes: emotions and trauma can affect pupil size, making eyes appear darker or lighter temporarily by altering how much of the iris is visible
For more about all these, and some notes about more voluntary changes (if you have certain kinds of eye surgery), enjoy:
Click Here If The Embedded Video Doesn’t Load Automatically!
Want to learn more?
You might also like to read:
Understanding And Slowing The Progression Of Cataracts
Take care!
Don’t Forget…
Did you arrive here from our newsletter? Don’t forget to return to the email to continue learning!
Learn to Age Gracefully
Join the 98k+ American women taking control of their health & aging with our 100% free (and fun!) daily emails:
Meditations for Mortals – by Oliver Burkeman
We previously reviewed this author’s “Four Thousand Weeks”, but for those who might have used a lot of those four thousand weeks already, and would like to consider things within a smaller timeframe for now, this work is a 28-day daily reader.
Now, daily readers usually run to 366 days, but the chapters here are not the single-page chapters that 366-day daily readers usually have. So, expect to invest a little more time per day (about 6 pages for each daily chapter).
Burkeman does not start the way we might expect, by telling us to take the time to smell the roses. Instead, he starts by examining the mistakes that most of us make most of the time, often due to unexamined assumptions about the world and how it works. Simply put, we’ve often received bad lessons in life (usually not explicitly, but rather, from our environments), and it takes some unpacking first to deal with that.
Nor is the book systems-based, as many books filed under “time management” are; rather, it is principles-based. This is a strength, because principles are a lot easier to keep to than systems.
The writing style is direct and conversational, and neither overly familiar nor overly academic. It strikes a very comfortably readable balance.
Bottom line: if you’d like to get more out of your days, this book can definitely help.
Click here to check out Meditations For Mortals, and live fulfilling days!
A person in Texas caught bird flu after mixing with dairy cattle. Should we be worried?
The United States’ Centers for Disease Control and Prevention (CDC) has issued a health alert after the first case of H5N1 avian influenza, or bird flu, seemingly spread from a cow to a human.
A farm worker in Texas contracted the virus amid an outbreak in dairy cattle. This is the second human case in the US; a poultry worker tested positive in Colorado in 2022.
The virus strain identified in the Texan farm worker is not readily transmissible between humans and therefore not a pandemic threat. But it’s a significant development nonetheless.
Some background on bird flu
There are two types of avian influenza, highly pathogenic and low pathogenic, classified by the level of disease the strain causes in birds. H5N1 is a highly pathogenic avian influenza.
H5N1 first emerged in Hong Kong in 1997, re-emerging in China in 2003, and spread through wild bird migration and poultry trading. It has caused periodic epidemics in poultry farms, with occasional human cases.
Influenza A viruses such as H5N1 are further divided into variants, called clades. The unique variant causing the current epidemic is H5N1 clade 2.3.4.4b, which emerged in late 2020 and is now widespread globally, especially in the Americas.
In the past, outbreaks could be controlled by culling of infected birds, and H5N1 would die down for a while. But this has become increasingly difficult due to escalating outbreaks since 2021.
Wild animals are now in the mix
Waterfowl (ducks, swans and geese) are the main global spreaders of avian flu, as they migrate across the world via specific routes that bypass Australia. The main hub for waterfowl migrating around the world is Qinghai Lake in China.
But there’s been an increasing number of infected non-waterfowl birds, such as true thrushes and raptors, which use different flyways. Worryingly, the infection has spread to Antarctica too, which means Australia is now at risk from different bird species which fly here.
H5N1 has escalated in an unprecedented fashion since 2021, and an increasing number of mammals including sea lions, goats, red foxes, coyotes, even domestic dogs and cats have become infected around the world.
Wild animals like red foxes which live in peri-urban areas are a possible new route of spread to farms, domestic pets and humans.
Dairy cows and goats have now become infected with H5N1 in at least 17 farms across seven US states.
What are the symptoms?
Globally, there have been 14 cases of H5N1 clade 2.3.4.4b virus in humans, and 889 H5N1 human cases overall since 2003.
Previous human cases have presented with a severe respiratory illness, but H5N1 2.3.4.4b is causing illness affecting other organs too, like the brain, eyes and liver.
For example, more recent cases have developed neurological complications such as seizures and stroke, as well as organ failure. It’s been estimated that around half of people infected with H5N1 will die.
The case in the Texan farm worker appears to be mild. This person presented with conjunctivitis, which is unusual.
Food safety
Contact with sick poultry is a key risk factor for human infection. Likewise, the farm worker in Texas was likely in close contact with the infected cattle.
The CDC advises that pasteurised milk and well-cooked eggs are safe. However, handling infected meat or eggs in the process of cooking, or drinking unpasteurised milk, may pose a risk.
Although there’s no H5N1 in Australian poultry or cattle, hygienic food practices are always a good idea, as raw milk or poorly cooked meat, eggs or poultry can be contaminated with microbes such as salmonella and E. coli.
If it’s not a pandemic, why are we worried?
Scientists have feared avian influenza may cause a pandemic since about 2005. Avian flu viruses don’t easily spread in humans. But if an avian virus mutates to spread in humans, it can cause a pandemic.
One concern is that birds could infect an animal such as a pig, which can act as a genetic “mixing vessel”. In areas where humans and livestock exist in close proximity (for example farms, markets, or even homes with backyard poultry), the probability of bird and human flu strains mixing and mutating into a new pandemic strain is higher.
The cows infected in Texas were tested because farmers noticed they were producing less milk. If beef cattle are similarly affected, it may not be as easily identified, and the economic loss to farmers may be a disincentive to test or report infections.
How can we prevent a pandemic?
For now there is no spread of H5N1 between humans, so there’s no immediate risk of a pandemic.
However, we now have unprecedented and persistent infection with H5N1 clade 2.3.4.4b in farms, wild animals and a wider range of wild birds than ever before, creating more chances for H5N1 to mutate and cause a pandemic.
Unlike the previous epidemiology of avian flu, where hot spots were in Asia, the new hot spots (and likely sites of emergence of a pandemic) are in the Americas, Europe and Africa.
Pandemics grow exponentially, so early warnings for animal and human outbreaks are crucial. We can monitor infections using surveillance tools such as our EPIWATCH platform.
The earlier epidemics can be detected, the better the chance of stamping them out and rapidly developing vaccines.
Although there is a vaccine for birds, it has been largely avoided until recently because it’s only partially effective and can mask outbreaks. But it’s no longer feasible to control an outbreak by culling infected birds, so some countries like France began vaccinating poultry in 2023.
For humans, seasonal flu vaccines may provide a small amount of cross-protection, but for the best protection, vaccines need to match the pandemic strain exactly, and this takes time. The 2009 flu pandemic reached Australia in May, but vaccines were not available until September, after the pandemic had peaked.
To reduce the risk of a pandemic, we must identify how H5N1 is spreading to so many mammalian species, what new wild bird pathways pose a risk, and monitor for early signs of outbreaks and illness in animals, birds and humans. Economic compensation for farmers is also crucial to ensure we detect all outbreaks and avoid compromising the food supply.
C Raina MacIntyre, Professor of Global Biosecurity, NHMRC L3 Research Fellow, Head, Biosecurity Program, Kirby Institute, UNSW Sydney; Ashley Quigley, Senior Research Associate, Global Biosecurity, UNSW Sydney; Haley Stone, PhD Candidate, Biosecurity Program, Kirby Institute, UNSW Sydney; Matthew Scotch, Associate Dean of Research and Professor of Biomedical Informatics, College of Health Solutions, Arizona State University, and Rebecca Dawson, Research Associate, The Kirby Institute, UNSW Sydney
This article is republished from The Conversation under a Creative Commons license. Read the original article.