The Origin of Everyday Moods – by Dr. Robert Thayer
10almonds is reader-supported. We may, at no cost to you, receive a portion of sales if you purchase a product through a link in this article.
First of all, what does this title mean by “everyday moods”? By this the author is referring to the kinds of moods we have just as a matter of the general wear-and-tear of everyday life—not the kind that come from major mood disorders and/or serious trauma.
The latter kinds of mood take less explaining, in any case. Dr. Thayer, therefore, spends his time on the less obvious ones—which in turn are the ones that affect most of us the most, every day.
Critical to Dr. Thayer’s approach is the mapping of moods by four main quadrants:
- High energy, high tension
- High energy, low tension
- Low energy, high tension
- Low energy, low tension
…though this can be further divided into 25 sectors, if we rate each variable on a scale of 0–4 (five levels on each axis, so 5 × 5 = 25 combinations). But for a first treatment, it suffices to look at whether energy and tension are each high or low, and which we’d like to have more or less of.
Then (here be science) how to go about achieving that in the most efficient, evidence-based ways. So, it’s not just a theoretical book; it has great practical value too.
The style of the book is accessible, and walks a fine line between pop-science and hard science, which makes it a great book for laypersons and academics alike.
Bottom line: if you’d like the cheat codes to improve your moods and lessen the impact of bad ones, this is the book for you.
Click here to check out The Origin of Everyday Moods, and manage yours!
The Biological Mind – by Dr. Alan Jasanoff
How special is our brain? According to Dr. Alan Jasanoff, it’s not nearly as special as we think it is.
In this work, he outlines the case for how we have collectively overstated the brain’s importance: that it’s just another organ, like a heart or a kidney, and that who we are is as much a matter of other factors as of what goes on in our brain.
In this reviewer’s opinion, he overcorrects a bit. The heart and kidneys are very simple organs, as organs go. The brain is not. And while everything from our gut microbiota to our environment to our hormones may indeed contribute to what is us, our brain is one thing that can’t just be swapped out.
Nevertheless, this very well-written book can teach us a lot about everything else that makes us us, including many biological factors that many people don’t know about or consider.
Towards the end of the book, he switches into futurist speculation, and his speculation can be summed up as “we cannot achieve anything worthwhile in the future”.
Bottom line: if you’ve an interest in such things as how transplanting glial cells can give a 30% cognitive enhancement, and how a brain transplant wouldn’t result in the same us in a different body, this is the book for you.
Click here to check out The Biological Mind, and learn about yours!
Horse Sedative Use Among Humans Spreads in Deadly Mixture of ‘Tranq’ and Fentanyl
TREASURE ISLAND, Fla. — Andrew McClave Jr. loved to lift weights. The 6-foot-4-inch bartender resembled a bodybuilder and once posed for a photo flexing his muscles with former pro wrestler Hulk Hogan.
“He was extremely dedicated to it,” said his father, Andrew McClave Sr., “to the point where it was almost like he missed his medication if he didn’t go.”
But the hobby took its toll. According to a police report, a friend told the Treasure Island Police Department that McClave, 36, suffered from back problems and took unprescribed pills to reduce the pain.
In late 2022, the friend discovered McClave in bed. He had no pulse. A medical examiner determined he had a fatal amount of fentanyl, cocaine, and xylazine, a veterinary tranquilizer used to sedate horses, in his system, an autopsy report said. Heart disease was listed as a contributing factor.
McClave is among more than 260 people across Florida who died in one year from accidental overdoses involving xylazine, according to a Tampa Bay Times analysis of medical examiner data from 2022, the first year state officials began tracking the substance. Numbers for 2023 haven’t been published.
The death toll reflects xylazine’s spread into the nation’s illicit drug supply. Federal regulators approved the tranquilizer for animals in the early 1970s and it’s used to sedate horses for procedures like oral exams and colic treatment, said Todd Holbrook, an equine medicine specialist at the University of Florida. Reports of people using xylazine emerged in Philadelphia, then the drug spread south and west.
What’s not clear is exactly what role the sedative plays in overdose deaths, because the Florida data shows no one fatally overdosed on xylazine alone. The painkiller fentanyl was partly to blame in all but two cases in which the veterinary drug was included as a cause of death, according to the Times analysis. Cocaine or alcohol played roles in the cases in which fentanyl was not involved.
Fentanyl is generally the “800-pound gorilla,” according to Lewis Nelson, chair of the emergency medicine department at Rutgers New Jersey Medical School, and xylazine may increase the risk of overdose, though not substantially.
But xylazine appears to complicate the response to opioid overdoses when they do happen and makes it harder to save people. Xylazine can slow breathing to dangerous levels, according to federal health officials, and it doesn’t respond to the overdose reversal drug naloxone, often known by the brand name Narcan. Part of the problem is that many people may not know they are taking the horse tranquilizer when they use other drugs, so they aren’t aware of the additional risks.
Lawmakers in Tallahassee made xylazine a Schedule 1 drug like heroin or ecstasy in 2016, and several other states including Pennsylvania, Ohio, and West Virginia have taken action to classify it as a scheduled substance, too. But it’s not prohibited at the federal level. Legislation pending in Congress would criminalize illicit xylazine use nationwide.
The White House in April designated the combination of fentanyl and xylazine, often called “tranq dope,” as an emerging drug threat. A study of 20 states and Washington, D.C., found that overdose deaths attributed to both illicit fentanyl and xylazine exploded from January 2019 to June 2022, jumping from 12 a month to 188.
“We really need to continue to be proactive,” said Amanda Bonham-Lovett, program director of a syringe exchange in St. Petersburg, “and not wait until this is a bigger issue.”
‘A Good Business Model’
There are few definitive answers about why xylazine use has spread — and its impact on people who consume it.
The U.S. Drug Enforcement Administration in September said the tranquilizer is entering the country in several ways, including from China and in fentanyl brought across the southwestern border. The Florida attorney general’s office is prosecuting an Orange County drug trafficking case that involves xylazine from a New Jersey supplier.
Bonham-Lovett, who runs IDEA Exchange Pinellas, the county’s anonymous needle exchange, said some local residents who use drugs are not seeking out xylazine — and don’t know they’re consuming it.
One theory is that dealers are mixing xylazine into fentanyl because it’s cheap and also affects the brain, Nelson said.
“It’s conceivable that if you add a psychoactive agent to the fentanyl, you can put less fentanyl in and still get the same kick,” he said. “It’s a good business model.”
In Florida, men accounted for three-quarters of fatal overdoses involving xylazine, according to the Times analysis. Almost 80% of those who died were white. The median age was 42.
Counties on Florida’s eastern coast saw the highest death tolls. Duval County topped the list with 46 overdoses. Tampa Bay recorded 19 fatalities.
Cocaine was also a cause in more than 80 cases, including McClave’s, the Times found. The DEA in 2018 warned of cocaine laced with fentanyl in Florida.
In McClave’s case, Treasure Island police found what appeared to be marijuana and a small plastic bag with white residue in his room, according to a police report. His family still questions how he took the powerful drugs and is grappling with his death.
He was an avid fisherman, catching snook and grouper in the Gulf of Mexico, said his sister, Ashley McClave. He dreamed of being a charter boat captain.
“I feel like I’ve lost everything,” his sister said. “My son won’t be able to learn how to fish from his uncle.”
Mysterious Wounds
Another vexing challenge for health officials is the link between chronic xylazine use and open wounds.
The wounds are showing up across Tampa Bay, needle exchange leaders said. The telltale sign is blackened, crusty tissue, Bonham-Lovett said. Though the injuries may start small — the size of a dime — they can grow and “take over someone’s whole limb,” she said.
Even those who snort fentanyl, instead of injecting it, can develop them. The phenomenon is unexplained, Nelson said, and is not seen in animals.
IDEA Exchange Pinellas has recorded at least 10 cases since opening last February, Bonham-Lovett said, and has a successful treatment plan. Staffers wash the wounds with soap and water, then dress them.
One person required hospitalization partly due to xylazine’s effects, Bonham-Lovett said. A 31-year-old St. Petersburg woman, who asked not to be named due to concerns over her safety and the stigma of drug use, said she was admitted to St. Anthony’s Hospital in 2023. The woman, who said she uses fentanyl daily, had a years-long staph infection resistant to some antibiotics, and a wound recently spread across half her thigh.
The woman hadn’t heard of xylazine until IDEA Exchange Pinellas told her about the drug. She’s thankful she found out in time to get care.
“I probably would have lost my leg,” she said.
This article was produced in partnership with the Tampa Bay Times.
KFF Health News is a national newsroom that produces in-depth journalism about health issues and is one of the core operating programs at KFF—an independent source of health policy research, polling, and journalism. Learn more about KFF.
How do science journalists decide whether a psychology study is worth covering?
Complex research papers and data flood academic journals daily, and science journalists play a pivotal role in disseminating that information to the public. This can be a daunting task, requiring a keen understanding of the subject matter and the ability to translate dense academic language into narratives that resonate with the general public.
Several resources and tip sheets, including the Know Your Research section here at The Journalist’s Resource, aim to help journalists hone their skills in reporting on academic research.
But what factors do science journalists look for to decide whether a social science research study is trustworthy and newsworthy? That’s the question researchers at the University of California, Davis, and the University of Melbourne in Australia examine in a recent study, “How Do Science Journalists Evaluate Psychology Research?” published in September in Advances in Methods and Practices in Psychological Science.
Their online survey of 181 mostly U.S.-based science journalists looked at how and whether they were influenced by four factors in fictitious research summaries: the sample size (number of participants in the study), sample representativeness (whether the participants in the study were from a convenience sample or a more representative sample), the statistical significance level of the result (just barely statistically significant or well below the significance threshold), and the prestige of a researcher’s university.
The researchers found that sample size was the only factor that had a robust influence on journalists’ ratings of how trustworthy and newsworthy a study finding was.
University prestige had no effect, while the effects of sample representativeness and statistical significance were inconclusive.
But there’s nuance to the findings, the authors note.
“I don’t want people to think that science journalists aren’t paying attention to other things, and are only paying attention to sample size,” says Julia Bottesini, an independent researcher, a recent Ph.D. graduate from the Psychology Department at UC Davis, and the first author of the study.
Overall, the results show that “these journalists are doing a very decent job” vetting research findings, Bottesini says.
Also, the findings from the study are not generalizable to all science journalists or other fields of research, the authors note.
“Instead, our conclusions should be circumscribed to U.S.-based science journalists who are at least somewhat familiar with the statistical and replication challenges facing science,” they write. (Over the past decade a series of projects have found that the results of many studies in psychology and other fields can’t be reproduced, leading to what has been called a ‘replication crisis.’)
“This [study] is just one tiny brick in the wall and I hope other people get excited about this topic and do more research on it,” Bottesini says.
More on the study’s findings
The study’s findings can be useful for researchers who want to better understand how science journalists read their research and what kind of intervention — such as teaching journalists about statistics — can help journalists better understand research papers.
“As an academic, I take away the idea that journalists are a great population to try to study because they’re doing something really important and it’s important to know more about what they’re doing,” says Ellen Peters, director of the Center for Science Communication Research at the School of Journalism and Communication at the University of Oregon. Peters, who was not involved in the study, is also a psychologist who studies human judgment and decision-making.
Peters says the study was “overall terrific.” She adds that understanding how journalists do their work “is an incredibly important thing to do because journalists are who reach the majority of the U.S. with science news, so understanding how they’re reading some of our scientific studies and then choosing whether to write about them or not is important.”
The study, conducted between December 2020 and March 2021, is based on an online survey of journalists who said they at least sometimes covered science or other topics related to health, medicine, psychology, social sciences, or well-being. They were offered a $25 Amazon gift card as compensation.
Among the participants, 77% were women, 19% were men, 3% were nonbinary and 1% preferred not to say. About 62% said they had studied physical or natural sciences at the undergraduate level, and 24% at the graduate level. Also, 48% reported having a journalism degree. The study did not include the journalists’ news reporting experience level.
Participants were recruited through the professional network of Christie Aschwanden, an independent journalist and consultant on the study, which could be a source of bias, the authors note.
“Although the size of the sample we obtained (N = 181) suggests we were able to collect a range of perspectives, we suspect this sample is biased by an ‘Aschwanden effect’: that science journalists in the same professional network as C. Aschwanden will be more familiar with issues related to the replication crisis in psychology and subsequent methodological reform, a topic C. Aschwanden has covered extensively in her work,” they write.
Participants were randomly presented with eight of 22 one-paragraph fictitious social and personality psychology research summaries with fictitious authors. The summaries are posted on Open Science Framework, a free and open-source project management tool for researchers by the Center for Open Science, with a mission to increase openness, integrity and reproducibility of research.
For instance, one of the vignettes reads:
“Scientists at Harvard University announced today the results of a study exploring whether introspection can improve cooperation. 550 undergraduates at the university were randomly assigned to either do a breathing exercise or reflect on a series of questions designed to promote introspective thoughts for 5 minutes. Participants then engaged in a cooperative decision-making game, where cooperation resulted in better outcomes. People who spent time on introspection performed significantly better at these cooperative games (t (548) = 3.21, p = 0.001). ‘Introspection seems to promote better cooperation between people,’ says Dr. Quinn, the lead author on the paper.”
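(For readers who want to check numbers like these, here’s a minimal Python sketch, assuming scipy is installed, that converts the vignette’s reported t-statistic into a two-tailed p-value; the figures come from the fictitious summary above, not from any real study.)

```python
# Minimal sanity check of the vignette's (fictitious) statistics.
from scipy import stats

t_value = 3.21  # reported t statistic from the vignette
df = 548        # degrees of freedom: t(548), i.e. 550 participants minus 2

# Two-tailed p-value: the chance of a t at least this extreme if there were no effect
p_value = 2 * stats.t.sf(t_value, df)
print(f"p = {p_value:.4f}")  # roughly 0.0013, consistent with the reported p = 0.001
```

A result that far below the conventional 0.05 threshold is the kind of “well below the significance threshold” case that the survey contrasted with “just barely significant” findings.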
In addition to answering multiple-choice survey questions, participants were given the opportunity to answer open-ended questions, such as “What characteristics do you [typically] consider when evaluating the trustworthiness of a scientific finding?”
Bottesini says those responses illuminated how science journalists analyze a research study. Participants often mentioned the prestige of the journal in which it was published or whether the study had been peer-reviewed. Many also seemed to value experimental research designs over observational studies.
Considering statistical significance
When it came to considering p-values, “some answers suggested that journalists do take statistical significance into account, but only very few included explanations that suggested they made any distinction between higher or lower p values; instead, most mentions of p values suggest journalists focused on whether the key result was statistically significant,” the authors write.
Also, many participants mentioned that it was very important to talk to outside experts or researchers in the same field to get a better understanding of the finding and whether it could be trusted, the authors write.
“Journalists also expressed that it was important to understand who funded the study and whether the researchers or funders had any conflicts of interest,” they write.
Participants also “indicated that making claims that were calibrated to the evidence was also important and expressed misgivings about studies for which the conclusions do not follow from the evidence,” the authors write.
In response to the open-ended question, “What characteristics do you [typically] consider when evaluating the trustworthiness of a scientific finding?” some journalists wrote they checked whether the study was overstating conclusions or claims. Below are some of their written responses:
- “Is the researcher adamant that this study of 40 college kids is representative? If so, that’s a red flag.”
- “Whether authors make sweeping generalizations based on the study or take a more measured approach to sharing and promoting it.”
- “Another major point for me is how ‘certain’ the scientists appear to be when commenting on their findings. If a researcher makes claims which I consider to be over-the-top about the validity or impact of their findings, I often won’t cover.”
- “I also look at the difference between what an experiment actually shows versus the conclusion researchers draw from it — if there’s a big gap, that’s a huge red flag.”
Peters says the study’s findings show that “not only are journalists smart, but they have also gone out of their way to get educated about things that should matter.”
What other research shows about science journalists
A 2023 study, published in the International Journal of Communication, based on an online survey of 82 U.S. science journalists, aims to understand what they know and think about open-access research, including peer-reviewed journals and articles that don’t have a paywall, and preprints. Data was collected between October 2021 and February 2022. Preprints are scientific studies that have yet to be peer-reviewed and are shared on open repositories such as medRxiv and bioRxiv. The study finds that its respondents “are aware of OA and related issues and make conscious decisions around which OA scholarly articles they use as sources.”
A 2021 study, published in the Journal of Science Communication, looks at the impact of the COVID-19 pandemic on the work of science journalists. Based on an online survey of 633 science journalists from 77 countries, it finds that the pandemic somewhat brought scientists and science journalists closer together. “For most respondents, scientists were more available and more talkative,” the authors write. The pandemic has also provided an opportunity to explain the scientific process to the public, and remind them that “science is not a finished enterprise,” the authors write.
More than a decade ago, a 2008 study, published in PLOS Medicine, and based on an analysis of 500 health news stories, found that “journalists usually fail to discuss costs, the quality of the evidence, the existence of alternative options, and the absolute magnitude of potential benefits and harms,” when reporting on research studies. Giving time to journalists to research and understand the studies, giving them space for publication and broadcasting of the stories, and training them in understanding academic research are some of the solutions to fill the gaps, writes Gary Schwitzer, the study author.
Advice for journalists
We asked Bottesini, Peters, Aschwanden and Tamar Wilner, a postdoctoral fellow at the University of Texas, who was not involved in the study, to share advice for journalists who cover research studies. Wilner is conducting a study on how journalism research informs the practice of journalism. Here are their tips:
1. Examine the study before reporting it.
Does the study claim match the evidence? “One thing that makes me trust the paper more is if their interpretation of the findings is very calibrated to the kind of evidence that they have,” says Bottesini. In other words, if the study makes a claim in its results that’s far-fetched, the authors should present a lot of evidence to back that claim.
Not all surprising results are newsworthy. If you come across a surprising finding from a single study, Peters advises you to step back and remember Carl Sagan’s quote: “Extraordinary claims require extraordinary evidence.”
How transparent are the authors about their data? For instance, are the authors posting information such as their data and the computer codes they use to analyze the data on platforms such as Open Science Framework, AsPredicted, or The Dataverse Project? Some researchers ‘preregister’ their studies, which means they share how they’re planning to analyze the data before they see them. “Transparency doesn’t automatically mean that a study is trustworthy,” but it gives others the chance to double-check the findings, Bottesini says.
Look at the study design. Is it an experimental study or an observational study? Observational studies can show correlations but not causation.
“Observational studies can be very important for suggesting hypotheses and pointing us towards relationships and associations,” Aschwanden says.
Experimental studies can provide stronger evidence toward a cause, but journalists must still be cautious when reporting the results, she advises. “If we end up implying causality, then once it’s published and people see it, it can really take hold,” she says.
Know the difference between preprints and peer-reviewed, published studies. Peer-reviewed papers tend to be of higher quality than those that are not peer-reviewed. Read our tip sheet on the difference between preprints and journal articles.
Beware of predatory journals. Predatory journals “claim to be legitimate scholarly journals, but misrepresent their publishing practices,” according to a 2020 article in the journal Toxicologic Pathology, “Predatory Journals: What They Are and How to Avoid Them.”
2. Zoom in on data.
Read the methods section of the study. The methods section of the study usually appears after the introduction and background section. “To me, the methods section is almost the most important part of any scientific paper,” says Aschwanden. “It’s amazing to me how often you read the design and the methods section, and anyone can see that it’s a flawed design. So just giving things a gut-level check can be really important.”
What’s the sample size? Not all good studies have large numbers of participants but pay attention to the claims a study makes with a small sample size. “If you have a small sample, you calibrate your claims to the things you can tell about those people and don’t make big claims based on a little bit of evidence,” says Bottesini.
But also remember that factors such as sample size and p-value are not “as clear cut as some journalists might assume,” says Wilner.
How representative of a population is the study sample? “If the study has a non-representative sample of, say, undergraduate students, and they’re making claims about the general population, that’s kind of a red flag,” says Bottesini. Aschwanden points to the acronym WEIRD, which stands for “Western, Educated, Industrialized, Rich, and Democratic,” and is used to highlight a lack of diversity in a sample. Studies based on such samples may not be generalizable to the entire population, she says.
Look at the p-value. Statistical significance is both confusing and controversial, but it’s important to consider. Read our tip sheet, “5 Things Journalists Need to Know About Statistical Significance,” to better understand it.
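To make the sample-size and p-value tips concrete, here’s a small hypothetical Python sketch (the effect size and group sizes are invented for illustration, not taken from the study): the same modest observed difference between two groups is nowhere near significant with 40 people per group, yet highly significant with 550 per group.

```python
# Hypothetical illustration of how sample size changes the p-value for the
# same observed effect. All numbers are invented for illustration only.
import math
from scipy import stats

observed_difference = 0.3  # difference between group means, in standard-deviation units

for n_per_group in (40, 550):
    standard_error = math.sqrt(2 / n_per_group)   # assumes a pooled SD of 1
    t = observed_difference / standard_error      # two-sample t statistic
    df = 2 * n_per_group - 2
    p = 2 * stats.t.sf(abs(t), df)                # two-tailed p-value
    print(f"n = {n_per_group:3d} per group -> t = {t:.2f}, p = {p:.4f}")
```

The point is not that small studies are worthless, but that claims drawn from them should be correspondingly modest.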
3. Talk to scientists not involved in the study.
If you’re not sure about the quality of a study, ask for help. “Talk to someone who is an expert in study design or statistics to make sure that [the study authors] use the appropriate statistics and that methods they use are appropriate because it’s amazing to me how often they’re not,” says Aschwanden.
Get an opinion from an outside expert. It’s always a good idea to present the study to other researchers in the field, who have no conflicts of interest and are not involved in the research you’re covering and get their opinion. “Don’t take scientists at their word. Look into it. Ask other scientists, preferably the ones who don’t have a conflict of interest with the research,” says Bottesini.
4. Remember that a single study is simply one piece of a growing body of evidence.
“I have a general rule that a single study doesn’t tell us very much; it just gives us proof of concept,” says Peters. “It gives us interesting ideas. It should be retested. We need an accumulation of evidence.”
Aschwanden says as a practice, she tries to avoid reporting stories about individual studies, with some exceptions such as very large, randomized controlled studies that have been underway for a long time and have a large number of participants. “I don’t want to say you never want to write a single-study story, but it always needs to be placed in the context of the rest of the evidence that we have available,” she says.
Wilner advises journalists to spend some time looking at the scope of research on the study’s specific topic and learn how it has been written about and studied up to that point.
“We would want science journalists to be reporting balance of evidence, and not focusing unduly on the findings that are just in front of them in a most recent study,” Wilner says. “And that’s a very difficult thing to ask journalists to do because they’re being asked to make their article very newsy, so it’s a difficult balancing act, but we can try and push journalists to do more of that.”
5. Remind readers that science is always changing.
“Science is always two steps forward, one step back,” says Peters. Give the public a notion of uncertainty, she advises. “This is what we know today. It may change tomorrow, but this is the best science that we know of today.”
Aschwanden echoes the sentiment. “All scientific results are provisional, and we need to keep that in mind,” she says. “It doesn’t mean that we can’t know anything, but it’s very important that we don’t overstate things.”
Authors of a study published in PNAS in January analyzed more than 14,000 psychology papers and found that replication success rates differ widely by psychology subfields. That study also found that papers that could not be replicated received more initial press coverage than those that could.
The authors note that the media “plays a significant role in creating the public’s image of science and democratizing knowledge, but it is often incentivized to report on counterintuitive and eye-catching results.”
Ideally, the news media would have a positive relationship with replication success rates in psychology, the authors of the PNAS study write. “Contrary to this ideal, however, we found a negative association between media coverage of a paper and the paper’s likelihood of replication success,” they write. “Therefore, deciding a paper’s merit based on its media coverage is unwise. It would be valuable for the media to remind the audience that new and novel scientific results are only food for thought before future replication confirms their robustness.”
Additional reading
- Uncovering the Research Behaviors of Reporters: A Conceptual Framework for Information Literacy in Journalism. Katerine E. Boss, et al. Journalism & Mass Communication Educator, October 2022.
- The Problem with Psychological Research in the Media. Steven Stosny. Psychology Today, September 2022.
- Critically Evaluating Claims. Megha Satyanarayana, The Open Notebook, January 2022.
- How Should Journalists Report a Scientific Study? Charles Binkley and Subramaniam Vincent. Markkula Center for Applied Ethics at Santa Clara University, September 2020.
- What Journalists Get Wrong About Social Science: Full Responses. Brian Resnick. Vox, January 2016.
From The Journalist’s Resource
- 8 Ways Journalists Can Access Academic Research for Free
- 5 Things Journalists Need to Know About Statistical Significance
- 5 Common Research Designs: A Quick Primer for Journalists
- 5 Tips for Using PubPeer to Investigate Scientific Research Errors and Misconduct
- What’s Standard Deviation? 4 Things Journalists Need to Know
This article first appeared on The Journalist’s Resource and is republished here under a Creative Commons license.
Fight Inflammation & Protect Your Brain, With Quercetin
Querying Quercetin
Quercetin is a flavonoid (and thus, antioxidant) pigment found in many plants. Capers, radishes, and coriander/cilantro score highly, but the list is large:
USDA Database for the Flavonoid Content of Selected Foods
Indeed,
❝Their regular consumption is associated with reduced risk of a number of chronic diseases, including cancer, cardiovascular disease (CVD) and neurodegenerative disorders❞
~ Dr. Aleksandra Kozłowska & Dr. Dorota Szostak-Wegierek
Read more: Flavonoids—food sources and health benefits
For this reason, quercetin is often sold/consumed as a supplement on the strength of its health-giving properties.
But what does the science say?
Quercetin and inflammation
In short, it helps:
❝500 mg per day quercetin supplementation for 8 weeks resulted in significant improvements in clinical symptoms, disease activity, hs-TNFα, and Health Assessment Questionnaire scores in women with rheumatoid arthritis❞
Quercetin and blood pressure
It works, if antihypertensive (i.e., blood pressure lowering) effect is what you want/need:
❝…significant effect of quercetin supplementation in the reduction of BP, possibly limited to, or greater with dosages of >500 mg/day.❞
~ Dr. Maria-Corina Serban et al.
Quercetin and diabetes
We’re less confident to claim this one, because (almost?) all of the research so far has been in non-human animals or in vitro. As one team of researchers put it:
❝Despite the wealth of in animal research results suggesting the anti-diabetic and its complications potential of quercetin, its efficacy in diabetic human subjects is yet to be explored❞
Quercetin and neuroprotection
Research has been done into the effect of quercetin on the risk of Parkinson’s disease and Alzheimer’s disease, and they found…
❝The data indicate that quercetin is the major neuroprotective component in coffee against Parkinson’s disease and Alzheimer’s disease❞
Read more: Quercetin, not caffeine, is a major neuroprotective component in coffee
Summary
Quercetin is a wonderful flavonoid that can be enjoyed as part of one’s diet and by supplementation. In terms of its popular health claims:
- It has been found very effective for lowering inflammation
- It has a moderate blood pressure lowering effect
- It may have anti-diabetes potential, but the science is young
- It has been found to have a potent neuroprotective effect
Want to get some?
We don’t sell it, but for your convenience, here’s an example product on Amazon
Enjoy!
Do We Need Sunscreen In Winter, Really?
It’s Q&A Day at 10almonds!
Have a question or a request? We love to hear from you!
In cases where we’ve already covered something, we might link to what we wrote before, but will always be happy to revisit any of our topics again in the future too—there’s always more to say!
As ever: if the question/request can be answered briefly, we’ll do it here in our Q&A Thursday edition. If not, we’ll make a main feature of it shortly afterwards!
So, no question/request too big or small 😎
❝I keep seeing advice that we should wear sunscreen out in winter even if it’s not hot or sunny, but is there actually any real benefit to this?❞
Short answer: yes (but it’s indeed not as critical as it is during summer’s hot/sunny days)
Longer answer: first, let’s examine the physics of summer vs winter when it comes to the sun…
In summer (assuming we live far enough from the equator to have this kind of seasonal variation), the part of the planet where we live is tilted more towards the sun. This makes it marginally closer, but more importantly, the sun is more directly overhead during the day. The difference in distance through space isn’t as big a deal as the difference in distance through the atmosphere. When the sun is more directly overhead, its rays have a shorter path through our atmosphere, and thus less chance of being blocked by cloud cover, scattered elsewhere, or bounced back off into space before they even get that far.
In winter, the opposite of all that is true.
Morning/evening also somewhat replicate this compared to midday, because the sun being lower in the sky has a similar effect to seasonal variation causing it to be less directly overhead.
For this reason, even though visually the sun may be just as bright on a winter morning as it is on a summer midday, the rays have been filtered very differently by the time they get to us.
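For those who like to see that geometry as numbers, here’s a rough back-of-the-envelope Python sketch using the simple flat-atmosphere approximation (relative path length ≈ 1 / cos of the sun’s zenith angle); the elevation angles are illustrative guesses for a mid-latitude location, not measurements.

```python
# Rough back-of-the-envelope sketch: how much more atmosphere sunlight passes
# through when the sun sits lower in the sky. Uses the simple flat-atmosphere
# approximation (1 / cos of the zenith angle); elevation angles are illustrative.
import math

sun_elevations_degrees = {
    "summer midday":  65,  # sun high overhead
    "winter midday":  20,  # sun much lower, even at noon
    "winter morning": 10,  # lower still
}

for label, elevation in sun_elevations_degrees.items():
    zenith = 90 - elevation
    relative_path = 1 / math.cos(math.radians(zenith))  # sun directly overhead = 1.0
    print(f"{label}: ~{relative_path:.1f}x the directly-overhead path length")
```

So even before clouds come into it, a low winter sun is shining through several times as much atmosphere as a high summer sun.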
This is one reason why you’re much less likely to get sunburned in the winter, compared to the summer (others include the actual temperature difference, your likely better hydration, and your likely more modest attire protecting you).
However…
The reason it is advisable to wear sunscreen in winter is not generally about sunburn, and is rather more about long-term cumulative skin damage (ranging from accelerated aging to cancer) caused by the UV rays—specifically, mostly UVA rays, since UVB rays (with their higher energy and shorter wavelength) have nearly all been blocked by the atmosphere.
Here’s a good explainer of that from the American Cancer Society:
UV (Ultraviolet) Radiation and Cancer Risk
👆 this may seem like a no-brainer, but there’s a lot explained here that demystifies a lot of things, covering ionizing vs non-ionizing radiation, x-rays and gamma-rays, the very different kinds of cancer caused by different things, and what things are dangerous vs which there’s no need to worry about (so far as best current science can say, at least).
Consequently: yes, if you value your skin health and avoidance of cancer, wearing sunscreen when out even in the winter is a good idea. Especially if your phone’s weather app says the UV index is “moderate” or above, but even if it’s “low”, it doesn’t hurt to include it as part of your skincare routine.
But what if sunscreens are dangerous?
Firstly, not all sunscreens are created equal:
Learn more: Who Screens The Sunscreens?
Secondly: consider putting on a protective layer of moisturizer first, and then the sunscreen on top. Bear in mind, this is winter we’re talking about, so you’re probably not going out in a bikini, so this is likely a face-neck-hands job and you’re done.
What about vitamin D?
Humans evolved to have more or less melanin in our skin depending on where we lived, and white people evolved to wring the most vitamin D possible out of the meagre sun far from the equator. Black people’s greater melanin, on the other hand, offers some initial protection against the sun (but any resultant skin cancer is then more dangerous than it would be for white people if it does occur, so please do use sunscreen whatever your skintone).
Nowadays many people live in many places which may or may not be the places we evolved for, and so we have to take that into account when it comes to sun exposure.
Here’s a deeper dive into that, for those who want to learn:
Take care!
Healthy Harissa Falafel Patties
You can make these as regular falafel balls if you prefer, but patties are quicker and easier to cook, and are great for popping in a pitta.
You will need
For the falafels:
- 1 can chickpeas, drained, keep the chickpea water (aquafaba)
- 1 red onion, roughly chopped
- 2 tbsp chickpea flour (also called gram flour or garbanzo bean flour)
- 1 bunch parsley
- 1 tbsp harissa paste
- Extra virgin olive oil for frying
For the harissa sauce:
- ½ cup crème fraîche or plant-based equivalent (you can use our Plant-Based Healthy Cream Cheese recipe and add the juice of 1 lemon)*
- 1 tbsp harissa paste (or adjust this quantity per your heat preference)
*if doing this, rather than waste the zest of the lemon, you can add the zest to the falafels if you like, but it’s by no means necessary, just an option
For serving:
- Wholegrain pitta or other flatbread (you can use our Healthy Homemade Flatbreads recipe)
- Salad (your preference; we recommend some salad leaves, sliced tomato, sliced cucumber, maybe some sliced onion, that sort of thing)
Method
(we suggest you read everything at least once before doing anything)
1) Blend the chickpeas, 1 oz of the aquafaba, the onion, the parsley, and the harissa paste, until smooth. Then add in the chickpea flour until you get a thick batter. If you overdo it with the chickpea flour, add a little more of the aquafaba to equalize. Refrigerate the mixture for at least 30 minutes.
2) Heat some oil in a skillet, and spoon the falafel mixture into the pan to make the patties, cooking on both sides (you can use a spatula to gently turn them), and set them aside.
3) Mix the harissa sauce ingredients in a small bowl.
4) Assemble; best served warm, but enjoy it however you like!
Enjoy!
Want to learn more?
For those interested in more of what we have going on today:
- Why You’re Probably Not Getting Enough Fiber (And How To Fix It)
- Capsaicin For Weight Loss And Against Inflammation
- Hero Homemade Hummus ← another great option
Take care!