Hearing voices is common and can be distressing. Virtual reality might help us meet and ‘treat’ them
10almonds is reader-supported. We may, at no cost to you, receive a portion of sales if you purchase a product through a link in this article.
Have you ever heard something that others cannot hear – such as your name being called? Hearing voices or other noises that aren’t there is very common. About 10% of people report experiencing auditory hallucinations at some point in their life.
The experience of hearing voices can be very different from person to person, and can change over time. They might be the voice of someone familiar or unknown. There might be many voices, or just one or two. They can be loud or quiet like a whisper.
For some people these experiences are positive. They might represent a spiritual or supernatural experience they welcome or a comforting presence. But for others these experiences are distressing. Voices can be intrusive, negative, critical or threatening. Difficult voices can make a person feel worried, frightened, embarrassed or frustrated. They can also make it hard to concentrate, be around other people and get in the way of day-to-day activities.
Although not everyone who hears voices has a mental health problem, these experiences are much more common in people who do. They have been considered a hallmark symptom of schizophrenia, which affects about 24 million people worldwide.
However, such experiences are also common in other mental health problems, particularly in mood- and trauma-related disorders (such as bipolar disorder or depression and post-traumatic stress disorder) where as many as half of people may experience them.
Why do people hear voices?
It is unclear exactly why people hear voices but exposure to prolonged stress, trauma or depression can increase the chances.
Some research suggests people who hear voices might have brains that are “wired” differently, particularly between the hearing and speaking parts of the brain. This may mean parts of our inner speech can be experienced as external voices. So, having the thought “you are useless” when something goes wrong might be experienced as an external person speaking the words.
Other research suggests it may relate to how our brains use past experiences as a template to make sense of and make predictions about the world. Sometimes those templates can be so strong they lead to errors in how we experience what is going on around us, including hearing things our brain is “expecting” rather than what is really happening.
What is clear is that when people tell us they are hearing voices, they really are! Their brain perceives voice experiences as if someone were talking in the room. We could think of this “mistake” as working a bit like being susceptible to common optical tricks or visual illusions.
Coping with hearing voices
When hearing voices is getting in the way of life, treatment guidelines recommend the use of medications. But roughly a third of people will experience ongoing distress. As such, treatment guidelines also recommend the use of psychological therapies such as cognitive behavioural therapy.
The next generation of psychological therapies is beginning to use digital technologies, and virtual reality offers a promising new medium.
Avatar therapy allows a person to create a virtual representation of the voice or voices, which looks and sounds like what they are experiencing. This can help people regain power in the “relationship” as they interact with the voice character, supported by a therapist.
Jason’s experience
Aged 53, Jason (not his real name) had struggled with persistent voices since his early 20s. Antipsychotic medication had helped him to some extent over the years, but he was still living with distressing voices. Jason tried out avatar therapy as part of a research trial.
He was initially unable to stand up to the voices, but he slowly gained confidence and tested out different ways of responding to the avatar and voices with his therapist’s support.
Jason became more able to set boundaries, such as not listening to them for periods throughout the day. He also felt more able to challenge what they said and make his own choices.
Over a couple of months, Jason started to experience some breaks from the voices each day and his relationship with them started to change. They were no longer like bullies, but more like critical friends pointing out things he could consider or be aware of.
Gaining recognition
Following promising results overseas and its recommendation by the United Kingdom’s National Institute for Health and Care Excellence, our team has begun adapting the therapy for an Australian context.
We are trialling delivering avatar therapy from our specialist voices clinic via telehealth. We are also testing whether avatar therapy is more effective than the current standard therapy for hearing voices, based on cognitive behavioural therapy.
As only a minority of people with psychosis receive specialist psychological therapy for hearing voices, we hope our trial will support scaling up these new treatments to be available more routinely across the country.
We would like to acknowledge the advice and input of Dr Nadine Keen (consultant clinical psychologist at South London and Maudsley NHS Foundation Trust, UK) on this article.
Leila Jameel, Trial Co-ordinator and Research Therapist, Swinburne University of Technology; Imogen Bell, Senior Research Fellow and Psychologist, The University of Melbourne; Neil Thomas, Professor of Clinical Psychology, Swinburne University of Technology, and Rachel Brand, Senior Lecturer in Clinical Psychology, University of the Sunshine Coast
This article is republished from The Conversation under a Creative Commons license. Read the original article.
Recommended
Is ADHD Being Over-Diagnosed For Cash?
Is ADHD Being Systematically Overdiagnosed?
The BBC’s investigative “Panorama” program recently did a documentary in which one of their journalists—who does not have ADHD—went to three private clinics and got an ADHD diagnosis from each of them:
- The BBC documentary: Private ADHD Clinics Exposed (28 mins)
- Their “5 Minutes” version: ADHD Undercover: How I Was Misdiagnosed (6 mins)
So… Is it really a case of show up, pay up, and get a shiny new diagnosis?
The BBC Panorama producers cherry-picked 3 private providers, and during those clinical assessments, their journalist provided answers that would certainly lead to a diagnosis.
This was contrasted against a three-hour assessment with an NHS psychiatrist—something that rarely happens in the NHS. Which prompts the question…
How did he walk straight into a three-hour psychiatric assessment, when most people must first sit on a long waiting list for a much more cursory appointment with assorted gatekeepers, and then on another long waiting list for an also-much-shorter appointment with a psychiatrist?
That would be because the NHS psychiatrist was given advance notification that this was part of an investigation and would be filmed (the private clinics were not afforded the same transparency).
So, maybe just a tad unequal treatment!
In case you’re wondering, here’s what that very NHS psychiatrist had to say on the topic:
Is it really too easy to be diagnosed with ADHD?
(we’ll give you a hint—remember Betteridge’s Law!)
❝Since the documentary aired, I have heard from people concerned that GPs could now be more likely to question legitimate diagnoses.
But as an NHS psychiatrist it is clear to me that the root of this issue is not overdiagnosis.
Instead, we are facing the combined challenges of remedying decades of underdiagnosis and NHS services that were set up when there was little awareness of ADHD.❞
~ Dr. Mike Smith, Psychiatrist
The ADHD Foundation, meanwhile, has issued its own response, saying:
❝We are disappointed that BBC Panorama has opted to broadcast a poorly researched, sensationalist piece of television journalism.❞
The Joy of Saying No – by Natalie Lue
Superficially, this seems an odd topic for an entire book. “Just say no”, after all, surely! But it’s not so simple as that, is it?
Lue looks into what underpins people-pleasing, first. Then, she breaks it down into five distinct styles of people-pleasing that each come from slightly different motivations and ways of perceiving how we interact with those around us.
Lest this seem overly complicated, those five styles are what she calls: gooding, efforting, avoiding, saving, suffering.
She then looks at how to have a healthier relationship with our yes/no decisions; first by observing, then by creating healthy boundaries. “Healthy” is key here; this isn’t about being a jerk to everyone! Quite the contrary, it involves being honest about what we can and cannot reasonably take on.
The last section is about improving and troubleshooting this process, and constitutes a lot of the greatest value of the book, since this is where people tend to err the most.
Bottom line: this book is informative, clear, and helpful. And far from disappointing everyone with “no”, we can learn to really de-stress our relationships with others—and ourselves.
How do science journalists decide whether a psychology study is worth covering?
Complex research papers and data flood academic journals daily, and science journalists play a pivotal role in disseminating that information to the public. This can be a daunting task, requiring a keen understanding of the subject matter and the ability to translate dense academic language into narratives that resonate with the general public.
Several resources and tip sheets, including the Know Your Research section here at The Journalist’s Resource, aim to help journalists hone their skills in reporting on academic research.
But what factors do science journalists look for to decide whether a social science research study is trustworthy and newsworthy? That’s the question researchers at the University of California, Davis, and the University of Melbourne in Australia examine in a recent study, “How Do Science Journalists Evaluate Psychology Research?” published in September in Advances in Methods and Practices in Psychological Science.
Their online survey of 181 mostly U.S.-based science journalists looked at how and whether they were influenced by four factors in fictitious research summaries: the sample size (number of participants in the study), sample representativeness (whether the participants in the study were from a convenience sample or a more representative sample), the statistical significance level of the result (just barely statistically significant or well below the significance threshold), and the prestige of a researcher’s university.
The researchers found that sample size was the only factor that had a robust influence on journalists’ ratings of how trustworthy and newsworthy a study finding was.
University prestige had no effect, while the effects of sample representativeness and statistical significance were inconclusive.
But there’s nuance to the findings, the authors note.
“I don’t want people to think that science journalists aren’t paying attention to other things, and are only paying attention to sample size,” says Julia Bottesini, an independent researcher, a recent Ph.D. graduate from the Psychology Department at UC Davis, and the first author of the study.
Overall, the results show that “these journalists are doing a very decent job” vetting research findings, Bottesini says.
Also, the findings from the study are not generalizable to all science journalists or other fields of research, the authors note.
“Instead, our conclusions should be circumscribed to U.S.-based science journalists who are at least somewhat familiar with the statistical and replication challenges facing science,” they write. (Over the past decade a series of projects have found that the results of many studies in psychology and other fields can’t be reproduced, leading to what has been called a ‘replication crisis.’)
“This [study] is just one tiny brick in the wall and I hope other people get excited about this topic and do more research on it,” Bottesini says.
More on the study’s findings
The study’s findings can be useful for researchers who want to better understand how science journalists read their research and what kind of intervention — such as teaching journalists about statistics — can help journalists better understand research papers.
“As an academic, I take away the idea that journalists are a great population to try to study because they’re doing something really important and it’s important to know more about what they’re doing,” says Ellen Peters, director of the Center for Science Communication Research at the School of Journalism and Communication at the University of Oregon. Peters, who was not involved in the study, is also a psychologist who studies human judgment and decision-making.
Peters says the study was “overall terrific.” She adds that understanding how journalists do their work “is an incredibly important thing to do because journalists are who reach the majority of the U.S. with science news, so understanding how they’re reading some of our scientific studies and then choosing whether to write about them or not is important.”
The study, conducted between December 2020 and March 2021, is based on an online survey of journalists who said they at least sometimes covered science or other topics related to health, medicine, psychology, social sciences, or well-being. They were offered a $25 Amazon gift card as compensation.
Among the participants, 77% were women, 19% were men, 3% were nonbinary and 1% preferred not to say. About 62% said they had studied physical or natural sciences at the undergraduate level, and 24% at the graduate level. Also, 48% reported having a journalism degree. The study did not include the journalists’ news reporting experience level.
Participants were recruited through the professional network of Christie Aschwanden, an independent journalist and consultant on the study, which could be a source of bias, the authors note.
“Although the size of the sample we obtained (N = 181) suggests we were able to collect a range of perspectives, we suspect this sample is biased by an ‘Aschwanden effect’: that science journalists in the same professional network as C. Aschwanden will be more familiar with issues related to the replication crisis in psychology and subsequent methodological reform, a topic C. Aschwanden has covered extensively in her work,” they write.
Participants were randomly presented with eight of 22 one-paragraph fictitious social and personality psychology research summaries with fictitious authors. The summaries are posted on Open Science Framework, a free and open-source project management tool for researchers by the Center for Open Science, with a mission to increase openness, integrity and reproducibility of research.
For instance, one of the vignettes reads:
“Scientists at Harvard University announced today the results of a study exploring whether introspection can improve cooperation. 550 undergraduates at the university were randomly assigned to either do a breathing exercise or reflect on a series of questions designed to promote introspective thoughts for 5 minutes. Participants then engaged in a cooperative decision-making game, where cooperation resulted in better outcomes. People who spent time on introspection performed significantly better at these cooperative games (t (548) = 3.21, p = 0.001). ‘Introspection seems to promote better cooperation between people,’ says Dr. Quinn, the lead author on the paper.”
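For readers curious about the numbers in a vignette like that, here is a minimal Python sketch (our own illustration, not part of the study, and it assumes the scipy package is installed) showing how a reported t statistic and its degrees of freedom can be turned back into a two-tailed p-value as a quick sanity check:

```python
from scipy import stats

# Values reported in the fictitious vignette: t(548) = 3.21
t_value = 3.21
degrees_of_freedom = 548

# Two-tailed p-value: the probability of a t statistic at least this
# extreme (in either direction) if there were no real effect
p_value = 2 * stats.t.sf(t_value, df=degrees_of_freedom)

print(f"p = {p_value:.4f}")  # roughly 0.0014, consistent with the reported p = 0.001
```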
In addition to answering multiple-choice survey questions, participants were given the opportunity to answer open-ended questions, such as “What characteristics do you [typically] consider when evaluating the trustworthiness of a scientific finding?”
Bottesini says those responses illuminated how science journalists analyze a research study. Participants often mentioned the prestige of the journal in which it was published or whether the study had been peer-reviewed. Many also seemed to value experimental research designs over observational studies.
Considering statistical significance
When it came to considering p-values, “some answers suggested that journalists do take statistical significance into account, but only very few included explanations that suggested they made any distinction between higher or lower p values; instead, most mentions of p values suggest journalists focused on whether the key result was statistically significant,” the authors write.
Also, many participants mentioned that it was very important to talk to outside experts or researchers in the same field to get a better understanding of the finding and whether it could be trusted, the authors write.
“Journalists also expressed that it was important to understand who funded the study and whether the researchers or funders had any conflicts of interest,” they write.
Participants also “indicated that making claims that were calibrated to the evidence was also important and expressed misgivings about studies for which the conclusions do not follow from the evidence,” the authors write.
In response to the open-ended question, “What characteristics do you [typically] consider when evaluating the trustworthiness of a scientific finding?” some journalists wrote they checked whether the study was overstating conclusions or claims. Below are some of their written responses:
- “Is the researcher adamant that this study of 40 college kids is representative? If so, that’s a red flag.”
- “Whether authors make sweeping generalizations based on the study or take a more measured approach to sharing and promoting it.”
- “Another major point for me is how ‘certain’ the scientists appear to be when commenting on their findings. If a researcher makes claims which I consider to be over-the-top about the validity or impact of their findings, I often won’t cover.”
- “I also look at the difference between what an experiment actually shows versus the conclusion researchers draw from it — if there’s a big gap, that’s a huge red flag.”
Peters says the study’s findings show that “not only are journalists smart, but they have also gone out of their way to get educated about things that should matter.”
What other research shows about science journalists
A 2023 study, published in the International Journal of Communication, based on an online survey of 82 U.S. science journalists, aims to understand what they know and think about open-access research, including peer-reviewed journals and articles that don’t have a paywall, and preprints. Data was collected between October 2021 and February 2022. Preprints are scientific studies that have yet to be peer-reviewed and are shared on open repositories such as medRxiv and bioRxiv. The study finds that its respondents “are aware of OA and related issues and make conscious decisions around which OA scholarly articles they use as sources.”
A 2021 study, published in the Journal of Science Communication, looks at the impact of the COVID-19 pandemic on the work of science journalists. Based on an online survey of 633 science journalists from 77 countries, it finds that the pandemic somewhat brought scientists and science journalists closer together. “For most respondents, scientists were more available and more talkative,” the authors write. The pandemic has also provided an opportunity to explain the scientific process to the public, and remind them that “science is not a finished enterprise,” the authors write.
More than a decade ago, a 2008 study, published in PLOS Medicine, and based on an analysis of 500 health news stories, found that “journalists usually fail to discuss costs, the quality of the evidence, the existence of alternative options, and the absolute magnitude of potential benefits and harms,” when reporting on research studies. Giving time to journalists to research and understand the studies, giving them space for publication and broadcasting of the stories, and training them in understanding academic research are some of the solutions to fill the gaps, writes Gary Schwitzer, the study author.
Advice for journalists
We asked Bottesini, Peters, Aschwanden and Tamar Wilner, a postdoctoral fellow at the University of Texas, who was not involved in the study, to share advice for journalists who cover research studies. Wilner is conducting a study on how journalism research informs the practice of journalism. Here are their tips:
1. Examine the study before reporting it.
Does the study claim match the evidence? “One thing that makes me trust the paper more is if their interpretation of the findings is very calibrated to the kind of evidence that they have,” says Bottesini. In other words, if the study makes a claim in its results that’s far-fetched, the authors should present a lot of evidence to back that claim.
Not all surprising results are newsworthy. If you come across a surprising finding from a single study, Peters advises you to step back and remember Carl Sagan’s quote: “Extraordinary claims require extraordinary evidence.”
How transparent are the authors about their data? For instance, are the authors posting information such as their data and the computer code they use to analyze the data on platforms such as Open Science Framework, AsPredicted, or The Dataverse Project? Some researchers ‘preregister’ their studies, which means they share how they’re planning to analyze the data before they see them. “Transparency doesn’t automatically mean that a study is trustworthy,” but it gives others the chance to double-check the findings, Bottesini says.
Look at the study design. Is it an experimental study or an observational study? Observational studies can show correlations but not causation.
“Observational studies can be very important for suggesting hypotheses and pointing us towards relationships and associations,” Aschwanden says.
Experimental studies can provide stronger evidence toward a cause, but journalists must still be cautious when reporting the results, she advises. “If we end up implying causality, then once it’s published and people see it, it can really take hold,” she says.
Know the difference between preprints and peer-reviewed, published studies. Peer-reviewed papers tend to be of higher quality than those that are not peer-reviewed. Read our tip sheet on the difference between preprints and journal articles.
Beware of predatory journals. Predatory journals are journals that “claim to be legitimate scholarly journals, but misrepresent their publishing practices,” according to a 2020 journal article, published in the journal Toxicologic Pathology, “Predatory Journals: What They Are and How to Avoid Them.”
2. Zoom in on data.
Read the methods section of the study. The methods section of the study usually appears after the introduction and background section. “To me, the methods section is almost the most important part of any scientific paper,” says Aschwanden. “It’s amazing to me how often you read the design and the methods section, and anyone can see that it’s a flawed design. So just giving things a gut-level check can be really important.”
What’s the sample size? Not all good studies have large numbers of participants, but pay attention to the claims a study makes with a small sample size (there’s a short illustrative sketch after these tips). “If you have a small sample, you calibrate your claims to the things you can tell about those people and don’t make big claims based on a little bit of evidence,” says Bottesini.
But also remember that factors such as sample size and p-value are not “as clear cut as some journalists might assume,” says Wilner.
How representative of a population is the study sample? “If the study has a non-representative sample of, say, undergraduate students, and they’re making claims about the general population, that’s kind of a red flag,” says Bottesini. Aschwanden points to the acronym WEIRD, which stands for “Western, Educated, Industrialized, Rich, and Democratic,” and is used to highlight a lack of diversity in a sample. Studies based on such samples may not be generalizable to the entire population, she says.
Look at the p-value. Statistical significance is both confusing and controversial, but it’s important to consider. Read our tip sheet, “5 Things Journalists Need to Know About Statistical Significance,” to better understand it.
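To make the sample-size point above a little more concrete, here is a short Python sketch (again our own illustration, assuming the statsmodels package and the conventional choices of a 0.05 significance level and 80% power) estimating how many participants a simple two-group experiment needs in order to reliably detect small, medium, and large effects:

```python
from statsmodels.stats.power import TTestIndPower

power_analysis = TTestIndPower()

# Participants needed per group for a two-sample t-test at alpha = 0.05
# and 80% power, across "large" (0.8), "medium" (0.5), and "small" (0.2)
# effect sizes (Cohen's d)
for effect_size in (0.8, 0.5, 0.2):
    n_per_group = power_analysis.solve_power(
        effect_size=effect_size, alpha=0.05, power=0.8
    )
    print(f"Cohen's d = {effect_size}: about {n_per_group:.0f} participants per group")

# Prints roughly 26, 64, and 393 per group: a small study can only reliably
# detect large effects, which is one reason sweeping claims from small
# samples are a red flag.
```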
3. Talk to scientists not involved in the study.
If you’re not sure about the quality of a study, ask for help. “Talk to someone who is an expert in study design or statistics to make sure that [the study authors] use the appropriate statistics and that methods they use are appropriate because it’s amazing to me how often they’re not,” says Aschwanden.
Get an opinion from an outside expert. It’s always a good idea to present the study to other researchers in the field who have no conflicts of interest and are not involved in the research you’re covering, and get their opinion. “Don’t take scientists at their word. Look into it. Ask other scientists, preferably the ones who don’t have a conflict of interest with the research,” says Bottesini.
4. Remember that a single study is simply one piece of a growing body of evidence.
“I have a general rule that a single study doesn’t tell us very much; it just gives us proof of concept,” says Peters. “It gives us interesting ideas. It should be retested. We need an accumulation of evidence.”
Aschwanden says as a practice, she tries to avoid reporting stories about individual studies, with some exceptions such as very large, randomized controlled studies that have been underway for a long time and have a large number of participants. “I don’t want to say you never want to write a single-study story, but it always needs to be placed in the context of the rest of the evidence that we have available,” she says.
Wilner advises journalists to spend some time looking at the scope of research on the study’s specific topic and learn how it has been written about and studied up to that point.
“We would want science journalists to be reporting balance of evidence, and not focusing unduly on the findings that are just in front of them in a most recent study,” Wilner says. “And that’s a very difficult thing to ask journalists to do, because they’re being asked to make their article very newsy, so it’s a difficult balancing act, but we can try and push journalists to do more of that.”
5. Remind readers that science is always changing.
“Science is always two steps forward, one step back,” says Peters. Give the public a notion of uncertainty, she advises. “This is what we know today. It may change tomorrow, but this is the best science that we know of today.”
Aschwanden echoes the sentiment. “All scientific results are provisional, and we need to keep that in mind,” she says. “It doesn’t mean that we can’t know anything, but it’s very important that we don’t overstate things.”
Authors of a study published in PNAS in January analyzed more than 14,000 psychology papers and found that replication success rates differ widely by psychology subfields. That study also found that papers that could not be replicated received more initial press coverage than those that could.
The authors note that the media “plays a significant role in creating the public’s image of science and democratizing knowledge, but it is often incentivized to report on counterintuitive and eye-catching results.”
Ideally, the news media would have a positive relationship with replication success rates in psychology, the authors of the PNAS study write. “Contrary to this ideal, however, we found a negative association between media coverage of a paper and the paper’s likelihood of replication success,” they write. “Therefore, deciding a paper’s merit based on its media coverage is unwise. It would be valuable for the media to remind the audience that new and novel scientific results are only food for thought before future replication confirms their robustness.”
Additional reading
- Uncovering the Research Behaviors of Reporters: A Conceptual Framework for Information Literacy in Journalism. Katerine E. Boss, et al. Journalism & Mass Communication Educator, October 2022.
- The Problem with Psychological Research in the Media. Steven Stosny. Psychology Today, September 2022.
- Critically Evaluating Claims. Megha Satyanarayana, The Open Notebook, January 2022.
- How Should Journalists Report a Scientific Study? Charles Binkley and Subramaniam Vincent. Markkula Center for Applied Ethics at Santa Clara University, September 2020.
- What Journalists Get Wrong About Social Science: Full Responses. Brian Resnick. Vox, January 2016.
From The Journalist’s Resource
- 8 Ways Journalists Can Access Academic Research for Free
- 5 Things Journalists Need to Know About Statistical Significance
- 5 Common Research Designs: A Quick Primer for Journalists
- 5 Tips for Using PubPeer to Investigate Scientific Research Errors and Misconduct
- What’s Standard Deviation? 4 Things Journalists Need to Know
This article first appeared on The Journalist’s Resource and is republished here under a Creative Commons license.
Related Posts
Do Try This At Home: The 12-Week Brain Fitness Program
12 Weeks To Measurably Boost Your Brain
This is Dr. Majid Fotuhi. From humble beginnings (being smuggled out of Iran in 1980 to avoid death in the war), he went on (after teaching himself English, French, and German, hedging his bets as he didn’t know for sure where life would lead him) to get his MD from Harvard Medical School and his PhD in neuroscience from Johns Hopkins University. Since then, he’s had a decades-long illustrious career in neurology and neurophysiology.
What does he want us to know?
The Brain Fitness Program
This is not, by the way, something he’s selling. Rather, it was a landmark 12-week study in which 127 people aged 60–80 (63% of them women), all with a diagnosis of mild cognitive impairment, underwent an interventional trial—in other words, a 12-week brain fitness course.
After it, 84% of the participants showed statistically significant improvements in cognitive function.
Not only that, but of those who underwent MRI testing before and after (not possible for everyone due to practical limitations), 71% showed either no further deterioration of the hippocampus, or actual growth above the baseline volume of the hippocampus (that’s good, and it means functionally the memory center of the brain has been rejuvenated).
You can read a little more about the study here:
As for what the program consisted of, and what Dr. Fotuhi thus recommends for everyone…
Cognitive stimulation
This is critical, so we’re going to spend most time on this one—the others we can give just a quick note and a pointer.
In the study this came in several forms and had the benefit of neurofeedback technology, but he says we can replicate most of the effects by simply doing something cognitively stimulating. Anything that challenges your brain is good, but for maximum effect, it should involve the language faculties of the brain, since these are what tend to get hit most by age-related cognitive decline, and are also what tends to have the biggest impact on life when lost.
If you lose your keys, that’s an inconvenience, but if you can’t communicate what is distressing you, or understand what someone is explaining to you, that’s many times worse—and that kind of thing is a common reality for many people with dementia.
To keep the lights brightly lit in that part of the brain: language-learning is good, at whatever level suits you personally. In other words: there’s a difference between entry-level Duolingo Spanish, and critically analysing Rumi’s poetry in the original Persian, so go with whatever is challenging and/but accessible for you—just like you wouldn’t go to the gym for the first time and try to deadlift 500lbs, but you also probably wouldn’t do curls with the same 1lb weights every day for 10 years.
In other words: progressive overloading is key, for the brain as well as for muscles. Start easy, but if you’re breezing through everything, it’s time to step it up.
If for some reason you’re really set against the idea of learning another language, though, check out:
Reading As A Cognitive Exercise ← there are specific tips here for ensuring your reading is (and remains) cognitively beneficial
Mediterranean diet
Shocking nobody, this is once again recommended. You might like to check out the brain-healthy “MIND” tweak to it, here:
Four Ways To Upgrade The Mediterranean Diet ← it’s the fourth one
Omega-3 supplementation
Nothing complicated here. The brain needs a healthy balance of these fatty acids to function properly, and most people have an incorrect balance (too little omega-3 for the omega-6 present):
What Omega-3 Fatty Acids Really Do For Us ← scroll to “against cognitive decline”
Increasing fitness
There’s a good rule of thumb: what’s healthy for your heart, is healthy for your brain. This is because, like every other organ in your body, the brain does not function well without good circulation bringing plenty of oxygen and nutrients, which means good cardiovascular health is necessary. The brain is extra sensitive to this because it’s a demanding organ in terms of how much stuff it needs delivering via blood, and also because of the (necessary; we’d die quickly and horribly without it) impediment of the blood-brain barrier, and the possibility of beta-amyloid plaques and similar woes (they will build up if circulation isn’t good).
How To Reduce Your Alzheimer’s Risk ← number two on the list here
Practising mindfulness meditation
This is also straightforward, but not to be underestimated or skipped over:
No-Frills, Evidence-Based Mindfulness
Want to step it up? Check out:
Meditation Games That You’ll Actually Enjoy
Lastly…
Dr. Fotuhi wants us to consider looking after our brain the same way we look after our teeth. No, he doesn’t want us to brush our brain, but he does want us to take small measurable actions multiple times per day, every day.
You can’t just spend the day doing nothing but brushing your teeth for the entirety of January the 1st and then expect them to be healthy for the rest of the year; it doesn’t work like that—and it doesn’t work like that for the brain, either.
So, make the habits, and keep them going!
Take care!
Test For Whether You Will Be Able To Achieve The Splits
Some people stretch for years without being able to do the splits; others do it easily after a short while. Are there people for whom it is impossible, and is there a way to know in advance whether our efforts will be fruitful? Liv (of “LivInLeggings” fame) has the answer:
One side of the story
There are several factors that affect whether we can do the splits, including:
- arrangement of the joint itself
- length of tendons and muscles
- “stretchiness” of tendons and muscles
The latter two things, we can readily train to improve. Yes, even the basic length can be changed over time, because the body adapts.
The former thing, however (the arrangement of the joint itself), is near-impossible to change, because skeletal changes happen more slowly than any other changes in the body. In a battle of muscle vs bone, muscle will always win eventually, and even the bone itself can be rebuilt (as the body fixes itself, or in the case of some diseases, messes itself up). However, changing the arrangement of your joint itself is far beyond the auspices of “do some stretches each day”. So, for practical purposes, without making it the single most important thing in your life, it’s impossible.
How do we know if the arrangement of our hip joint will accommodate the splits? We can test it, one side at a time. Liv uses the middle splits, also called the side splits or box splits, as an example, but the same science and the same method goes for the front splits.
Stand next to a stable elevated-to-hip-height surface. You want to be able to raise your near-side leg laterally, and rest it on the surface, such that your raised leg is now perfectly perpendicular to your body.
There’s a catch: not only do you need to still be standing straight while your leg is elevated 90° to the side, but also, your hips still need to remain parallel to the floor—not tilted up to one side.
If you can do this (on both sides, even if not both simultaneously right now), then your hip joint itself definitely has the range of motion to allow you to do the side splits; you just need to work up to it. Technically, you could do it right now: if you can do this on both sides, then since there’s no tendon or similar running between your two legs to make it impossible to do both at once, you could do that. But, without training, your nerves will stop you; it’s an in-built self-defense mechanism that’s just firing unnecessarily in this case, and needs training to get past.
If you can’t do this, then there are two main possibilities:
- Your joint is not arranged in a way that facilitates this range of motion, and you will not achieve this without devoting your life to it and still taking a very long time.
- Your tendons and muscles are simply too tight at the moment to allow you even the half-split, so you are getting a false negative.
This means that, despite the slightly clickbaity title on YouTube, this test cannot actually confirm that you can never do the middle splits; it can only confirm that you can. In other words, this test gives two possible results:
- “Yes, you can do it!”
- “We don’t know whether you can do it”
For more on the anatomy of this plus a visual demonstration of the test, enjoy:
Click Here If The Embedded Video Doesn’t Load Automatically!
Want to learn more?
You might also like to read:
Stretching Scientifically – by Thomas Kurz ← this is our review of the book she’s working from in this video; this book has this test!
Take care!
The Gut-Healthiest Yogurt
Not only is this yogurt (so it’s winning from the start with its probiotic goodness), but it’s also full of several kinds of fiber, and gut-healthy polyphenols too. Plus, it’s delicious. The perfect breakfast, but don’t let us stop you from enjoying it at any time of day!
You will need
- 1 cup yogurt with minimal additives. Live Greek yogurt is a top-tier choice, and plant-based varieties are fine too (just watch out, again, for needless additives)
- 7 dried figs, roughly chopped
- 6 fresh figs, thinly sliced
- 5 oz chopped pitted dates
- 4 tbsp mixed seeds (pumpkin, sunflower, and chia are a great combination)
Method
(we suggest you read everything at least once before doing anything)
1) Soak the dried figs, the dates, and half the seeds in hot water for at least 5 minutes. Drain (be careful not to lose the chia seeds) and put in a blender with ¼ cup cold water.
2) Blend the ingredients from the last step into a purée (you can add a little more cold water if it needs it).
3) Mix this purée into the yogurt in a bowl, and add in the remaining seeds, mixing them in thoroughly.
4) Top with the sliced figs, and serve (or refrigerate, up to a few days, until needed).
Enjoy!
Want to learn more?
For those interested in some of the science of what we have going on today:
- Making Friends With Your Gut (You Can Thank Us Later)
- Dates vs Figs – Which is Healthier?
- The Tiniest Seeds With The Most Value
Take care!