Can You Shrink A Waist In Seven Days?
10almonds is reader-supported. We may, at no cost to you, receive a portion of sales if you purchase a product through a link in this article.
We don’t usually do this sort of video, but it seems timely before the new year. The exercises shown here are very good, and the small dietary tweak is what makes it work:
The method
Firstly, the small dietary tweak is abstaining from foods that cause bloating, such as flour and dairy. She does say “брожение” (fermentation), but we don’t really use the word that way in English. On which note: she is Ukrainian and speaking Russian (context: many Ukrainians grew up speaking both languages), so you will need the subtitles on if you don’t understand Russian, but a) it’s worth it, and b) the subtitles were added manually, so they’re a respectable translation.
Secondly, spoiler, she loses about 2 inches.
The exercises are:
- Pelvic swing-thrusts: sit, supporting yourself on your hands with your butt off the floor; raise your pelvis up to a table position, do 30 repetitions.
- Leg raises in high plank: perform 20 lifts per leg, each to its side.
- Leg raises (lying on back): do 20 repetitions.
- V-crunches: perform 30 repetitions.
- V-twists: lean on hands and do 25 repetitions.
These exercises (all five done daily for the 7 days) are great for core strength, and core muscle tone is what keeps your innards in place, rather than letting them drop down (and out).
Thus, there’s only a small amount of actual fat loss going on here (if any), but it slims the waistline by improving muscle tone and simultaneously decreasing bloating, which are both good changes.
For more on all of these plus visual demonstrations, enjoy:
Click Here If The Embedded Video Doesn’t Load Automatically!
Want to learn more?
You might also like to read:
Visceral Belly Fat & How To Lose It
Take care!
Don’t Forget…
Did you arrive here from our newsletter? Don’t forget to return to the email to continue learning!
Learn to Age Gracefully
Join the 98k+ American women taking control of their health & aging with our 100% free (and fun!) daily emails:
-
How do science journalists decide whether a psychology study is worth covering?
Complex research papers and data flood academic journals daily, and science journalists play a pivotal role in disseminating that information to the public. This can be a daunting task, requiring a keen understanding of the subject matter and the ability to translate dense academic language into narratives that resonate with the general public.
Several resources and tip sheets, including the Know Your Research section here at The Journalist’s Resource, aim to help journalists hone their skills in reporting on academic research.
But what factors do science journalists look for to decide whether a social science research study is trustworthy and newsworthy? That’s the question researchers at the University of California, Davis, and the University of Melbourne in Australia examine in a recent study, “How Do Science Journalists Evaluate Psychology Research?” published in September in Advances in Methods and Practices in Psychological Science.
Their online survey of 181 mostly U.S.-based science journalists looked at how and whether they were influenced by four factors in fictitious research summaries: the sample size (number of participants in the study), sample representativeness (whether the participants in the study were from a convenience sample or a more representative sample), the statistical significance level of the result (just barely statistically significant or well below the significance threshold), and the prestige of a researcher’s university.
The researchers found that sample size was the only factor that had a robust influence on journalists’ ratings of how trustworthy and newsworthy a study finding was.
University prestige had no effect, while the effects of sample representativeness and statistical significance were inconclusive.
But there’s nuance to the findings, the authors note.
“I don’t want people to think that science journalists aren’t paying attention to other things, and are only paying attention to sample size,” says Julia Bottesini, an independent researcher, a recent Ph.D. graduate from the Psychology Department at UC Davis, and the first author of the study.
Overall, the results show that “these journalists are doing a very decent job” vetting research findings, Bottesini says.
Also, the findings from the study are not generalizable to all science journalists or other fields of research, the authors note.
“Instead, our conclusions should be circumscribed to U.S.-based science journalists who are at least somewhat familiar with the statistical and replication challenges facing science,” they write. (Over the past decade a series of projects have found that the results of many studies in psychology and other fields can’t be reproduced, leading to what has been called a ‘replication crisis.’)
“This [study] is just one tiny brick in the wall and I hope other people get excited about this topic and do more research on it,” Bottesini says.
More on the study’s findings
The study’s findings can be useful for researchers who want to better understand how science journalists read their research and what kind of intervention — such as teaching journalists about statistics — can help journalists better understand research papers.
“As an academic, I take away the idea that journalists are a great population to try to study because they’re doing something really important and it’s important to know more about what they’re doing,” says Ellen Peters, director of the Center for Science Communication Research at the School of Journalism and Communication at the University of Oregon. Peters, who was not involved in the study, is also a psychologist who studies human judgment and decision-making.
Peters says the study was “overall terrific.” She adds that understanding how journalists do their work “is an incredibly important thing to do because journalists are who reach the majority of the U.S. with science news, so understanding how they’re reading some of our scientific studies and then choosing whether to write about them or not is important.”
The study, conducted between December 2020 and March 2021, is based on an online survey of journalists who said they at least sometimes covered science or other topics related to health, medicine, psychology, social sciences, or well-being. They were offered a $25 Amazon gift card as compensation.
Among the participants, 77% were women, 19% were men, 3% were nonbinary and 1% preferred not to say. About 62% said they had studied physical or natural sciences at the undergraduate level, and 24% at the graduate level. Also, 48% reported having a journalism degree. The study did not include the journalists’ news reporting experience level.
Participants were recruited through the professional network of Christie Aschwanden, an independent journalist and consultant on the study, which could be a source of bias, the authors note.
“Although the size of the sample we obtained (N = 181) suggests we were able to collect a range of perspectives, we suspect this sample is biased by an ‘Aschwanden effect’: that science journalists in the same professional network as C. Aschwanden will be more familiar with issues related to the replication crisis in psychology and subsequent methodological reform, a topic C. Aschwanden has covered extensively in her work,” they write.
Participants were randomly presented with eight of 22 one-paragraph fictitious social and personality psychology research summaries with fictitious authors. The summaries are posted on Open Science Framework, a free and open-source project management tool for researchers by the Center for Open Science, with a mission to increase openness, integrity and reproducibility of research.
For instance, one of the vignettes reads:
“Scientists at Harvard University announced today the results of a study exploring whether introspection can improve cooperation. 550 undergraduates at the university were randomly assigned to either do a breathing exercise or reflect on a series of questions designed to promote introspective thoughts for 5 minutes. Participants then engaged in a cooperative decision-making game, where cooperation resulted in better outcomes. People who spent time on introspection performed significantly better at these cooperative games (t (548) = 3.21, p = 0.001). ‘Introspection seems to promote better cooperation between people,’ says Dr. Quinn, the lead author on the paper.”
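Incidentally, a reported result like “t(548) = 3.21, p = 0.001” can be sanity-checked in a few lines of Python; this is an illustrative sketch only (the vignette is fictitious), using a normal approximation to the t distribution, which is accurate at 548 degrees of freedom:

```python
from statistics import NormalDist

# Reported in the fictitious vignette: t(548) = 3.21, two-tailed test
t_stat = 3.21

# With hundreds of degrees of freedom, the t distribution is
# essentially the standard normal, so its CDF is a good approximation.
p_two_tailed = 2 * (1 - NormalDist().cdf(t_stat))

print(round(p_two_tailed, 3))  # close to the reported p = 0.001
```

A journalist who gets a p-value that disagrees badly with the reported statistic has found a red flag worth querying with the authors.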
In addition to answering multiple-choice survey questions, participants were given the opportunity to answer open-ended questions, such as “What characteristics do you [typically] consider when evaluating the trustworthiness of a scientific finding?”
Bottesini says those responses illuminated how science journalists analyze a research study. Participants often mentioned the prestige of the journal in which it was published or whether the study had been peer-reviewed. Many also seemed to value experimental research designs over observational studies.
Considering statistical significance
When it came to considering p-values, “some answers suggested that journalists do take statistical significance into account, but only very few included explanations that suggested they made any distinction between higher or lower p values; instead, most mentions of p values suggest journalists focused on whether the key result was statistically significant,” the authors write.
Also, many participants mentioned that it was very important to talk to outside experts or researchers in the same field to get a better understanding of the finding and whether it could be trusted, the authors write.
“Journalists also expressed that it was important to understand who funded the study and whether the researchers or funders had any conflicts of interest,” they write.
Participants also “indicated that making claims that were calibrated to the evidence was also important and expressed misgivings about studies for which the conclusions do not follow from the evidence,” the authors write.
In response to the open-ended question, “What characteristics do you [typically] consider when evaluating the trustworthiness of a scientific finding?” some journalists wrote they checked whether the study was overstating conclusions or claims. Below are some of their written responses:
- “Is the researcher adamant that this study of 40 college kids is representative? If so, that’s a red flag.”
- “Whether authors make sweeping generalizations based on the study or take a more measured approach to sharing and promoting it.”
- “Another major point for me is how ‘certain’ the scientists appear to be when commenting on their findings. If a researcher makes claims which I consider to be over-the-top about the validity or impact of their findings, I often won’t cover.”
- “I also look at the difference between what an experiment actually shows versus the conclusion researchers draw from it — if there’s a big gap, that’s a huge red flag.”
Peters says the study’s findings show that “not only are journalists smart, but they have also gone out of their way to get educated about things that should matter.”
What other research shows about science journalists
A 2023 study, published in the International Journal of Communication, based on an online survey of 82 U.S. science journalists, aims to understand what they know and think about open-access research, including peer-reviewed journals and articles that don’t have a paywall, and preprints. Data was collected between October 2021 and February 2022. Preprints are scientific studies that have yet to be peer-reviewed and are shared on open repositories such as medRxiv and bioRxiv. The study finds that its respondents “are aware of OA and related issues and make conscious decisions around which OA scholarly articles they use as sources.”
A 2021 study, published in the Journal of Science Communication, looks at the impact of the COVID-19 pandemic on the work of science journalists. Based on an online survey of 633 science journalists from 77 countries, it finds that the pandemic somewhat brought scientists and science journalists closer together. “For most respondents, scientists were more available and more talkative,” the authors write. The pandemic has also provided an opportunity to explain the scientific process to the public, and remind them that “science is not a finished enterprise,” the authors write.
More than a decade ago, a 2008 study, published in PLOS Medicine, and based on an analysis of 500 health news stories, found that “journalists usually fail to discuss costs, the quality of the evidence, the existence of alternative options, and the absolute magnitude of potential benefits and harms,” when reporting on research studies. Giving time to journalists to research and understand the studies, giving them space for publication and broadcasting of the stories, and training them in understanding academic research are some of the solutions to fill the gaps, writes Gary Schwitzer, the study author.
Advice for journalists
We asked Bottesini, Peters, Aschwanden and Tamar Wilner, a postdoctoral fellow at the University of Texas, who was not involved in the study, to share advice for journalists who cover research studies. Wilner is conducting a study on how journalism research informs the practice of journalism. Here are their tips:
1. Examine the study before reporting it.
Does the study claim match the evidence? “One thing that makes me trust the paper more is if their interpretation of the findings is very calibrated to the kind of evidence that they have,” says Bottesini. In other words, if the study makes a claim in its results that’s far-fetched, the authors should present a lot of evidence to back that claim.
Not all surprising results are newsworthy. If you come across a surprising finding from a single study, Peters advises you to step back and remember Carl Sagan’s quote: “Extraordinary claims require extraordinary evidence.”
How transparent are the authors about their data? For instance, are the authors posting information such as their data and the computer codes they use to analyze the data on platforms such as Open Science Framework, AsPredicted, or The Dataverse Project? Some researchers ‘preregister’ their studies, which means they share how they’re planning to analyze the data before they see them. “Transparency doesn’t automatically mean that a study is trustworthy,” but it gives others the chance to double-check the findings, Bottesini says.
Look at the study design. Is it an experimental study or an observational study? Observational studies can show correlations but not causation.
“Observational studies can be very important for suggesting hypotheses and pointing us towards relationships and associations,” Aschwanden says.
Experimental studies can provide stronger evidence toward a cause, but journalists must still be cautious when reporting the results, she advises. “If we end up implying causality, then once it’s published and people see it, it can really take hold,” she says.
Know the difference between preprints and peer-reviewed, published studies. Peer-reviewed papers tend to be of higher quality than those that are not peer-reviewed. Read our tip sheet on the difference between preprints and journal articles.
Beware of predatory journals. Predatory journals “claim to be legitimate scholarly journals, but misrepresent their publishing practices,” according to a 2020 article published in Toxicologic Pathology, “Predatory Journals: What They Are and How to Avoid Them.”
2. Zoom in on data.
Read the methods section of the study. The methods section of the study usually appears after the introduction and background section. “To me, the methods section is almost the most important part of any scientific paper,” says Aschwanden. “It’s amazing to me how often you read the design and the methods section, and anyone can see that it’s a flawed design. So just giving things a gut-level check can be really important.”
What’s the sample size? Not all good studies have large numbers of participants, but pay attention to the claims a study makes with a small sample size. “If you have a small sample, you calibrate your claims to the things you can tell about those people and don’t make big claims based on a little bit of evidence,” says Bottesini.
But also remember that factors such as sample size and p-value are not “as clear cut as some journalists might assume,” says Wilner.
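The arithmetic behind that caution is simple: the standard error of a sample mean shrinks only with the square root of the sample size. A hypothetical illustration (the standard deviation of 10 is an assumed value, not from any study discussed here):

```python
import math

# Hypothetical: how the standard error of a mean shrinks with n,
# assuming a population standard deviation of 10 (illustrative value).
sd = 10
for n in (40, 181, 550):
    se = sd / math.sqrt(n)
    print(f"n={n}: standard error is about {se:.2f}")
```

Quadrupling the sample size only halves the uncertainty, which is why a study of 40 college students supports far rougher claims than one of 550.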
How representative of a population is the study sample? “If the study has a non-representative sample of, say, undergraduate students, and they’re making claims about the general population, that’s kind of a red flag,” says Bottesini. Aschwanden points to the acronym WEIRD, which stands for “Western, Educated, Industrialized, Rich, and Democratic,” and is used to highlight a lack of diversity in a sample. Studies based on such samples may not be generalizable to the entire population, she says.
Look at the p-value. Statistical significance is both confusing and controversial, but it’s important to consider. Read our tip sheet, “5 Things Journalists Need to Know About Statistical Significance,” to better understand it.
3. Talk to scientists not involved in the study.
If you’re not sure about the quality of a study, ask for help. “Talk to someone who is an expert in study design or statistics to make sure that [the study authors] use the appropriate statistics and that methods they use are appropriate because it’s amazing to me how often they’re not,” says Aschwanden.
Get an opinion from an outside expert. It’s always a good idea to present the study to other researchers in the field, who have no conflicts of interest and are not involved in the research you’re covering and get their opinion. “Don’t take scientists at their word. Look into it. Ask other scientists, preferably the ones who don’t have a conflict of interest with the research,” says Bottesini.
4. Remember that a single study is simply one piece of a growing body of evidence.
“I have a general rule that a single study doesn’t tell us very much; it just gives us proof of concept,” says Peters. “It gives us interesting ideas. It should be retested. We need an accumulation of evidence.”
Aschwanden says as a practice, she tries to avoid reporting stories about individual studies, with some exceptions such as very large, randomized controlled studies that have been underway for a long time and have a large number of participants. “I don’t want to say you never want to write a single-study story, but it always needs to be placed in the context of the rest of the evidence that we have available,” she says.
Wilner advises journalists to spend some time looking at the scope of research on the study’s specific topic and learn how it has been written about and studied up to that point.
“We would want science journalists to be reporting balance of evidence, and not focusing unduly on the findings that are just in front of them in a most recent study,” Wilner says. “And that’s a very difficult thing to ask journalists to do because they’re being asked to make their article very newsy, so it’s a difficult balancing act, but we can try and push journalists to do more of that.”
5. Remind readers that science is always changing.
“Science is always two steps forward, one step back,” says Peters. Give the public a notion of uncertainty, she advises. “This is what we know today. It may change tomorrow, but this is the best science that we know of today.”
Aschwanden echoes the sentiment. “All scientific results are provisional, and we need to keep that in mind,” she says. “It doesn’t mean that we can’t know anything, but it’s very important that we don’t overstate things.”
Authors of a study published in PNAS in January analyzed more than 14,000 psychology papers and found that replication success rates differ widely by psychology subfields. That study also found that papers that could not be replicated received more initial press coverage than those that could.
The authors note that the media “plays a significant role in creating the public’s image of science and democratizing knowledge, but it is often incentivized to report on counterintuitive and eye-catching results.”
Ideally, the news media would have a positive relationship with replication success rates in psychology, the authors of the PNAS study write. “Contrary to this ideal, however, we found a negative association between media coverage of a paper and the paper’s likelihood of replication success,” they write. “Therefore, deciding a paper’s merit based on its media coverage is unwise. It would be valuable for the media to remind the audience that new and novel scientific results are only food for thought before future replication confirms their robustness.”
Additional reading
Uncovering the Research Behaviors of Reporters: A Conceptual Framework for Information Literacy in Journalism
Katerine E. Boss, et al. Journalism & Mass Communication Educator, October 2022.

The Problem with Psychological Research in the Media
Steven Stosny. Psychology Today, September 2022.

Critically Evaluating Claims
Megha Satyanarayana, The Open Notebook, January 2022.

How Should Journalists Report a Scientific Study?
Charles Binkley and Subramaniam Vincent. Markkula Center for Applied Ethics at Santa Clara University, September 2020.

What Journalists Get Wrong About Social Science: Full Responses
Brian Resnick. Vox, January 2016.

From The Journalist’s Resource
8 Ways Journalists Can Access Academic Research for Free
5 Things Journalists Need to Know About Statistical Significance
5 Common Research Designs: A Quick Primer for Journalists
5 Tips for Using PubPeer to Investigate Scientific Research Errors and Misconduct
What’s Standard Deviation? 4 Things Journalists Need to Know
This article first appeared on The Journalist’s Resource and is republished here under a Creative Commons license.
-
Dealing With Waking Up In The Night
It’s Q&A Day at 10almonds!
Have a question or a request? You can always hit “reply” to any of our emails, or use the feedback widget at the bottom!
In cases where we’ve already covered something, we might link to what we wrote before, but will always be happy to revisit any of our topics again in the future too—there’s always more to say!
As ever: if the question/request can be answered briefly, we’ll do it here in our Q&A Thursday edition. If not, we’ll make a main feature of it shortly afterwards!
So, no question/request too big or small!
❝I’m now in my sixties and find that I invariably wake up at least once during the night. Is this normal? Even if it is, I would still like, once in a while, to sleep right through like a teenager. How might this be achieved, without pills?❞
Most people wake up briefly between sleep cycles, and forget doing so. But waking up for more than a brief moment is indeed best avoided. In men of your age, if you’re waking to pee (especially if it’s then not actually that easy to pee), it can be a sign of an enlarged prostate. Which is, again, a) normal, b) not optimal.
By “without pills” we’ll assume you mean “without sleeping pills”. There are options to treat an enlarged prostate, including well-established supplements. We did a main feature on this:
Prostate Health: What You Should Know
If the cause of waking up is something else, then again this is common for everyone as we get older, and again it’s not optimal. But since there are so many possible causes (and thus solutions), it’s more than we can cover in less than a main feature, so we’ll have to revisit this later.
Meanwhile, take care!
-
Pumpkin Protein Crackers
Ten of these (give or take what size you make them) will give you the 20g of protein that most people’s bodies can use at a time. Five of these, plus some of one of the dips we list at the bottom, will also do it:
You will need
- 1 cup chickpea flour (also called gram flour or garbanzo bean flour)
- 2 tbsp pumpkin seeds
- 1 tbsp chia seeds
- 1 tsp baking powder
- ¼ tsp MSG or ½ tsp low-sodium salt
- 2 tbsp extra virgin olive oil
Method
(we suggest you read everything at least once before doing anything)
1) Preheat the oven to 350℉ / 180℃.
2) Combine the dry ingredients in a mixing bowl, and mix thoroughly.
3) Add the oil, and mix thoroughly.
4) Add water, 1 tbsp at a time, mixing thoroughly until the mixture comes together and you have a dough ball. You’ll probably need 3–4 tbsp in total, but do add them one at a time.
5) Roll out the dough as thinly and evenly as you can between two sheets of baking paper. Remove the top layer of the paper, and slice the dough into squares or triangles. You could use a cookie-cutter to make other shapes if you like, but then you’ll need to repeat the rolling to use up the offcuts. So we recommend squares or triangles at least for your first go.
6) Bake them in the oven for 12–15 minutes or until golden and crispy. Enjoy immediately or keep in an airtight container.
Enjoy!
Want to learn more?
For those interested in some things to go with what we have going on today:
- Muhammara ← this is a very nutritionally-dense dip (not to mention tasty; seriously, check out these flavors)
- Hero Homemade Hummus ← a classic
- Plant-Based Healthy Cream Cheese ← also a very respectable option
Take care!
-
Self-Compassion In A Relationship (Positives & Pitfalls)
Practise Self-Compassion In Your Relationship (But Watch Out!)
Let’s make clear up-front: this is not about “…but not too much”.
With that in mind…
Now let’s set the scene: you, a happily-partnered person, have inadvertently erred and upset your partner. They may or may not have already forgiven you, but you are still angry at yourself.
Likely next steps include all or any of:
- continuing to apologise and try to explain
- self-deprecatory diatribes
- self-flagellation, probably not literally but in the sense of “I don’t deserve…” and acting on that feeling
- self-removal, because you don’t want to further inflict your bad self on your partner
As you might guess, these are quite varied in their degree of healthiness:
- apologising is good, as is explaining, but once it’s done, it’s done; let it go
- self-deprecation is pretty much never useful, let alone healthy
- self-flagellation likewise; it is not only inherently self-destructive, but will likely create an additional problem for your partner too
- self-removal can be good or bad depending on the manner of that removal: there’s a difference between just going cold and distant on your partner, and saying “I’m sorry; this is my fault not yours, I don’t want to take it out on you, so please give me half an hour by myself to regain my composure, and I will come back with love then if that’s ok with you”
About that last: mentioning the specific timeframe e.g. “half an hour” is critical, by the way—don’t leave your partner hanging! And then do also follow through on that; come back with love after the half-hour elapses. We suggest mindfulness meditation in the interim (here’s our guide to how), if you’re not sure what to do to get you there.
To Err Is Human; To Forgive, Healthy (Here’s How To Do It) ← this goes for when the forgiveness in question is for yourself, too—and we do write about that there (and how)!
This is important, by the way; not forgiving yourself can cause more serious issues down the line:
If, by the way, you’re hand-wringing over “but was my apology good enough really, or should I…” then here is how to do it. Basically, do this, and then draw a line under it and consider it done:
The Apology Checklist ← you’ll want to keep a copy of this, perhaps in the notes app on your phone, or a screenshot if you prefer
(the checklist is at the bottom of that page)
The catch
It’s you, you’re the catch 👈👈😎
Ok, that being said, there is actually a catch in the less cheery sense of the word, and it is:
“It is important to be compassionate about one’s occasional failings in a relationship” does not mean “It is healthy to be neglectful of one’s partner’s emotional needs; that’s self-care, looking after #1; let them take care of themself too”
…because that’s simply not being a couple at all.
Think about it this way: the famous airline advice,
“Put on your own oxygen mask before helping others with theirs”
…does not mean “Put on your own oxygen mask and then watch those kids suffocate; it’s everyone for themself”
So, the same goes in relationships too. And, as ever, we have science for this. There was a recent (2024) study, involving hundreds of heterosexual couples aged 18–73, which looked at two things, each measured with a scaled questionnaire:
- Subjective levels of self-compassion
- Subjective levels of relationship satisfaction
For example, questions included asking participants to rate, from 1–5 depending on how much they felt the statements described them, e.g:
In my relationship with my partner, I:
- treat myself kindly when I experience sorrow and suffering.
- accept my faults and weaknesses.
- try to see my mistakes as part of human nature.
- see difficulties as part of every relationship that everyone goes through once.
- try to get a balanced view of the situation when something unpleasant happens.
- try to keep my feelings in balance when something upsets me.
Note: that’s not multiple choice! It’s asking participants to rate each response as applicable or not to them, on a scale of 1–5.
And…
❝Women’s self-compassion was also positively linked with men’s total relationship satisfaction. Thus, men seem to experience overall satisfaction with the relationship when their female partner is self-kind and self-caring in difficult situations.
Unexpectedly, however, we found that men’s relationship-specific self-compassion was negatively associated with women’s fulfillment.
Baker and McNulty (2011) reported that, only for men, a Self-Compassion x Conscientiousness interaction explained whether the positive effects of self-compassion on the relationship emerged, but such an interaction was not found for women.
Highly self-compassionate men who were low in conscientiousness were less motivated than others to remedy interpersonal mistakes in their romantic relationships, and this tendency was in turn related to lower relationship satisfaction❞
~ Dr. Astrid Schütz et al. (2024)
And if you’d like to read the cited older paper from 2011, here it is:
Read in full: Self-compassion and relationship maintenance: the moderating roles of conscientiousness and gender
The take-away here is not: “men should not practice self-compassion”
(rather, they absolutely should)
The take-away is: we must each take responsibility for managing our own mood as best we are able; practice self-forgiveness where applicable and forgive our partner where applicable (and communicate that!), and then go consciously back to the mutual care on which the relationship is hopefully founded.
Which doesn’t just mean love-bombing, by the way, it also means listening:
The Problem With Active Listening (And How To Do Better)
To close… We say this often, but we mean it every time: take care!
We created a VR tool to test brain function. It could one day help diagnose dementia
If you or a loved one have noticed changes in your memory or thinking as you’ve grown older, this could reflect typical changes that occur with ageing. In some cases though, it might suggest something more, such as the onset of dementia.
The best thing to do if you have concerns is to make an appointment with your GP, who will probably run some tests. Assessment is important because if there is something more going on, early diagnosis can enable prompt access to the right interventions, supports and care.
But current methods of dementia screening have limitations, and testing can be daunting for patients.
Our research suggests virtual reality (VR) could be a useful cognitive screening tool, and mitigate some of the challenges associated with current testing methods, opening up the possibility it may one day play a role in dementia diagnosis.
Where current testing is falling short
If someone is worried about their memory and thinking, their GP might ask them to complete a series of quick tasks that check things like the ability to follow simple instructions, basic arithmetic, memory and orientation.
These sorts of screening tools are really good at confirming cognitive problems that may already be very apparent. But commonly used screening tests are not always so good at detecting early and more subtle difficulties with memory and thinking, meaning such changes could be missed until they get worse.
A clinical neuropsychological assessment is better equipped to detect early changes. This involves a comprehensive review of a patient’s personal and medical history, and detailed assessment of cognitive functions, including attention, language, memory, executive functioning, mood factors and more. However, this can be costly and the testing can take several hours.
Testing is also somewhat removed from everyday experience, not directly tapping into activities of daily living.
Enter virtual reality
VR technology uses computer-generated environments to create immersive experiences that feel like real life. While VR is often used for entertainment, it has increasingly found applications in health care, including in rehabilitation and falls prevention.
Using VR for cognitive screening is still a new area. VR-based cognitive tests generally create a scenario such as shopping at a supermarket or driving around a city to ascertain how a person would perform in these situations.
Notably, they engage various senses and cognitive processes such as sight, sound and spatial awareness in immersive ways. All this may reveal subtle impairments which can be missed by standard methods.
VR assessments are also often more engaging and enjoyable, potentially reducing anxiety for those who may feel uneasy in traditional testing environments, and improving compliance compared to standard assessments.
Most studies of VR-based cognitive tests have explored their capacity to pick up impairments in spatial memory (the ability to remember where something is located and how to get there), and the results have been promising.
Given that VR’s potential for assisting with diagnosis of cognitive impairment and dementia remains largely untapped, our team developed an online computerised game (referred to as semi-immersive VR) to see how well a person can remember, recall and complete everyday tasks. In our VR game, which lasts about 20 minutes, the user role-plays a waiter in a cafe and receives a score on their performance.
To assess its potential, we enlisted more than 140 people to play the game and provide feedback. The results of this research are published across three recent papers.
Testing our VR tool
In our most recently published study, we wanted to verify the accuracy and sensitivity of our VR game to assess cognitive abilities.
We compared our test to an existing screening tool (called the TICS-M) in more than 130 adults. We found our VR task was able to capture meaningful aspects of cognitive function, including recalling food items and spatial memory.
We also found younger adults performed better in the game than older adults, which echoes the pattern commonly seen in regular memory tests.
In a separate study, we followed ten adults aged over 65 while they completed the game, and interviewed them afterwards. We wanted to understand how this group – whom the tool would target – perceived the task.
These seniors told us they found the game user-friendly and believed it was a promising tool for screening memory. They described the game as engaging and immersive, expressing enthusiasm to continue playing, and reported that the task didn’t cause them anxiety.
For a third study, we spoke to seven health-care professionals about the tool. Overall they gave positive feedback, and noted its dynamic approach to age-old diagnostic challenges.
However, they did flag some concerns and potential barriers to implementing this sort of tool. These included resource constraints in clinical practice (such as time and space to carry out the assessment) and whether it would be accessible for people with limited technological skills. There was also some scepticism about whether the tool would be an accurate method to assist with dementia diagnosis.
While our initial research suggests this tool could be a promising way to assess cognitive performance, this is not the same as diagnosing dementia. To improve the test’s ability to accurately detect those who likely have dementia, we’ll need to make it more specific for that purpose, and carry out further research to validate its effectiveness.
We’ll be conducting more testing of the game soon. Anyone interested in giving it a go to help with our research can register on our team’s website.
Joyce Siette, Research Theme Fellow in Health and Wellbeing, Western Sydney University and Paul Strutt, Senior Lecturer in Psychology, Western Sydney University
This article is republished from The Conversation under a Creative Commons license. Read the original article.
Running: Getting Started – by Jeff Galloway
Superficially, running is surely one of the easiest sports to get into, for most people. You put one foot in front of the other, repeat, and pick up the pace.
However, many people do not succeed. They head out of the door (perhaps on January the 1st), push themselves a little, experience runner’s high, think “this is great”, and then wake up the next day with some minor aches and no motivation. This book is here to help you bypass that stage.
Jeff Galloway has quite a series of books, but the others seem derivative of this one. So, what makes this one special?
It’s quite comprehensive; it covers (as the title promises) getting started, setting yourself up for success, finding what level your ability is at safely rather than guessing and overdoing it, and building up from there.
He also talks about what kit you’ll want; this isn’t just about shoes, but even “what to wear when the weather’s not good” and so forth; he additionally shares advice about diet, exercise on non-running days, body maintenance (stretching and strengthening), troubleshooting aches and pains, and running well into one’s later years.
Bottom line: if you’d like to take up running but it seems intimidating (perhaps for reasons you can’t quite pin down), this book will take care of all those things, and indeed get you “up and running”.
Click here to check out Running: Getting Started, and get started!