Which Magnesium? (And: When?)
10almonds is reader-supported. We may, at no cost to you, receive a portion of sales if you purchase a product through a link in this article.
It’s Q&A Day at 10almonds!
Have a question or a request? We love to hear from you!
In cases where we’ve already covered something, we might link to what we wrote before, but will always be happy to revisit any of our topics again in the future too—there’s always more to say!
As ever: if the question/request can be answered briefly, we’ll do it here in our Q&A Thursday edition. If not, we’ll make a main feature of it shortly afterwards!
So, no question/request is too big or small!
❝Good morning! I have been waiting for this day to ask: the magnesium in my calcium supplement is neither of the two versions you mentioned in a recent email newsletter. Is this a good type of magnesium and is it efficiently bioavailable in this composition? I also take magnesium that says it is elemental (oxide, gluconate, and lactate). Are these absorbable and useful in these sources? I am not interested in taking things if they aren’t helping me or making me healthier. Thank you for your wonderful, informative newsletter. It’s so nice to get non-biased information❞
Thank you for the kind words! We certainly do our best.
For reference: the attached image showed a supplement containing “Magnesium (as Magnesium Oxide & AlgaeCal® l.superpositum)”
Also for reference: the two versions we compared head-to-head were these very good options:
Magnesium Glycinate vs Magnesium Citrate – Which is Healthier?
Let’s first borrow from the above, where we mentioned: magnesium oxide is probably the most widely-sold magnesium supplement because it’s cheapest to make. It also has woeful bioavailability, to the point that there seems to be negligible benefit to taking it. So we don’t recommend that.
As for magnesium gluconate and magnesium lactate:
- Magnesium lactate has very good bioavailability and in cases where people have problems with other types (e.g. gastrointestinal side effects), this will probably not trigger those.
- Magnesium gluconate has excellent bioavailability, probably coming second only to magnesium glycinate.
The “AlgaeCal® l.superpositum” ingredient is a little opaque (and we did notice they didn’t specify what percentage of the magnesium comes from the magnesium oxide and what percentage from the algae, meaning it could be a 99:1 split, just so that they can claim it’s in there), but we can say that Lithothamnion superpositum is indeed an alga, and magnesium from green things is usually good.
Except…
It’s generally best not to take magnesium and calcium together (as that supplement combines them). While they do work synergistically once absorbed, they compete for absorption first, so it’s best to take them separately. Because of magnesium’s sleep-improving qualities, many people take calcium in the morning and magnesium in the evening.
Some previous articles you might enjoy meanwhile:
- Pinpointing The Usefulness Of Acupuncture
- Science-Based Alternative Pain Relief
- Peripheral Neuropathy: How To Avoid It, Manage It, Treat It
- What Does Lion’s Mane Actually Do, Anyway?
Take care!
Don’t Forget…
Did you arrive here from our newsletter? Don’t forget to return to the email to continue learning!
Learn to Age Gracefully
Join the 98k+ American women taking control of their health & aging with our 100% free (and fun!) daily emails:
Two Awesome Hours – by Dr. Josh Davis
The brain is an amazing and powerful organ, with theoretically unlimited potential in some respects. So why doesn’t it feel that way a lot of the time?
The truth is that not only are we often tired, dehydrated, or facing other obvious physiological challenges to peak brain health, but also… We’re simply not making the best use of it!
What Dr. Davis does is outline for us how we can create the conditions for “two awesome hours” of effective mental performance by:
- Recognizing when to most effectively flip the switch on our automatic thinking
- Scheduling tasks based on their “processing demand” and recovery time
- Learning how to direct attention, rather than avoid distractions
- Feeding and moving our bodies in ways that prep us for success
- Identifying what matters in our environment to be at the top of our mental game
Why only two hours? Why not four, or eight, or more?
Well, our brains need recovery time too, so we can’t be “always on” and operating at peak efficiency. What we can do, though, is optimize a couple of hours for absolute peak efficiency, and then enjoy the rest of the time with lower cognitive-load activities.
Bottom line: if the idea of what you could accomplish with two guaranteed, schedulable hours (your preference when!) of peak cognitive performance per day appeals to you, then this is a great book for you.
How do science journalists decide whether a psychology study is worth covering?
Complex research papers and data flood academic journals daily, and science journalists play a pivotal role in disseminating that information to the public. This can be a daunting task, requiring a keen understanding of the subject matter and the ability to translate dense academic language into narratives that resonate with the general public.
Several resources and tip sheets, including the Know Your Research section here at The Journalist’s Resource, aim to help journalists hone their skills in reporting on academic research.
But what factors do science journalists look for to decide whether a social science research study is trustworthy and newsworthy? That’s the question researchers at the University of California, Davis, and the University of Melbourne in Australia examine in a recent study, “How Do Science Journalists Evaluate Psychology Research?” published in September in Advances in Methods and Practices in Psychological Science.
Their online survey of 181 mostly U.S.-based science journalists looked at how and whether they were influenced by four factors in fictitious research summaries: the sample size (number of participants in the study), sample representativeness (whether the participants in the study were from a convenience sample or a more representative sample), the statistical significance level of the result (just barely statistically significant or well below the significance threshold), and the prestige of a researcher’s university.
The researchers found that sample size was the only factor that had a robust influence on journalists’ ratings of how trustworthy and newsworthy a study finding was.
University prestige had no effect, while the effects of sample representativeness and statistical significance were inconclusive.
But there’s nuance to the findings, the authors note.
“I don’t want people to think that science journalists aren’t paying attention to other things, and are only paying attention to sample size,” says Julia Bottesini, an independent researcher, a recent Ph.D. graduate from the Psychology Department at UC Davis, and the first author of the study.
Overall, the results show that “these journalists are doing a very decent job” vetting research findings, Bottesini says.
Also, the findings from the study are not generalizable to all science journalists or other fields of research, the authors note.
“Instead, our conclusions should be circumscribed to U.S.-based science journalists who are at least somewhat familiar with the statistical and replication challenges facing science,” they write. (Over the past decade a series of projects have found that the results of many studies in psychology and other fields can’t be reproduced, leading to what has been called a ‘replication crisis.’)
“This [study] is just one tiny brick in the wall and I hope other people get excited about this topic and do more research on it,” Bottesini says.
More on the study’s findings
The study’s findings can be useful for researchers who want to better understand how science journalists read their research and what kind of intervention — such as teaching journalists about statistics — can help journalists better understand research papers.
“As an academic, I take away the idea that journalists are a great population to try to study because they’re doing something really important and it’s important to know more about what they’re doing,” says Ellen Peters, director of the Center for Science Communication Research at the School of Journalism and Communication at the University of Oregon. Peters, who was not involved in the study, is also a psychologist who studies human judgment and decision-making.
Peters says the study was “overall terrific.” She adds that understanding how journalists do their work “is an incredibly important thing to do because journalists are who reach the majority of the U.S. with science news, so understanding how they’re reading some of our scientific studies and then choosing whether to write about them or not is important.”
The study, conducted between December 2020 and March 2021, is based on an online survey of journalists who said they at least sometimes covered science or other topics related to health, medicine, psychology, social sciences, or well-being. They were offered a $25 Amazon gift card as compensation.
Among the participants, 77% were women, 19% were men, 3% were nonbinary and 1% preferred not to say. About 62% said they had studied physical or natural sciences at the undergraduate level, and 24% at the graduate level. Also, 48% reported having a journalism degree. The study did not include the journalists’ news reporting experience level.
Participants were recruited through the professional network of Christie Aschwanden, an independent journalist and consultant on the study, which could be a source of bias, the authors note.
“Although the size of the sample we obtained (N = 181) suggests we were able to collect a range of perspectives, we suspect this sample is biased by an ‘Aschwanden effect’: that science journalists in the same professional network as C. Aschwanden will be more familiar with issues related to the replication crisis in psychology and subsequent methodological reform, a topic C. Aschwanden has covered extensively in her work,” they write.
Participants were randomly presented with eight of 22 one-paragraph fictitious social and personality psychology research summaries with fictitious authors. The summaries are posted on Open Science Framework, a free and open-source project management tool for researchers by the Center for Open Science, with a mission to increase openness, integrity and reproducibility of research.
For instance, one of the vignettes reads:
“Scientists at Harvard University announced today the results of a study exploring whether introspection can improve cooperation. 550 undergraduates at the university were randomly assigned to either do a breathing exercise or reflect on a series of questions designed to promote introspective thoughts for 5 minutes. Participants then engaged in a cooperative decision-making game, where cooperation resulted in better outcomes. People who spent time on introspection performed significantly better at these cooperative games (t (548) = 3.21, p = 0.001). ‘Introspection seems to promote better cooperation between people,’ says Dr. Quinn, the lead author on the paper.”
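As a side note for readers who want to sanity-check statistics like the one in that vignette: the reported p-value follows from the t statistic and the degrees of freedom. The sketch below (our illustration, not part of the study) uses a normal approximation, which is reasonable at 548 degrees of freedom; an exact t-distribution calculation would differ only slightly:

```python
import math

# Two-sided p-value for the vignette's reported t(548) = 3.21.
# With 548 degrees of freedom, Student's t distribution is close enough
# to the standard normal that erfc gives a good approximation:
# p = P(|Z| > t) = erfc(t / sqrt(2))
t_stat = 3.21
p_two_sided = math.erfc(t_stat / math.sqrt(2))
print(f"p \u2248 {p_two_sided:.4f}")  # ~0.0013, consistent with the reported p = 0.001
```

This is well below the conventional 0.05 threshold, i.e. the “well below the significance threshold” condition in the researchers’ design, as opposed to a “just barely significant” result such as p = 0.049.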
In addition to answering multiple-choice survey questions, participants were given the opportunity to answer open-ended questions, such as “What characteristics do you [typically] consider when evaluating the trustworthiness of a scientific finding?”
Bottesini says those responses illuminated how science journalists analyze a research study. Participants often mentioned the prestige of the journal in which it was published or whether the study had been peer-reviewed. Many also seemed to value experimental research designs over observational studies.
Considering statistical significance
When it came to considering p-values, “some answers suggested that journalists do take statistical significance into account, but only very few included explanations that suggested they made any distinction between higher or lower p values; instead, most mentions of p values suggest journalists focused on whether the key result was statistically significant,” the authors write.
Also, many participants mentioned that it was very important to talk to outside experts or researchers in the same field to get a better understanding of the finding and whether it could be trusted, the authors write.
“Journalists also expressed that it was important to understand who funded the study and whether the researchers or funders had any conflicts of interest,” they write.
Participants also “indicated that making claims that were calibrated to the evidence was also important and expressed misgivings about studies for which the conclusions do not follow from the evidence,” the authors write.
In response to the open-ended question, “What characteristics do you [typically] consider when evaluating the trustworthiness of a scientific finding?” some journalists wrote they checked whether the study was overstating conclusions or claims. Below are some of their written responses:
- “Is the researcher adamant that this study of 40 college kids is representative? If so, that’s a red flag.”
- “Whether authors make sweeping generalizations based on the study or take a more measured approach to sharing and promoting it.”
- “Another major point for me is how ‘certain’ the scientists appear to be when commenting on their findings. If a researcher makes claims which I consider to be over-the-top about the validity or impact of their findings, I often won’t cover.”
- “I also look at the difference between what an experiment actually shows versus the conclusion researchers draw from it — if there’s a big gap, that’s a huge red flag.”
Peters says the study’s findings show that “not only are journalists smart, but they have also gone out of their way to get educated about things that should matter.”
What other research shows about science journalists
A 2023 study, published in the International Journal of Communication, based on an online survey of 82 U.S. science journalists, aims to understand what they know and think about open-access research, including peer-reviewed journals and articles that don’t have a paywall, and preprints. Data was collected between October 2021 and February 2022. Preprints are scientific studies that have yet to be peer-reviewed and are shared on open repositories such as medRxiv and bioRxiv. The study finds that its respondents “are aware of OA and related issues and make conscious decisions around which OA scholarly articles they use as sources.”
A 2021 study, published in the Journal of Science Communication, looks at the impact of the COVID-19 pandemic on the work of science journalists. Based on an online survey of 633 science journalists from 77 countries, it finds that the pandemic somewhat brought scientists and science journalists closer together. “For most respondents, scientists were more available and more talkative,” the authors write. The pandemic has also provided an opportunity to explain the scientific process to the public, and remind them that “science is not a finished enterprise,” the authors write.
More than a decade ago, a 2008 study, published in PLOS Medicine, and based on an analysis of 500 health news stories, found that “journalists usually fail to discuss costs, the quality of the evidence, the existence of alternative options, and the absolute magnitude of potential benefits and harms,” when reporting on research studies. Giving time to journalists to research and understand the studies, giving them space for publication and broadcasting of the stories, and training them in understanding academic research are some of the solutions to fill the gaps, writes Gary Schwitzer, the study author.
Advice for journalists
We asked Bottesini, Peters, Aschwanden and Tamar Wilner, a postdoctoral fellow at the University of Texas, who was not involved in the study, to share advice for journalists who cover research studies. Wilner is conducting a study on how journalism research informs the practice of journalism. Here are their tips:
1. Examine the study before reporting it.
Does the study claim match the evidence? “One thing that makes me trust the paper more is if their interpretation of the findings is very calibrated to the kind of evidence that they have,” says Bottesini. In other words, if the study makes a claim in its results that’s far-fetched, the authors should present a lot of evidence to back that claim.
Not all surprising results are newsworthy. If you come across a surprising finding from a single study, Peters advises you to step back and remember Carl Sagan’s quote: “Extraordinary claims require extraordinary evidence.”
How transparent are the authors about their data? For instance, are the authors posting information such as their data and the computer codes they use to analyze the data on platforms such as Open Science Framework, AsPredicted, or The Dataverse Project? Some researchers ‘preregister’ their studies, which means they share how they’re planning to analyze the data before they see them. “Transparency doesn’t automatically mean that a study is trustworthy,” but it gives others the chance to double-check the findings, Bottesini says.
Look at the study design. Is it an experimental study or an observational study? Observational studies can show correlations but not causation.
“Observational studies can be very important for suggesting hypotheses and pointing us towards relationships and associations,” Aschwanden says.
Experimental studies can provide stronger evidence toward a cause, but journalists must still be cautious when reporting the results, she advises. “If we end up implying causality, then once it’s published and people see it, it can really take hold,” she says.
Know the difference between preprints and peer-reviewed, published studies. Peer-reviewed papers tend to be of higher quality than those that are not peer-reviewed. Read our tip sheet on the difference between preprints and journal articles.
Beware of predatory journals. Predatory journals are journals that “claim to be legitimate scholarly journals, but misrepresent their publishing practices,” according to a 2020 journal article, published in the journal Toxicologic Pathology, “Predatory Journals: What They Are and How to Avoid Them.”
2. Zoom in on data.
Read the methods section of the study. The methods section of the study usually appears after the introduction and background section. “To me, the methods section is almost the most important part of any scientific paper,” says Aschwanden. “It’s amazing to me how often you read the design and the methods section, and anyone can see that it’s a flawed design. So just giving things a gut-level check can be really important.”
What’s the sample size? Not all good studies have large numbers of participants but pay attention to the claims a study makes with a small sample size. “If you have a small sample, you calibrate your claims to the things you can tell about those people and don’t make big claims based on a little bit of evidence,” says Bottesini.
But also remember that factors such as sample size and p-value are not “as clear cut as some journalists might assume,” says Wilner.
How representative of a population is the study sample? “If the study has a non-representative sample of, say, undergraduate students, and they’re making claims about the general population, that’s kind of a red flag,” says Bottesini. Aschwanden points to the acronym WEIRD, which stands for “Western, Educated, Industrialized, Rich, and Democratic,” and is used to highlight a lack of diversity in a sample. Studies based on such samples may not be generalizable to the entire population, she says.
Look at the p-value. Statistical significance is both confusing and controversial, but it’s important to consider. Read our tip sheet, “5 Things Journalists Need to Know About Statistical Significance,” to better understand it.
3. Talk to scientists not involved in the study.
If you’re not sure about the quality of a study, ask for help. “Talk to someone who is an expert in study design or statistics to make sure that [the study authors] use the appropriate statistics and that methods they use are appropriate because it’s amazing to me how often they’re not,” says Aschwanden.
Get an opinion from an outside expert. It’s always a good idea to present the study to other researchers in the field, who have no conflicts of interest and are not involved in the research you’re covering and get their opinion. “Don’t take scientists at their word. Look into it. Ask other scientists, preferably the ones who don’t have a conflict of interest with the research,” says Bottesini.
4. Remember that a single study is simply one piece of a growing body of evidence.
“I have a general rule that a single study doesn’t tell us very much; it just gives us proof of concept,” says Peters. “It gives us interesting ideas. It should be retested. We need an accumulation of evidence.”
Aschwanden says as a practice, she tries to avoid reporting stories about individual studies, with some exceptions such as very large, randomized controlled studies that have been underway for a long time and have a large number of participants. “I don’t want to say you never want to write a single-study story, but it always needs to be placed in the context of the rest of the evidence that we have available,” she says.
Wilner advises journalists to spend some time looking at the scope of research on the study’s specific topic and learn how it has been written about and studied up to that point.
“We would want science journalists to be reporting balance of evidence, and not focusing unduly on the findings that are just in front of them in a most recent study,” Wilner says. “And that’s a very difficult thing to ask journalists to do because they’re being asked to make their article very newsy, so it’s a difficult balancing act, but we can try and push journalists to do more of that.”
5. Remind readers that science is always changing.
“Science is always two steps forward, one step back,” says Peters. Give the public a notion of uncertainty, she advises. “This is what we know today. It may change tomorrow, but this is the best science that we know of today.”
Aschwanden echoes the sentiment. “All scientific results are provisional, and we need to keep that in mind,” she says. “It doesn’t mean that we can’t know anything, but it’s very important that we don’t overstate things.”
Authors of a study published in PNAS in January analyzed more than 14,000 psychology papers and found that replication success rates differ widely by psychology subfields. That study also found that papers that could not be replicated received more initial press coverage than those that could.
The authors note that the media “plays a significant role in creating the public’s image of science and democratizing knowledge, but it is often incentivized to report on counterintuitive and eye-catching results.”
Ideally, the news media would have a positive relationship with replication success rates in psychology, the authors of the PNAS study write. “Contrary to this ideal, however, we found a negative association between media coverage of a paper and the paper’s likelihood of replication success,” they write. “Therefore, deciding a paper’s merit based on its media coverage is unwise. It would be valuable for the media to remind the audience that new and novel scientific results are only food for thought before future replication confirms their robustness.”
Additional reading
- “Uncovering the Research Behaviors of Reporters: A Conceptual Framework for Information Literacy in Journalism,” by Katerine E. Boss, et al. Journalism & Mass Communication Educator, October 2022.
- “The Problem with Psychological Research in the Media,” by Steven Stosny. Psychology Today, September 2022.
- “Critically Evaluating Claims,” by Megha Satyanarayana. The Open Notebook, January 2022.
- “How Should Journalists Report a Scientific Study?” by Charles Binkley and Subramaniam Vincent. Markkula Center for Applied Ethics at Santa Clara University, September 2020.
- “What Journalists Get Wrong About Social Science: Full Responses,” by Brian Resnick. Vox, January 2016.
From The Journalist’s Resource:
8 Ways Journalists Can Access Academic Research for Free
5 Things Journalists Need to Know About Statistical Significance
5 Common Research Designs: A Quick Primer for Journalists
5 Tips for Using PubPeer to Investigate Scientific Research Errors and Misconduct
What’s Standard Deviation? 4 Things Journalists Need to Know
This article first appeared on The Journalist’s Resource and is republished here under a Creative Commons license.
The Brain’s Way of Healing – by Dr. Norman Doidge
10almonds is reader-supported. We may, at no cost to you, receive a portion of sales if you purchase a product through a link in this article.
First, what this book isn’t: any sort of wishy-washy “think yourself better” fluff, and nor is it a “tapping into your Universal Divine Essence” thing.
In contrast, Dr. Norman Doidge sticks with science, and the only “vibrational frequencies” involved are the sort that come from an MRI machine or similar.
The author makes bold claims of the potential for leveraging neuroplasticity to heal many chronic diseases. All of them are neurological in whole or in part, ranging from chronic pain to Parkinson’s.
How well are these claims backed up, you ask?
The book makes heavy use of case studies. In science, case studies rarely prove anything, so much as indicate a potential proof of principle. Clinical trials are what’s needed to become more certain, and for Dr. Doidge’s claims, these are so far sadly lacking, or as yet inconclusive.
Where the book’s strengths lie is in describing exactly what is done, and how, to effect each recovery. Specific exercises to do, and explanations of the mechanism of action. To that end, it makes them very repeatable for any would-be “citizen scientist” who wishes to try (in the cases that they don’t require special equipment).
Bottom line: this book would be more reassuring if its putative techniques had enjoyed more clinical studies… But in the meantime, it’s a fair collection of promising therapeutic approaches for a number of neurological disorders.
Click here to check out The Brain’s Way of Healing, and learn more!
Life Extension Multivitamins vs Centrum Multivitamins – Which is Healthier?
Our Verdict
When comparing Life Extension Multivitamins to Centrum Multivitamins, we picked the Life Extension.
Why?
The clue here was on the label: “two per day”. It’s not so that they can sell extra filler! It’s because they couldn’t fit it all into one.
While the Centrum Multivitamins is a (respectably) run-of-the-mill multivitamin (and multimineral) containing reasonable quantities of most vitamins and minerals that people supplement, the Life Extension product has the same plus more:
- More of the vitamins and minerals; i.e. more of them are hitting 100%+ of the RDA
- More beneficial supplements, including:
- Inositol, alpha-lipoic acid, Bio-Quercetin phytosome, phosphatidylcholine complex, marigold extract, apigenin, lycopene, and more that we won’t list here, because it starts to get complicated if we do.
We’ll have to write some main features on some of those that we haven’t written about before, but suffice it to say, they’re all good things.
Main take-away for today: sometimes more is better; it just means you then need to read the label to check.
Want to get some Life Extension Multivitamins (and/or perhaps just read the label on the back)? Here they are on Amazon
PS: it bears mentioning, since we are sometimes running brands against each other head-to-head in this section: nothing you see here is an advertisement/sponsor unless it’s clearly marked as such. We haven’t, for example, been paid by Life Extension or any agent of theirs, to write the above. It’s just our own research and conclusion.
Triphala Against Cognitive Decline, Obesity, & More
Triphala is not just one thing, it is a combination of three plants being used together as one medicine:
- Amla (Emblica officinalis)
- Bibhitaki (Terminalia bellirica)
- Haritaki (Terminalia chebula)
…generally prepared in a 1:1:1 ratio.
This is a traditional preparation from ayurveda, and has enjoyed thousands of years of use in India. In and of itself, ayurveda is classified as a pseudoscience (literally: it doesn’t adhere to scientific method; instead, it merely makes suppositions that seem reasonable and acts on them), but that doesn’t mean it doesn’t still have a lot to offer—because, simply put, a lot of ayurvedic medicines work (and a lot don’t).
So, ayurveda’s unintended job has often been finding things for modern science to test.
For more on ayurveda: Ayurveda’s Contributions To Science (Without Being Itself Rooted in Scientific Method)
So, under the scrutiny of modern science, how does triphala stand up?
Against cognitive decline
It has most recently come to attention because one of its ingredients, the T. chebula, has been highlighted as effective against mild cognitive impairment (MCI) by several mechanisms of action, via its…
❝171 chemical constituents and 11 active constituents targeting MCI, such as flavonoids, which can alleviate MCI, primarily through its antioxidative, anti-inflammatory, and neuroprotective properties. T. Chebula shows potential as a natural medicine for the treatment and prevention of MCI.❞
Read in full: The potential of Terminalia chebula in alleviating mild cognitive impairment: a review
The review was quite groundbreaking, to the extent that it got a pop-science article written about it:
We’d like to talk about those 11 active constituents in particular, but we don’t have room for all of them, so we’ll mention that one of them is quercetin, which we’ve written about before:
Fight Inflammation & Protect Your Brain, With Quercetin
For gut health
It’s also been found to improve gut health by increasing transit time, that is to say, how slowly things move through your gut. Counterintuitively, this reduces constipation (without being a laxative), by giving your gut more time to absorb everything it needs to, and more time for your gut bacteria to break down the things we can’t otherwise digest:
For weight management
Triphala can also aid with weight reduction, particularly in the belly area, by modulating our insulin responses to improve insulin sensitivity:
Want to try some?
We don’t sell it, but here for your convenience is an example product on Amazon 😎
Enjoy!
Elon Musk says ketamine can get you out of a ‘negative frame of mind’. What does the research say?
X owner Elon Musk recently described using small amounts of ketamine “once every other week” to manage the “chemical tides” that cause his depression. He says it’s helpful to get out of a “negative frame of mind”.
This has caused a range of reactions in the media, including on X (formerly Twitter), from strong support for Musk’s choice of treatment, to allegations he has a drug problem.
But what exactly is ketamine? And what is its role in the treatment of depression?
It was first used as an anaesthetic
Ketamine is a dissociative anaesthetic used in surgery and to relieve pain.
At certain doses, people are awake but are disconnected from their bodies. This makes it useful for paramedics, for example, who can continue to talk to injured patients while the drug blocks pain but without affecting the person’s breathing or blood flow.
Ketamine is also used to sedate animals in veterinary practice.
Ketamine is a mixture of two molecules, usually referred to as S-ketamine and R-ketamine.
S-Ketamine, or esketamine, is stronger than R-Ketamine and was approved in 2019 in the United States under the drug name Spravato for serious and long-term depression that has not responded to at least two other types of treatments.
Ketamine is thought to change chemicals in the brain that affect mood.
While the exact way ketamine works on the brain is not known, scientists think it changes the amount of the neurotransmitter glutamate and therefore changes symptoms of depression.
How was it developed?
Ketamine was first synthesised by chemists at the Parke Davis pharmaceutical company in Michigan in the United States as an anaesthetic. It was tested on a group of prisoners at Jackson Prison in Michigan in 1964 and found to be fast acting with few side effects.
The US Food and Drug Administration approved ketamine as a general anaesthetic in 1970. It is now on the World Health Organization’s core list of essential medicines for health systems worldwide as an anaesthetic drug.
In 1994, following patient reports of improved depression symptoms after surgery where ketamine was used as the anaesthetic, researchers began studying the effects of low doses of ketamine on depression.
The first clinical trial results were published in 2000. In the trial, seven people were given either intravenous ketamine or a salt solution over two days. Like the earlier case studies, ketamine was found to reduce symptoms of depression quickly, often within hours and the effects lasted up to seven days.
Over the past 20 years, researchers have studied the effects of ketamine on treatment-resistant depression, bipolar disorder, post-traumatic stress disorder, obsessive-compulsive disorder, and eating disorders, and for reducing substance use, with generally positive results.
One study in a community clinic providing ketamine intravenous therapy for depression and anxiety found the majority of patients reported improved depression symptoms eight weeks after starting regular treatment.
While this might sound like a lot of research, it’s not. A recent review of randomised controlled trials conducted up to April 2023 looking at the effects of ketamine for treating depression found only 49 studies involving a total of 3,299 patients worldwide. In comparison, in 2021 alone, there were 1,489 studies being conducted on cancer drugs.
Is ketamine prescribed in Australia?
Even though the research results on ketamine’s effectiveness are encouraging, scientists still don’t really know how it works. That’s why it’s not readily available from GPs in Australia as a standard depression treatment. Instead, ketamine is mostly used in specialised clinics and research centres.
However, the clinical use of ketamine is increasing. Spravato nasal spray was approved by the Australian Therapeutic Goods Administration (TGA) in 2021. It must be administered under the direct supervision of a health-care professional, usually a psychiatrist.
Spravato dosage and frequency varies for each person. People usually start with three to six doses over several weeks to see how it works, moving to fortnightly treatment as a maintenance dose. The nasal spray costs between A$600 and $900 per dose, which will significantly limit many people’s access to the drug.
Ketamine can be prescribed “off-label” by GPs in Australia who can prescribe schedule 8 drugs. This means it is up to the GP to assess the person and their medication needs. But experts in the drug recommend caution because of the lack of research into negative side-effects and longer-term effects.
What about its illicit use?
Concern about use and misuse of ketamine is heightened by highly publicised deaths connected to the drug.
Ketamine has been used as a recreational drug since the 1970s. People report it makes them feel euphoric, trance-like, floating and dreamy. However, the amounts used recreationally are typically higher than those used to treat depression.
Information about deaths due to ketamine is limited. Those that are reported are due to accidents or ketamine combined with other drugs. No deaths have been reported in treatment settings.
Reducing stigma
Depression is the third leading cause of disability worldwide and effective treatments are needed.
Seeking medical advice about treatment for depression is wiser than taking Musk’s advice on which drugs to use.
However, Musk’s public discussion of his mental health challenges and experiences of treatment has the potential to reduce stigma around depression and help-seeking for mental health conditions.
Clarification: this article previously referred to a systematic review looking at oral ketamine to treat depression. The article has been updated to instead cite a review that encompasses other routes of administration as well, such as intravenous and intranasal ketamine.
Julaine Allan, Associate Professor, Mental Health and Addiction, Rural Health Research Institute, Charles Sturt University
This article is republished from The Conversation under a Creative Commons license. Read the original article.