Five Advance Warnings of Multiple Sclerosis

10almonds is reader-supported. We may, at no cost to you, receive a portion of sales if you purchase a product through a link in this article.

First things first, a quick check-in with regard to how much you know about multiple sclerosis (MS):

  • Do you know what causes it?
  • Do you know how it happens?
  • Do you know how it can be fixed?

If your answer to the above questions is “no”, then take solace in the fact that modern science doesn’t know either.

What we do know is that it’s an autoimmune condition, and that it results in the degradation of myelin, the “insulator” of nerves, in the central nervous system.

  • How exactly this is brought about remains unclear, though there are several leading hypotheses including autoimmune attack of myelin itself, or disruption to the production of myelin.
  • Treatments look to reduce/mitigate inflammation, and/or treat other symptoms (which are many and various) on an as-needed basis.

If you’re wondering about the prognosis after diagnosis, the scientific consensus on that is also “we don’t know”:

Read: Personalized medicine in multiple sclerosis: hope or reality?

This paper, like every other one we considered putting in that spot, concludes by essentially begging for research to identify biomarkers that could usefully distinguish the many distinct forms of MS, rather than the current “you have MS, but who knows what that will mean for you personally, because it’s so varied” approach.

The Five Advance Warning Signs

Something we do know! First, we’ll quote directly the researchers’ conclusion:

❝We identified 5 health conditions associated with subsequent MS diagnosis, which may be considered not only prodromal but also early-stage symptoms.

However, these health conditions overlap with prodrome of two other autoimmune diseases, hence they lack specificity to MS.❞

So, these things are a warning, five alarm bells, but not necessarily diagnostic criteria.

Without further ado, the five things are:

  1. depression
  2. sexual disorders
  3. constipation
  4. cystitis
  5. urinary tract infections

❝This association was sufficiently robust at the statistical level for us to state that these are early clinical warning signs, probably related to damage to the nervous system, in patients who will later be diagnosed with multiple sclerosis.

The overrepresentation of these symptoms persisted and even increased over the five years after diagnosis.❞

~ Dr. Céline Louapre

Read the paper for yourself:

Association Between Diseases and Symptoms Diagnosed in Primary Care and the Subsequent Specific Risk of Multiple Sclerosis

Hot off the press! Published only yesterday!

Want to know more about MS?

Here’s a very comprehensive guide:

National clinical guideline for diagnosis and management of multiple sclerosis

Take care!

Don’t Forget…

Did you arrive here from our newsletter? Don’t forget to return to the email to continue learning!

Recommended

  • Prostate Health: What You Should Know
  • Scarcity Brain – by Michael Easter
    Learn why we crave more and how to break free from the scarcity loop. Discover how to use what you have and be less stressed about needing more.

Learn to Age Gracefully

Join the 98k+ American women taking control of their health & aging with our 100% free (and fun!) daily emails:

  • How do science journalists decide whether a psychology study is worth covering?

    Complex research papers and data flood academic journals daily, and science journalists play a pivotal role in disseminating that information to the public. This can be a daunting task, requiring a keen understanding of the subject matter and the ability to translate dense academic language into narratives that resonate with the general public.

    Several resources and tip sheets, including the Know Your Research section here at The Journalist’s Resource, aim to help journalists hone their skills in reporting on academic research.

    But what factors do science journalists look for to decide whether a social science research study is trustworthy and newsworthy? That’s the question researchers at the University of California, Davis, and the University of Melbourne in Australia examine in a recent study, “How Do Science Journalists Evaluate Psychology Research?” published in September in Advances in Methods and Practices in Psychological Science.

    Their online survey of 181 mostly U.S.-based science journalists looked at how and whether they were influenced by four factors in fictitious research summaries: the sample size (number of participants in the study), sample representativeness (whether the participants in the study were from a convenience sample or a more representative sample), the statistical significance level of the result (just barely statistically significant or well below the significance threshold), and the prestige of a researcher’s university.

    The researchers found that sample size was the only factor that had a robust influence on journalists’ ratings of how trustworthy and newsworthy a study finding was.

    University prestige had no effect, while the effects of sample representativeness and statistical significance were inconclusive.

    But there’s nuance to the findings, the authors note.

    “I don’t want people to think that science journalists aren’t paying attention to other things, and are only paying attention to sample size,” says Julia Bottesini, an independent researcher, a recent Ph.D. graduate from the Psychology Department at UC Davis, and the first author of the study.

    Overall, the results show that “these journalists are doing a very decent job” vetting research findings, Bottesini says.

    Also, the findings from the study are not generalizable to all science journalists or other fields of research, the authors note.

    “Instead, our conclusions should be circumscribed to U.S.-based science journalists who are at least somewhat familiar with the statistical and replication challenges facing science,” they write. (Over the past decade a series of projects have found that the results of many studies in psychology and other fields can’t be reproduced, leading to what has been called a ‘replication crisis.’)

    “This [study] is just one tiny brick in the wall and I hope other people get excited about this topic and do more research on it,” Bottesini says.

    More on the study’s findings

    The study’s findings can be useful for researchers who want to better understand how science journalists read their research and what kind of intervention — such as teaching journalists about statistics — can help journalists better understand research papers.

    “As an academic, I take away the idea that journalists are a great population to try to study because they’re doing something really important and it’s important to know more about what they’re doing,” says Ellen Peters, director of the Center for Science Communication Research at the School of Journalism and Communication at the University of Oregon. Peters, who was not involved in the study, is also a psychologist who studies human judgment and decision-making.

    Peters says the study was “overall terrific.” She adds that understanding how journalists do their work “is an incredibly important thing to do because journalists are who reach the majority of the U.S. with science news, so understanding how they’re reading some of our scientific studies and then choosing whether to write about them or not is important.”

    The study, conducted between December 2020 and March 2021, is based on an online survey of journalists who said they at least sometimes covered science or other topics related to health, medicine, psychology, social sciences, or well-being. They were offered a $25 Amazon gift card as compensation.

    Among the participants, 77% were women, 19% were men, 3% were nonbinary and 1% preferred not to say. About 62% said they had studied physical or natural sciences at the undergraduate level, and 24% at the graduate level. Also, 48% reported having a journalism degree. The study did not include the journalists’ news reporting experience level.

    Participants were recruited through the professional network of Christie Aschwanden, an independent journalist and consultant on the study, which could be a source of bias, the authors note.

    “Although the size of the sample we obtained (N = 181) suggests we were able to collect a range of perspectives, we suspect this sample is biased by an ‘Aschwanden effect’: that science journalists in the same professional network as C. Aschwanden will be more familiar with issues related to the replication crisis in psychology and subsequent methodological reform, a topic C. Aschwanden has covered extensively in her work,” they write.

    Participants were randomly presented with eight of 22 one-paragraph fictitious social and personality psychology research summaries with fictitious authors. The summaries are posted on Open Science Framework, a free and open-source project management tool for researchers by the Center for Open Science, with a mission to increase openness, integrity and reproducibility of research.

    For instance, one of the vignettes reads:

    “Scientists at Harvard University announced today the results of a study exploring whether introspection can improve cooperation. 550 undergraduates at the university were randomly assigned to either do a breathing exercise or reflect on a series of questions designed to promote introspective thoughts for 5 minutes. Participants then engaged in a cooperative decision-making game, where cooperation resulted in better outcomes. People who spent time on introspection performed significantly better at these cooperative games (t (548) = 3.21, p = 0.001). ‘Introspection seems to promote better cooperation between people,’ says Dr. Quinn, the lead author on the paper.”

    In addition to answering multiple-choice survey questions, participants were given the opportunity to answer open-ended questions, such as “What characteristics do you [typically] consider when evaluating the trustworthiness of a scientific finding?”

    Bottesini says those responses illuminated how science journalists analyze a research study. Participants often mentioned the prestige of the journal in which it was published or whether the study had been peer-reviewed. Many also seemed to value experimental research designs over observational studies.

    Considering statistical significance

    When it came to considering p-values, “some answers suggested that journalists do take statistical significance into account, but only very few included explanations that suggested they made any distinction between higher or lower p values; instead, most mentions of p values suggest journalists focused on whether the key result was statistically significant,” the authors write.

    Also, many participants mentioned that it was very important to talk to outside experts or researchers in the same field to get a better understanding of the finding and whether it could be trusted, the authors write.

    “Journalists also expressed that it was important to understand who funded the study and whether the researchers or funders had any conflicts of interest,” they write.

    Participants also “indicated that making claims that were calibrated to the evidence was also important and expressed misgivings about studies for which the conclusions do not follow from the evidence,” the authors write.

    In response to the open-ended question, “What characteristics do you [typically] consider when evaluating the trustworthiness of a scientific finding?” some journalists wrote they checked whether the study was overstating conclusions or claims. Below are some of their written responses:

    • “Is the researcher adamant that this study of 40 college kids is representative? If so, that’s a red flag.”
    • “Whether authors make sweeping generalizations based on the study or take a more measured approach to sharing and promoting it.”
    • “Another major point for me is how ‘certain’ the scientists appear to be when commenting on their findings. If a researcher makes claims which I consider to be over-the-top about the validity or impact of their findings, I often won’t cover.”
    • “I also look at the difference between what an experiment actually shows versus the conclusion researchers draw from it — if there’s a big gap, that’s a huge red flag.”

    Peters says the study’s findings show that “not only are journalists smart, but they have also gone out of their way to get educated about things that should matter.”

    What other research shows about science journalists

    A 2023 study, published in the International Journal of Communication, based on an online survey of 82 U.S. science journalists, aims to understand what they know and think about open-access research, including peer-reviewed journals and articles that don’t have a paywall, and preprints. Data was collected between October 2021 and February 2022. Preprints are scientific studies that have yet to be peer-reviewed and are shared on open repositories such as medRxiv and bioRxiv. The study finds that its respondents “are aware of OA and related issues and make conscious decisions around which OA scholarly articles they use as sources.”

    A 2021 study, published in the Journal of Science Communication, looks at the impact of the COVID-19 pandemic on the work of science journalists. Based on an online survey of 633 science journalists from 77 countries, it finds that the pandemic somewhat brought scientists and science journalists closer together. “For most respondents, scientists were more available and more talkative,” the authors write. The pandemic has also provided an opportunity to explain the scientific process to the public, and remind them that “science is not a finished enterprise,” the authors write.

    More than a decade ago, a 2008 study, published in PLOS Medicine, and based on an analysis of 500 health news stories, found that “journalists usually fail to discuss costs, the quality of the evidence, the existence of alternative options, and the absolute magnitude of potential benefits and harms,” when reporting on research studies. Giving time to journalists to research and understand the studies, giving them space for publication and broadcasting of the stories, and training them in understanding academic research are some of the solutions to fill the gaps, writes Gary Schwitzer, the study author.

    Advice for journalists

    We asked Bottesini, Peters, Aschwanden and Tamar Wilner, a postdoctoral fellow at the University of Texas, who was not involved in the study, to share advice for journalists who cover research studies. Wilner is conducting a study on how journalism research informs the practice of journalism. Here are their tips:

    1. Examine the study before reporting it.

    Does the study claim match the evidence? “One thing that makes me trust the paper more is if their interpretation of the findings is very calibrated to the kind of evidence that they have,” says Bottesini. In other words, if the study makes a claim in its results that’s far-fetched, the authors should present a lot of evidence to back that claim.

    Not all surprising results are newsworthy. If you come across a surprising finding from a single study, Peters advises you to step back and remember Carl Sagan’s quote: “Extraordinary claims require extraordinary evidence.”

    How transparent are the authors about their data? For instance, are the authors posting information such as their data and the computer codes they use to analyze the data on platforms such as Open Science Framework, AsPredicted, or The Dataverse Project? Some researchers ‘preregister’ their studies, which means they share how they’re planning to analyze the data before they see them. “Transparency doesn’t automatically mean that a study is trustworthy,” but it gives others the chance to double-check the findings, Bottesini says.

    Look at the study design. Is it an experimental study or an observational study? Observational studies can show correlations but not causation.

    “Observational studies can be very important for suggesting hypotheses and pointing us towards relationships and associations,” Aschwanden says.

    Experimental studies can provide stronger evidence toward a cause, but journalists must still be cautious when reporting the results, she advises. “If we end up implying causality, then once it’s published and people see it, it can really take hold,” she says.

    Know the difference between preprints and peer-reviewed, published studies. Peer-reviewed papers tend to be of higher quality than those that are not peer-reviewed. Read our tip sheet on the difference between preprints and journal articles.

    Beware of predatory journals. Predatory journals are journals that “claim to be legitimate scholarly journals, but misrepresent their publishing practices,” according to a 2020 article published in the journal Toxicologic Pathology, “Predatory Journals: What They Are and How to Avoid Them.”

    2. Zoom in on data.

    Read the methods section of the study. The methods section of the study usually appears after the introduction and background section. “To me, the methods section is almost the most important part of any scientific paper,” says Aschwanden. “It’s amazing to me how often you read the design and the methods section, and anyone can see that it’s a flawed design. So just giving things a gut-level check can be really important.”

    What’s the sample size? Not all good studies have large numbers of participants but pay attention to the claims a study makes with a small sample size. “If you have a small sample, you calibrate your claims to the things you can tell about those people and don’t make big claims based on a little bit of evidence,” says Bottesini.
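One way to see why small samples warrant caution: the margin of error of a sample mean shrinks only with the square root of the sample size. A minimal sketch of that arithmetic (our own illustration, with an assumed standard deviation of 10, not figures from the study):

```python
import math

def margin_of_error(sd, n, z=1.96):
    # Approximate 95% margin of error for a sample mean: z * sd / sqrt(n).
    # Because of the square root, quadrupling the sample size
    # only halves the margin of error.
    return z * sd / math.sqrt(n)

# Assumed standard deviation of 10, across three sample sizes:
for n in (40, 550, 5000):
    print(n, round(margin_of_error(10, n), 2))  # 40 -> 3.1, 550 -> 0.84, 5000 -> 0.28
```

In other words, a claim drawn from 40 participants comes with a margin of error more than three times wider than the same claim drawn from 550.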

    But also remember that factors such as sample size and p-value are not “as clear cut as some journalists might assume,” says Wilner.

    How representative of a population is the study sample? “If the study has a non-representative sample of, say, undergraduate students, and they’re making claims about the general population, that’s kind of a red flag,” says Bottesini. Aschwanden points to the acronym WEIRD, which stands for “Western, Educated, Industrialized, Rich, and Democratic,” and is used to highlight a lack of diversity in a sample. Studies based on such samples may not be generalizable to the entire population, she says.

    Look at the p-value. Statistical significance is both confusing and controversial, but it’s important to consider. Read our tip sheet, “5 Things Journalists Need to Know About Statistical Significance,” to better understand it.
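For large samples, a reported p-value can be loosely sanity-checked from the t-statistic alone, using a normal approximation. A rough sketch (our own illustration, applied here to the fictitious vignette’s t(548) = 3.21, not a method from the study itself):

```python
import math

def two_sided_p_from_t(t, df):
    # For large df (here 548), the t-distribution is close to the
    # standard normal, so we use Phi(x) = 0.5 * (1 + erf(x / sqrt(2))).
    # (df is noted but not used; the approximation ignores it.)
    phi = 0.5 * (1 + math.erf(abs(t) / math.sqrt(2)))
    return 2 * (1 - phi)

p = two_sided_p_from_t(3.21, 548)
print(round(p, 4))  # roughly 0.0013, consistent with the reported p = 0.001
```

A check like this won’t catch subtle problems, but it can flag a p-value that is wildly inconsistent with the reported test statistic.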

    3. Talk to scientists not involved in the study.

    If you’re not sure about the quality of a study, ask for help. “Talk to someone who is an expert in study design or statistics to make sure that [the study authors] use the appropriate statistics and that methods they use are appropriate because it’s amazing to me how often they’re not,” says Aschwanden.

    Get an opinion from an outside expert. It’s always a good idea to present the study to other researchers in the field, who have no conflicts of interest and are not involved in the research you’re covering and get their opinion. “Don’t take scientists at their word. Look into it. Ask other scientists, preferably the ones who don’t have a conflict of interest with the research,” says Bottesini.

    4. Remember that a single study is simply one piece of a growing body of evidence.

    “I have a general rule that a single study doesn’t tell us very much; it just gives us proof of concept,” says Peters. “It gives us interesting ideas. It should be retested. We need an accumulation of evidence.”

    Aschwanden says as a practice, she tries to avoid reporting stories about individual studies, with some exceptions such as very large, randomized controlled studies that have been underway for a long time and have a large number of participants. “I don’t want to say you never want to write a single-study story, but it always needs to be placed in the context of the rest of the evidence that we have available,” she says.

    Wilner advises journalists to spend some time looking at the scope of research on the study’s specific topic and learn how it has been written about and studied up to that point.

    “We would want science journalists to be reporting balance of evidence, and not focusing unduly on the findings that are just in front of them in a most recent study,” Wilner says. “And that’s a very difficult thing to ask journalists to do, because they’re being asked to make their article very newsy, so it’s a difficult balancing act, but we can try and push journalists to do more of that.”

    5. Remind readers that science is always changing.

    “Science is always two steps forward, one step back,” says Peters. Give the public a notion of uncertainty, she advises. “This is what we know today. It may change tomorrow, but this is the best science that we know of today.”

    Aschwanden echoes the sentiment. “All scientific results are provisional, and we need to keep that in mind,” she says. “It doesn’t mean that we can’t know anything, but it’s very important that we don’t overstate things.”

    Authors of a study published in PNAS in January analyzed more than 14,000 psychology papers and found that replication success rates differ widely by psychology subfields. That study also found that papers that could not be replicated received more initial press coverage than those that could. 

    The authors note that the media “plays a significant role in creating the public’s image of science and democratizing knowledge, but it is often incentivized to report on counterintuitive and eye-catching results.”

    Ideally, the news media would have a positive relationship with replication success rates in psychology, the authors of the PNAS study write. “Contrary to this ideal, however, we found a negative association between media coverage of a paper and the paper’s likelihood of replication success,” they write. “Therefore, deciding a paper’s merit based on its media coverage is unwise. It would be valuable for the media to remind the audience that new and novel scientific results are only food for thought before future replication confirms their robustness.”

    Additional reading

    Uncovering the Research Behaviors of Reporters: A Conceptual Framework for Information Literacy in Journalism
    Katherine E. Boss, et al. Journalism & Mass Communication Educator, October 2022.

    The Problem with Psychological Research in the Media
    Steven Stosny. Psychology Today, September 2022.

    Critically Evaluating Claims
    Megha Satyanarayana, The Open Notebook, January 2022.

    How Should Journalists Report a Scientific Study?
    Charles Binkley and Subramaniam Vincent. Markkula Center for Applied Ethics at Santa Clara University, September 2020.

    What Journalists Get Wrong About Social Science: Full Responses
    Brian Resnick. Vox, January 2016.

    From The Journalist’s Resource

    8 Ways Journalists Can Access Academic Research for Free

    5 Things Journalists Need to Know About Statistical Significance

    5 Common Research Designs: A Quick Primer for Journalists

    5 Tips for Using PubPeer to Investigate Scientific Research Errors and Misconduct

    Percent Change versus Percentage-Point Change: What’s the Difference? 4 Tips for Avoiding Math Errors

    What’s Standard Deviation? 4 Things Journalists Need to Know

    This article first appeared on The Journalist’s Resource and is republished here under a Creative Commons license.

  • Eat More, Live Well – by Dr. Megan Rossi

    Often, eating healthily can feel restrictive. Don’t eat this, skip that, eliminate the other. Where is the joy?

    Dr. Megan Rossi brings a scientific angle on positive dieting, that is to say, looking at what to add, rather than what to subtract. Now, the idea isn’t to have sugar-laden chocolate cake with berries on top and call it a net positive because of the berries, though. Rather, Dr. Rossi lays out how to include as many diverse vegetables and fruits as possible, with tasty recipes so that we’re too busy with those to crave junk food.

    Speaking of recipes, there are 80, and they are easy to follow. She describes them as “plant-based”, and by this what she really means is “plant-centric” or such; she does include the use of some animal products.

    This is important to note, because the general convention is to use “plant-based” to mean functionally vegan, the term being about the food rather than the ideology (a relevant distinction in both society and science). In the case of this book, it’s neither, but it is very healthy.

    Bottom line: if you’d like to introduce more healthy diversity to your diet, rather than eating the same three fruits and five vegetables, but you’re not sure how, this book will get you where you need to be.

    Click here to check out Eat More, Live Well, and diversify your diet!

  • Vaping: A Lot Of Hot Air?

    Yesterday, we asked you for your (health-related) opinions on vaping, and got the set of responses described below:

    • A little over a third of respondents said it’s actually more dangerous than smoking
    • A little under a third of respondents said it’s neither better nor worse, just different
    • A little over 10% of respondents said it’s marginally less harmful, but still very bad
    • A little over 10% of respondents said it’s a much healthier alternative to smoking

    So what does the science say?

    Vaping is basically just steam inhalation, plus the active ingredient of your choice (e.g. nicotine, CBD, THC, etc): True or False?

    False! There really are a lot of other chemicals in there.

    And “chemicals” per se does not necessarily mean evil green glowing substances that a comic-book villain would market, but there are some unpleasantries in there too.

    The substrate itself can cause irritation, and flavorings (with cinnamaldehyde, the cinnamon flavoring, being one of the worst) can really mess with our body’s inflammatory and oxidative responses.

    Vaping can cause “popcorn lung”: True or False?

    True and False! Popcorn lung is so-called after it came to attention when workers at a popcorn factory came down with it, due to exposure to diacetyl, a chemical used there.

    That chemical was at that time also found in most vapes, but has since been banned in many places, including the US, Canada, the EU and the UK.

    Vaping is just as bad as smoking: True or False?

    False, per se. In fact, it’s recommended as a means of quitting smoking by the UK’s famously thrifty NHS, which absolutely does not want people to be sick, because that costs money:

    NHS | Vaping To Quit Smoking

    Of course, the active ingredients (e.g. nicotine, in the assumed case above) will still be the same, mg for mg, as they are for smoking.

    Vaping is causing a health crisis amongst “kids nowadays”: True or False?

    True! It just happens to be less serious, on a case-by-case basis, than the risks of smoking.

    However, it is worth noting that the perceived harmlessness of vapes is surely a contributing factor in their widespread use amongst young people—decades after actual smoking (thankfully) went out of fashion.

    On the other hand, there’s a flipside to this:

    Flavored vape restrictions lead to higher cigarette sales

    So, it may indeed be the case of “the lesser of two evils”.

    Want to know more?

    For a more in-depth science-ful exploration than we have room for here…

    BMJ | Impact of vaping on respiratory health

    Take care!

Related Posts

  • Prostate Health: What You Should Know
  • Feel Great, Lose Weight – by Dr. Rangan Chatterjee

    We all know that losing weight sustainably tends to be harder than simply losing weight. We know that weight loss needs to come with lifestyle change. But how to get there?

    One of the biggest problems that we might face while trying to lose weight is that our “metabolic thermostat” has got stuck at the wrong place. Trying to move it just makes our bodies think we are starving, and everything gets even worse. We can’t even “mind over matter” our way through it with willpower, because our bodies will do impressive things on a cellular level in an attempt to save us… Things that are as extraordinary as they are extraordinarily unhelpful.

    Dr. Rangan Chatterjee is here to help us cut through that.

    In this book, he covers how our metabolic thermostat got stuck in the wrong place, and how to gently tease it back into a better position.

    Some of the advice won’t be a big surprise: go for a whole-foods diet, avoiding processed food, for example. Probably not a shocker.

    Others are counterintuitive, but he explains how they work—exercising less while moving more, for instance. Sounds crazy, but we assure you there’s a metabolic explanation for it that’s beyond the scope of this review. And there’s plenty more where that came from, too.

    Bottom line: if your weight has been either slowly rising, or else very stable but at a higher point than you’d like, Dr. Chatterjee can help you move the bar back to where you want it—and keep it there.

    Click here to check out “Feel Great, Lose Weight” and reset your metabolic thermostat to its healthiest point!

  • Statistical Models vs. Front-Line Workers: Who Knows Best How to Spend Opioid Settlement Cash?

    MOBILE, Ala. — In this Gulf Coast city, addiction medicine doctor Stephen Loyd announced at a January event what he called “a game-changer” for state and local governments spending billions of dollars in opioid settlement funds.

    The money, which comes from companies accused of aggressively marketing and distributing prescription painkillers, is meant to tackle the addiction crisis.

    But “how do you know that the money you’re spending is going to get you the result that you need?” asked Loyd, who was once hooked on prescription opioids himself and has become a nationally known figure since Michael Keaton played a character partially based on him in the Hulu series “Dopesick.”

    Loyd provided an answer: Use statistical modeling and artificial intelligence to simulate the opioid crisis, predict which programs will save the most lives, and help local officials decide the best use of settlement dollars.

    Loyd serves as the unpaid co-chair of the Helios Alliance, a group that hosted the event and is seeking $1.5 million to create such a simulation for Alabama.

    The state is set to receive more than $500 million from opioid settlements over nearly two decades. It announced $8.5 million in grants to various community groups in early February.

    Loyd’s audience that gray January morning included big players in Mobile, many of whom have known one another since their school days: the speaker pro tempore of Alabama’s legislature, representatives from the city and the local sheriff’s office, leaders from the nearby Poarch Band of Creek Indians, and dozens of addiction treatment providers and advocates for preventing youth addiction.

    Many of them were excited by the proposal, saying this type of data and statistics-driven approach could reduce personal and political biases and ensure settlement dollars are directed efficiently over the next decade.

    But some advocates and treatment providers say they don’t need a simulation to tell them where the needs are. They see it daily, when they try — and often fail — to get people medications, housing, and other basic services. They worry allocating $1.5 million for Helios prioritizes Big Tech promises for future success while shortchanging the urgent needs of people on the front lines today.

    “Data does not save lives. Numbers on a computer do not save lives,” said Lisa Teggart, who is in recovery and runs two sober living homes in Mobile. “I’m a person in the trenches,” she said after attending the Helios event. “We don’t have a clean-needle program. We don’t have enough treatment. … And it’s like, when is the money going to get to them?”

    The debate over whether to invest in technology or boots on the ground is likely to reverberate widely, as the Helios Alliance is in discussions to build similar models for other states, including West Virginia and Tennessee, where Loyd lives and leads the Opioid Abatement Council.

    New Predictive Promise?

    The Helios Alliance comprises nine nonprofit and for-profit organizations, with missions ranging from addiction treatment and mathematical modeling to artificial intelligence and marketing. As of mid-February, the alliance had received $750,000 to build its model for Alabama.

    The largest chunk — $500,000 — came from the Poarch Band of Creek Indians, whose tribal council voted unanimously to spend most of its opioid settlement dollars to date on the Helios initiative. A state agency chipped in an additional $250,000. Ten Alabama cities and some private foundations are considering investing as well.

    Stephen McNair, director of external affairs for Mobile, said the city has an obligation to use its settlement funds “in a way that is going to do the most good.” He hopes Helios will indicate how to do that, “instead of simply guessing.”

    Rayford Etherton, a former attorney and consultant from Mobile who created the Helios Alliance, said he is confident his team can “predict the likely success or failure of programs before a dollar is spent.”

    The Helios website features a similarly bold tagline: “Going Beyond Results to Predict Them.”

    To do this, the alliance uses system dynamics, a mathematical modeling technique developed at the Massachusetts Institute of Technology in the 1950s. The Helios model takes in local and national data about addiction services and the drug supply. Then it simulates the effects different policies or spending decisions can have on overdose deaths and addiction rates. New data can be added regularly and new simulations run anytime. The alliance uses that information to produce reports and recommendations.

    Etherton said it can help officials compare the impact of various approaches and identify unintended consequences. For example, would it save more lives to invest in housing or treatment? Will increasing police seizures of fentanyl decrease the number of people using it or will people switch to different substances?
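For readers curious what a system-dynamics simulation actually looks like under the hood, here is a deliberately tiny sketch in Python. Every name, rate, and number below is a hypothetical illustration of the stock-and-flow technique described above, not anything from the actual Helios model:

```python
def simulate(years, using0=10_000, treatment_capacity=500,
             entry_rate=0.03, recovery_rate=0.4, overdose_rate=0.01):
    """Toy stock-and-flow model: one 'stock' of people using opioids,
    fed by new use, drained by treatment and by overdose deaths.
    All parameters are invented for illustration only."""
    using = using0
    deaths = 0.0
    for _ in range(years):
        new_users = using * entry_rate                      # inflow
        treated = min(treatment_capacity, using) * recovery_rate  # outflow 1
        od_deaths = using * overdose_rate                   # outflow 2
        using = using + new_users - treated - od_deaths
        deaths += od_deaths
    return deaths

# Compare two hypothetical spending decisions over a decade:
# keep current treatment capacity vs. fund three times as many beds.
baseline = simulate(10)
more_beds = simulate(10, treatment_capacity=1500)
assert more_beds < baseline  # more capacity means fewer simulated deaths
```

A real model of this kind would have many more interacting stocks (drug supply, incarceration, housing, naloxone access) and would be calibrated against local data, but the core idea is the same: run the loop under different policy assumptions and compare the simulated outcomes.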

    And yet, Etherton cautioned, the model is “not a crystal ball.” Data is often incomplete, and the real world can throw curveballs.

    Another limitation is that while Helios can suggest general strategies that might be most fruitful, it typically can’t predict, for instance, which of two rehab centers will be more effective. That decision would ultimately come down to individuals in charge of awarding contracts.

    Mathematical Models vs. On-the-Ground Experts

    To some people, what Helios is proposing sounds similar to a cheaper approach that 39 states — including Alabama — already have in place: opioid settlement councils that provide insights on how to best use the money. These are groups of people with expertise ranging from addiction medicine and law enforcement to social services and personal experience using drugs.

    Even in places without formal councils, treatment providers and recovery advocates say they can perform a similar function. Half a dozen advocates in Mobile told KFF Health News the city’s top need is low-cost housing for people who want to stop using drugs.

    “I wonder how much the results” from the Helios model “are going to look like what people on the ground doing this work have been saying for years,” said Chance Shaw, director of prevention for AIDS Alabama South and a person in recovery from opioid use disorder.

    But Loyd, the co-chair of the Helios board, sees the simulation platform as augmenting the work of opioid settlement councils, like the one he leads in Tennessee.

    Members of his council have been trying to decide how much money to invest in prevention efforts versus treatment, “but we just kind of look at it, and we guessed,” he said — the way it’s been done for decades. “I want to know specifically where to put the money and what I can expect from outcomes.”

    Jagpreet Chhatwal, an expert in mathematical modeling who directs the Institute for Technology Assessment at Massachusetts General Hospital, said models can reduce the risk of individual biases and blind spots shaping decisions.

    If the inputs and assumptions used to build the model are transparent, there’s an opportunity to instill greater trust in the distribution of this money, said Chhatwal, who is not affiliated with Helios. Yet if the model is proprietary — as Helios’ marketing materials suggest its product will be — that could erode public trust, he said.

    Etherton, of the Helios Alliance, told KFF Health News, “Everything we do will be available publicly for anyone who wants to look at it.”

    Urgent Needs vs. Long-Term Goals

    Helios’ pitch sounds simple: a small upfront cost to ensure sound future decision-making. “Spend 5% so you get the biggest impact with the other 95%,” Etherton said.

    To some people working in treatment and recovery, however, the upfront cost represents not just dollars, but opportunities lost for immediate help, be it someone who couldn’t find an open bed or get a ride to the pharmacy.

    “The urgency of being able to address those individual needs is vital,” said Pamela Sagness, executive director of the North Dakota Behavioral Health Division.

    Her department recently awarded $7 million in opioid settlement funds to programs that provide mental health and addiction treatment, housing, and syringe service programs because that’s what residents have been demanding, she said. An additional $52 million in grant requests — including an application from the Helios Alliance — went unfunded.

    Back in Mobile, advocates say they see the need for investment in direct services daily. More than 1,000 people visit the office of the nonprofit People Engaged in Recovery each month for recovery meetings, social events, and help connecting to social services. Yet the facility can’t afford to stock naloxone, a medication that can rapidly reverse overdoses.

    At the two recovery homes that Mobile resident Teggart runs, people can live in a drug-free space at a low cost. She manages 18 beds but said there’s enough demand to fill 100.

    Hannah Seale felt lucky to land one of those spots after leaving Mobile County jail last November.

    “All I had with me was one bag of clothes and some laundry detergent and one pair of shoes,” Seale said.

    Since arriving, she’s gotten her driver’s license, applied for food stamps, and attended intensive treatment. In late January, she was working two jobs and reconnecting with her 4- and 7-year-old daughters.

    After 17 years of drug use, the recovery home “is the one that’s worked for me,” she said.

    KFF Health News is a national newsroom that produces in-depth journalism about health issues and is one of the core operating programs at KFF—an independent source of health policy research, polling, and journalism. Learn more about KFF.


  • Managing Jealousy

    Jealousy is often thought of as a young people’s affliction, but it can affect us at any age—whether we are the one being jealous, or perhaps a partner.

    And, the “green-eyed monster” can really ruin a lot of things; relationships, friendships, general happiness, physical health even (per stress and anxiety and bad sleep), and more.

    The thing is, jealousy looks like one thing, but is actually mostly another.

    Jealousy is a Scooby-Doo villain

That is to say: we can unmask it and see what much less threatening thing is underneath. Which is usually nothing more nor less than: insecurities.

    • Insecurity about losing one’s partner
    • Insecurity about not being good enough
    • Insecurity about looking bad socially

…etc. The latter, by the way, is usually the case when one’s partner is socially considered to be giving cause for jealousy, but the primary concern is not actually relational loss or any kind of infidelity; rather, it is about looking unable to keep one’s partner’s full attention romantically/sexually. This drives a lot of people to act on jealousy for the sake of appearances, in situations where, if they didn’t feel they’d be adversely judged for it, they might otherwise be considerably more chill.

    Thus, while monogamy certainly has its fine merits, there can also be a kind of “toxic monogamy” at hand, where a relationship becomes unhealthy because one partner is just trying to live up to social expectations of keeping the other partner in check.

    This, by the way, is something that people in polyamorous and/or open relationships typically handle quite neatly, even if a lot of the following still applies. But today, we’re making the statistically safe assumption of a monogamous relationship, and talking about that!

    How to deal with the social aspect

    If you sit down with your partner and work out in advance the acceptable parameters of your relationship, you’ll be ahead of most people already. For example…

    • What counts as cheating? Is it all and any sex acts with all and any people? If not, where’s the line?
• What about kissing? What about touching other body parts? If there are boundaries that are important to you, talk about them. Nothing is “too obvious”; it’s astonishing how often someone later says (in good faith or not), “but I thought…”
    • What about being seen in various states of undress? Or seeing other people in various states of undress?
    • Is meaningless flirting between friends ok, and if so, how do we draw the line with regard to what is meaningless? And how are we defining flirting, for that matter? Talk about it and ensure you are both on the same page.
    • If a third party is possibly making moves on one of us under the guise of “just being friendly”, where and how do we draw the line between friendliness and romantic/sexual advances? What’s the difference between a lunch date with a friend and a romantic meal out for two, and how can we define the difference in a way that doesn’t rely on subjective “well I didn’t think it was romantic”?

    If all this seems like a lot of work, please bear in mind, it’s a lot more fun to cover this cheerfully as a fun couple exercise in advance, than it is to argue about it after the fact!

    See also: Boundary-Setting Beyond “No”

    How to deal with the more intrinsic insecurities

    For example, when jealousy is a sign of a partner fearing not being good enough, not measuring up, or perhaps even losing their partner.

    The key here might not shock you: communication

    Specifically, reassurance. But critically, the correct reassurance!

A partner who is jealous will often seek the wrong reassurance, for example wanting to read their partner’s messages on their phone, or things like that. While that is a natural desire when experiencing jealousy, it’s not actually helpful. Incriminating messages could confirm infidelity, but it’s impossible to prove a negative: if nothing incriminating is found, the jealous partner can just go on fearing the worst regardless. After all, their partner could have a burner phone somewhere, or a hidden app for cheating, or something else like that. So, no reassurance can ever be given or gained by such requests (which can also become unpleasantly controlling, which hopefully nobody wants).

    A quick note on “if you have nothing to fear, you have nothing to hide”: rhetorically that works, but practically it doesn’t.

    Writer’s example: when my late partner and I formalized our relationship, we discussed boundaries, and I expressed “so far as I am concerned, I have no secrets from you, except secrets that are not mine to share. For example, if someone has confided in me and asked that I not share it, I won’t. Aside from that, you have access-all-areas in my life; me being yours has its privileges” and this policy itself would already pre-empt any desire to read my messages. Now indeed, I had nothing to hide. I am by character devoted to a fault. But my friends may well sometimes have things they don’t want me to share, which made that a necessary boundary to highlight (which my partner, an absolute angel by the way and not overly prone to jealousy in any case, understood completely).

    So, it is best if the partner of a jealous person can explain the above principles as necessary, and offer the correct reassurance instead. Which could be any number of things, but for example:

    • I am yours, and nobody else has a chance
    • I fully intend to stay with you for life
    • You are the best partner I have ever had
    • Being with you makes my life so much better

…etc. Note that none of these are “you don’t have to worry about so-and-so” or “I am not cheating on you”, etc., because it’s about your and your partner’s relationship. If they ask for reassurances with regard to other people or activities, by all means give them as appropriate, but try to keep the focus on you two.

    And if your partner (or you, if it’s you who’s jealous) can express the insecurity in the format…

    “I’m afraid of _____ because _____”

    …then the “because” will allow for much more specific reassurance. We all have insecurities, we all have reasons we might fear not being good enough for our partner, or losing their affection, and the best thing we can do is choose to trust our partners at least enough to discuss those fears openly with each other.

    See also: Save Time With Better Communication ← this can avoid a lot of time-consuming arguments

    What about if the insecurity is based in something demonstrably correct?

    By this we mean, something like a prior history of cheating, or other reasons for trust issues. In such a case, the jealous partner may well have a reason for their jealousy that isn’t based on a personal insecurity.

    In our previous article about boundaries, we talked about relationships (romantic or otherwise) having a “price of entry”. In this case, you each have a “price of entry”:

    • The “price of entry” to being with the person who has previously cheated (or similar), is being able to accept that.
    • And for the person who cheated (or similar), very likely their partner will have the “price of entry” of “don’t do that again, and also meanwhile accept in good grace that I might be jittery about it”.

    And, if the betrayal of trust was something that happened between the current partners in the current relationship, most likely that was also traumatic for the person whose trust was betrayed. Many people in that situation find that trust can indeed be rebuilt, but slowly, and the pain itself may also need treatment (such as therapy and/or couples therapy specifically).

    See also: Relationships: When To Stick It Out & When To Call It Quits ← this covers both sides

    And finally, to finish on a happy note:

    Only One Kind Of Relationship Promotes Longevity This Much!

    Take care!
