How do science journalists decide whether a psychology study is worth covering?

10almonds is reader-supported. We may, at no cost to you, receive a portion of sales if you purchase a product through a link in this article.

Complex research papers and data flood academic journals daily, and science journalists play a pivotal role in disseminating that information to the public. This can be a daunting task, requiring a keen understanding of the subject matter and the ability to translate dense academic language into narratives that resonate with the general public.

Several resources and tip sheets, including the Know Your Research section here at The Journalist’s Resource, aim to help journalists hone their skills in reporting on academic research.

But what factors do science journalists look for to decide whether a social science research study is trustworthy and newsworthy? That’s the question researchers at the University of California, Davis, and the University of Melbourne in Australia examine in a recent study, “How Do Science Journalists Evaluate Psychology Research?” published in September in Advances in Methods and Practices in Psychological Science.

Their online survey of 181 mostly U.S.-based science journalists looked at how and whether they were influenced by four factors in fictitious research summaries: the sample size (number of participants in the study), sample representativeness (whether the participants in the study were from a convenience sample or a more representative sample), the statistical significance level of the result (just barely statistically significant or well below the significance threshold), and the prestige of a researcher’s university.

The researchers found that sample size was the only factor that had a robust influence on journalists’ ratings of how trustworthy and newsworthy a study finding was.

University prestige had no effect, while the effects of sample representativeness and statistical significance were inconclusive.

But there’s nuance to the findings, the authors note.

“I don’t want people to think that science journalists aren’t paying attention to other things, and are only paying attention to sample size,” says Julia Bottesini, an independent researcher, a recent Ph.D. graduate from the Psychology Department at UC Davis, and the first author of the study.

Overall, the results show that “these journalists are doing a very decent job” vetting research findings, Bottesini says.

Also, the findings from the study are not generalizable to all science journalists or other fields of research, the authors note.

“Instead, our conclusions should be circumscribed to U.S.-based science journalists who are at least somewhat familiar with the statistical and replication challenges facing science,” they write. (Over the past decade a series of projects have found that the results of many studies in psychology and other fields can’t be reproduced, leading to what has been called a ‘replication crisis.’)

“This [study] is just one tiny brick in the wall and I hope other people get excited about this topic and do more research on it,” Bottesini says.

More on the study’s findings

The study’s findings can be useful for researchers who want to better understand how science journalists read their research and what kind of intervention — such as teaching journalists about statistics — can help journalists better understand research papers.

“As an academic, I take away the idea that journalists are a great population to try to study because they’re doing something really important and it’s important to know more about what they’re doing,” says Ellen Peters, director of the Center for Science Communication Research at the School of Journalism and Communication at the University of Oregon. Peters, who was not involved in the study, is also a psychologist who studies human judgment and decision-making.

Peters says the study was “overall terrific.” She adds that understanding how journalists do their work “is an incredibly important thing to do because journalists are who reach the majority of the U.S. with science news, so understanding how they’re reading some of our scientific studies and then choosing whether to write about them or not is important.”

The study, conducted between December 2020 and March 2021, is based on an online survey of journalists who said they at least sometimes covered science or other topics related to health, medicine, psychology, social sciences, or well-being. They were offered a $25 Amazon gift card as compensation.

Among the participants, 77% were women, 19% were men, 3% were nonbinary and 1% preferred not to say. About 62% said they had studied physical or natural sciences at the undergraduate level, and 24% at the graduate level. Also, 48% reported having a journalism degree. The survey did not ask about the journalists’ level of news reporting experience.

Participants were recruited through the professional network of Christie Aschwanden, an independent journalist and consultant on the study, which could be a source of bias, the authors note.

“Although the size of the sample we obtained (N = 181) suggests we were able to collect a range of perspectives, we suspect this sample is biased by an ‘Aschwanden effect’: that science journalists in the same professional network as C. Aschwanden will be more familiar with issues related to the replication crisis in psychology and subsequent methodological reform, a topic C. Aschwanden has covered extensively in her work,” they write.

Participants were randomly presented with eight of 22 one-paragraph fictitious social and personality psychology research summaries with fictitious authors. The summaries are posted on Open Science Framework, a free and open-source project management tool for researchers by the Center for Open Science, with a mission to increase openness, integrity and reproducibility of research.

For instance, one of the vignettes reads:

“Scientists at Harvard University announced today the results of a study exploring whether introspection can improve cooperation. 550 undergraduates at the university were randomly assigned to either do a breathing exercise or reflect on a series of questions designed to promote introspective thoughts for 5 minutes. Participants then engaged in a cooperative decision-making game, where cooperation resulted in better outcomes. People who spent time on introspection performed significantly better at these cooperative games (t (548) = 3.21, p = 0.001). ‘Introspection seems to promote better cooperation between people,’ says Dr. Quinn, the lead author on the paper.”
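As an aside, the statistics quoted in a summary like this one can be checked directly. Below is a minimal sketch (our own illustration, using SciPy; the vignette itself is fictitious) confirming that a t-statistic of 3.21 with 548 degrees of freedom does correspond to the reported p = 0.001:

```python
# Sanity-check the statistics in the vignette above (a fictitious summary):
# does t(548) = 3.21 really correspond to p = 0.001?
from scipy import stats

t_stat = 3.21  # t-value reported in the summary
df = 548       # degrees of freedom: 550 participants in 2 groups

# Two-tailed p-value: the probability of observing |t| >= 3.21 if
# introspection actually made no difference.
p_value = 2 * stats.t.sf(t_stat, df)
print(f"p = {p_value:.4f}")  # prints p = 0.0014, consistent with "p = 0.001"
```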

In addition to answering multiple-choice survey questions, participants were given the opportunity to answer open-ended questions, such as “What characteristics do you [typically] consider when evaluating the trustworthiness of a scientific finding?”

Bottesini says those responses illuminated how science journalists analyze a research study. Participants often mentioned the prestige of the journal in which it was published or whether the study had been peer-reviewed. Many also seemed to value experimental research designs over observational studies.

Considering statistical significance

When it came to considering p-values, “some answers suggested that journalists do take statistical significance into account, but only very few included explanations that suggested they made any distinction between higher or lower p values; instead, most mentions of p values suggest journalists focused on whether the key result was statistically significant,” the authors write.
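The distinction the researchers probed, a p-value just under .05 versus one far below it, is not cosmetic. Here is a toy simulation (our construction, not part of the study) that assumes half of all tested hypotheses are truly null; under that assumption, a finding that barely cleared the threshold replicates noticeably less often than one with p < 0.001:

```python
# Toy simulation: why "barely significant" and "clearly significant" differ.
# Assumption (ours): half of tested effects are real (d = 0.4), half are null.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, runs = 100, 20_000  # participants per group, simulated studies

barely, strong = [], []
for _ in range(runs):
    d = rng.choice([0.0, 0.4])  # is there a true effect this time?
    # An original study, then an independent replication of the same effect.
    p1 = stats.ttest_ind(rng.normal(d, 1, n), rng.normal(0, 1, n)).pvalue
    p2 = stats.ttest_ind(rng.normal(d, 1, n), rng.normal(0, 1, n)).pvalue
    if 0.01 < p1 < 0.05:      # original was just barely significant
        barely.append(p2 < 0.05)
    elif p1 < 0.001:          # original was well below the threshold
        strong.append(p2 < 0.05)

print(f"replication rate after 0.01 < p < 0.05: {np.mean(barely):.0%}")  # roughly two-thirds
print(f"replication rate after p < 0.001:       {np.mean(strong):.0%}")  # roughly 80%
```

The exact numbers depend on the assumed mix of real and null effects, but the direction does not: barely significant results are more often flukes.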

Also, many participants mentioned that it was very important to talk to outside experts or researchers in the same field to get a better understanding of the finding and whether it could be trusted, the authors write.

“Journalists also expressed that it was important to understand who funded the study and whether the researchers or funders had any conflicts of interest,” they write.

Participants also “indicated that making claims that were calibrated to the evidence was also important and expressed misgivings about studies for which the conclusions do not follow from the evidence,” the authors write.

In response to the open-ended question, “What characteristics do you [typically] consider when evaluating the trustworthiness of a scientific finding?” some journalists wrote they checked whether the study was overstating conclusions or claims. Below are some of their written responses:

  • “Is the researcher adamant that this study of 40 college kids is representative? If so, that’s a red flag.”
  • “Whether authors make sweeping generalizations based on the study or take a more measured approach to sharing and promoting it.”
  • “Another major point for me is how ‘certain’ the scientists appear to be when commenting on their findings. If a researcher makes claims which I consider to be over-the-top about the validity or impact of their findings, I often won’t cover.”
  • “I also look at the difference between what an experiment actually shows versus the conclusion researchers draw from it — if there’s a big gap, that’s a huge red flag.”

Peters says the study’s findings show that “not only are journalists smart, but they have also gone out of their way to get educated about things that should matter.”

What other research shows about science journalists

A 2023 study, published in the International Journal of Communication and based on an online survey of 82 U.S. science journalists, aims to understand what they know and think about open-access research (peer-reviewed journals and articles that don’t have a paywall) and preprints, which are scientific studies that have yet to be peer-reviewed and are shared on open repositories such as medRxiv and bioRxiv. Data were collected between October 2021 and February 2022. The study finds that its respondents “are aware of OA and related issues and make conscious decisions around which OA scholarly articles they use as sources.”

A 2021 study, published in the Journal of Science Communication, looks at the impact of the COVID-19 pandemic on the work of science journalists. Based on an online survey of 633 science journalists from 77 countries, it finds that the pandemic brought scientists and science journalists somewhat closer together. “For most respondents, scientists were more available and more talkative,” the authors write. The pandemic also provided an opportunity to explain the scientific process to the public, and to remind them that “science is not a finished enterprise,” the authors write.

More than a decade ago, a 2008 study, published in PLOS Medicine and based on an analysis of 500 health news stories, found that “journalists usually fail to discuss costs, the quality of the evidence, the existence of alternative options, and the absolute magnitude of potential benefits and harms” when reporting on research studies. Giving journalists time to research and understand the studies, space to publish and broadcast their stories, and training in reading academic research are some of the ways to fill those gaps, writes Gary Schwitzer, the study’s author.

Advice for journalists

We asked Bottesini, Peters, Aschwanden and Tamar Wilner, a postdoctoral fellow at the University of Texas, who was not involved in the study, to share advice for journalists who cover research studies. Wilner is conducting a study on how journalism research informs the practice of journalism. Here are their tips:

1. Examine the study before reporting it.

Does the study claim match the evidence? “One thing that makes me trust the paper more is if their interpretation of the findings is very calibrated to the kind of evidence that they have,” says Bottesini. In other words, if the study makes a claim in its results that’s far-fetched, the authors should present a lot of evidence to back that claim.

Not all surprising results are newsworthy. If you come across a surprising finding from a single study, Peters advises you to step back and remember Carl Sagan’s quote: “Extraordinary claims require extraordinary evidence.”

How transparent are the authors about their data? For instance, are the authors posting information such as their data and the computer codes they use to analyze the data on platforms such as Open Science Framework, AsPredicted, or The Dataverse Project? Some researchers ‘preregister’ their studies, which means they share how they’re planning to analyze the data before they see them. “Transparency doesn’t automatically mean that a study is trustworthy,” but it gives others the chance to double-check the findings, Bottesini says.

Look at the study design. Is it an experimental study or an observational study? Observational studies can show correlations but not causation.

“Observational studies can be very important for suggesting hypotheses and pointing us towards relationships and associations,” Aschwanden says.

Experimental studies can provide stronger evidence toward a cause, but journalists must still be cautious when reporting the results, she advises. “If we end up implying causality, then once it’s published and people see it, it can really take hold,” she says.
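To make the correlation-versus-causation point concrete, here is a small simulation (our illustration, not from the study or from Aschwanden) in which a hidden third variable drives both the “exposure” and the “outcome.” An observational analysis would find a solid correlation even though the exposure has no causal effect at all:

```python
# Toy observational dataset: a hidden confounder drives both variables.
import numpy as np

rng = np.random.default_rng(7)
n = 10_000

confounder = rng.normal(size=n)              # e.g., overall health-consciousness
exposure = confounder + rng.normal(size=n)   # a habit shaped by the confounder
outcome = confounder + rng.normal(size=n)    # an outcome shaped only by the confounder

# The exposure never enters the outcome equation, yet the two correlate strongly.
r = np.corrcoef(exposure, outcome)[0, 1]
print(f"correlation between exposure and outcome: r = {r:.2f}")  # ~0.50
```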

Know the difference between preprints and peer-reviewed, published studies. Peer-reviewed papers tend to be of higher quality than those that are not peer-reviewed. Read our tip sheet on the difference between preprints and journal articles.

Beware of predatory journals. Predatory journals are journals that “claim to be legitimate scholarly journals, but misrepresent their publishing practices,” according to a 2020 journal article published in the journal Toxicologic Pathology, “Predatory Journals: What They Are and How to Avoid Them.”

2. Zoom in on data.

Read the methods section of the study. The methods section of the study usually appears after the introduction and background section. “To me, the methods section is almost the most important part of any scientific paper,” says Aschwanden. “It’s amazing to me how often you read the design and the methods section, and anyone can see that it’s a flawed design. So just giving things a gut-level check can be really important.”

What’s the sample size? Not all good studies have large numbers of participants, but pay attention to the claims a study makes with a small sample size (see the short simulation at the end of this section). “If you have a small sample, you calibrate your claims to the things you can tell about those people and don’t make big claims based on a little bit of evidence,” says Bottesini.

But also remember that factors such as sample size and p-value are not “as clear cut as some journalists might assume,” says Wilner.

How representative of a population is the study sample? “If the study has a non-representative sample of, say, undergraduate students, and they’re making claims about the general population, that’s kind of a red flag,” says Bottesini. Aschwanden points to the acronym WEIRD, which stands for “Western, Educated, Industrialized, Rich, and Democratic,” and is used to highlight a lack of diversity in a sample. Studies based on such samples may not be generalizable to the entire population, she says.

Look at the p-value. Statistical significance is both confusing and controversial, but it’s important to consider. Read our tip sheet, “5 Things Journalists Need to Know About Statistical Significance,” to better understand it.
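To see why sample size earns so much of journalists’ attention, consider this short simulation (ours, with an assumed true effect of 0.3 standard deviations): the same experiment run with 40 people per group yields estimates that swing from negative to more than double the true effect, while 400 per group pins it down:

```python
# How much does the estimated effect swing at different sample sizes?
# Assumption (ours): the true effect is 0.3 standard deviations.
import numpy as np

rng = np.random.default_rng(42)
true_d = 0.3

for n in (40, 400):
    # Run 1,000 simulated studies with n participants per group and record
    # each study's estimated effect (difference in group means).
    estimates = [
        rng.normal(true_d, 1, n).mean() - rng.normal(0, 1, n).mean()
        for _ in range(1000)
    ]
    lo, hi = np.percentile(estimates, [2.5, 97.5])
    print(f"n = {n:3d} per group: 95% of estimates fall in [{lo:+.2f}, {hi:+.2f}]")
```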

3. Talk to scientists not involved in the study.

If you’re not sure about the quality of a study, ask for help. “Talk to someone who is an expert in study design or statistics to make sure that [the study authors] use the appropriate statistics and that methods they use are appropriate because it’s amazing to me how often they’re not,” says Aschwanden.

Get an opinion from an outside expert. It’s always a good idea to present the study to other researchers in the field, ones who are not involved in the research you’re covering and have no conflicts of interest, and get their opinion. “Don’t take scientists at their word. Look into it. Ask other scientists, preferably the ones who don’t have a conflict of interest with the research,” says Bottesini.

4. Remember that a single study is simply one piece of a growing body of evidence.

“I have a general rule that a single study doesn’t tell us very much; it just gives us proof of concept,” says Peters. “It gives us interesting ideas. It should be retested. We need an accumulation of evidence.”

Aschwanden says that, as a practice, she tries to avoid reporting stories about individual studies, with some exceptions, such as large randomized controlled trials that have been underway for a long time and include many participants. “I don’t want to say you never want to write a single-study story, but it always needs to be placed in the context of the rest of the evidence that we have available,” she says.

Wilner advises journalists to spend some time looking at the scope of research on the study’s specific topic and learn how it has been written about and studied up to that point.

“We would want science journalists to be reporting balance of evidence, and not focusing unduly on the findings that are just in front of them in a most recent study,” Wilner says. “And that’s a very difficult thing to ask journalists to do because they’re being asked to make their article very newsy, so it’s a difficult balancing act, but we can try and push journalists to do more of that.”

5. Remind readers that science is always changing.

“Science is always two steps forward, one step back,” says Peters. Give the public a notion of uncertainty, she advises. “This is what we know today. It may change tomorrow, but this is the best science that we know of today.”

Aschwanden echoes the sentiment. “All scientific results are provisional, and we need to keep that in mind,” she says. “It doesn’t mean that we can’t know anything, but it’s very important that we don’t overstate things.”

Authors of a study published in PNAS in January analyzed more than 14,000 psychology papers and found that replication success rates differ widely by psychology subfields. That study also found that papers that could not be replicated received more initial press coverage than those that could. 

The authors note that the media “plays a significant role in creating the public’s image of science and democratizing knowledge, but it is often incentivized to report on counterintuitive and eye-catching results.”

Ideally, the studies that receive news coverage would be the ones most likely to replicate, the authors of the PNAS study write. “Contrary to this ideal, however, we found a negative association between media coverage of a paper and the paper’s likelihood of replication success,” they write. “Therefore, deciding a paper’s merit based on its media coverage is unwise. It would be valuable for the media to remind the audience that new and novel scientific results are only food for thought before future replication confirms their robustness.”

Additional reading

Uncovering the Research Behaviors of Reporters: A Conceptual Framework for Information Literacy in Journalism
Katherine E. Boss, et al. Journalism & Mass Communication Educator, October 2022.

The Problem with Psychological Research in the Media
Steven Stosny. Psychology Today, September 2022.

Critically Evaluating Claims
Megha Satyanarayana, The Open Notebook, January 2022.

How Should Journalists Report a Scientific Study?
Charles Binkley and Subramaniam Vincent. Markkula Center for Applied Ethics at Santa Clara University, September 2020.

What Journalists Get Wrong About Social Science: Full Responses
Brian Resnick. Vox, January 2016.

From The Journalist’s Resource

8 Ways Journalists Can Access Academic Research for Free

5 Things Journalists Need to Know About Statistical Significance

5 Common Research Designs: A Quick Primer for Journalists

5 Tips for Using PubPeer to Investigate Scientific Research Errors and Misconduct

Percent Change versus Percentage-Point Change: What’s the Difference? 4 Tips for Avoiding Math Errors

What’s Standard Deviation? 4 Things Journalists Need to Know

This article first appeared on The Journalist’s Resource and is republished here under a Creative Commons license.

