The Subtle Art of Not Giving a F*ck – by Mark Manson

10almonds is reader-supported. We may, at no cost to you, receive a portion of sales if you purchase a product through a link in this article.

You may wonder from the title: is this book arguing that we should all be callous heartless monsters? And no, it is not.

Instead, author Mark Manson advocates for cynicism, but less in the manner of Scrooge, and more in the manner of Diogenes:

  • That life will involve struggle, so we might as well at least choose our struggles.
  • That we will make mistakes, so we might as well accept them as learning experiences.
  • That we will love and we will lose, so we might as well do it right while we can.

In short, the book is less about not caring… And more about caring about the right things only.

So, what are “the right things”? Manson bids us decide for ourselves, but he certainly has ideas and pointers as to what may or may not be healthy values to pursue.

The style throughout is casual and almost conversational, without being overly padded. It makes for very easy reading.

If the book has a weak point, it’s that when it briefly takes a surprisingly prescriptive turn into recommending we take up Buddhism, it can feel a bit like a friend pitching us the latest MLM scheme. But he’s soon back on track.

Bottom line: if you ever find yourself stressed with living up to unwanted expectations—your own, other people’s, and society’s—this book can really help streamline things.

Click here to check out The Subtle Art of Not Giving a F*ck, and put your attention where it makes more of a positive difference!

Learn to Age Gracefully

  • The Diabetes Drugs That Can Cut Asthma Attacks By 70%

    Asthma, obesity, and type 2 diabetes are closely linked, with the latter two greatly increasing asthma attack risk.

While bronchodilators and corticosteroids can have immediate adverse effects due to sympathetic nervous system activation, as well as lasting adverse effects due to the damage they do to metabolic health, diabetes drugs can improve things with (for most people) fewer unwanted side effects.

    Great! Which drugs?

    Metformin, and glucagon-like peptide-1 receptor agonists (GLP-1RAs).

    Specifically, researchers have found:

    • Metformin is associated with a 30% reduction in asthma attacks
    • GLP-1RAs are associated with a 40% reduction in asthma attacks

    …and yes, they stack, making for a 70% reduction in the case of people taking both. Furthermore, the results are independent of weight, glycemic control, or asthma phenotype.
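As a back-of-envelope sketch (our illustration, not a calculation from the study): if the two drugs’ effects combined independently, their relative risks would multiply, which gives a smaller figure than the ~70% the study reports for people taking both — so the observed combination does better than simple independence would predict.

```python
# Back-of-envelope: how two relative-risk reductions would combine if
# independent. The 30% (metformin) and 40% (GLP-1RA) figures are from the
# article; the multiplicative-independence assumption is ours, for
# illustration only.
rr_metformin = 1 - 0.30            # relative risk of attacks on metformin
rr_glp1ra = 1 - 0.40               # relative risk on a GLP-1RA
combined_rr = rr_metformin * rr_glp1ra        # 0.7 * 0.6 = 0.42
combined_reduction = 1 - combined_rr          # ~58%, vs the ~70% observed
print(f"Independent combination would give a {combined_reduction:.0%} reduction")
```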

    In terms of what was counted, the primary outcome was asthma attacks at 12-month follow-up, defined by oral corticosteroid use, emergency visits, hospitalizations, or death.

    The effect of metformin on asthma attacks was not affected by BMI, HbA1c levels, eosinophil count, asthma severity, or sex.

    Of the various extra antidiabetic drugs trialled in this study, only GLP-1 receptor agonists showed a further and sustained reduction in asthma attacks.

    Here’s the study itself, hot off the press, published on Monday:

    JAMA Int. Med. | Antidiabetic Medication and Asthma Attacks

    “But what if I’m not diabetic?”

    Good news:

More than half of all US adults are now eligible for semaglutide therapy. That’s because the list of things semaglutide (the widely used GLP-1 receptor agonist drug) can be prescribed for has been expanded, and now goes beyond just diabetes and/or weight loss 😎

    And metformin, of course, is more readily available than semaglutide, so by all means speak with your doctor/pharmacist about that, if it’s of interest to you.

    Take care!

  • How do science journalists decide whether a psychology study is worth covering?

    Complex research papers and data flood academic journals daily, and science journalists play a pivotal role in disseminating that information to the public. This can be a daunting task, requiring a keen understanding of the subject matter and the ability to translate dense academic language into narratives that resonate with the general public.

    Several resources and tip sheets, including the Know Your Research section here at The Journalist’s Resource, aim to help journalists hone their skills in reporting on academic research.

    But what factors do science journalists look for to decide whether a social science research study is trustworthy and newsworthy? That’s the question researchers at the University of California, Davis, and the University of Melbourne in Australia examine in a recent study, “How Do Science Journalists Evaluate Psychology Research?” published in September in Advances in Methods and Practices in Psychological Science.

    Their online survey of 181 mostly U.S.-based science journalists looked at how and whether they were influenced by four factors in fictitious research summaries: the sample size (number of participants in the study), sample representativeness (whether the participants in the study were from a convenience sample or a more representative sample), the statistical significance level of the result (just barely statistically significant or well below the significance threshold), and the prestige of a researcher’s university.

    The researchers found that sample size was the only factor that had a robust influence on journalists’ ratings of how trustworthy and newsworthy a study finding was.

    University prestige had no effect, while the effects of sample representativeness and statistical significance were inconclusive.

    But there’s nuance to the findings, the authors note.

    “I don’t want people to think that science journalists aren’t paying attention to other things, and are only paying attention to sample size,” says Julia Bottesini, an independent researcher, a recent Ph.D. graduate from the Psychology Department at UC Davis, and the first author of the study.

    Overall, the results show that “these journalists are doing a very decent job” vetting research findings, Bottesini says.

    Also, the findings from the study are not generalizable to all science journalists or other fields of research, the authors note.

    “Instead, our conclusions should be circumscribed to U.S.-based science journalists who are at least somewhat familiar with the statistical and replication challenges facing science,” they write. (Over the past decade a series of projects have found that the results of many studies in psychology and other fields can’t be reproduced, leading to what has been called a ‘replication crisis.’)

    “This [study] is just one tiny brick in the wall and I hope other people get excited about this topic and do more research on it,” Bottesini says.

    More on the study’s findings

    The study’s findings can be useful for researchers who want to better understand how science journalists read their research and what kind of intervention — such as teaching journalists about statistics — can help journalists better understand research papers.

“As an academic, I take away the idea that journalists are a great population to try to study because they’re doing something really important and it’s important to know more about what they’re doing,” says Ellen Peters, director of the Center for Science Communication Research at the School of Journalism and Communication at the University of Oregon. Peters, who was not involved in the study, is also a psychologist who studies human judgment and decision-making.

    Peters says the study was “overall terrific.” She adds that understanding how journalists do their work “is an incredibly important thing to do because journalists are who reach the majority of the U.S. with science news, so understanding how they’re reading some of our scientific studies and then choosing whether to write about them or not is important.”

    The study, conducted between December 2020 and March 2021, is based on an online survey of journalists who said they at least sometimes covered science or other topics related to health, medicine, psychology, social sciences, or well-being. They were offered a $25 Amazon gift card as compensation.

    Among the participants, 77% were women, 19% were men, 3% were nonbinary and 1% preferred not to say. About 62% said they had studied physical or natural sciences at the undergraduate level, and 24% at the graduate level. Also, 48% reported having a journalism degree. The study did not include the journalists’ news reporting experience level.

    Participants were recruited through the professional network of Christie Aschwanden, an independent journalist and consultant on the study, which could be a source of bias, the authors note.

    “Although the size of the sample we obtained (N = 181) suggests we were able to collect a range of perspectives, we suspect this sample is biased by an ‘Aschwanden effect’: that science journalists in the same professional network as C. Aschwanden will be more familiar with issues related to the replication crisis in psychology and subsequent methodological reform, a topic C. Aschwanden has covered extensively in her work,” they write.

    Participants were randomly presented with eight of 22 one-paragraph fictitious social and personality psychology research summaries with fictitious authors. The summaries are posted on Open Science Framework, a free and open-source project management tool for researchers by the Center for Open Science, with a mission to increase openness, integrity and reproducibility of research.

    For instance, one of the vignettes reads:

    “Scientists at Harvard University announced today the results of a study exploring whether introspection can improve cooperation. 550 undergraduates at the university were randomly assigned to either do a breathing exercise or reflect on a series of questions designed to promote introspective thoughts for 5 minutes. Participants then engaged in a cooperative decision-making game, where cooperation resulted in better outcomes. People who spent time on introspection performed significantly better at these cooperative games (t (548) = 3.21, p = 0.001). ‘Introspection seems to promote better cooperation between people,’ says Dr. Quinn, the lead author on the paper.”
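As a quick sanity check on a vignette like this one, the reported p-value can be recomputed from the t-statistic. With 548 degrees of freedom the t-distribution is very close to normal, so a normal approximation from Python’s standard library suffices (an exact answer would use the t-distribution itself):

```python
# Recomputing the vignette's two-tailed p-value from its t-statistic.
# With 548 degrees of freedom, the t-distribution is nearly normal, so a
# normal approximation is used here (an assumption; scipy.stats.t.sf
# would give the exact t-distribution answer).
from statistics import NormalDist

t_stat = 3.21
p_two_tailed = 2 * (1 - NormalDist().cdf(t_stat))
print(f"p ≈ {p_two_tailed:.4f}")  # close to the reported p = 0.001
```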

    In addition to answering multiple-choice survey questions, participants were given the opportunity to answer open-ended questions, such as “What characteristics do you [typically] consider when evaluating the trustworthiness of a scientific finding?”

    Bottesini says those responses illuminated how science journalists analyze a research study. Participants often mentioned the prestige of the journal in which it was published or whether the study had been peer-reviewed. Many also seemed to value experimental research designs over observational studies.

    Considering statistical significance

    When it came to considering p-values, “some answers suggested that journalists do take statistical significance into account, but only very few included explanations that suggested they made any distinction between higher or lower p values; instead, most mentions of p values suggest journalists focused on whether the key result was statistically significant,” the authors write.

    Also, many participants mentioned that it was very important to talk to outside experts or researchers in the same field to get a better understanding of the finding and whether it could be trusted, the authors write.

    “Journalists also expressed that it was important to understand who funded the study and whether the researchers or funders had any conflicts of interest,” they write.

    Participants also “indicated that making claims that were calibrated to the evidence was also important and expressed misgivings about studies for which the conclusions do not follow from the evidence,” the authors write.

    In response to the open-ended question, “What characteristics do you [typically] consider when evaluating the trustworthiness of a scientific finding?” some journalists wrote they checked whether the study was overstating conclusions or claims. Below are some of their written responses:

    • “Is the researcher adamant that this study of 40 college kids is representative? If so, that’s a red flag.”
    • “Whether authors make sweeping generalizations based on the study or take a more measured approach to sharing and promoting it.”
    • “Another major point for me is how ‘certain’ the scientists appear to be when commenting on their findings. If a researcher makes claims which I consider to be over-the-top about the validity or impact of their findings, I often won’t cover.”
    • “I also look at the difference between what an experiment actually shows versus the conclusion researchers draw from it — if there’s a big gap, that’s a huge red flag.”

    Peters says the study’s findings show that “not only are journalists smart, but they have also gone out of their way to get educated about things that should matter.”

    What other research shows about science journalists

    A 2023 study, published in the International Journal of Communication, based on an online survey of 82 U.S. science journalists, aims to understand what they know and think about open-access research, including peer-reviewed journals and articles that don’t have a paywall, and preprints. Data was collected between October 2021 and February 2022. Preprints are scientific studies that have yet to be peer-reviewed and are shared on open repositories such as medRxiv and bioRxiv. The study finds that its respondents “are aware of OA and related issues and make conscious decisions around which OA scholarly articles they use as sources.”

    A 2021 study, published in the Journal of Science Communication, looks at the impact of the COVID-19 pandemic on the work of science journalists. Based on an online survey of 633 science journalists from 77 countries, it finds that the pandemic somewhat brought scientists and science journalists closer together. “For most respondents, scientists were more available and more talkative,” the authors write. The pandemic has also provided an opportunity to explain the scientific process to the public, and remind them that “science is not a finished enterprise,” the authors write.

    More than a decade ago, a 2008 study, published in PLOS Medicine, and based on an analysis of 500 health news stories, found that “journalists usually fail to discuss costs, the quality of the evidence, the existence of alternative options, and the absolute magnitude of potential benefits and harms,” when reporting on research studies. Giving time to journalists to research and understand the studies, giving them space for publication and broadcasting of the stories, and training them in understanding academic research are some of the solutions to fill the gaps, writes Gary Schwitzer, the study author.

    Advice for journalists

    We asked Bottesini, Peters, Aschwanden and Tamar Wilner, a postdoctoral fellow at the University of Texas, who was not involved in the study, to share advice for journalists who cover research studies. Wilner is conducting a study on how journalism research informs the practice of journalism. Here are their tips:

    1. Examine the study before reporting it.

    Does the study claim match the evidence? “One thing that makes me trust the paper more is if their interpretation of the findings is very calibrated to the kind of evidence that they have,” says Bottesini. In other words, if the study makes a claim in its results that’s far-fetched, the authors should present a lot of evidence to back that claim.

    Not all surprising results are newsworthy. If you come across a surprising finding from a single study, Peters advises you to step back and remember Carl Sagan’s quote: “Extraordinary claims require extraordinary evidence.”

    How transparent are the authors about their data? For instance, are the authors posting information such as their data and the computer codes they use to analyze the data on platforms such as Open Science Framework, AsPredicted, or The Dataverse Project? Some researchers ‘preregister’ their studies, which means they share how they’re planning to analyze the data before they see them. “Transparency doesn’t automatically mean that a study is trustworthy,” but it gives others the chance to double-check the findings, Bottesini says.

    Look at the study design. Is it an experimental study or an observational study? Observational studies can show correlations but not causation.

    “Observational studies can be very important for suggesting hypotheses and pointing us towards relationships and associations,” Aschwanden says.

    Experimental studies can provide stronger evidence toward a cause, but journalists must still be cautious when reporting the results, she advises. “If we end up implying causality, then once it’s published and people see it, it can really take hold,” she says.

    Know the difference between preprints and peer-reviewed, published studies. Peer-reviewed papers tend to be of higher quality than those that are not peer-reviewed. Read our tip sheet on the difference between preprints and journal articles.

Beware of predatory journals. Predatory journals are journals that “claim to be legitimate scholarly journals, but misrepresent their publishing practices,” according to a 2020 article published in the journal Toxicologic Pathology, “Predatory Journals: What They Are and How to Avoid Them.”

    2. Zoom in on data.

    Read the methods section of the study. The methods section of the study usually appears after the introduction and background section. “To me, the methods section is almost the most important part of any scientific paper,” says Aschwanden. “It’s amazing to me how often you read the design and the methods section, and anyone can see that it’s a flawed design. So just giving things a gut-level check can be really important.”

    What’s the sample size? Not all good studies have large numbers of participants but pay attention to the claims a study makes with a small sample size. “If you have a small sample, you calibrate your claims to the things you can tell about those people and don’t make big claims based on a little bit of evidence,” says Bottesini.

    But also remember that factors such as sample size and p-value are not “as clear cut as some journalists might assume,” says Wilner.

    How representative of a population is the study sample? “If the study has a non-representative sample of, say, undergraduate students, and they’re making claims about the general population, that’s kind of a red flag,” says Bottesini. Aschwanden points to the acronym WEIRD, which stands for “Western, Educated, Industrialized, Rich, and Democratic,” and is used to highlight a lack of diversity in a sample. Studies based on such samples may not be generalizable to the entire population, she says.

    Look at the p-value. Statistical significance is both confusing and controversial, but it’s important to consider. Read our tip sheet, “5 Things Journalists Need to Know About Statistical Significance,” to better understand it.

    3. Talk to scientists not involved in the study.

    If you’re not sure about the quality of a study, ask for help. “Talk to someone who is an expert in study design or statistics to make sure that [the study authors] use the appropriate statistics and that methods they use are appropriate because it’s amazing to me how often they’re not,” says Aschwanden.

Get an opinion from an outside expert. It’s always a good idea to present the study to other researchers in the field who are not involved in the research you’re covering and have no conflicts of interest, and to get their opinion. “Don’t take scientists at their word. Look into it. Ask other scientists, preferably the ones who don’t have a conflict of interest with the research,” says Bottesini.

    4. Remember that a single study is simply one piece of a growing body of evidence.

    “I have a general rule that a single study doesn’t tell us very much; it just gives us proof of concept,” says Peters. “It gives us interesting ideas. It should be retested. We need an accumulation of evidence.”

    Aschwanden says as a practice, she tries to avoid reporting stories about individual studies, with some exceptions such as very large, randomized controlled studies that have been underway for a long time and have a large number of participants. “I don’t want to say you never want to write a single-study story, but it always needs to be placed in the context of the rest of the evidence that we have available,” she says.

    Wilner advises journalists to spend some time looking at the scope of research on the study’s specific topic and learn how it has been written about and studied up to that point.

“We would want science journalists to be reporting balance of evidence, and not focusing unduly on the findings that are just in front of them in a most recent study,” Wilner says. “And that’s a very difficult thing to ask journalists to do because they’re being asked to make their article very newsy, so it’s a difficult balancing act, but we can try and push journalists to do more of that.”

    5. Remind readers that science is always changing.

    “Science is always two steps forward, one step back,” says Peters. Give the public a notion of uncertainty, she advises. “This is what we know today. It may change tomorrow, but this is the best science that we know of today.”

    Aschwanden echoes the sentiment. “All scientific results are provisional, and we need to keep that in mind,” she says. “It doesn’t mean that we can’t know anything, but it’s very important that we don’t overstate things.”

    Authors of a study published in PNAS in January analyzed more than 14,000 psychology papers and found that replication success rates differ widely by psychology subfields. That study also found that papers that could not be replicated received more initial press coverage than those that could. 

    The authors note that the media “plays a significant role in creating the public’s image of science and democratizing knowledge, but it is often incentivized to report on counterintuitive and eye-catching results.”

    Ideally, the news media would have a positive relationship with replication success rates in psychology, the authors of the PNAS study write. “Contrary to this ideal, however, we found a negative association between media coverage of a paper and the paper’s likelihood of replication success,” they write. “Therefore, deciding a paper’s merit based on its media coverage is unwise. It would be valuable for the media to remind the audience that new and novel scientific results are only food for thought before future replication confirms their robustness.”

    Additional reading

    Uncovering the Research Behaviors of Reporters: A Conceptual Framework for Information Literacy in Journalism
Katherine E. Boss, et al. Journalism & Mass Communication Educator, October 2022.

    The Problem with Psychological Research in the Media
    Steven Stosny. Psychology Today, September 2022.

    Critically Evaluating Claims
    Megha Satyanarayana, The Open Notebook, January 2022.

    How Should Journalists Report a Scientific Study?
    Charles Binkley and Subramaniam Vincent. Markkula Center for Applied Ethics at Santa Clara University, September 2020.

    What Journalists Get Wrong About Social Science: Full Responses
    Brian Resnick. Vox, January 2016.

    From The Journalist’s Resource

    8 Ways Journalists Can Access Academic Research for Free

    5 Things Journalists Need to Know About Statistical Significance

    5 Common Research Designs: A Quick Primer for Journalists

    5 Tips for Using PubPeer to Investigate Scientific Research Errors and Misconduct

    Percent Change versus Percentage-Point Change: What’s the Difference? 4 Tips for Avoiding Math Errors

    What’s Standard Deviation? 4 Things Journalists Need to Know

    This article first appeared on The Journalist’s Resource and is republished here under a Creative Commons license.

  • Alzheimer’s may have once spread from person to person, but the risk of that happening today is incredibly low

    An article published this week in the prestigious journal Nature Medicine documents what is believed to be the first evidence that Alzheimer’s disease can be transmitted from person to person.

    The finding arose from long-term follow up of patients who received human growth hormone (hGH) that was taken from brain tissue of deceased donors.

    Preparations of donated hGH were used in medicine to treat a variety of conditions from 1959 onwards – including in Australia from the mid 60s.

The practice stopped in 1985, when it was discovered that around 200 patients worldwide who had received these preparations went on to develop Creutzfeldt-Jakob disease (CJD), which causes a rapidly progressive dementia. This is an otherwise extremely rare condition, affecting roughly one person in a million.

What’s CJD got to do with Alzheimer’s?

CJD is caused by prions: infective particles that are neither bacterial nor viral, but consist of abnormally folded proteins that can be transmitted from cell to cell.

Other prion diseases include kuru (a dementia seen in New Guinea tribespeople, caused by eating human tissue), scrapie (a disease of sheep), and variant CJD, the human form of bovine spongiform encephalopathy (BSE), otherwise known as mad cow disease. BSE raised public health concerns over the eating of beef products in the United Kingdom in the 1980s.

    Human growth hormone used to come from donated organs

    Human growth hormone (hGH) is produced in the brain by the pituitary gland. Treatments were originally prepared from purified human pituitary tissue.

    But because the amount of hGH contained in a single gland is extremely small, any single dose given to any one patient could contain material from around 16,000 donated glands.

    An average course of hGH treatment lasts around four years, so the chances of receiving contaminated material – even for a very rare condition such as CJD – became quite high for such people.
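A rough probability sketch shows why those odds became so high. The ~1-in-a-million prevalence and ~16,000 glands per dose come from the article; the dosing schedule is our assumption, and real batches reused overlapping donor pools, so treat this as a crude upper-bound illustration only:

```python
# Illustrative only: rough chance of ever receiving material from a donor
# with CJD over a course of treatment. Prevalence (~1 in a million) and
# ~16,000 glands pooled per dose are from the article; "one dose per week
# for four years" is an assumed schedule, and overlapping donor pools
# between doses would lower this figure in practice.
p_donor = 1e-6                  # CJD prevalence among donors
glands_per_dose = 16_000        # glands pooled into a single dose
doses = 52 * 4                  # assumed weekly dosing over a 4-year course

glands_seen = glands_per_dose * doses
p_exposed = 1 - (1 - p_donor) ** glands_seen
print(f"P(at least one affected donor) ≈ {p_exposed:.0%}")
```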

    hGH is now manufactured synthetically in a laboratory, rather than from human tissue. So this particular mode of CJD transmission is no longer a risk.

    [Image: Human growth hormone is now produced in a lab. Credit: National Cancer Institute/Unsplash]

    What are the latest findings about Alzheimer’s disease?

    The Nature Medicine paper provides the first evidence that transmission of Alzheimer’s disease can occur via human-to-human transmission.

    The authors examined the outcomes of people who received donated hGH until 1985. They found five such recipients had developed early-onset Alzheimer’s disease.

    They considered other explanations for the findings but concluded donated hGH was the likely cause.

    Given Alzheimer’s disease is a much more common illness than CJD, the authors presume those who received donated hGH before 1985 may be at higher risk of developing Alzheimer’s disease.

    Alzheimer’s disease is caused by presence of two abnormally folded proteins: amyloid and tau. There is increasing evidence these proteins spread in the brain in a similar way to prion diseases. So the mode of transmission the authors propose is certainly plausible.

    However, given the amyloid protein deposits in the brain at least 20 years before clinical Alzheimer’s disease develops, there is likely to be a considerable time lag before cases that might arise from the receipt of donated hGH become evident.

    When was this process used in Australia?

    In Australia, donated pituitary material was used from 1967 to 1985 to treat people with short stature and infertility.

    More than 2,000 people received such treatment. Four developed CJD, the last case identified in 1991. All four cases were likely linked to a single contaminated batch.

    The risks of any other cases of CJD developing now in pituitary material recipients, so long after the occurrence of the last identified case in Australia, are considered to be incredibly small.

    Early-onset Alzheimer’s disease (defined as occurring before the age of 65) is uncommon, accounting for around 5% of all cases. Below the age of 50 it’s rare and likely to have a genetic contribution.

    [Image: Early-onset Alzheimer’s disease means it occurs before age 65. Credit: perfectlab/Shutterstock]

    The risk is very low – and you can’t ‘catch’ it like a virus

    The Nature Medicine paper identified five cases which were diagnosed in people aged 38 to 55. This is more than could be expected by chance, but still very low in comparison to the total number of patients treated worldwide.

    Although the long “incubation period” of Alzheimer’s disease may mean more similar cases are identified in the future, the absolute risk remains very low. The main scientific interest of the article lies in the fact that it’s the first to demonstrate that Alzheimer’s disease can be transmitted from person to person in a similar way to prion diseases, rather than in any public health risk.

    The authors were keen to emphasise, as I will, that Alzheimer’s cannot be contracted via contact with, or providing care to, people with Alzheimer’s disease.

    Steve Macfarlane, Head of Clinical Services, Dementia Support Australia, & Associate Professor of Psychiatry, Monash University

    This article is republished from The Conversation under a Creative Commons license. Read the original article.

Related Posts

  • Long-acting contraceptives seem to be as safe as the pill when it comes to cancer risk

    Many women worry hormonal contraceptives have dangerous side-effects including increased cancer risk. But this perception is often out of proportion with the actual risks.

    So, what does the research actually say about cancer risk for contraceptive users?

    And is your cancer risk different if, instead of the pill, you use long-acting reversible contraceptives? These include intrauterine devices or IUDs (such as Mirena), implants under the skin (such as Implanon), and injections (such as Depo Provera).

    Our new study, conducted by the University of Queensland and QIMR Berghofer Medical Research Institute and published by the Journal of the National Cancer Institute, looked at this question.

    We found long-acting contraceptives seem to be as safe as the pill when it comes to cancer risk (which is good news) but not necessarily any safer than the pill.

    Some hormonal contraceptives take the form of implants under the skin. WiP-Studio/Shutterstock

    How does the contraceptive pill affect cancer risk?

    The International Agency for Research on Cancer, which compiles evidence on cancer causes, has concluded that oral contraceptives have mixed effects on cancer risk.

    Using the oral contraceptive pill:

    • slightly increases your risk of breast and cervical cancer in the short term, but
    • substantially reduces your risk of cancers of the uterus and ovaries in the longer term.

    Our earlier work showed the pill was responsible for preventing far more cancers overall than it contributed to.

    In previous research we estimated that in 2010, oral contraceptive pill use prevented over 1,300 cases of endometrial and ovarian cancers in Australian women.

    It also prevented almost 500 deaths from these cancers in 2013. This is a reduction of around 25% in the deaths that could have occurred that year if women hadn’t taken the pill.

    In contrast, we calculated the pill may have contributed to around 15 deaths from breast cancer in 2013, which is less than 0.5% of all breast cancer deaths in that year.

    Previous work showed the pill was responsible for preventing far more cancers overall than it contributed to. Image Point Fr

    What about long-acting reversible contraceptives and cancer risk?

    Long-acting reversible contraceptives – which include intrauterine devices or IUDs, implants under the skin, and injections – release progesterone-like hormones.

    These are very effective contraceptives that can last from a few months (injections) up to seven years (intrauterine devices).

    Notably, they don’t contain the hormone oestrogen, which may be responsible for some of the side-effects of the pill (including perhaps contributing to a higher risk of breast cancer).

    Use of these long-acting contraceptives has doubled over the past decade, while the use of the pill has declined. So it’s important to know whether this change could affect cancer risk for Australian women.

    Our new study of more than 1 million Australian women investigated whether long-acting reversible contraceptives affect the risk of invasive cancers. We compared the results to the oral contraceptive pill.

    We used de-identified health records for Australian women aged 55 and under in 2002.

    Among this group, about 176,000 were diagnosed with cancer between 2004 and 2013, by which time the oldest women were aged 67. We compared hormonal contraceptive use between the women who developed cancer and those who didn’t.

    We found that long-term users of all types of hormonal contraception had around a 70% lower risk of developing endometrial cancer in the years after use. In other words, the risk of developing endometrial cancer is substantially lower among women who took hormonal contraception compared to those who didn’t.

    For ovarian cancer, we saw a 50% reduced risk (compared to those who took no hormonal contraception) for women who were long-term users of the hormone-containing IUD.

    The risk reduction was not as marked for the implants or injections; however, few long-term users of these products developed these cancers in our study.

    As the risk of endometrial and ovarian cancers increases with age, it will be important to look at cancer risk in these women as they get older.

    What about breast cancer risk?

    Our findings suggest that the risk of breast cancer for current users of long-acting contraceptives is similar to users of the pill.

    However, the contraceptive injection was only associated with an increased breast cancer risk after five years of use, and the risk was no longer elevated once women stopped using it.

    Our results suggested that the risk of breast cancer also reduces after stopping use of the contraceptive implants.

    We will need to follow up the women for longer to determine whether this is also the case for the IUD.

    It is worth emphasising that the breast cancer risk associated with all hormonal contraceptives is very small.

    About 30 in every 100,000 women aged 20 to 39 develop breast cancer each year, and hormonal contraceptive use would only increase this to around 36 cases per 100,000 – roughly six extra cases per 100,000 women per year.

    What about other cancers?

    Our study did not show any consistent relationships between contraceptive use and other cancer types. However, we only looked at invasive cancers (meaning those that start at a primary site but have the potential to spread to other parts of the body).

    A recent French study found that prolonged use of the contraceptive injection increased the risk of meningioma (a type of benign brain tumour).

    However, meningiomas are rare, especially in young women. There are around two cases in every 100,000 in women aged 20–39, so the extra number of cases linked to contraceptive injection use was small.

    The French study found the hormonal IUD did not increase meningioma risk (and they did not investigate contraceptive implants).

    Benefits and side-effects

    All medicines, including contraceptives, have both benefits and side-effects, but it is important to know that the most serious side-effects are rare.

    A conversation with your doctor about the balance of benefits and side-effects for you is always a good place to start.

    Susan Jordan, Professor of Epidemiology, The University of Queensland; Karen Tuesley, Postdoctoral Research Fellow, School of Public Health, The University of Queensland, and Penny Webb, Distinguished Scientist, Gynaecological Cancers Group, QIMR Berghofer Medical Research Institute

    This article is republished from The Conversation under a Creative Commons license. Read the original article.

    Don’t Forget…

    Did you arrive here from our newsletter? Don’t forget to return to the email to continue learning!

    Learn to Age Gracefully

    Join the 98k+ American women taking control of their health & aging with our 100% free (and fun!) daily emails:

  • Rewire Your OCD Brain – by Dr. Catherine Pittman & Dr. William Youngs

    10almonds is reader-supported. We may, at no cost to you, receive a portion of sales if you purchase a product through a link in this article.

    OCD is just as misrepresented in popular media as many other disorders. In reality, it’s typically not about “being a neat freak” or needing to alphabetize things, so much as having uncontrollable obsessive intrusive thoughts and, often in response to those, unwanted compulsions. These can stem from unchecked spiralling anxiety and/or PTSD, for example.

    What Drs. Pittman & Youngs offer is an applicable set of solutions to literally rewire the brain (insofar as synapses can be considered neural wires). Leveraging neuroplasticity to work with us rather than against us, the authors talk us through picking apart the crossed wires and putting them back in more helpful ways.

    This is not, by the way, a book of CBT, though it does touch on that too.

    Mostly, the book explains—clearly and simply, and sometimes with illustrations—what is going wrong for us neurologically, and how to neurologically change that.

    Bottom line: whether you have OCD or suffer from anxiety or just need help dealing with obsessive thoughts, this book can help a lot in, as the title suggests, rewiring that.

    Click here to check out Rewire Your OCD Brain, and banish obsessive thoughts!


  • Healthy Mind In A Healthy Body

    10almonds is reader-supported. We may, at no cost to you, receive a portion of sales if you purchase a product through a link in this article.

    The 8-minute piece of music “Weightless” by Marconi Union was created scientifically to lower the heart rate and relax the listener. How did they do it? You can read the British Academy of Sound Therapy’s explanation of the methodology here, but the important results of the study were:

    • “Weightless” induced greater relaxation than a massage (a 6% increase).
    • “Weightless” also induced an 11% increase in relaxation compared with all the other relaxing music tracks in the study.
    • “Weightless” was subjectively rated as more relaxing than any other music by all of the participants.

    Try it for yourself!

    Click Here If The Embedded Video Doesn’t Load Automatically!

    Isn’t that better? Whenever you’re ready, read on…

    Today we’re going to share a technique for dealing with difficult emotions. The technique is used in Cognitive Behavioral Therapy (CBT) and Dialectical Behavior Therapy (DBT), and it’s called RAIN:

    • Recognizing: ask yourself “what is it that I’m feeling?”, and put a name to it. It could be anger, despair, fear, frustration, anxiety, overwhelm, etc.
    • Accepting: “OK, so, I’m feeling ________”. There’s no point in denying it or being defensive about it; these things won’t help you. For now, just accept it.
    • Investigating: “Why am I feeling ________?” Maybe there is an obvious reason; maybe you need to dig for a reason—or dig deeper for the real reason. Most bad feelings are driven by some sort of fear or insecurity, so that can be a good avenue for examination. Important: your feelings may be rational or irrational. That’s fine. This is a time for investigating, not judging.
    • Non-Identification: not making whatever it is you’re feeling into a part of you. Once you get too attached to “I am jealous”, “I am angry”, “I am sad”, etc., it can be difficult to manage something that has become part of your personality; you’ll defend your jealousy, anger, sadness, etc. rather than tackle it.

    As a CBT tool, this is something you can do for yourself at any time. It won’t magically solve your problems, but it can stop you from spiralling into a state of crisis, and get you back on a more useful track.

    As a DBT tool, to give this its full strength, you would ideally now communicate what you’re feeling to somebody you trust, such as a partner or friend.

    Humans are fundamentally social creatures, and we achieve our greatest strengths when we support each other—and that also means sometimes seeking and accepting support!

    Do you have a good technique you’d like to share? Reply to this email and let us know!
