What does it mean to be transgender?
10almonds is reader-supported. We may, at no cost to you, receive a portion of sales if you purchase a product through a link in this article.
Transgender media coverage has surged in recent years for a wide range of reasons. While there are more transgender television characters than ever before, hundreds of bills are targeting transgender people’s access to medical care, sports teams, gender-specific public spaces, and other institutions.
Despite the increase in conversation about the transgender community, public confusion around transgender identity remains.
Read on to learn more about what it means to be transgender and understand challenges transgender people may face.
What does it mean to be transgender?
Transgender—or “trans”—is an umbrella term for people whose gender identity or gender expression does not conform to their sex assigned at birth. People can discover they are trans at any age.
Gender identity refers to a person’s inner sense of being a woman, a man, neither, both, or something else entirely. Trans people who don’t feel like women or men might describe themselves as nonbinary, agender, genderqueer, or two-spirit, among other terms.
Gender expression describes the way a person communicates their gender through their appearance—such as their clothing or hairstyle—and behavior.
A person whose gender expression doesn’t conform to the expectations of their assigned sex may not identify as trans. The only way to know for sure if someone is trans is if they tell you.
Cisgender—or “cis”—describes people whose gender identities match the sex they were assigned at birth.
How long have transgender people existed?
Being trans isn’t new. Although the word “transgender” only dates back to the 1960s, people whose identities defy traditional gender expectations have existed across cultures throughout recorded history.
How many people are transgender?
A 2022 Williams Institute study estimates that 1.6 million people over the age of 13 identify as transgender in the United States.
Is being transgender a mental health condition?
No. Conveying and communicating about your gender in a way that feels authentic to you is a normal and necessary part of self-expression.
Social and legal stigma, bullying, discrimination, harassment, negative media messages, and barriers to gender-affirming medical care can cause psychological distress for trans people. This is especially true for trans people of color, who face significantly higher rates of violence, poverty, housing instability, and incarceration—but trans identity itself is not a mental health condition.
What is gender dysphoria?
Gender dysphoria describes a feeling of unease that some trans people experience when the sex they were assigned at birth, or the gender others perceive them as, doesn’t match their gender identity, or their internal sense of gender. A 2021 study of trans adults pursuing gender-affirming medical care found that most participants started experiencing gender dysphoria by the time they were 7.
When trans people don’t receive the support they need to manage gender dysphoria, they may experience depression, anxiety, social isolation, suicidal ideation, substance use disorder, eating disorders, and self-injury.
How do trans people manage gender dysphoria?
Every trans person’s experience with gender dysphoria is unique. Some trans people may alleviate dysphoria by wearing gender-affirming clothing or by asking others to refer to them by a new name and use pronouns that accurately reflect their gender identity. The 2022 U.S. Trans Survey found that nearly all trans participants who lived as a different gender than the sex they were assigned at birth reported that they were more satisfied with their lives.
Some trans people may also manage dysphoria by pursuing medical transition, which may involve taking hormones and getting gender-affirming surgery.
Access to gender-affirming medical care has been shown to reduce the risk of depression and suicide among trans youth and adults.
To learn more about the trans community, visit resources from the National Center for Transgender Equality, the Trevor Project, PFLAG, and Planned Parenthood.
If you or anyone you know is considering suicide or self-harm or is anxious, depressed, upset, or needs to talk, call the Suicide & Crisis Lifeline at 988 or text the Crisis Text Line at 741-741. For international resources, here is a good place to begin.
This article first appeared on Public Good News and is republished here under a Creative Commons license.
Don’t Forget…
Did you arrive here from our newsletter? Don’t forget to return to the email to continue learning!
Learn to Age Gracefully
Join the 98k+ American women taking control of their health & aging with our 100% free (and fun!) daily emails:
Goat Milk Greek Yogurt vs Almond Milk Greek Yogurt – Which is Healthier?
Our Verdict
When comparing goat milk yogurt to almond milk yogurt, we picked the almond milk yogurt.
Why?
Surprised? Honestly, we were too!
Much as we love almonds, we were fully expecting to write that the two are very close in nutritional value but that the dairy yogurt has more probiotics. As it turns out, though, when we looked into them, they’re quite comparable in that regard.
It’s easy to assume “goat milk yogurt is more natural and therefore healthier”, but in both cases the process is the same: take a fermentable milk, and ferment it (an ancient technique). “But almond milk is a newfangled thing”, well, new-ish…
So what was the deciding factor?
In this case, the almond milk yogurt has about twice the protein per (same size) serving, compared to the goat milk; all the other macros are about the same, and the micronutrients are similar. Like many plant-based milks and yogurts, this one is fortified with calcium and vitamin D, so that wasn’t an issue either.
In short: the only meaningful difference was the protein, and the almond came out on top.
However!
The almond came out on top only because it is strained; this can be done (or not) with any kind of yogurt, be it from an animal or a plant.
In other words: if it had been different brands, the goat milk yogurt could have come out on top!
The take-away idea here is: always read labels, because as you’ve just seen, even we can get surprised sometimes!
Seriously: if you only remember one thing from this today, make it the above.
Another thing worth mentioning: yogurts, whether dairy or plant-based, are often made with common allergens (e.g., dairy, nuts, soy). So if you are allergic or intolerant, obviously don’t choose the one to which you are allergic or intolerant.
That said… If you are lactose-intolerant, but not allergic, goat’s milk does have less lactose than cow’s milk. But of course, you know your limits better than we can in this regard.
Want to try some?
Amazon is not coming up with the goods for this one (or anything even similar, at time of writing), so we recommend trying your local supermarket (and reading labels, because products vary widely!)
What you’re looking for (be it animal- or plant-based):
- Live culture probiotic bacteria
- No added sugar
- Minimal additives in general
- Lastly, check out the amounts for protein, calcium, vitamin D, etc.
Enjoy!
The Truth About Statins – by Barbara H. Roberts, M.D.
All too often, doctors looking to dispense a “quick fix” will prescribe from their playbook of a dozen or so “this will get you out of my office” drugs. Most commonly, these treat symptoms rather than the cause. Sometimes, this can be fine! For example, in some cases, painkillers and antidepressants can make a big improvement to people’s lives. What about statins, though?
Prescribed to lower cholesterol, they broadly do exactly that. However…
Dr. Roberts wants us to know that we could be missing the big picture of heart health, and making a potentially fatal mistake.
This is not to say that the book argues that statins are necessarily terrible, or that they don’t have their place. Just, we need to understand what they will and won’t do, and make an informed choice.
To which end, she does advise on when statins can help the most, and when they may not help at all. She also covers the questions to ask if your doctor wants to prescribe them. And—all too frequently overlooked—the important differences between men’s and women’s heart health, and the implications these have for the efficacy (or not) of statins.
With regard to the “alternatives to cholesterol-lowering drugs” promised in the subtitle… we won’t keep any secrets here:
Dr. Roberts (uncontroversially) recommends the Mediterranean diet. She also provides two weeks’ worth of recipes for such, in the final part of the book.
All in all, an important book to read if you or a loved one are taking, or thinking of taking, statins.
Pick up your copy of The Truth About Statins on Amazon today!
Vitamin C (Drinkable) vs Vitamin C (Chewable) – Which is Healthier?
Our Verdict
When comparing vitamin C (drinkable) to vitamin C (chewable), we picked the drinkable.
Why?
First let’s look at what’s more or less the same in each:
- The usable vitamin C content is comparable
- The bioavailability is comparable
- The additives to hold it together are comparable
So what’s the difference?
With the drinkable, you also drink a glass of water.
If you’d like to read more about how to get the most out of the vitamins you take, you can do so here:
Are You Wasting Your Vitamins? Maybe, But You Don’t Have To
If you’d like to get some of the drinkable vitamin C, here’s an example product on Amazon
Enjoy!
How do science journalists decide whether a psychology study is worth covering?
Complex research papers and data flood academic journals daily, and science journalists play a pivotal role in disseminating that information to the public. This can be a daunting task, requiring a keen understanding of the subject matter and the ability to translate dense academic language into narratives that resonate with the general public.
Several resources and tip sheets, including the Know Your Research section here at The Journalist’s Resource, aim to help journalists hone their skills in reporting on academic research.
But what factors do science journalists look for to decide whether a social science research study is trustworthy and newsworthy? That’s the question researchers at the University of California, Davis, and the University of Melbourne in Australia examine in a recent study, “How Do Science Journalists Evaluate Psychology Research?” published in September in Advances in Methods and Practices in Psychological Science.
Their online survey of 181 mostly U.S.-based science journalists looked at how and whether they were influenced by four factors in fictitious research summaries: the sample size (number of participants in the study), sample representativeness (whether the participants in the study were from a convenience sample or a more representative sample), the statistical significance level of the result (just barely statistically significant or well below the significance threshold), and the prestige of a researcher’s university.
The researchers found that sample size was the only factor that had a robust influence on journalists’ ratings of how trustworthy and newsworthy a study finding was.
University prestige had no effect, while the effects of sample representativeness and statistical significance were inconclusive.
But there’s nuance to the findings, the authors note.
“I don’t want people to think that science journalists aren’t paying attention to other things, and are only paying attention to sample size,” says Julia Bottesini, an independent researcher, a recent Ph.D. graduate from the Psychology Department at UC Davis, and the first author of the study.
Overall, the results show that “these journalists are doing a very decent job” vetting research findings, Bottesini says.
Also, the findings from the study are not generalizable to all science journalists or other fields of research, the authors note.
“Instead, our conclusions should be circumscribed to U.S.-based science journalists who are at least somewhat familiar with the statistical and replication challenges facing science,” they write. (Over the past decade a series of projects have found that the results of many studies in psychology and other fields can’t be reproduced, leading to what has been called a ‘replication crisis.’)
“This [study] is just one tiny brick in the wall and I hope other people get excited about this topic and do more research on it,” Bottesini says.
More on the study’s findings
The study’s findings can be useful for researchers who want to better understand how science journalists read their research and what kind of intervention — such as teaching journalists about statistics — can help journalists better understand research papers.
“As an academic, I take away the idea that journalists are a great population to try to study because they’re doing something really important and it’s important to know more about what they’re doing,” says Ellen Peters, director of the Center for Science Communication Research at the School of Journalism and Communication at the University of Oregon. Peters, who was not involved in the study, is also a psychologist who studies human judgment and decision-making.
Peters says the study was “overall terrific.” She adds that understanding how journalists do their work “is an incredibly important thing to do because journalists are who reach the majority of the U.S. with science news, so understanding how they’re reading some of our scientific studies and then choosing whether to write about them or not is important.”
The study, conducted between December 2020 and March 2021, is based on an online survey of journalists who said they at least sometimes covered science or other topics related to health, medicine, psychology, social sciences, or well-being. They were offered a $25 Amazon gift card as compensation.
Among the participants, 77% were women, 19% were men, 3% were nonbinary and 1% preferred not to say. About 62% said they had studied physical or natural sciences at the undergraduate level, and 24% at the graduate level. Also, 48% reported having a journalism degree. The study did not include the journalists’ news reporting experience level.
Participants were recruited through the professional network of Christie Aschwanden, an independent journalist and consultant on the study, which could be a source of bias, the authors note.
“Although the size of the sample we obtained (N = 181) suggests we were able to collect a range of perspectives, we suspect this sample is biased by an ‘Aschwanden effect’: that science journalists in the same professional network as C. Aschwanden will be more familiar with issues related to the replication crisis in psychology and subsequent methodological reform, a topic C. Aschwanden has covered extensively in her work,” they write.
Participants were randomly presented with eight of 22 one-paragraph fictitious social and personality psychology research summaries with fictitious authors. The summaries are posted on Open Science Framework, a free and open-source project management tool for researchers by the Center for Open Science, with a mission to increase openness, integrity and reproducibility of research.
For instance, one of the vignettes reads:
“Scientists at Harvard University announced today the results of a study exploring whether introspection can improve cooperation. 550 undergraduates at the university were randomly assigned to either do a breathing exercise or reflect on a series of questions designed to promote introspective thoughts for 5 minutes. Participants then engaged in a cooperative decision-making game, where cooperation resulted in better outcomes. People who spent time on introspection performed significantly better at these cooperative games (t (548) = 3.21, p = 0.001). ‘Introspection seems to promote better cooperation between people,’ says Dr. Quinn, the lead author on the paper.”
In addition to answering multiple-choice survey questions, participants were given the opportunity to answer open-ended questions, such as “What characteristics do you [typically] consider when evaluating the trustworthiness of a scientific finding?”
Bottesini says those responses illuminated how science journalists analyze a research study. Participants often mentioned the prestige of the journal in which it was published or whether the study had been peer-reviewed. Many also seemed to value experimental research designs over observational studies.
Considering statistical significance
When it came to considering p-values, “some answers suggested that journalists do take statistical significance into account, but only very few included explanations that suggested they made any distinction between higher or lower p values; instead, most mentions of p values suggest journalists focused on whether the key result was statistically significant,” the authors write.
Also, many participants mentioned that it was very important to talk to outside experts or researchers in the same field to get a better understanding of the finding and whether it could be trusted, the authors write.
“Journalists also expressed that it was important to understand who funded the study and whether the researchers or funders had any conflicts of interest,” they write.
Participants also “indicated that making claims that were calibrated to the evidence was also important and expressed misgivings about studies for which the conclusions do not follow from the evidence,” the authors write.
In response to the open-ended question, “What characteristics do you [typically] consider when evaluating the trustworthiness of a scientific finding?” some journalists wrote they checked whether the study was overstating conclusions or claims. Below are some of their written responses:
- “Is the researcher adamant that this study of 40 college kids is representative? If so, that’s a red flag.”
- “Whether authors make sweeping generalizations based on the study or take a more measured approach to sharing and promoting it.”
- “Another major point for me is how ‘certain’ the scientists appear to be when commenting on their findings. If a researcher makes claims which I consider to be over-the-top about the validity or impact of their findings, I often won’t cover.”
- “I also look at the difference between what an experiment actually shows versus the conclusion researchers draw from it — if there’s a big gap, that’s a huge red flag.”
Peters says the study’s findings show that “not only are journalists smart, but they have also gone out of their way to get educated about things that should matter.”
What other research shows about science journalists
A 2023 study, published in the International Journal of Communication, based on an online survey of 82 U.S. science journalists, aims to understand what they know and think about open-access research, including peer-reviewed journals and articles that don’t have a paywall, and preprints. Data was collected between October 2021 and February 2022. Preprints are scientific studies that have yet to be peer-reviewed and are shared on open repositories such as medRxiv and bioRxiv. The study finds that its respondents “are aware of OA and related issues and make conscious decisions around which OA scholarly articles they use as sources.”
A 2021 study, published in the Journal of Science Communication, looks at the impact of the COVID-19 pandemic on the work of science journalists. Based on an online survey of 633 science journalists from 77 countries, it finds that the pandemic somewhat brought scientists and science journalists closer together. “For most respondents, scientists were more available and more talkative,” the authors write. The pandemic has also provided an opportunity to explain the scientific process to the public, and remind them that “science is not a finished enterprise,” the authors write.
More than a decade ago, a 2008 study, published in PLOS Medicine, and based on an analysis of 500 health news stories, found that “journalists usually fail to discuss costs, the quality of the evidence, the existence of alternative options, and the absolute magnitude of potential benefits and harms,” when reporting on research studies. Giving time to journalists to research and understand the studies, giving them space for publication and broadcasting of the stories, and training them in understanding academic research are some of the solutions to fill the gaps, writes Gary Schwitzer, the study author.
Advice for journalists
We asked Bottesini, Peters, Aschwanden and Tamar Wilner, a postdoctoral fellow at the University of Texas, who was not involved in the study, to share advice for journalists who cover research studies. Wilner is conducting a study on how journalism research informs the practice of journalism. Here are their tips:
1. Examine the study before reporting it.
Does the study claim match the evidence? “One thing that makes me trust the paper more is if their interpretation of the findings is very calibrated to the kind of evidence that they have,” says Bottesini. In other words, if the study makes a claim in its results that’s far-fetched, the authors should present a lot of evidence to back that claim.
Not all surprising results are newsworthy. If you come across a surprising finding from a single study, Peters advises you to step back and remember Carl Sagan’s quote: “Extraordinary claims require extraordinary evidence.”
How transparent are the authors about their data? For instance, are the authors posting information such as their data and the computer codes they use to analyze the data on platforms such as Open Science Framework, AsPredicted, or The Dataverse Project? Some researchers ‘preregister’ their studies, which means they share how they’re planning to analyze the data before they see them. “Transparency doesn’t automatically mean that a study is trustworthy,” but it gives others the chance to double-check the findings, Bottesini says.
Look at the study design. Is it an experimental study or an observational study? Observational studies can show correlations but not causation.
“Observational studies can be very important for suggesting hypotheses and pointing us towards relationships and associations,” Aschwanden says.
Experimental studies can provide stronger evidence toward a cause, but journalists must still be cautious when reporting the results, she advises. “If we end up implying causality, then once it’s published and people see it, it can really take hold,” she says.
Know the difference between preprints and peer-reviewed, published studies. Peer-reviewed papers tend to be of higher quality than those that are not peer-reviewed. Read our tip sheet on the difference between preprints and journal articles.
Beware of predatory journals. Predatory journals are journals that “claim to be legitimate scholarly journals, but misrepresent their publishing practices,” according to a 2020 journal article, published in the journal Toxicologic Pathology, “Predatory Journals: What They Are and How to Avoid Them.”
2. Zoom in on data.
Read the methods section of the study. The methods section of the study usually appears after the introduction and background section. “To me, the methods section is almost the most important part of any scientific paper,” says Aschwanden. “It’s amazing to me how often you read the design and the methods section, and anyone can see that it’s a flawed design. So just giving things a gut-level check can be really important.”
What’s the sample size? Not all good studies have large numbers of participants but pay attention to the claims a study makes with a small sample size. “If you have a small sample, you calibrate your claims to the things you can tell about those people and don’t make big claims based on a little bit of evidence,” says Bottesini.
But also remember that factors such as sample size and p-value are not “as clear cut as some journalists might assume,” says Wilner.
How representative of a population is the study sample? “If the study has a non-representative sample of, say, undergraduate students, and they’re making claims about the general population, that’s kind of a red flag,” says Bottesini. Aschwanden points to the acronym WEIRD, which stands for “Western, Educated, Industrialized, Rich, and Democratic,” and is used to highlight a lack of diversity in a sample. Studies based on such samples may not be generalizable to the entire population, she says.
Look at the p-value. Statistical significance is both confusing and controversial, but it’s important to consider. Read our tip sheet, “5 Things Journalists Need to Know About Statistical Significance,” to better understand it.
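To make this distinction concrete, here is a minimal sketch (in Python, chosen here for illustration) of how a reported test statistic maps to a two-tailed p-value. It uses the normal approximation to the t-distribution, which is reasonable at the large degrees of freedom in the fictitious vignette quoted earlier (t(548) = 3.21):

```python
import math

def two_tailed_p(z: float) -> float:
    """Two-tailed p-value for a test statistic, using the standard
    normal approximation (good for t-statistics with large df)."""
    # Phi(z) = 0.5 * (1 + erf(z / sqrt(2))) is the standard normal CDF
    upper_tail = 0.5 * (1 - math.erf(z / math.sqrt(2)))
    return 2 * upper_tail

# The vignette reports t(548) = 3.21, p = 0.001:
print(round(two_tailed_p(3.21), 4))  # well below the 0.05 threshold
print(round(two_tailed_p(1.97), 4))  # just barely below 0.05
```

This mirrors the distinction the survey probed: a p-value of roughly 0.049 and one of roughly 0.001 are both “statistically significant,” but they represent quite different strengths of evidence.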
3. Talk to scientists not involved in the study.
If you’re not sure about the quality of a study, ask for help. “Talk to someone who is an expert in study design or statistics to make sure that [the study authors] use the appropriate statistics and that methods they use are appropriate because it’s amazing to me how often they’re not,” says Aschwanden.
Get an opinion from an outside expert. It’s always a good idea to present the study to other researchers in the field, who have no conflicts of interest and are not involved in the research you’re covering and get their opinion. “Don’t take scientists at their word. Look into it. Ask other scientists, preferably the ones who don’t have a conflict of interest with the research,” says Bottesini.
4. Remember that a single study is simply one piece of a growing body of evidence.
“I have a general rule that a single study doesn’t tell us very much; it just gives us proof of concept,” says Peters. “It gives us interesting ideas. It should be retested. We need an accumulation of evidence.”
Aschwanden says as a practice, she tries to avoid reporting stories about individual studies, with some exceptions such as very large, randomized controlled studies that have been underway for a long time and have a large number of participants. “I don’t want to say you never want to write a single-study story, but it always needs to be placed in the context of the rest of the evidence that we have available,” she says.
Wilner advises journalists to spend some time looking at the scope of research on the study’s specific topic and learn how it has been written about and studied up to that point.
“We would want science journalists to be reporting balance of evidence, and not focusing unduly on the findings that are just in front of them in a most recent study,” Wilner says. “And that’s a very difficult thing to ask journalists to do because they’re being asked to make their article very newsy, so it’s a difficult balancing act, but we can try and push journalists to do more of that.”
5. Remind readers that science is always changing.
“Science is always two steps forward, one step back,” says Peters. Give the public a notion of uncertainty, she advises. “This is what we know today. It may change tomorrow, but this is the best science that we know of today.”
Aschwanden echoes the sentiment. “All scientific results are provisional, and we need to keep that in mind,” she says. “It doesn’t mean that we can’t know anything, but it’s very important that we don’t overstate things.”
Authors of a study published in PNAS in January analyzed more than 14,000 psychology papers and found that replication success rates differ widely by psychology subfields. That study also found that papers that could not be replicated received more initial press coverage than those that could.
The authors note that the media “plays a significant role in creating the public’s image of science and democratizing knowledge, but it is often incentivized to report on counterintuitive and eye-catching results.”
Ideally, the news media would have a positive relationship with replication success rates in psychology, the authors of the PNAS study write. “Contrary to this ideal, however, we found a negative association between media coverage of a paper and the paper’s likelihood of replication success,” they write. “Therefore, deciding a paper’s merit based on its media coverage is unwise. It would be valuable for the media to remind the audience that new and novel scientific results are only food for thought before future replication confirms their robustness.”
Additional reading
- Uncovering the Research Behaviors of Reporters: A Conceptual Framework for Information Literacy in Journalism. Katerine E. Boss, et al. Journalism & Mass Communication Educator, October 2022.
- The Problem with Psychological Research in the Media. Steven Stosny. Psychology Today, September 2022.
- Critically Evaluating Claims. Megha Satyanarayana, The Open Notebook, January 2022.
- How Should Journalists Report a Scientific Study? Charles Binkley and Subramaniam Vincent. Markkula Center for Applied Ethics at Santa Clara University, September 2020.
- What Journalists Get Wrong About Social Science: Full Responses. Brian Resnick. Vox, January 2016.
From The Journalist’s Resource
- 8 Ways Journalists Can Access Academic Research for Free
- 5 Things Journalists Need to Know About Statistical Significance
- 5 Common Research Designs: A Quick Primer for Journalists
- 5 Tips for Using PubPeer to Investigate Scientific Research Errors and Misconduct
- What’s Standard Deviation? 4 Things Journalists Need to Know
This article first appeared on The Journalist’s Resource and is republished here under a Creative Commons license.
PFAS Exposure & Cancer: The Numbers Are High
PFAS & Cancer Risk: The Numbers Are High
This is Dr. Maaike van Gerwen. Is that an MD or a PhD, you wonder? It’s both.
She’s also Director of Research in the Department of Otolaryngology at Mount Sinai Hospital in New York, Scientific Director of the Program of Personalized Management of Thyroid Disease, and Member of the Institute for Translational Epidemiology and the Transdisciplinary Center on Early Environmental Exposures.
What does she want us to know?
She’d love for us to know about her latest research published literally today, about the risks associated with PFAS, such as the kind widely found in non-stick cookware:
Per- and polyfluoroalkyl substances (PFAS) exposure and thyroid cancer risk
Dr. van Gerwen and her team tested this several ways, and the very short and simple version of the findings is that per doubling of exposure, there was a 56% increased rate of thyroid cancer diagnosis.
(The rate of exposure was not just guessed based on self-reports; it was measured directly from PFAS levels in the blood of participants.)
- PFAS exposure can come from many sources, not just non-stick cookware, but that’s a “biggie” since it transfers directly into food that we consume.
- Same goes for widely-available microwaveable plastic food containers.
- Relatively less dangerous exposures include waterproofed clothing.
To keep it simple and look at the non-stick pans and microwavable plastic containers, doubling exposure might mean using such things every day vs every second day.
Practical take-away: PFAS may be impossible to avoid completely, but even just cutting down on the use of such products already reduces your cancer risk.
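As a back-of-envelope sketch of what “per doubling” implies, here is the arithmetic in Python. This assumes the reported rate ratio compounds multiplicatively with each doubling (as in a typical log2-exposure model); it is an illustration of the concept, not the paper’s full statistical model:

```python
# Reported association: 56% higher thyroid cancer diagnosis rate
# per doubling of measured blood PFAS level, i.e. a rate ratio of 1.56.
RATE_RATIO_PER_DOUBLING = 1.56

def relative_rate(doublings: float) -> float:
    """Relative diagnosis rate after `doublings` doublings of exposure.
    Negative values represent halvings of exposure."""
    return RATE_RATIO_PER_DOUBLING ** doublings

print(relative_rate(1))   # one doubling of exposure
print(relative_rate(-1))  # one halving: roughly a third lower rate
```

Under this (assumed) multiplicative reading, halving your exposure maps to dividing the rate ratio by 1.56, which is what makes “cutting down” worthwhile even without eliminating exposure entirely.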
Isn’t it too late, by this point in life? Aren’t they “forever chemicals”?
They’re not truly “forever”, but they do have long half-lives, yes.
See: Can we take the “forever” out of forever chemicals?
The half-lives of PFOS and PFOA in water are 41 years and 92 years, respectively.
In the body, however, because our body is constantly trying to repair itself and eliminate toxins, it’s more like 3–7 years.
That might seem like a long time, and perhaps it is, but the time will pass anyway, so you might as well get started now, rather than in 3–7 years’ time!
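The half-life arithmetic here is simple exponential decay. A quick sketch (assuming clean first-order elimination, which real-world PFAS kinetics only approximate):

```python
def fraction_remaining(half_life_years: float, elapsed_years: float) -> float:
    """Fraction of an initial amount left after `elapsed_years`,
    assuming first-order (exponential) decay."""
    return 0.5 ** (elapsed_years / half_life_years)

# Using a 5-year half-life, the mid-range of the 3-7 year body estimate:
for years in (5, 10, 15, 20):
    print(years, fraction_remaining(5, years))  # 0.5, 0.25, 0.125, 0.0625
```

So under this simplified model, a decade after reducing intake, only about a quarter of today’s body burden would remain.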
Read more: National Academies Report Calls for Testing People With High Exposure to “Forever Chemicals”
What should we use instead?
In place of non-stick cookware, cast iron is fantastic. It’s not everyone’s preference, though, so you might also like to know that ceramic cookware is a fine option that’s functionally non-stick but without needing a non-stick coating. Check for PFAS-free status; they should advertise this.
In place of plastic microwaveable containers, Pyrex (or equivalent) glass dishes (you can get them with lids) are a top-tier option. Ceramic containers (without metallic bits!) are also safely microwaveable.
See also:
Here’s a List of Products with PFAS (& How to Avoid Them)
Take care!
Broccoli vs Cabbage – Which is Healthier?
Our Verdict
When comparing broccoli to cabbage, we picked the broccoli.
Why?
Here we go once again, pitting two different cultivars of the same species (Brassica oleracea) against each other, and once again, one comes out as nutritionally best.
In terms of macros, broccoli has more protein, carbs, and fiber, while they are both low glycemic index foods. The differences are small though, so it’s fairest to call this category a tie.
When it comes to vitamins, broccoli has more of vitamins A, B1, B2, B3, B5, B6, B7, B9, C, E, K, and choline, while cabbage is not higher in any vitamins. It should be noted that cabbage is still good for these, especially vitamins C and K, but broccoli is simply better.
In the category of minerals, broccoli has more calcium, copper, iron, magnesium, manganese, phosphorus, potassium, selenium, and zinc, while cabbage is not higher in any minerals. Again though, cabbage is still good, especially in calcium, iron, and manganese, but again, broccoli is simply better.
Of course, enjoy either or both! But if you want the nutritionally densest option, it’s broccoli.
Want to learn more?
You might like to read:
What’s Your Plant Diversity Score?
Take care!